
Chapter 38 — Grasping

Domenico Prattichizzo and Jeffrey C. Trinkle

This chapter introduces fundamental models of grasp analysis. The overall model is a coupling of models that define contact behavior with widely used models of rigid-body kinematics and dynamics. The contact model essentially boils down to the selection of components of contact force and moment that are transmitted through each contact. Mathematical properties of the complete model naturally give rise to five primary grasp types whose physical interpretations provide insight for grasp and manipulation planning.

After introducing the basic models and types of grasps, this chapter focuses on the most important grasp characteristic: complete restraint. A grasp with complete restraint prevents loss of contact and thus is very secure. Two primary restraint properties are form closure and force closure. A form closure grasp guarantees maintenance of contact as long as the links of the hand and the object are well-approximated as rigid and as long as the joint actuators are sufficiently strong. As will be seen, the primary difference between form closure and force closure grasps is the latter’s reliance on contact friction. This translates into requiring fewer contacts to achieve force closure than form closure.

The goal of this chapter is to give a thorough understanding of the all-important grasp properties of form and force closure. This will be done through detailed derivations of grasp models and discussions of illustrative examples. For an in-depth historical perspective and a treasure-trove bibliography of papers addressing a wide range of topics in grasping, the reader is referred to [38.1].
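To make the force-closure condition concrete, the following minimal Python sketch (not code from the chapter) tests planar force closure for point contacts with Coulomb friction: each friction cone is approximated by its two edge wrenches, and the grasp is force closure if and only if these primitive wrenches positively span the wrench space, which is checked here with a small linear program. The contact points, normals, and friction coefficient in the example are hypothetical.

```python
# Minimal planar force-closure check (illustrative sketch, not code from the chapter).
# Friction cones are approximated by their two edge vectors; the grasp is force
# closure iff the primitive contact wrenches positively span the wrench space.
import numpy as np
from scipy.optimize import linprog

def contact_wrenches(points, normals, mu):
    """Primitive wrenches [fx, fy, tau] for planar point contacts with friction."""
    wrenches = []
    for p, n in zip(points, normals):
        n = np.asarray(n, float) / np.linalg.norm(n)
        t = np.array([-n[1], n[0]])            # tangent direction
        for edge in (n + mu * t, n - mu * t):  # friction-cone edges
            f = edge / np.linalg.norm(edge)
            tau = p[0] * f[1] - p[1] * f[0]    # planar moment about the origin
            wrenches.append([f[0], f[1], tau])
    return np.array(wrenches).T                # 3 x m matrix

def is_force_closure(points, normals, mu):
    W = contact_wrenches(points, normals, mu)
    m = W.shape[1]
    # maximize d  s.t.  W @ lam = 0,  sum(lam) = 1,  lam_i >= d
    c = np.zeros(m + 1); c[-1] = -1.0
    A_eq = np.vstack([np.hstack([W, np.zeros((3, 1))]),
                      np.hstack([np.ones((1, m)), np.zeros((1, 1))])])
    b_eq = np.array([0.0, 0.0, 0.0, 1.0])
    A_ub = np.hstack([-np.eye(m), np.ones((m, 1))])   # d - lam_i <= 0
    b_ub = np.zeros(m)
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
    # force closure: wrenches span the space AND admit a strictly positive null combination
    return res.success and -res.fun > 1e-9 and np.linalg.matrix_rank(W) == 3

# Two opposing frictional contacts on a unit disk: force closure for mu > 0.
print(is_force_closure([( 1, 0), (-1, 0)],
                       [(-1, 0), ( 1, 0)], mu=0.3))   # True
```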

Grasp analysis using the MATLAB toolbox SynGrasp

Author  Monica Malvezzi, Guido Gioioso, Gionata Salvietti, Domenico Prattichizzo

Video ID : 551

This video documents a few examples of grasp analysis performed with SynGrasp, a MATLAB toolbox for grasp analysis. The toolbox provides a graphical user interface (GUI) with which the user can easily load a hand and an object, and a series of functions that the user can assemble and modify to exploit all of the toolbox features. The video shows how to use SynGrasp to model and analyze grasping; in particular, it shows how users can select and load a hand model in the GUI, then choose an object and place it in the workspace by selecting its position with respect to the hand. The grasp is obtained by closing the hand from an initial configuration, which can be set by the user acting on the hand joints. Once the grasp is defined, it can be analyzed by evaluating the grasp quality measures available in the toolbox. Grasps can be defined either by using the provided grasp planner or by directly specifying contact points on the hand together with their contact normal directions. SynGrasp can model both fully actuated and underactuated robotic hands. An important role in grasp analysis, in particular with underactuated hands, is played by system compliance: SynGrasp can model stiffness at the contact points, at the joints, or in the actuation system, including the transmission. A wide set of analysis functions, continuously growing with new features and capabilities, has been developed to investigate the main grasp properties: controllable forces and object displacements, manipulability analysis, grasp stiffness, and different measures of grasp quality. A set of functions for the graphical representation of the hand, the object, and the main analysis results is also provided. The toolbox is freely available at http://syngrasp.dii.unisi.it.
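As a generic illustration of the kind of quantities such a toolbox computes (plain NumPy, not SynGrasp's MATLAB API; the contact points are hypothetical), the sketch below builds the grasp matrix G for hard-finger point contacts and evaluates two classical quality measures derived from its singular values.

```python
# Generic sketch of two classical grasp-quality measures computed from the
# grasp matrix G (hard-finger, 3-D point contacts).  Illustration only; it
# does not use SynGrasp's MATLAB API.
import numpy as np

def grasp_matrix(contact_points, object_center=np.zeros(3)):
    """G maps stacked contact forces to the net object wrench: w = G @ f."""
    blocks = []
    for p in contact_points:
        r = np.asarray(p, float) - object_center
        S = np.array([[0, -r[2], r[1]],
                      [r[2], 0, -r[0]],
                      [-r[1], r[0], 0]])       # skew(r), so torque = r x f
        blocks.append(np.vstack([np.eye(3), S]))
    return np.hstack(blocks)                    # 6 x 3n

def quality_measures(G):
    s = np.linalg.svd(G, compute_uv=False)
    return {"min_singular_value": s.min(),      # worst-case force transmission
            "ellipsoid_volume": np.prod(s)}     # proportional to the grasp-ellipsoid volume

# Three contacts equally spaced around the equator of a unit sphere.
pts = [np.array([np.cos(a), np.sin(a), 0.0]) for a in (0, 2*np.pi/3, 4*np.pi/3)]
print(quality_measures(grasp_matrix(pts)))
```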

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs including material and fabrication.

Biologically-inspired climbing with a hexapedal robot

Author  Matthew J. Spenko, Galen C. Haynes, Jeffrey A. Saunders, Mark R. Cutkosky, Alfred A. Rizzi, Robert J. Full, Daniel E. Koditschek

Video ID : 390

A climbing robot that grasps the microtexture of the surface using special feet and special motions. The development team includes researchers from U Penn, Stanford, Berkeley, Carnegie Mellon and Boston Dynamics.

Chapter 58 — Robotics in Hazardous Applications

James Trevelyan, William R. Hamel and Sung-Chul Kang

Robotics researchers have worked hard to realize a long-awaited vision: machines that can eliminate the need for people to work in hazardous environments. Chapter 60 is framed by the vision of disaster response: search and rescue robots carrying people from burning buildings or tunneling through collapsed rock falls to reach trapped miners. In this chapter we review tangible progress towards robots that perform routine work in places too dangerous for humans. Researchers still have many challenges ahead of them but there has been remarkable progress in some areas. Hazardous environments present special challenges for the accomplishment of desired tasks depending on the nature and magnitude of the hazards. Hazards may be present in the form of radiation, toxic contamination, falling objects or potential explosions. Technology that specialized engineering companies can develop and sell without active help from researchers marks the frontier of commercial feasibility. Just inside this border lie teleoperated robots for explosive ordnance disposal (EOD) and for underwater engineering work. Even with the typical tenfold disadvantage in manipulation performance imposed by the limits of today’s telepresence and teleoperation technology, in terms of human dexterity and speed, robots often can offer a more cost-effective solution. However, most routine applications in hazardous environments still lie far beyond the feasibility frontier. Fire fighting, remediating nuclear contamination, reactor decommissioning, tunneling, underwater engineering, underground mining and clearance of landmines and unexploded ordnance still present many unsolved problems.

DALMATINO

Author  James P. Trevelyan

Video ID : 575

This is another smaller, remotely-operated, mine-clearance vehicle similar in principle to the BOZENA machine described in Video 574. This video clearly shows the vegetation removal capability of these machines.

Chapter 41 — Active Manipulation for Perception

Anna Petrovskaya and Kaijen Hsiao

This chapter covers perceptual methods in which manipulation is an integral part of perception. These methods face special challenges due to data sparsity and high costs of sensing actions. However, they can also succeed where other perceptual methods fail, for example, in poor-visibility conditions or for learning the physical properties of a scene.

The chapter focuses on specialized methods that have been developed for object localization, inference, planning, recognition, and modeling in active manipulation approaches. We conclude with a discussion of real-life applications and directions for future research.

Touch-based, door-handle localization and manipulation

Author  Anna Petrovskaya

Video ID : 723

The harmonic arm robot localizes the door handle by touching it. 3-DOF localization is performed in this video. Once the localization is complete, the robot is able to grasp and manipulate the handle. The mobile platform is teleoperated, whereas the robotic arm motions are autonomous. A 2-D model of the door and handle was constructed from hand measurements for this experiment.
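For readers unfamiliar with what touch-based 3-DOF localization involves, the following grid-based Bayesian sketch is a deliberately simplified illustration; the estimator actually used in the video is not specified here, and the segment model, noise level, and touch measurements below are hypothetical.

```python
# Minimal grid-based Bayesian sketch of touch-based 3-DOF (x, y, theta)
# localization of a planar model from contact points; illustrative only.
import numpy as np

# Hypothetical handle model: a short segment of known length in the model frame.
MODEL = np.array([[0.0, 0.0], [0.1, 0.0]])      # two endpoints, meters

def point_to_segment(p, a, b):
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def likelihood(contact, pose, sigma=0.01):
    """Gaussian likelihood of a touched point given a pose (x, y, theta)."""
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    a, b = (R @ MODEL.T).T + np.array([x, y])
    d = point_to_segment(np.asarray(contact, float), a, b)
    return np.exp(-0.5 * (d / sigma) ** 2)

# Coarse hypothesis grid and a uniform prior.
xs = np.linspace(-0.05, 0.05, 11)
ys = np.linspace(-0.05, 0.05, 11)
ths = np.linspace(-0.3, 0.3, 13)
poses = [(x, y, th) for x in xs for y in ys for th in ths]
belief = np.ones(len(poses)) / len(poses)

for contact in [(0.02, 0.005), (0.08, -0.002)]:   # simulated touch measurements
    belief *= np.array([likelihood(contact, p) for p in poses])
    belief /= belief.sum()

print("MAP pose estimate:", poses[int(np.argmax(belief))])
```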

Chapter 63 — Medical Robotics and Computer-Integrated Surgery

Russell H. Taylor, Arianna Menciassi, Gabor Fichtinger, Paolo Fiorini and Paolo Dario

The growth of medical robotics since the mid-1980s has been striking. From a few initial efforts in stereotactic brain surgery, orthopaedics, endoscopic surgery, microsurgery, and other areas, the field has expanded to include commercially marketed, clinically deployed systems, and a robust and exponentially expanding research community. This chapter will discuss some major themes and illustrate them with examples from current and past research. Further reading providing a more comprehensive review of this rapidly expanding field is suggested in Sect. 63.4.

Medical robots may be classified in many ways: by manipulator design (e.g., kinematics, actuation); by level of autonomy (e.g., preprogrammed versus teleoperation versus constrained cooperative control); by targeted anatomy or technique (e.g., cardiac, intravascular, percutaneous, laparoscopic, microsurgical); or by intended operating environment (e.g., in-scanner, conventional operating room). In this chapter, we have chosen to focus on the role of medical robots within the context of larger computer-integrated systems including presurgical planning, intraoperative execution, and postoperative assessment and follow-up.

First, we introduce basic concepts of computer-integrated surgery, discuss critical factors affecting the eventual deployment and acceptance of medical robots, and introduce the basic system paradigms of surgical computer-assisted planning, execution, monitoring, and assessment (surgical CAD/CAM) and surgical assistance. In subsequent sections, we provide an overview of the technology of medical robot systems and discuss examples of our basic system paradigms, with brief additional discussion topics of remote telesurgery and robotic surgical simulators. We conclude with some thoughts on future research directions and provide suggested further reading.

A micro-robot operating inside an eye

Author  ETHZ, Zurich, Switzerland - Prof. Bradley Nelson

Video ID : 835

A micro-robot with remote magnetic propulsion for surgery inside an eye.

Chapter 17 — Limbed Systems

Shuuji Kajita and Christian Ott

A limbed system is a mobile robot with a body, legs, and arms. First, its general design process is discussed in Sect. 17.1. Then we consider issues of conceptual design and observe the designs of various existing robots in Sect. 17.2. As a detailed example, the design of the humanoid robot HRP-4C is shown in Sect. 17.3. To design a limbed system of good performance, it is important to take into account actuation and control, such as gravity compensation, limit-cycle dynamics, template models, and backdrivable actuation. These are discussed in Sect. 17.4.

In Sect. 17.5, we overview the diversity of limbed systems. We see odd-legged walkers, leg–wheel hybrid robots, leg–arm hybrid robots, tethered walking robots, and wall-climbing robots. To compare limbed systems of different configurations, we can use performance indices such as the gait sensitivity norm, the Froude number, and the specific resistance, which are introduced in Sect. 17.6.
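As a quick numerical illustration of two of these indices (under one common convention, Fr = v^2/(g l) for the Froude number and epsilon = E/(m g d) for the specific resistance; the example values below are hypothetical), consider the short sketch that follows.

```python
# Quick numerical illustration (not from the chapter) of two of the listed
# performance indices: the Froude number and the specific resistance.
G = 9.81  # gravitational acceleration, m/s^2

def froude_number(speed, leg_length):
    """Dimensionless walking speed, Fr = v^2 / (g * l)."""
    return speed ** 2 / (G * leg_length)

def specific_resistance(energy, mass, distance):
    """Energy used per unit weight per unit distance travelled, E / (m * g * d)."""
    return energy / (mass * G * distance)

# Hypothetical example values for a small biped.
print(froude_number(speed=1.2, leg_length=0.8))                      # ~0.18
print(specific_resistance(energy=900.0, mass=30.0, distance=50.0))   # ~0.06
```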

Cockroach-like hexapod

Author  Roger D. Quinn

Video ID : 521

A biologically inspired insect-like hexapod developed by Dr. Nelson, Dr. Bachmann, Dr. Quinn, Dr. Watson and Dr. Ritzmann.

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operational features. We consider both simulated and physical robots with special consideration to the transfer between the two worlds.
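The core evolutionary loop can be sketched in a few lines; the following is a generic illustration (not any specific system from the chapter) in which a population of controller weight vectors is evaluated, the best individuals are kept, and mutated copies replace the rest. The fitness function here is a placeholder standing in for an evaluation on a simulated or physical robot.

```python
# Minimal evolutionary-loop sketch: mutation plus truncation selection over a
# population of controller parameter vectors (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N_WEIGHTS, POP, GENERATIONS, ELITE = 20, 40, 100, 10

def fitness(weights):
    # Placeholder: in practice, run the robot (simulated or real) with a
    # controller parameterized by `weights` and score the resulting behavior.
    return -np.sum((weights - 0.5) ** 2)

population = rng.normal(0.0, 1.0, size=(POP, N_WEIGHTS))
for gen in range(GENERATIONS):
    scores = np.array([fitness(ind) for ind in population])
    elite = population[np.argsort(scores)[-ELITE:]]                 # keep the best
    offspring = elite[rng.integers(0, ELITE, POP - ELITE)]          # clone parents
    offspring = offspring + rng.normal(0.0, 0.1, offspring.shape)   # mutate
    population = np.vstack([elite, offspring])

print("best fitness:", max(fitness(ind) for ind in population))
```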

Discrimination of objects through sensory-motor coordination

Author  Stefano Nolfi

Video ID : 116

A Khepera robot equipped with infrared sensors is evolved for the ability to find and remain close to a cylindrical object randomly located in the environment. The discrimination between the two types of objects (walls and cylinders) is realized by exploiting the limit-cycle oscillatory behavior which the robot produces near the cylinder and which emerges from the robot/environment interactions (i.e., from the interplay between the way in which the robot reacts to sensory stimuli and the perceptual consequences of the robot's actions).

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output by several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks that are necessary to guarantee a quality crop and that, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first one explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second one discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion on specific improvements to current technology and paths to commercialization.

An autonomous cucumber harvester

Author  Eldert J. van Henten, Jochen Hemming, Bart A.J. van Tuijl, J.G. Kornet, Jan Meuleman, Jan Bontsema, Erik A. van Os

Video ID : 308

The video demonstrates an autonomous cucumber harvester developed at Wageningen University and Research Centre, Wageningen, The Netherlands. The machine consists of a mobile platform that runs on rails; such rails are commonly used in Dutch greenhouses for internal transport and also serve as the hot-water heating system of the greenhouse. Harvesting requires functional steps such as the detection and localization of the fruit and the assessment of its ripeness. In the case of the cucumber harvester, the different reflection properties in the near-infrared spectrum are exploited to detect green cucumbers against the green background. Whether a cucumber was ready for harvest was decided based on an estimate of its weight. Since cucumbers consist of about 95% water, the weight was estimated from the volume of each fruit. Stereo-vision principles were then used to locate the fruits to be harvested in the 3-D environment: the camera was shifted 50 mm on a linear slide, and two images of the same scene were taken and processed. A Mitsubishi RV-E2 manipulator was used to steer the gripper-cutter mechanism to the fruit and to transport the harvested fruit back to a storage crate. Collision-free motion planning based on the A* algorithm was used to steer the manipulator during the harvesting operation. The cutter consisted of a parallel gripper that grabbed the peduncle of the fruit, i.e., the stem segment that connects the fruit to the main stem of the plant. The action of a suction cup then immobilized the fruit in the gripper, and a special thermal cutting device was used to separate the fruit from the plant. The high temperature of the cutting device also prevented the potential transfer of viruses from one plant to another during the harvesting process. For each successfully harvested cucumber, the machine needed 65.2 s on average, and the average success rate was 74.4%. It proved to be a great advantage that the system was able to perform several harvest attempts on a single cucumber from different robot positions; this improved the success rate considerably. Since not all attempts were successful, a cycle time of 124 s per harvested cucumber was measured under practical circumstances.
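As a back-of-the-envelope illustration of the stereo step (not the harvester's actual code), shifting a pinhole camera by a baseline b and matching a fruit point in the two images gives its depth from the disparity, Z = f b / d; the focal length, pixel coordinates, and disparity below are hypothetical.

```python
# Stereo triangulation sketch for a horizontally shifted pinhole camera:
# depth Z = f * b / d, where d is the horizontal disparity in pixels.
def triangulate(u_left, u_right, v, focal_px, baseline_m):
    """Return (X, Y, Z) in the left-camera frame from one matched image point."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    Z = focal_px * baseline_m / disparity
    X = u_left * Z / focal_px      # u, v measured from the principal point
    Y = v * Z / focal_px
    return X, Y, Z

# Hypothetical numbers: 1000 px focal length, 0.05 m baseline, 40 px disparity.
print(triangulate(u_left=120, u_right=80, v=-30, focal_px=1000, baseline_m=0.05))
# -> Z = 1.25 m, X = 0.15 m, Y = -0.0375 m
```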

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these novel robots have opened up novel and unforeseen application domains, and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the required perception abilities for interaction, are introduced. Subsequently, motion-planning techniques for human environments, including the domains of biomechanically safe, risk-metric-based, and human-aware planning, are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.

An assistive, decision-and-control architecture for force-sensitive, hand–arm systems driven by human–machine interfaces (MM3)

Author  Jörn Vogel, Sami Haddadin, John D. Simeral, Daniel Bacher, Beata Jarosiewicz, Leigh R. Hochberg, John P. Donoghue, Patrick van der Smagt

Video ID : 621

This video shows a 3-D reach-and-grasp experiment using the BrainGate2 neural interface system. The robot is controlled through a multipriority Cartesian impedance controller, and its behavior is extended with collision detection and reflex reactions. Furthermore, virtual workspaces are added to ensure safety. On top of this, a decision-and-control architecture is employed that uses the sensory information available from the robotic system to evaluate the current state of task execution. The assistive skills available in the robotic system do not actively help in this task; they are only used to evaluate task success.
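To indicate what a Cartesian impedance controller computes at its core (the multipriority controller in the video is considerably more elaborate), the following sketch applies the basic spring-damper law F = K (x_d - x) - D x_dot and maps the resulting wrench to joint torques through the Jacobian transpose; the stiffness, damping, and Jacobian values are hypothetical.

```python
# Conceptual Cartesian impedance sketch: spring-damper behavior at the
# end-effector, mapped to joint torques via the Jacobian transpose.
import numpy as np

def impedance_wrench(x, xd, x_dot, K, D):
    """Desired end-effector wrench: F = K (xd - x) - D x_dot."""
    return K @ (xd - x) - D @ x_dot

def joint_torques(J, F, gravity_torque):
    """Map the task-space wrench to joint torques: tau = J^T F + g(q)."""
    return J.T @ F + gravity_torque

# Hypothetical 3-D translational example with diagonal stiffness and damping.
K = np.diag([500.0, 500.0, 500.0])   # N/m
D = np.diag([40.0, 40.0, 40.0])      # N s/m
F = impedance_wrench(x=np.array([0.30, 0.00, 0.50]),
                     xd=np.array([0.35, 0.00, 0.50]),
                     x_dot=np.zeros(3), K=K, D=D)
print(F)                              # [25. 0. 0.] N, pulling toward xd

J = np.eye(3, 7)                      # placeholder 3 x 7 Jacobian
print(joint_torques(J, F, gravity_torque=np.zeros(7)))
```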

Chapter 6 — Model Identification

John Hollerbach, Wisama Khalil and Maxime Gautier

This chapter discusses how to determine the kinematic parameters and the inertial parameters of robot manipulators. Both instances of model identification are cast into a common framework of least-squares parameter estimation, and are shown to have common numerical issues relating to the identifiability of parameters, adequacy of the measurement sets, and numerical robustness. These discussions are generic to any parameter estimation problem, and can be applied in other contexts.

For kinematic calibration, the main aim is to identify the geometric Denavit–Hartenberg (DH) parameters, although joint-based parameters relating to the sensing and transmission elements can also be identified. Endpoint sensing or endpoint constraints can provide equivalent calibration equations. By casting all calibration methods as closed-loop calibration, the calibration index categorizes methods in terms of how many equations per pose are generated.

Inertial parameters may be estimated through the execution of a trajectory while sensing one or more components of force/torque at a joint. Load estimation of a handheld object is simplest because of full mobility and full wrist force-torque sensing. For link inertial parameter estimation, restricted mobility of links nearer the base as well as sensing only the joint torque means that not all inertial parameters can be identified. Those that can be identified are those that affect joint torque, although they may appear in complicated linear combinations.
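The common least-squares framework described above can be sketched compactly: stack the regressor over many samples, solve W theta = tau in the least-squares sense, and inspect the singular values of W to judge identifiability. The sketch below is illustrative and uses a random stand-in regressor rather than a real robot's dynamic model.

```python
# Generic least-squares identification sketch in the spirit of this chapter:
# tau = W(q, qd, qdd) @ theta, solved by linear least squares; the conditioning
# of W indicates which parameter directions are identifiable.
import numpy as np

def identify(W, tau, rcond=1e-6):
    """Return the parameter estimate and the singular values of the regressor."""
    theta_hat, residuals, rank, sv = np.linalg.lstsq(W, tau, rcond=rcond)
    return theta_hat, sv

# Toy example: 200 samples, 5 base parameters, noisy "measured" joint torques.
rng = np.random.default_rng(1)
theta_true = np.array([1.2, 0.4, 0.05, 0.8, 0.01])
W = rng.normal(size=(200, 5))             # stand-in for the dynamic regressor
tau = W @ theta_true + 0.01 * rng.normal(size=200)
theta_hat, sv = identify(W, tau)
print(np.round(theta_hat, 3), "condition number:", sv[0] / sv[-1])
```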

Dynamic identification of a parallel robot: Trajectory with load

Author  Maxime Gautier

Video ID : 485

This video shows a trajectory with a known payload mass attached to the platform, used to identify the dynamic parameters and joint drive gains of the Orthoglide parallel prototype robot. Details and results are given in the paper: S. Briot, M. Gautier: Global identification of joint drive gains and dynamic parameters of parallel robots, Multibody Syst. Dyn. 33(1), 3-26 (2015); doi: 10.1007/s11044-013-9403-6