Chapter 78 — Perceptual Robotics

Heinrich Bülthoff, Christian Wallraven and Martin A. Giese

Robots that share their environment with humans need to be able to recognize and manipulate objects, recognize users, perform complex navigation tasks, and interpret and react to human emotional and communicative gestures. In all of these perceptual capabilities, however, the human brain is still far ahead of robotic systems. Hence, taking cues from the way the human brain solves such complex perceptual tasks will help to design better robots. Similarly, once a robot interacts with humans, its behaviors and reactions will be judged by humans: its movements, for example, should be fluid and graceful, and it should not evoke an eerie feeling when interacting with a user. In this chapter, we present Perceptual Robotics as the field of robotics that takes inspiration from perception research and neuroscience to, first, build better perceptual capabilities into robotic systems and, second, to validate the perceptual impact of robotic systems on the user.

Active in-hand object recognition

Author  Christian Wallraven

Video ID : 569

This video showcases the implementation of active object learning and recognition using the framework proposed in Browatzki et al. [1, 2]. The first phase shows the robot learning the visual representation of several paper cups that differ by only a few key features. The robot executes a pre-programmed exploration routine to look at each cup from all sides. The (very low-resolution) visual input is tracked, and so-called key-frames are extracted which represent the (visual) exploration. After learning, the robot tries to recognize cups that have been placed into its hands using a similar exploration routine based on visual information alone; due to the low-resolution input and the highly similar objects, however, the robot fails to make the correct decision. The video then shows the second, advanced exploration strategy, which is based on actively seeking the view that is expected to provide maximum information about the object. For this, the robot embeds the learned visual information into a proprioceptive map indexed by the two joint angles of the hand. In this map, the robot predicts the joint-angle combination that provides the most information about the object, given the current state of exploration. The implementation uses particle filtering to track a large number of object (view) hypotheses at the same time. Since the robot now uses a multisensory representation, the subsequent object-recognition trials are all correct, despite the poor visual input and highly similar objects. References: [1] B. Browatzki, V. Tikhanoff, G. Metta, H.H. Bülthoff, C. Wallraven: Active in-hand object recognition on a humanoid robot, IEEE Trans. Robot. 30(5), 1260-1269 (2014); [2] B. Browatzki, V. Tikhanoff, G. Metta, H.H. Bülthoff, C. Wallraven: Active object recognition on a humanoid robot, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), St. Paul (2012), pp. 2021-2028.
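As a rough illustration of the view-selection idea described above, the following Python sketch (not the authors' implementation) keeps a belief over object identities, uses a toy appearance map indexed by two joint angles as a stand-in for the learned proprioceptive map, and moves to the hand configuration with the lowest expected posterior entropy. All data, names, and the Gaussian observation model are hypothetical.

```python
import numpy as np

# Hypothetical appearance model: for each object, the expected visual feature
# at a given hand configuration (two joint angles). In the real system this
# map is learned during exploration; here it is random toy data.
rng = np.random.default_rng(0)
objects = ["cup_A", "cup_B", "cup_C"]
candidate_configs = [(a, b) for a in np.linspace(0, 1, 5) for b in np.linspace(0, 1, 5)]
appearance = {o: {c: rng.normal(size=4) for c in candidate_configs} for o in objects}

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_entropy(belief, config, sigma=0.5):
    """Expected entropy over object labels after observing at `config`.

    Approximated by assuming each object hypothesis in turn is true,
    simulating its predicted observation, and Bayes-updating the belief."""
    exp_H = 0.0
    for i, true_obj in enumerate(objects):
        z = appearance[true_obj][config]                  # predicted observation
        lik = np.array([np.exp(-np.sum((z - appearance[o][config]) ** 2) / (2 * sigma**2))
                        for o in objects])
        post = belief * lik
        post /= post.sum()
        exp_H += belief[i] * entropy(post)
    return exp_H

# Current belief over object identity (uniform at the start of recognition).
belief = np.ones(len(objects)) / len(objects)

# Active view selection: move the hand to the configuration that is expected
# to reduce the uncertainty about the object identity the most.
best_config = min(candidate_configs, key=lambda c: expected_entropy(belief, c))
print("next joint-angle configuration to explore:", best_config)
```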

Chapter 1 — Robotics and the Handbook

Bruno Siciliano and Oussama Khatib

Robots! Robots on Mars and in oceans, in hospitals and homes, in factories and schools; robots fighting fires, making goods and products, saving time and lives. Robots today are making a considerable impact on many aspects of modern life, from industrial manufacturing to healthcare, transportation, and exploration of the deep space and sea. Tomorrow, robots will be as pervasive and personal as today’s personal computers. This chapter retraces the evolution of this fascinating field from ancient to modern times through a number of milestones: from the first automated mechanical artifact (1400 BC) through the establishment of the robot concept in the 1920s, the realization of the first industrial robots in the 1960s, the definition of robotics science and the birth of an active research community in the 1980s, and the expansion towards the challenges of the human world of the twenty-first century. Robotics in its long journey has inspired this handbook, which is organized in three layers: the foundations of robotics science; the consolidated methodologies and technologies of robot design, sensing and perception, manipulation and interfaces, mobile and distributed robotics; and the advanced applications of field and service robotics, as well as of human-centered and life-like robotics.

Robots — The journey continues

Authors  Bruno Siciliano, Oussama Khatib, Torsten Kröger

Video ID : 812

Following the 2000 history video entitled "Robots, a 50-Year Journey" (Video ID 805), this new collection brings together some of the most influential robots and their applications developed since the turn of the new millennium (2000-2016). The journey continues, illustrating the remarkable acceleration of the robotics field in the new century.

Chapter 20 — Snake-Like and Continuum Robots

Ian D. Walker, Howie Choset and Gregory S. Chirikjian

This chapter provides an overview of the state of the art of snake-like (backbones composed of many small links) and continuum (continuous backbone) robots. The history of each of these classes of robot is reviewed, focusing on key hardware developments. A review of the existing theory and algorithms for kinematics for both types of robot is presented, followed by a summary of modeling of locomotion for snake-like and continuum mechanisms.

Stenting deployment system

Author  Nabil Simaan

Video ID : 248

A 3-DOF continuum robot for intraocular dexterity and stent placement. The video shows a stent being deployed in a chick chorioallantoic membrane, which serves as a model for the vasculature of the retina [1, 2]. Note that [1] reports an algorithm for assisted telemanipulation and force sensing at the tip of a guide wire using a rapid interpolation map based on elliptic integrals. References: [1] W. Wei, N. Simaan: Modeling, force sensing, and control of flexible cannulas for microstent delivery, J. Dyn. Syst. Meas. Control 134(4), 041004 (2012); [2] W. Wei, C. Popplewell, H. Fine, S. Chang, N. Simaan: Enabling technology for micro-vascular stenting in ophthalmic surgery, ASME J. Med. Dev. 4(2), 014503-01 - 014503-06 (2010)

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs, including materials and fabrication.

SpinybotII: Climbing hard walls with compliant microspines

Authors  Sangbae Kim, Alan T. Asbeck, Mark R. Cutkosky, William R. Provancher

Video ID : 388

This climbing robot can scale flat, hard vertical surfaces including those made of concrete, brick, stucco and masonry without using suction or adhesives. It employs arrays of miniature spines that catch opportunistically on surface asperities. The approach is inspired by the mechanisms observed in some climbing insects and spiders.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Atlas walking and manipulation

Author  DRC Team MIT

Video ID : 662

An autonomy demonstration with the MIT Atlas robot, consisting of the execution of a sequence of autonomous sub-tasks. Walking and manipulation plans are computed online, with object-fitting input from the perception system.

Chapter 50 — Modeling and Control of Robots on Rough Terrain

Keiji Nagatani, Genya Ishigami and Yoshito Okada

In this chapter, we introduce modeling and control for wheeled mobile robots and tracked vehicles. The target environment is rough terrain, which includes both deformable soil and heaps of rubble. The topics are therefore divided into two categories: wheeled robots on deformable soil and tracked vehicles on heaps of rubble.

After providing an overview of this area in Sect. 50.1, a modeling method for wheeled robots on deformable terrain is introduced in Sect. 50.2. It is based on terramechanics, the study of the mechanical properties of natural rough terrain and its response to off-road vehicles, specifically the interaction between wheels/tracks and soil. In Sect. 50.3, the control of wheeled robots is introduced. A wheeled robot often experiences wheel slippage as well as sideslip while traversing rough terrain; the basic approach in this section is therefore to compensate for slip via steering and driving maneuvers. For navigation on heaps of rubble, tracked vehicles have a clear advantage. To improve traversability in such challenging environments, some tracked vehicles are equipped with subtracks, and a kinematic modeling method for tracked vehicles on rough terrain is introduced in Sect. 50.4. In addition, a stability analysis of such vehicles is introduced in Sect. 50.5. Based on this kinematic model and stability analysis, sensor-based control of tracked vehicles on rough terrain is introduced in Sect. 50.6. Section 50.7 summarizes the chapter.
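To make the slip-compensation idea concrete, here is a minimal sketch (not code from the chapter) of the commonly used longitudinal slip ratio for a driven wheel and a naive wheel-speed compensation; the function names and numbers are illustrative only.

```python
def slip_ratio(wheel_radius, wheel_omega, ground_speed):
    """Longitudinal slip ratio of a driven wheel: s = (r*w - v) / (r*w)."""
    circumferential = wheel_radius * wheel_omega
    if circumferential <= 0.0:
        return 0.0
    return (circumferential - ground_speed) / circumferential

def compensated_wheel_speed(wheel_radius, desired_speed, estimated_slip):
    """Increase the commanded wheel speed so that, under the estimated slip,
    the effective ground speed matches the desired speed."""
    estimated_slip = min(estimated_slip, 0.95)     # avoid division blow-up
    return desired_speed / (wheel_radius * (1.0 - estimated_slip))

# Example: the wheel spins at 4 rad/s (r = 0.1 m) but visual odometry reports
# only 0.3 m/s of actual travel -> slip ratio 0.25.
s = slip_ratio(0.1, 4.0, 0.3)
print(f"estimated slip ratio: {s:.2f}")
print(f"commanded wheel speed for 0.3 m/s: {compensated_wheel_speed(0.1, 0.3, s):.2f} rad/s")
```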

Terradynamics of legged locomotion for traversal in granular media

Authors  Chen Li, Tingnan Zhang, Daniel Goldman

Video ID : 186

The experiments in this video evaluate the effect of leg shape on the robot's dynamic behavior on soft sand. Several leg shapes were tested, ranging from straight limbs to arcs of varying curvature.

Chapter 10 — Redundant Robots

Stefano Chiaverini, Giuseppe Oriolo and Anthony A. Maciejewski

This chapter focuses on redundancy resolution schemes, i. e., the techniques for exploiting the redundant degrees of freedom in the solution of the inverse kinematics problem. This is obviously an issue of major relevance for motion planning and control purposes.

In particular, task-oriented kinematics and the basic methods for its inversion at the velocity (first-order differential) level are first recalled, with a discussion of the main techniques for handling kinematic singularities. Next, different first-order methods to solve kinematic redundancy are arranged in two main categories, namely those based on the optimization of suitable performance criteria and those relying on the augmentation of the task space. Redundancy resolution methods at the acceleration (second-order differential) level are then considered in order to take into account dynamics issues, e.g., torque minimization. Conditions under which a cyclic task motion results in a cyclic joint motion are also discussed; this is a major issue when a redundant manipulator is used to execute a repetitive task, e.g., in industrial applications. The use of kinematic redundancy for fault tolerance is analyzed in detail. Suggestions for further reading are given in a final section.
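A minimal sketch of the kind of first-order redundancy resolution summarized above, assuming a damped least-squares (singularity-robust) pseudoinverse and a null-space projection of the gradient of a secondary criterion; the toy Jacobian and cost gradient are arbitrary, and this is not code from the chapter.

```python
import numpy as np

def damped_pinv(J, damping=0.01):
    """Damped least-squares (singularity-robust) pseudoinverse of the Jacobian."""
    m = J.shape[0]
    return J.T @ np.linalg.inv(J @ J.T + damping**2 * np.eye(m))

def redundancy_resolution(J, xdot, grad_H, gain=1.0, damping=0.01):
    """First-order scheme: qdot = J# xdot + (I - J# J) qdot0, where
    qdot0 = -gain * grad_H descends a secondary cost H(q) (e.g., distance
    from joint limits) in the null space of the primary task."""
    Jp = damped_pinv(J, damping)
    N = np.eye(J.shape[1]) - Jp @ J        # (approximate) null-space projector
    return Jp @ xdot + N @ (-gain * grad_H)

# Toy example: a 2-D task for a 4-DOF arm with a random Jacobian.
rng = np.random.default_rng(1)
J = rng.normal(size=(2, 4))
xdot = np.array([0.1, -0.05])              # desired task-space velocity
grad_H = rng.normal(size=4)                # gradient of a secondary criterion
qdot = redundancy_resolution(J, xdot, grad_H)
print("residual task-space velocity error:", J @ qdot - xdot)
```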

FlexIRob - Teaching null-space constraints in physical human-robot interaction

Author  AMARSi Consortium

Video ID : 818

The video presents an approach that combines the physical interaction capabilities of compliant robots with data-driven, model-free learning in a coherent system, in order to make fast reconfiguration of redundant robots feasible. Users with no particular robotics knowledge can teach such constraints in physical interaction with the compliant robot, for example, to reconfigure a work cell after changes in the environment. For fast and efficient learning of the respective null-space constraints, a reservoir neural network is employed. It is embedded in the motion controller of the system, thus allowing the execution of arbitrary motions in task space. We describe the training, exploration, and control architecture of the system and present an evaluation on the KUKA Light-Weight Robot (LWR). The results show that the learned model solves the redundancy resolution problem under the given constraints with sufficient accuracy and generalizes to generate valid joint-space trajectories even in untrained areas of the workspace.
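The following fragment is only a loose, hypothetical sketch of the learning component: a minimal reservoir (echo-state) regressor whose fixed random recurrent weights are left untrained and whose linear readout is fitted by ridge regression to map end-effector positions to demonstrated joint postures. The training data, network sizes, and all parameters are invented; the actual FlexIRob architecture and its integration into the LWR controller are considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training data recorded during kinesthetic teaching:
# end-effector positions X (3-D) and demonstrated joint postures Q (7-D).
X = rng.uniform(-0.5, 0.5, size=(200, 3))
Q = np.tanh(X @ rng.normal(size=(3, 7)))          # stand-in for recorded postures

# Minimal echo-state / reservoir regressor: input and recurrent weights stay
# fixed and random; only the linear readout is trained (ridge regression).
n_res = 100
W_in = rng.uniform(-1.0, 1.0, size=(n_res, 3))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def reservoir_states(inputs, leak=0.3):
    h = np.zeros(n_res)
    states = []
    for x in inputs:
        h = (1 - leak) * h + leak * np.tanh(W_in @ x + W @ h)
        states.append(h.copy())
    return np.array(states)

H = reservoir_states(X)
ridge = 1e-3
W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ Q)   # readout

# At run time, the predicted posture for the current target position would be
# fed into the null-space term of a redundancy-resolution controller.
q_pred = reservoir_states(X[:1]) @ W_out
print("predicted posture for the first training point:", np.round(q_pred[0], 3))
```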

Chapter 37 — Contact Modeling and Manipulation

Imin Kao, Kevin M. Lynch and Joel W. Burdick

Robotic manipulators use contact forces to grasp and manipulate objects in their environments. Fixtures rely on contacts to immobilize workpieces. Mobile robots and humanoids use wheels or feet to generate the contact forces that allow them to locomote. Modeling of the contact interface, therefore, is fundamental to analysis, design, planning, and control of many robotic tasks.

This chapter presents an overview of the modeling of contact interfaces, with a particular focus on their use in manipulation tasks, including graspless or nonprehensile manipulation modes such as pushing. Analysis and design of grasps and fixtures also depend on contact modeling, and these are discussed in more detail in Chap. 38. Sections 37.2–37.5 focus on rigid-body models of contact. Section 37.2 describes the kinematic constraints caused by contact, and Sect. 37.3 describes the contact forces that may arise with Coulomb friction. Section 37.4 provides examples of analysis of multicontact manipulation tasks with rigid bodies and Coulomb friction. Section 37.5 extends the analysis to manipulation by pushing. Section 37.6 introduces the modeling of soft contact interfaces, kinematic duality, and the pressure distribution at the contact interface. Section 37.7 describes the concept of the friction limit surface and illustrates it with an example demonstrating the construction of a limit surface for a soft contact. Finally, Sect. 37.8 discusses how these more accurate models can be used in fixture analysis and design.
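As a small worked example of the Coulomb friction model referred to above, the following sketch checks whether a given contact force lies inside the friction cone of a point contact with friction; it is a generic illustration, not code from the chapter.

```python
import numpy as np

def in_friction_cone(f, normal, mu):
    """Check whether contact force f (3-vector) satisfies Coulomb's law at a
    point contact: tangential magnitude <= mu * normal component."""
    normal = normal / np.linalg.norm(normal)
    fn = f @ normal                        # normal component (must push, not pull)
    ft = np.linalg.norm(f - fn * normal)   # tangential component
    return fn >= 0 and ft <= mu * fn

# Example: forces pushing into a horizontal surface at different angles.
n = np.array([0.0, 0.0, 1.0])
print(in_friction_cone(np.array([0.3, 0.0, 1.0]), n, mu=0.5))  # True: inside the cone
print(in_friction_cone(np.array([0.8, 0.0, 1.0]), n, mu=0.5))  # False: would slip
```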

Programmable velocity vector fields by 6-DOF vibration

Authors  Tom Vose, Matt Turpin, Philip Dames, Paul Umbanhowar, Kevin M. Lynch

Video ID : 804

This video generalizes the idea of transporting parts using horizontal and vertical vibration, shown in the previous video and illustrated in Fig. 37.9 in Sect. 37.4.3 of the Springer Handbook of Robotics, 2nd edn (2016). In this video, a rigid supporting plate is vibrated with an arbitrary periodic 6-DOF motion profile. This periodic vibration enables control of the normal forces and horizontal plate velocities as a function of position on the plate, effectively creating programmable velocity vector fields induced by friction. The video demonstrates five such velocity fields in sequence, each created by a different periodic vibration of the plate.
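For intuition about friction-induced transport, here is a toy 1-D simulation (not related to the authors' 6-DOF system) of a part resting on a horizontally vibrating plate whose asymmetric, zero-mean velocity profile produces a net drift of the part through Coulomb friction; all parameters are made up.

```python
import numpy as np

# Toy 1-D simulation of friction-driven part transport on a horizontally
# vibrating plate (the simpler case referenced above, not the 6-DOF system).
mu, g, dt, T = 0.3, 9.81, 1e-4, 2.0
period = 0.05                        # sawtooth plate motion: slow forward, fast back
t = np.arange(0.0, T, dt)

def plate_velocity(time):
    phase = (time % period) / period
    return np.where(phase < 0.8, 0.05, -0.20)   # mean plate velocity is zero

v_part, x_part = 0.0, 0.0
for ti in t:
    v_plate = float(plate_velocity(ti))
    v_rel = v_part - v_plate
    if abs(v_rel) < 1e-3:
        v_part = v_plate                          # sticking (plate accel. is zero here)
    else:
        v_part += -np.sign(v_rel) * mu * g * dt   # kinetic Coulomb friction
    x_part += v_part * dt

print(f"net part displacement after {T} s: {x_part * 1000:.1f} mm")
```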

Chapter 7 — Motion Planning

Lydia E. Kavraki and Steven M. LaValle

This chapter first provides a formulation of the geometric path planning problem in Sect. 7.2 and then introduces sampling-based planning in Sect. 7.3. Sampling-based planners are general techniques applicable to a wide set of problems and have been successful in dealing with hard planning instances. For specific, often simpler, planning instances, alternative approaches exist and are presented in Sect. 7.4. These approaches provide theoretical guarantees, and for simple planning instances they outperform sampling-based planners. Section 7.5 considers problems that involve differential constraints, while Sect. 7.6 overviews several other extensions of the basic problem formulation and proposed solutions. Finally, Sect. 7.8 addresses some important and more advanced topics related to motion planning.
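As a generic illustration of a sampling-based planner of the kind surveyed here, the following sketch implements a basic probabilistic roadmap (PRM) for a point robot among disc obstacles: sample free configurations, connect nearest neighbors with a straight-line local planner, and answer a query with Dijkstra's algorithm. The environment and parameters are invented, and the sketch assumes the roadmap connects start and goal.

```python
import numpy as np
from heapq import heappush, heappop

rng = np.random.default_rng(3)
obstacles = [((0.5, 0.5), 0.2), ((0.25, 0.75), 0.1)]      # disc obstacles (center, radius)

def collision_free(p, q=None, steps=20):
    pts = [np.asarray(p)] if q is None else np.linspace(p, q, steps)
    return all(np.linalg.norm(x - np.asarray(c)) > r for x in pts for c, r in obstacles)

# 1. Sample collision-free configurations (start and goal become nodes 0 and 1).
start, goal = np.array([0.05, 0.05]), np.array([0.95, 0.95])
nodes = [start, goal] + [s for s in rng.uniform(0, 1, size=(200, 2)) if collision_free(s)]

# 2. Connect each node to its k nearest neighbors with a straight-line local planner.
k = 10
edges = {i: [] for i in range(len(nodes))}
for i, p in enumerate(nodes):
    dists = [float(np.linalg.norm(p - q)) for q in nodes]
    for j in np.argsort(dists)[1:k + 1]:
        j = int(j)
        if collision_free(p, nodes[j]):
            edges[i].append((j, dists[j]))
            edges[j].append((i, dists[j]))

# 3. Query the roadmap: Dijkstra search from start (0) to goal (1).
dist, prev, queue = {0: 0.0}, {}, [(0.0, 0)]
while queue:
    d, u = heappop(queue)
    if u == 1:
        break
    for v, w in edges[u]:
        if d + w < dist.get(v, np.inf):
            dist[v], prev[v] = d + w, u
            heappush(queue, (d + w, v))

path, u = [nodes[1]], 1
while u != 0:                        # assumes start and goal are connected
    u = prev[u]
    path.append(nodes[u])
print("solution with", len(path), "waypoints, cost", round(dist[1], 3))
```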

Kinodynamic motion planning for a car-like robot

Author  Caleb Voss

Video ID : 24

In this video, the objective of the car is to reach a goal location by jumping over a ramp and pushing a block out of the way. This problem requires kinodynamic motion planning for a car-like robot using a physics simulator. This video was generated using the software tools OMPL, Blender, and MORSE.
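Leaving aside the physics simulation, the ramp, and the OMPL/Blender/MORSE tool chain used in the video, the core kinodynamic-RRT loop can be sketched as follows: sample a state, find the nearest tree node, forward-simulate a few sampled controls, and keep the extension that gets closest. The car model, bounds, and parameters below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, sim_steps = 0.1, 10
goal = np.array([4.0, 4.0])

def step(state, control):
    """Simple kinematic car: state = (x, y, heading), control = (speed, turn rate)."""
    x, y, th = state
    v, w = control
    for _ in range(sim_steps):                # forward-simulate the control
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
    return np.array([x, y, th])

# Kinodynamic RRT: grow a tree by sampling states, extending the nearest node
# with several sampled controls, and keeping the extension closest to the sample.
nodes, parents = [np.zeros(3)], [None]
for _ in range(2000):
    target = goal if rng.random() < 0.1 else rng.uniform([-1, -1, -np.pi], [5, 5, np.pi])
    near = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i][:2] - target[:2]))
    controls = rng.uniform([-1.0, -0.5], [1.0, 0.5], size=(5, 2))
    new = min((step(nodes[near], u) for u in controls),
              key=lambda s: np.linalg.norm(s[:2] - target[:2]))
    nodes.append(new)
    parents.append(near)
    if np.linalg.norm(new[:2] - goal) < 0.3:
        print("goal reached after", len(nodes), "tree nodes")
        break
else:
    print("goal not reached within the sample budget")
```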

Chapter 41 — Active Manipulation for Perception

Anna Petrovskaya and Kaijen Hsiao

This chapter covers perceptual methods in which manipulation is an integral part of perception. These methods face special challenges due to data sparsity and high costs of sensing actions. However, they can also succeed where other perceptual methods fail, for example, in poor-visibility conditions or for learning the physical properties of a scene.

The chapter focuses on specialized methods that have been developed for object localization, inference, planning, recognition, and modeling in active manipulation approaches. We conclude with a discussion of real-life applications and directions for future research.

Tactile exploration and modeling using shape primitives

Author  Francesco Mazzini

Video ID : 76

This video shows a robot performing tactile exploration and modeling of a lab-constructed scene that was designed to be similar to those found in interventions for underwater oil spills (leaking pipe). Representing the scene with geometric primitives enables the surface to be described using only sparse tactile data from joint encoders. The robot's movements are chosen to maximize the expected increase in knowledge about the scene.
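To illustrate how a geometric primitive can be fitted to sparse contact data of this kind, here is a small, hypothetical sketch that fits a plane to a handful of simulated contact points by least squares and reports the residuals; it is not the system shown in the video.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through sparse contact points: returns (centroid, unit normal).
    The normal is the right singular vector with the smallest singular value."""
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - centroid)
    return centroid, Vt[-1]

# Hypothetical contact points (obtained via joint encoders and forward kinematics)
# on one face of a box-like object in the scene, with a little sensor noise.
rng = np.random.default_rng(5)
xy = rng.uniform(0, 0.2, size=(6, 2))
contacts = np.column_stack([xy, 0.05 + 0.001 * rng.normal(size=6)])   # z ~ 0.05 plane

c, n = fit_plane(contacts)
print("plane point:", np.round(c, 3), " normal:", np.round(n, 3))

# Residuals indicate how well the primitive explains the sparse data; a large
# residual would suggest probing again or switching to a different primitive.
residuals = (contacts - c) @ n
print("max residual [m]:", float(np.abs(residuals).max()))
```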