Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these novel robots have opened up new and unforeseen application domains and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the required perception abilities for interaction, are introduced. Subsequently, motion-planning techniques for human environments, including the domains of biomechanically safe, risk-metric-based, and human-aware planning, are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.

An assistive decision-and-control architecture for force-sensitive hand-arm systems driven via human-machine interfaces

Author  Jörn Vogel, Sami Haddadin, John D. Simeral, Daniel Bacher, Beata Jarosiewicz, Leigh R. Hochberg, John P. Donoghue, Patrick van der Smagt

Video ID : 620

This video shows a 2-D pick and place of an object using the BrainGate2 neural interface. The robot is controlled through a multipriority Cartesian impedance controller, and its behavior is extended with collision detection and reflex reaction. Furthermore, virtual workspaces are added to ensure safety. On top of this, a decision-and-control architecture is employed, which uses sensory information available from the robotic system to evaluate the current state of task execution.
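To make the control concepts mentioned above concrete, the following is a minimal Python sketch of a Cartesian impedance law combined with a simple collision reflex. It is not the controller used in the video; all function names, signatures, and the threshold logic are illustrative assumptions.

```python
import numpy as np

def cartesian_impedance_torque(J, x, x_des, x_dot, K, D, tau_gravity):
    """Basic Cartesian impedance law: tau = J^T (K (x_des - x) - D x_dot) + g(q).
    J is the task Jacobian, K and D the virtual stiffness and damping matrices."""
    f_task = K @ (x_des - x) - D @ x_dot       # virtual spring-damper wrench
    return J.T @ f_task + tau_gravity          # map to joint torques, add gravity compensation

def collision_reflex(tau_ext_hat, threshold, tau_cmd, tau_gravity):
    """Toy reflex: if the estimated external joint torque exceeds a threshold,
    fall back to pure gravity compensation (a compliant stop)."""
    if np.linalg.norm(tau_ext_hat) > threshold:
        return tau_gravity
    return tau_cmd
```

The intuition is that contact deflects the virtual spring-damper instead of being rejected by a stiff position loop, and a detected collision switches the arm to a compliant fallback behavior.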

Chapter 21 — Actuators for Soft Robotics

Alin Albu-Schäffer and Antonio Bicchi

Although we do not yet know exactly what robots of the future will look like, most of us are sure that they will not resemble the heavy, bulky, rigid machines dangerously moving around in old-fashioned industrial automation. There is a growing consensus, in the research community as well as in public expectations, that robots of the next generation will be physically compliant and adaptable machines, closely interacting with humans and moving safely, smoothly, and efficiently - in other words, robots will be soft.

This chapter discusses the design, modeling, and control of actuators for the new generation of soft robots, which can replace conventional actuators in applications where rigidity is not the first and foremost concern in performance. The chapter focuses on the technology, modeling, and control of lumped-parameter soft robotics, that is, systems of discrete, interconnected, compliant elements. Distributed-parameter, snake-like, and continuum soft robotics are presented in Chap. 20, while Chap. 23 discusses in detail the biomimetic motivations that are often behind soft robotics.

Variable impedance actuators: Moving the robots of tomorrow

Author  B. Vanderborght, A. Albu-Schäffer, A. Bicchi, E. Burdet, D. Caldwell, R. Carloni, M. Catalano, Ganesh, Garabini, Grebenstein, Grioli, Haddadin, Jafari, Laffranchi, Lefeber, Petit, Stramigioli, Tsagarakis, Van Damme, Van Ham, Visser, Wolf

Video ID : 456

Most of today's robots have rigid structures and actuators, requiring complex software control algorithms and sophisticated sensor systems in order to behave in a compliant and safe way adapted to contact with unknown environments and humans. By studying and constructing variable impedance actuators and their control, we contribute to the development of actuation units that can match the intrinsic safety, motion performance, and energy efficiency of biological systems and, in particular, of humans. As such, this may lead to a new generation of robots that can co-exist and co-operate with people and come closer to human manipulation and locomotion performance than is possible with current robots.
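As an illustration of the underlying principle, the following is a minimal sketch of a series-elastic element with an adjustable stiffness command. It is not the model of any particular actuator shown in the video, and the numerical stiffness range is an arbitrary assumption.

```python
import numpy as np

def sea_torque(theta_motor, q_link, stiffness):
    """Torque transmitted through a series-elastic element: tau = k (theta - q)."""
    return stiffness * (theta_motor - q_link)

def blended_stiffness(setting, k_soft=50.0, k_stiff=800.0):
    """Illustrative variable-stiffness command: a setting in [0, 1] blends between
    a soft and a stiff preset (the N m/rad values are arbitrary assumptions)."""
    s = np.clip(setting, 0.0, 1.0)
    return k_soft + s * (k_stiff - k_soft)
```

A variable impedance actuator exposes this stiffness (and possibly damping) as an additional control input, so the mechanical compliance itself, rather than only the controller gains, can be adapted to the task.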

Chapter 65 — Domestic Robotics

Erwin Prassler, Mario E. Munich, Paolo Pirjanian and Kazuhiro Kosuge

When the first edition of this book was published, domestic robots were spoken of as a dream that was slowly becoming reality. At that time, in 2008, we looked back on more than twenty years of research and development in domestic robotics, especially in cleaning robotics. Although everybody expected cleaning to be the killer app for domestic robotics, in the first half of these twenty years nothing big really happened. About ten years before the first edition of this book appeared, all of a sudden things started moving. Several small, but also some larger, enterprises announced that they would soon launch domestic cleaning robots. The robotics community was anxiously awaiting these first cleaning robots, and so were consumers. The big burst, however, was yet to come. The price tag of those cleaning robots was far beyond what people were willing to pay for a vacuum cleaner. It took another four years until, in 2002, a small and inexpensive device, which was not even called a cleaning robot, brought the first breakthrough: Roomba. Sales of the Roomba quickly passed the first million robots and increased rapidly. While for the first years after Roomba's release the big players remained on the sidelines, possibly to revise their own designs and, in particular, their business models and price tags, some other small players followed quickly and came out with their own products. We reported on these devices and their creators in the first edition. Since then, the momentum in the field of domestic robotics has steadily increased. Nowadays most big appliance manufacturers have domestic cleaning robots in their portfolio. We are not only seeing more and more domestic cleaning robots and lawn mowers on the market, but we are also seeing new types of domestic robots: window cleaners, plant-watering robots, telepresence robots, domestic surveillance robots, and robotic sports devices. Some of these new types of domestic robots are still prototypes or concept studies. Others have already crossed the threshold to becoming commercial products.

For the second edition of this chapter, we have decided not only to enumerate the devices that have emerged and survived in the past five years, but also to take a look back at how it all began, contrasting this retrospection with the burst of progress in the past five years in domestic cleaning robotics. We will not describe and discuss in detail every single cleaning robot that has seen the light of day, but select those that are representative of the evolution of the technology as well as the market. We will also reserve some space for new types of mobile domestic robots, which will be the success stories or failures for the next edition of this chapter. Further, we will look into nonmobile domestic robots, also called smart appliances, and examine their fate. Last but not least, we will look at the recent developments in the area of intelligent homes that surround and, at times, also control the mobile domestic robots and smart appliances described in the preceding sections.

Windoro window-cleaning robot review

Author  Erwin Prassler

Video ID : 734

This video reviews the performance of the Windoro robotic window cleaner.

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs including material and fabrication.

Salamandra robotica II robot walking and swimming

Author  Alessandro Crespi, Konstantinos Karakasiliotis, André Guignard, Auke Jan Ijspeert

Video ID : 395

Salamandra robotica II walking and swimming outdoors and performing the transition from swimming to walking indoors. The transitions between the two modes of locomotion, and the locomotion patterns themselves, are generated by a central pattern generator (CPG) and a simulated mesencephalic locomotor region (MLR). Video from the École Polytechnique Fédérale de Lausanne Biorobotics Lab.
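A common abstraction of such a CPG is a chain of coupled phase oscillators whose frequency and amplitude are modulated by a drive signal (the MLR analogue). The sketch below illustrates only this general idea; it is not the exact Salamandra robotica II model, and all parameter values are assumptions.

```python
import numpy as np

def cpg_step(phases, amplitudes, freq, coupling, phase_lag, dt=0.01):
    """One Euler step of a chain of coupled phase oscillators (a simple CPG
    abstraction). Returns updated phases and oscillatory joint set-points."""
    n = len(phases)
    dphi = np.full(n, 2.0 * np.pi * freq)                 # intrinsic frequency
    for i in range(n):
        for j in (i - 1, i + 1):                          # nearest-neighbour coupling
            if 0 <= j < n:
                dphi[i] += coupling * amplitudes[j] * np.sin(
                    phases[j] - phases[i] - phase_lag * (i - j))
    phases = phases + dt * dphi
    return phases, amplitudes * np.cos(phases)            # joint set-points

# Example: 8 oscillators driven at 1 Hz (values are made up)
phases, amps = np.zeros(8), np.ones(8)
for _ in range(1000):
    phases, joints = cpg_step(phases, amps, freq=1.0, coupling=2.0, phase_lag=0.4)
```

Changing the drive (frequency, amplitude, phase lag) shifts the output pattern, which is the mechanism a gait transition such as swimming-to-walking exploits.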

Chapter 36 — Motion for Manipulation Tasks

James Kuffner and Jing Xiao

This chapter serves as an introduction to Part D by giving an overview of motion generation and control strategies in the context of robotic manipulation tasks. Automatic control ranging from the abstract, high-level task specification down to fine-grained feedback at the task interface is considered. Some of the important issues include modeling of the interfaces between the robot and the environment at the different time scales of motion and incorporating sensing and feedback. Manipulation planning is introduced as an extension to the basic motion planning problem, which can be modeled as a hybrid system of continuous configuration spaces arising from the act of grasping and moving parts in the environment. The important example of assembly motion is discussed through the analysis of contact states and compliant motion control. Finally, methods aimed at integrating global planning with state feedback control are summarized.

Autonomous continuum grasping

Author  Jing Xiao et al.

Video ID : 357

The video shows three example tasks: (1) autonomous grasping and lifting operation of an object, (2) autonomous obstacle avoidance operation, and (3) autonomous operation of grasping and lifting an object while avoiding another object. Note that the grasped object was lifted about 2 inches off the table.

Chapter 35 — Multisensor Data Fusion

Hugh Durrant-Whyte and Thomas C. Henderson

Multisensor data fusion is the process of combining observations from a number of different sensors to provide a robust and complete description of an environment or process of interest. Data fusion finds wide application in many areas of robotics such as object recognition, environment mapping, and localization.

This chapter has three parts: methods, architectures, and applications. Most current data fusion methods employ probabilistic descriptions of observations and processes and use Bayes’ rule to combine this information. This chapter surveys the main probabilistic modeling and fusion techniques including grid-based models, Kalman filtering, and sequential Monte Carlo techniques. This chapter also briefly reviews a number of nonprobabilistic data fusion methods. Data fusion systems are often complex combinations of sensor devices, processing, and fusion algorithms. This chapter provides an overview of key principles in data fusion architectures from both a hardware and algorithmic viewpoint. The applications of data fusion are pervasive in robotics and underlie the core problem of sensing, estimation, and perception. We highlight two example applications that bring out these features. The first describes a navigation or self-tracking application for an autonomous vehicle. The second describes an application in mapping and environment modeling.

The essential algorithmic tools of data fusion are reasonably well established. However, the development and use of these tools in realistic robotics applications is still evolving.
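As a concrete illustration of the probabilistic fusion approach described above, the following sketch fuses two independent Gaussian estimates of the same scalar quantity, which is the static special case of the Kalman measurement update; the numbers in the example line are made up.

```python
def fuse_gaussian(mean_a, var_a, mean_b, var_b):
    """Fuse two independent Gaussian estimates of the same scalar quantity
    (the static special case of the Kalman measurement update)."""
    k = var_a / (var_a + var_b)          # weight given to observation b
    mean = mean_a + k * (mean_b - mean_a)
    var = (1.0 - k) * var_a              # equals var_a * var_b / (var_a + var_b)
    return mean, var

# Example: fusing two range measurements of the same target (values are made up)
print(fuse_gaussian(10.2, 0.5 ** 2, 9.8, 0.3 ** 2))
```

The fused estimate always has lower variance than either observation alone, which is the basic reason multisensor fusion improves robustness and completeness.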

AnnieWay

Author  Thomas C. Henderson

Video ID : 132

This video shows the multisensor autonomous vehicle merging into traffic.

Chapter 10 — Redundant Robots

Stefano Chiaverini, Giuseppe Oriolo and Anthony A. Maciejewski

This chapter focuses on redundancy resolution schemes, i.e., the techniques for exploiting the redundant degrees of freedom in the solution of the inverse kinematics problem. This is obviously an issue of major relevance for motion planning and control purposes.

In particular, task-oriented kinematics and the basic methods for its inversion at the velocity (first-order differential) level are first recalled, with a discussion of the main techniques for handling kinematic singularities. Next, different first-order methods to solve kinematic redundancy are arranged in two main categories, namely those based on the optimization of suitable performance criteria and those relying on the augmentation of the task space. Redundancy resolution methods at the acceleration (second-order differential) level are then considered in order to take into account dynamics issues, e.g., torque minimization. Conditions under which a cyclic task motion results in a cyclic joint motion are also discussed; this is a major issue when a redundant manipulator is used to execute a repetitive task, e.g., in industrial applications. The use of kinematic redundancy for fault tolerance is analyzed in detail. Suggestions for further reading are given in a final section.
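As an illustration of velocity-level redundancy resolution, the sketch below combines a damped least-squares pseudoinverse with a null-space projection of a secondary joint-velocity objective; the variable names and the damping value are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def resolve_redundancy(J, x_dot_task, q_dot_secondary, damping=0.01):
    """Velocity-level redundancy resolution:
        q_dot = J# x_dot + (I - J# J) q_dot_0,
    using a damped least-squares pseudoinverse J# to behave well near singularities."""
    m, n = J.shape
    J_pinv = J.T @ np.linalg.inv(J @ J.T + (damping ** 2) * np.eye(m))
    q_dot = J_pinv @ x_dot_task                    # minimum-norm task solution
    N = np.eye(n) - J_pinv @ J                     # null-space projector
    return q_dot + N @ q_dot_secondary             # add self-motion
```

The second term is the self-motion: it changes the joint configuration (e.g., to optimize a performance criterion) without affecting the task-space velocity.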

KUKA LBR iiwa - Kinematic Redundancy

Author  KUKA Roboter GmbH

Video ID : 813

The video shows the robot dexterity achieved by kinematic redundancy and illustrates the basic concept of self-motion (here called null-space motion).

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft


Dancing with Juliet

Author  Oussama Khatib, Kyong-Sok Chang, Oliver Brock, Kazuhito Yokoi, Arancha Casal, Robert Holmberg

Video ID : 820

This video presents experiments in human-robot interaction using the Stanford Mobile Manipulator platforms. Each platform consists of a Puma 560 manipulator mounted on a holonomic mobile base. The experiments shown in this video result from the implementation of various methodologies developed to establish the basic autonomous capabilities needed for robot operation in human environments. The integration of mobility and manipulation is based on a task-oriented control strategy that provides the user with two basic control primitives: end-effector task control and platform self-posture control.
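The split into an end-effector task and a posture behavior can be illustrated, in the spirit of operational-space control, by projecting a posture torque into the dynamically consistent null space of the task. The sketch below is a schematic illustration under that assumption, not the controller used on the Stanford platforms.

```python
import numpy as np

def task_and_posture_torque(J, f_task, tau_posture, M):
    """Schematic operational-space-style decomposition: task forces act through J^T,
    while a posture torque is projected into the dynamically consistent null space
    of the task so that it does not disturb the end-effector."""
    M_inv = np.linalg.inv(M)
    Lambda = np.linalg.inv(J @ M_inv @ J.T)    # task-space inertia
    J_bar = M_inv @ J.T @ Lambda               # dynamically consistent generalized inverse
    N = np.eye(M.shape[0]) - J.T @ J_bar.T     # torque-level null-space projector
    return J.T @ f_task + N @ tau_posture
```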

Chapter 13 — Behavior-Based Systems

François Michaud and Monica Nicolescu

Nature is filled with examples of autonomous creatures capable of dealing with the diversity, unpredictability, and rapidly changing conditions of the real world. Such creatures must make decisions and take actions based on incomplete perception, time constraints, limited knowledge about the world, cognition, reasoning and physical capabilities, in uncontrolled conditions and with very limited cues about the intent of others. Consequently, one way of evaluating intelligence is based on the creature’s ability to make the most of what it has available to handle the complexities of the real world. The main objective of this chapter is to explain behavior-based systems and their use in autonomous control problems and applications. The chapter is organized as follows. Section 13.1 overviews robot control, introducing behavior-based systems in relation to other established approaches to robot control. Section 13.2 follows by outlining the basic principles of behavior-based systems that make them distinct from other types of robot control architectures. The concept of basis behaviors, the means of modularizing behavior-based systems, is presented in Sect. 13.3. Section 13.4 describes how behaviors are used as building blocks for creating representations for use by behavior-based systems, enabling the robot to reason about the world and about itself in that world. Section 13.5 presents several different classes of learning methods for behavior-based systems, validated on single-robot and multirobot systems. Section 13.6 provides an overview of various robotics problems and application domains that have successfully been addressed or are currently being studied with behavior-based control. Finally, Sect. 13.7 concludes the chapter.

Experience-based learning of high-level task representations: Reproduction

Author  Monica Nicolescu

Video ID : 28

This video, recorded in the early 2000s, shows a Pioneer robot visiting a number of targets in a certain order, based on a demonstration provided by a human user. The robot training stage is also shown in a related video in this chapter. References: 1. M. Nicolescu, M.J. Mataric: Experience-based learning of task representations from human-robot interaction, Proc. IEEE Int. Symp. Comput. Intell. Robot. Autom., Banff (2001), pp. 463-468; 2. M. Nicolescu, M.J. Mataric: Learning and interacting in human-robot domains, IEEE Trans. Syst. Man Cybern. A 31(5), 419-430 (2001)

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Development of a versatile underwater robot - GTS ROV ALPHA

Author  Georgia Tech Savannah Robotics

Video ID : 790

This underwater vehicle won the award for design elegance at the 2009 MATE International ROV competition. In November 2009, it was deployed from the R/V Savannah for an initial sea trial. In the future, it is intended to serve as a platform for underwater manipulation, mapping, and control experiments.