
Chapter 4 — Mechanism and Actuation

Victor Scheinman, J. Michael McCarthy and Jae-Bok Song

This chapter focuses on the principles that guide the design and construction of robotic systems. The kinematics equations and Jacobian of the robot characterize its range of motion and mechanical advantage, and guide the selection of its size and joint arrangement. The tasks a robot is to perform and the associated precision of its movement determine detailed features such as mechanical structure, transmission, and actuator selection. Here we discuss in detail both the mathematical tools and practical considerations that guide the design of mechanisms and actuation for a robot system.

Section 4.1 discusses characteristics of the mechanisms and actuation that affect the performance of a robot. Sections 4.2–4.6 discuss the basic features of a robot manipulator and their relationship to the mathematical model used to characterize its performance. Sections 4.7 and 4.8 focus on the details of the structure and actuation of the robot and how they combine to yield various types of robots. Finally, Sect. 4.9 relates these design features to various performance metrics.
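As a minimal illustration of how the kinematics equations and Jacobian quantify range of motion and mechanical advantage, the sketch below computes the forward kinematics, Jacobian, and Yoshikawa manipulability measure of a planar 2R arm in Python/NumPy; the link lengths and joint angles are illustrative values, not taken from the chapter.

    import numpy as np

    def fk_2r(q, l1=0.5, l2=0.4):
        # Forward kinematics of a planar 2R arm (link lengths are illustrative).
        x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
        y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
        return np.array([x, y])

    def jacobian_2r(q, l1=0.5, l2=0.4):
        # Geometric Jacobian relating joint rates to end-effector velocity.
        s1, c1 = np.sin(q[0]), np.cos(q[0])
        s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
        return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                         [ l1 * c1 + l2 * c12,  l2 * c12]])

    q = np.array([0.3, 1.1])                 # example joint angles [rad]
    J = jacobian_2r(q)
    w = np.sqrt(np.linalg.det(J @ J.T))      # Yoshikawa manipulability measure
    print(fk_2r(q), w)

The manipulability w goes to zero at singular configurations (e.g., a fully stretched arm), which is one way the Jacobian guides the choice of link lengths and joint arrangement.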

Raytheon Sarcos exoskeleton

Author  Sarcos

Video ID : 646

Fig. 4.22b Applications of hydraulic actuators to robots: the Sarcos exoskeleton (Raytheon).

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite challenging. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach in which the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Region-pointing gesture

Author  Takayuki Kanda

Video ID : 811

This short video explains what region pointing is. While it is known that there is a variety of pointing gestures, region pointing differs from other pointing gestures in which the pointing arm is held fixed: here the arm moves as if tracing a circle, which evokes the region being referred to.

Chapter 9 — Force Control

Luigi Villani and Joris De Schutter

A fundamental requirement for the success of a manipulation task is the capability to handle the physical contact between a robot and the environment. Pure motion control turns out to be inadequate because the unavoidable modeling errors and uncertainties may cause a rise of the contact force, ultimately leading to an unstable behavior during the interaction, especially in the presence of rigid environments. Force feedback and force control become mandatory to achieve a robust and versatile behavior of a robotic system in poorly structured environments as well as safe and dependable operation in the presence of humans. This chapter starts from the analysis of indirect force control strategies, conceived to keep the contact forces limited by ensuring a suitably compliant behavior of the end effector, without requiring an accurate model of the environment. Then the problem of modeling interaction tasks is analyzed, considering both the case of a rigid environment and the case of a compliant environment. For the specification of an interaction task, natural constraints set by the task geometry and artificial constraints set by the control strategy are established, with respect to suitable task frames. This formulation is the essential premise for the synthesis of hybrid force/motion control schemes.
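To make the idea of indirect force control concrete, here is a minimal one-dimensional sketch in Python (the impedance gains, environment stiffness, and integration scheme are illustrative assumptions, not values from the chapter): the reference position is commanded slightly past a stiff surface, and the contact force settles near the commanded stiffness times the reference penetration (about 8 N here) rather than the much larger force a stiff position controller would produce.

    # 1-D impedance (indirect force) control against a stiff surface; all values illustrative
    m, d, k = 1.0, 40.0, 400.0        # desired impedance: mass, damping, stiffness
    k_env, x_env = 5.0e4, 0.10        # environment stiffness [N/m] and surface location [m]
    x_d = 0.12                        # reference position lies 2 cm inside the surface

    x, v, dt = 0.0, 0.0, 1.0e-3
    for _ in range(3000):
        f_ext = -k_env * (x - x_env) if x > x_env else 0.0   # contact force acting on the robot
        # imposed behavior: m*a + d*v + k*(x - x_d) = f_ext
        a = (f_ext - d * v - k * (x - x_d)) / m
        v += a * dt                    # semi-implicit Euler step
        x += v * dt

    # steady-state force on the surface is roughly k*(x_d - x_env) = 8 N,
    # not k_env*(x_d - x_env) = 1000 N as pure position control would command
    print(x, k_env * (x - x_env))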

Integration of force strategies and natural-admittance control

Author  Brian B. Mathewson, Wyatt S. Newman

Video ID : 685

When mating parts are brought together, small misalignments must be accommodated by responding to contact forces. Using force feedback, a robot may sense contact forces during assembly and invoke a response to guide the parts into their correct mating positions. The proposed approach integrates force-guided strategies into Hogan's impedance control. Stability of both geometric convergence and contact dynamics is achieved. Geometric convergence is accomplished more reliably than through the use of impedance control alone, and convergence is achieved more rapidly than through the use of force-guided strategies alone. This work was published in the ICRA 1995 video proceedings.

Chapter 10 — Redundant Robots

Stefano Chiaverini, Giuseppe Oriolo and Anthony A. Maciejewski

This chapter focuses on redundancy resolution schemes, i.e., the techniques for exploiting the redundant degrees of freedom in the solution of the inverse kinematics problem. This is obviously an issue of major relevance for motion planning and control purposes.

In particular, task-oriented kinematics and the basic methods for its inversion at the velocity (first-order differential) level are first recalled, with a discussion of the main techniques for handling kinematic singularities. Next, different first-order methods to solve kinematic redundancy are arranged in two main categories, namely those based on the optimization of suitable performance criteria and those relying on the augmentation of the task space. Redundancy resolution methods at the acceleration (second-order differential) level are then considered in order to take into account dynamics issues, e.g., torque minimization. Conditions under which a cyclic task motion results in a cyclic joint motion are also discussed; this is a major issue when a redundant manipulator is used to execute a repetitive task, e.g., in industrial applications. The use of kinematic redundancy for fault tolerance is analyzed in detail. Suggestions for further reading are given in a final section.
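At the velocity level, the classical resolution takes the form qdot = pinv(J) * xdot + (I - pinv(J) * J) * qdot0, where the second term projects a secondary objective qdot0 into the null space of the task Jacobian. A minimal Python/NumPy sketch follows; the planar 3R arm and the joint-centering objective are illustrative choices, not taken from the chapter.

    import numpy as np

    def jacobian_3r(q, l=(0.4, 0.3, 0.2)):
        # Planar 3R Jacobian: 2-D positioning task, 3 joints, hence one redundant DOF.
        a = np.cumsum(q)                     # cumulative joint angles
        J = np.zeros((2, 3))
        for i in range(3):
            J[0, i] = -sum(l[j] * np.sin(a[j]) for j in range(i, 3))
            J[1, i] =  sum(l[j] * np.cos(a[j]) for j in range(i, 3))
        return J

    q = np.array([0.2, 0.5, -0.3])           # current configuration [rad]
    xdot = np.array([0.05, 0.0])             # desired end-effector velocity [m/s]

    J = jacobian_3r(q)
    J_pinv = np.linalg.pinv(J)

    # secondary objective: gradient step toward a comfortable posture q0 (illustrative criterion)
    q0 = np.array([0.0, 0.7, 0.0])
    qdot0 = -2.0 * (q - q0)

    N = np.eye(3) - J_pinv @ J               # null-space projector of the task
    qdot = J_pinv @ xdot + N @ qdot0         # task-consistent motion plus internal motion
    print(qdot, J @ qdot)                    # J @ qdot reproduces xdot despite the extra term

Near singularities, pinv(J) would typically be replaced by a damped least-squares inverse, one of the singularity-handling techniques mentioned above.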

FlexIRob - Teaching null-space constraints in physical human-robot interaction

Author  AMARSi Consortium

Video ID : 818

The video presents an approach that combines the physical interaction capabilities of compliant robots with data-driven, model-free learning in a coherent system, in order to make fast reconfiguration of redundant robots feasible. Users with no particular robotics knowledge can perform this task in physical interaction with the compliant robot, for example, to reconfigure a work cell due to changes in the environment. For fast and efficient learning of the respective null-space constraints, a reservoir neural network is employed. It is embedded in the motion controller of the system, hence allowing for execution of arbitrary motions in task space. We describe the training, exploration, and control architecture of the system and present an evaluation on the KUKA Light-Weight Robot (LWR). The results show that the learned model solves the redundancy resolution problem under the given constraints with sufficient accuracy and generalizes to generate valid joint-space trajectories even in untrained areas of the workspace.

Chapter 14 — AI Reasoning Methods for Robotics

Michael Beetz, Raja Chatila, Joachim Hertzberg and Federico Pecora

Artificial intelligence (AI) reasoning technology involving, e.g., inference, planning, and learning, has a track record with a healthy number of successful applications. So can it be used as a toolbox of methods for autonomous mobile robots? Not necessarily, as reasoning on a mobile robot about its dynamic, partially known environment may differ substantially from that in knowledge-based pure software systems, where most of the named successes have been registered. Moreover, recent knowledge about the robot's environment cannot be given a priori, but needs to be updated from sensor data, involving challenging problems of symbol grounding and knowledge base change. This chapter sketches the main robotics-relevant topics of symbol-based AI reasoning. Basic methods of knowledge representation and inference are described in general, covering both logic- and probability-based approaches. The chapter first gives a motivation by example, showing to what extent symbolic reasoning has the potential to help robots perform in the first place. Then (Sect. 14.2), we sketch the landscape of representation languages available for the endeavor. After that (Sect. 14.3), we present approaches and results for several types of practical, robotics-related reasoning tasks, with an emphasis on temporal and spatial reasoning. Plan-based robot control is described in some more detail in Sect. 14.4. Section 14.5 concludes.

RoboEarth final demonstrator

Author  Gajamohan Mohanarajah

Video ID : 706

This video, made in 2014, summarizes the final demonstrator of the joint project RoboEarth, a World Wide Web for robots (http://roboearth.org/). The demonstrator includes four robots working together collaboratively to help patients in a hospital. These robots used their common knowledge base and infrastructure in the following ways: (1) as a knowledge repository to share and learn from each other's experience, (2) as a communication medium to perform collaborative tasks, and (3) as a computational resource to offload some of their heavy computational load.

Chapter 71 — Cognitive Human-Robot Interaction

Bilge Mutlu, Nicholas Roy and Selma Šabanović

A key research challenge in robotics is to design robotic systems with the cognitive capabilities necessary to support human–robot interaction. These systems will need to have appropriate representations of the world; the task at hand; the capabilities, expectations, and actions of their human counterparts; and how their own actions might affect the world, their task, and their human partners. Cognitive human–robot interaction is a research area that considers human(s), robot(s), and their joint actions as a cognitive system and seeks to create models, algorithms, and design guidelines to enable the design of such systems. Core research activities in this area include the development of representations and actions that allow robots to participate in joint activities with people; a deeper understanding of human expectations and cognitive responses to robot actions; and models of joint activity for human–robot interaction. This chapter surveys these research activities by drawing on research questions and advances from a wide range of fields including computer science, cognitive science, linguistics, and robotics.

Gaze and gesture cues for robots

Author  Bilge Mutlu

Video ID : 128

In human-robot communication, nonverbal cues like gaze and gesture can be a source of important information for starting and maintaining interaction. Gaze, for example, can tell a person about what the robot is attending to, its mental state, and its role in a conversation. Researchers are studying and developing models of nonverbal cues in human-robot interaction to enable more successful collaboration between robots and humans in a variety of domains, including education.

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these novel robots have opened up new and unforeseen application domains, and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the required perception abilities for interaction, are introduced. Subsequently, motion-planning techniques for human environments, including the domains of biomechanically safe, risk-metric-based, and human-aware planning, are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architecture for pHRI.

Torque control for teaching peg-in-hole via physical human-robot interaction

Author  Alin Albu-Schäffer

Video ID : 627

Teaching by demonstration is a typical application for impedance controllers. A practical demonstration was given for the task of teaching the automatic insertion of a piston into a motor block. Teaching is realized by guiding the robot with the human hand. It was initially known that the axes of the holes in the motor block were oriented vertically. In the teaching phase, high stiffness components for the orientations were commanded (150 Nm/rad), while the translational stiffness was set to zero. This allowed only translational movements to be demonstrated by the human operator. In the second phase, the taught trajectory was automatically reproduced by the robot. In this phase, high values were assigned to the translational stiffness (3000 N/m), while the stiffness for the rotations was low (60 Nm/rad). This enabled the robot to compensate for the remaining position errors. For two pistons, the total time for the assembly was 6 s. In this experiment, the assembly was executed automatically four times faster than by the human operator holding the robot as an input device in the teaching phase (24 s), while the free-hand execution of the task by a human requires about 4 s.
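A sketch of how such phase-dependent Cartesian stiffness settings could be represented is given below in Python/NumPy; the stiffness magnitudes are the ones quoted above, while the 6-D error ordering, the damping heuristic, and the interface are hypothetical.

    import numpy as np

    # Diagonal Cartesian stiffness per phase; ordering [x, y, z, rx, ry, rz] is assumed.
    K_teach  = np.diag([0.0, 0.0, 0.0, 150.0, 150.0, 150.0])        # free translation, stiff rotation
    K_replay = np.diag([3000.0, 3000.0, 3000.0, 60.0, 60.0, 60.0])  # stiff translation, soft rotation

    def compliance_wrench(K, pose_error, twist):
        # Simple Cartesian compliance law: wrench = K*error - D*twist.
        # D is a rough critical-damping heuristic for unit apparent mass; a real
        # controller would choose damping explicitly, also where stiffness is zero.
        D = 2.0 * np.sqrt(K)
        return K @ pose_error - D @ twist

    err = np.array([0.001, -0.002, 0.0, 0.01, 0.0, 0.0])   # example pose error [m, rad]
    print(compliance_wrench(K_replay, err, np.zeros(6)))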

Chapter 41 — Active Manipulation for Perception

Anna Petrovskaya and Kaijen Hsiao

This chapter covers perceptual methods in which manipulation is an integral part of perception. These methods face special challenges due to data sparsity and high costs of sensing actions. However, they can also succeed where other perceptual methods fail, for example, in poor-visibility conditions or for learning the physical properties of a scene.

The chapter focuses on specialized methods that have been developed for object localization, inference, planning, recognition, and modeling in active-manipulation approaches. We conclude with a discussion of real-life applications and directions for future research.

Modeling articulated objects using active manipulation

Author  Juergen Sturm

Video ID : 78

The video illustrates a mobile manipulation robot that interacts with various articulated objects, such as a fridge and a dishwasher, in a kitchen environment. During interaction, the robot learns their kinematic properties, such as the rotation axis and the configuration space. Knowing the kinematic model of these objects improves the performance of the robot and enables motion planning. Service robots operating in domestic environments are typically faced with a variety of objects they have to deal with to fulfill their tasks. Some of these objects are articulated, such as cabinet doors and drawers, or room and garage doors. The ability to deal with such articulated objects is relevant for service robots as, for example, they need to open doors when navigating between rooms and to open cabinets to pick up objects in fetch-and-carry applications. We developed a complete probabilistic framework that enables robots to learn the kinematic models of articulated objects from observations of their motion. We combine parametric and nonparametric models consistently and utilize the advantages of both methods. As a result of our approach, a robot can robustly operate articulated objects in unstructured environments. All software is available open-source (including documentation and tutorials) at http://www.ros.org/wiki/articulation.
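As a greatly simplified illustration of the underlying idea (not the authors' released software, which is linked above), the Python/NumPy sketch below fits a revolute-joint model to noisy planar observations of a door handle by least-squares circle fitting; the synthetic data are illustrative.

    import numpy as np

    def fit_revolute_2d(points):
        # Algebraic (Kasa) circle fit: returns hinge position and radius.
        # A full system would also fit prismatic/rigid models, compare them,
        # and treat noise and model selection probabilistically.
        x, y = points[:, 0], points[:, 1]
        A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
        b = x**2 + y**2
        cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
        return np.array([cx, cy]), np.sqrt(c + cx**2 + cy**2)

    # synthetic observations of a handle swinging about a hinge at (1.0, 0.5), radius 0.8
    theta = np.linspace(0.0, 1.2, 20)
    pts = np.column_stack([1.0 + 0.8 * np.cos(theta), 0.5 + 0.8 * np.sin(theta)])
    pts += 0.005 * np.random.randn(*pts.shape)

    center, radius = fit_revolute_2d(pts)
    print(center, radius)    # recovers the hinge position and radius from motion alone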

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda


A learning companion robot to foster pre-K vocabulary learning

Author  Cynthia Breazeal

Video ID : 564

This video summarizes a study in which a learning-companion robot engages children in a storytelling game in repeated encounters over two months. The learning objective is for pre-K children to learn targeted vocabulary words that the robot introduces in its stories. In each session, the robot first tells a story and then invites the child to tell a story. A storyscape app on a tablet computer facilitates the narration of the story. While the child tells his or her story, the robot behaves as an engaged listener. Two conditions were investigated, in which the robot either matched the complexity of its stories to the child's language level or did not. Results show that children successfully learn target vocabulary with the robot in general, and more words are learned when the complexity of the robot's stories matches the language ability of the child.

Chapter 17 — Limbed Systems

Shuuji Kajita and Christian Ott

A limbed system is a mobile robot with a body, legs and arms. First, its general design process is discussed in Sect. 17.1. Then we consider issues of conceptual design and observe designs of various existing robots in Sect. 17.2. As a detailed example, the design of the humanoid robot HRP-4C is shown in Sect. 17.3. To design a limbed system with good performance, it is important to take into account actuation and control, including gravity compensation, limit-cycle dynamics, template models, and backdrivable actuation. These are discussed in Sect. 17.4.

In Sect. 17.5, we overview the diversity of limbed systems, including odd-legged walkers, leg–wheel hybrid robots, leg–arm hybrid robots, tethered walking robots, and wall-climbing robots. To compare limbed systems of different configurations, we can use performance indices such as the gait sensitivity norm, the Froude number, and the specific resistance, which are introduced in Sect. 17.6.
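Two of these indices have simple closed forms: one common definition of the Froude number is Fr = v^2/(g*l), with l a characteristic leg length, and the specific resistance (cost of transport) is E/(m*g*d). A small Python sketch with illustrative numbers:

    G = 9.81  # gravitational acceleration [m/s^2]

    def froude_number(speed, leg_length):
        # Dimensionless Fr = v^2/(g*l); lets gaits be compared across robots of different scale.
        return speed**2 / (G * leg_length)

    def specific_resistance(energy, mass, distance):
        # Cost of transport E/(m*g*d); lower values mean more efficient locomotion.
        return energy / (mass * G * distance)

    # illustrative values, not taken from the chapter
    print(froude_number(speed=1.5, leg_length=0.8))                       # ~0.29
    print(specific_resistance(energy=3.0e4, mass=60.0, distance=100.0))   # ~0.51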

Bipedal humanoid robot: WABIAN

Author  Atsuo Takanishi

Video ID : 522

A human-sized bipedal humanoid robot developed by Prof. Hashimoto, Dr. Narita, Dr. Kobayashi, Prof. Takanishi, Dr. Yamaguchi, Prof. Dario, and Dr. Takanobu.