
Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. Creating social robots that are competent and capable partners for people is a challenging long-term goal. They will need to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well, in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in people's daily lives. This requires a multidisciplinary approach in which the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

A robot that approaches pedestrians

Author  Takayuki Kanda

Video ID : 258

This video illustrates an example of a study in which the social robot's capability for nonverbal interaction was developed. In the study, an anticipation technique was developed: the robot observes pedestrians' motions and, drawing on a large accumulated dataset of pedestrian trajectories, anticipates each pedestrian's future motion. It then plans its own motion so as to approach a pedestrian from the front and initiate a conversation.
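The study's anticipation relies on learned trajectory data; as a much simpler stand-in, the idea of predicting a pedestrian's position and choosing a frontal approach goal can be sketched as follows. The constant-velocity predictor, the function names, and the standoff parameter are illustrative assumptions, not the method used in the study.

```python
import math

def anticipate(position, velocity, horizon):
    """Predict a pedestrian's future (x, y) position, assuming (for this
    sketch only) constant velocity over the planning horizon."""
    x, y = position
    vx, vy = velocity
    return (x + vx * horizon, y + vy * horizon)

def frontal_approach_point(position, velocity, horizon, standoff):
    """Pick a robot goal slightly ahead of the pedestrian's predicted
    position, along the walking direction, so the robot ends up facing
    the approaching pedestrian."""
    px, py = anticipate(position, velocity, horizon)
    speed = math.hypot(velocity[0], velocity[1])
    if speed == 0.0:
        return (px, py + standoff)  # no heading available: arbitrary offset
    ux, uy = velocity[0] / speed, velocity[1] / speed
    return (px + ux * standoff, py + uy * standoff)
```

For a pedestrian at the origin walking along +x at 1 m/s, a 3 s horizon and 1 m standoff place the robot goal at (4, 0), directly in the pedestrian's predicted path.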

Chapter 19 — Robot Hands

Claudio Melchiorri and Makoto Kaneko

Multifingered robot hands have a potential capability for achieving dexterous manipulation of objects by using rolling and sliding motions. This chapter addresses the design, actuation, sensing, and control of multifingered robot hands. From the design viewpoint, such hands face a strong constraint in actuator implementation due to the space limitation at each joint. After a brief overview of the anthropomorphic end-effector and its dexterity in Sect. 19.1, various approaches to actuation, with their advantages and disadvantages, are presented in Sect. 19.2. The key classification concerns (1) remote versus built-in actuation and (2) the relationship between the number of joints and the number of actuators. In Sect. 19.3, actuators and sensors used for multifingered hands are described. In Sect. 19.4, modeling and control are introduced by considering both dynamic effects and friction. Applications and trends are given in Sect. 19.5. Finally, the chapter closes with conclusions and further reading.

DLR hand

Author  DLR Robotics and Mechatronics Center

Video ID : 768

A DLR hand

Chapter 58 — Robotics in Hazardous Applications

James Trevelyan, William R. Hamel and Sung-Chul Kang

Robotics researchers have worked hard to realize a long-awaited vision: machines that can eliminate the need for people to work in hazardous environments. The chapter is framed by the vision of disaster response: search and rescue robots carrying people from burning buildings or tunneling through collapsed rock falls to reach trapped miners. In this chapter we review tangible progress towards robots that perform routine work in places too dangerous for humans. Researchers still have many challenges ahead of them, but there has been remarkable progress in some areas. Hazardous environments present special challenges for the accomplishment of desired tasks depending on the nature and magnitude of the hazards. Hazards may be present in the form of radiation, toxic contamination, falling objects, or potential explosions. Technology that specialized engineering companies can develop and sell without active help from researchers marks the frontier of commercial feasibility. Just inside this border lie teleoperated robots for explosive ordnance disposal (EOD) and for underwater engineering work. Even with the typical tenfold disadvantage in manipulation performance, in terms of human dexterity and speed, imposed by the limits of today's telepresence and teleoperation technology, robots can often offer a more cost-effective solution. However, most routine applications in hazardous environments still lie far beyond the feasibility frontier. Fire fighting, remediating nuclear contamination, reactor decommissioning, tunneling, underwater engineering, underground mining, and clearance of landmines and unexploded ordnance still present many unsolved problems.

UNMACA: Demining Afghanistan

Author  James P. Trevelyan

Video ID : 571

This is a high-quality video made partly with the aim of seeking funds to help complete demining projects in Afghanistan. This video has been included because researchers can see plenty of examples of realistic field conditions under which demining is being done in Afghanistan. It is essential for researchers to have an accurate appreciation of the real field conditions before considering expensive research projects. There are plenty of opportunities to see manual mine clearance. Current-generation demining machines don't work here because of the very hard and rocky ground. There is an interesting segment showing the Bamyan site. The sentiments expressed by deminers are genuine, in my experience. I have met many similarly dedicated Afghan deminers, and they are selected for their dedication, attitude to nation-building, courage, and conscientious work ethic. They are justly proud of the work they do, and their uniforms and equipment set them apart from most other Afghans and give them a real sense of respect. Note that winter rains and summer storms wash mud over mines, encasing them in what later turns to hard, cement-like soil. It is hard physical work demanding sensitive hands, care, and attention to detail. For more information see: http://school.mech.uwa.edu.au/~jamest/demining/countries/afghan/minefields-afghan.html

Chapter 20 — Snake-Like and Continuum Robots

Ian D. Walker, Howie Choset and Gregory S. Chirikjian

This chapter provides an overview of the state of the art of snake-like (backbones comprised of many small links) and continuum (continuous backbone) robots. The history of each of these classes of robot is reviewed, focusing on key hardware developments. A review of the existing theory and algorithms for kinematics for both types of robot is presented, followed by a summary of modeling of locomotion for snake-like and continuum mechanisms.

Concentric tube robot at TEDMED 2010

Author  Pierre Dupont

Video ID : 252

This video was recorded at TEDMED 2010 in San Diego and features a teleoperated, concentric tube robot with 1 mm-wide forceps solving a miniature version of the puzzle Kanoodle.

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or a continuous incoming video, the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.
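The triangulation step mentioned above admits a compact sketch: given the camera centers and the viewing-ray directions of a matched point in two views (the directions would come from inverting the perspective projection with known intrinsics), the 3-D point can be recovered as the midpoint of the closest approach of the two rays. This is one simple triangulation method among several; the function name and interface are assumptions for illustration.

```python
import math

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: given two camera centers c1, c2 and unit
    viewing-ray directions d1, d2, return the 3-D point halfway between
    the closest points of the two rays."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    w0 = [p - q for p, q in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b            # approaches 0 for parallel rays
    t1 = (b * e - c * d) / denom     # parameter along ray 1
    t2 = (a * e - b * d) / denom     # parameter along ray 2
    p1 = [ci + t1 * di for ci, di in zip(c1, d1)]
    p2 = [ci + t2 * di for ci, di in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]
```

With two rays that actually intersect, e.g. cameras at (0,0,0) and (1,0,0) both looking at the point (0,0,5), the midpoint coincides with the true point; with noisy rays it returns the least-squares midpoint instead.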

3-D models from 2-D video - automatically

Author  Marc Pollefeys

Video ID : 125

We show how a video is automatically converted into a 3-D model using computer-vision techniques. More details on this approach can be found in: M. Pollefeys, L. Van Gool, M. Vergauwen, F. Verbiest, K. Cornelis, J. Tops, R. Koch: Visual modeling with a hand-held camera, Int. J. Comp. Vis. 59(3), 207-232 (2004).

Chapter 21 — Actuators for Soft Robotics

Alin Albu-Schäffer and Antonio Bicchi

Although we do not yet know exactly what robots of the future will look like, most of us are sure that they will not resemble the heavy, bulky, rigid machines moving dangerously around in old-fashioned industrial automation. There is a growing consensus, in the research community as well as in the expectations of the public, that robots of the next generation will be physically compliant and adaptable machines, closely interacting with humans and moving safely, smoothly and efficiently - in other words, robots will be soft.

This chapter discusses the design, modeling, and control of actuators for the new generation of soft robots, which can replace conventional actuators in applications where rigidity is not the first and foremost concern in performance. The chapter focuses on the technology, modeling, and control of lumped-parameter soft robotics, that is, systems of discrete, interconnected, and compliant elements. Distributed-parameter (snake-like and continuum) soft robotics is presented in Chap. 20, while Chap. 23 discusses in detail the biomimetic motivations that are often behind soft robotics.

Throwing a ball with the DLR VS-Joint

Author  Sebastian Wolf, Gerd Hirzinger

Video ID : 549

The video shows the difference between a stiff and a flexible actuator in a 1-DOF throwing demonstration. The variable-stiffness actuator (VS-Joint) can store potential energy in a strike-out movement and release it by accelerating the lever and ball. Additional energy is transferred to the lever by stiffening up during the forward motion.
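The energy-storage effect behind the demonstration can be sketched with an idealized, lossless spring model; the formulas are standard physics, but the function names and the numeric values are illustrative assumptions, not DLR parameters.

```python
import math

def spring_energy(k, deflection):
    """Elastic energy stored in an ideal spring: E = 1/2 * k * x^2."""
    return 0.5 * k * deflection ** 2

def release_speed(k, deflection, mass):
    """Speed the ball could reach if all stored energy became kinetic
    energy (idealized, lossless): 1/2 * m * v^2 = E  =>  v = sqrt(2E/m)."""
    return math.sqrt(2.0 * spring_energy(k, deflection) / mass)
```

For example, a spring of stiffness 100 N/m deflected 0.1 m stores 0.5 J, enough in this idealized model to launch a 0.5 kg ball at sqrt(2) ≈ 1.41 m/s; in the real joint, the stiffening during the forward motion adds energy on top of this.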

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operation features. We consider both simulated and physical robots with special consideration to the transfer between the two worlds.

Introduction to evolutionary robotics at EPFL

Author  Dario Floreano

Video ID : 119

This video illustrates a method for evolving the neural network of a robot. Valid gene sequences are extracted (magnifying lens) from a binary string representing the genome of the robot. Those genes are translated into neurons of different types (colors) according to the genetic specifications, such as sensory, motor, excitatory, or inhibitory neurons. The corresponding neural network is connected to the sensors and motors of the robot, and the resulting behavior of the robot is measured according to the fitness function. The genomes of the individuals with the worst performance are discarded from the population (symbolically thrown into a dustbin), whereas the genomes of the best individuals are paired and crossed over with small random mutations to generate new offspring (the process of selective reproduction is symbolically shown to occur in a mother robot). After several generations of selective reproduction with mutation, robots display better or novel behaviors.
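The select-discard-crossover-mutate loop described above is a standard genetic algorithm, and can be sketched in a few lines. Here the fitness function is a toy stand-in (counting 1-bits in the genome); in evolutionary robotics it would instead be a measurement of the robot's behavior, and the genome would be decoded into a neural network rather than scored directly. All parameter values are illustrative.

```python
import random

def evolve(pop_size=20, genome_len=16, generations=30,
           mutation_rate=0.02, seed=0):
    """Minimal generational genetic algorithm in the spirit of the video:
    discard the worst half, pair and cross over the best, mutate offspring."""
    rng = random.Random(seed)
    fitness = lambda g: sum(g)  # toy stand-in for measured robot behavior
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # worst half is discarded
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < mutation_rate else b
                     for b in child]            # small random mutations
            offspring.append(child)
        pop = offspring
    return max(pop, key=fitness)
```

After 30 generations the best genome is close to the all-ones optimum, mirroring how evolved robot behaviors improve over generations.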

Chapter 67 — Humanoids

Paul Fitzpatrick, Kensuke Harada, Charles C. Kemp, Yoshio Matsumoto, Kazuhito Yokoi and Eiichi Yoshida

Humanoid robots selectively imitate aspects of human form and behavior. Humanoids come in a variety of shapes and sizes, from complete human-size legged robots to isolated robotic heads with human-like sensing and expression. This chapter highlights significant humanoid platforms and achievements, and discusses some of the underlying goals behind this area of robotics. Humanoids tend to require the integration of many of the methods covered in detail within other chapters of this handbook, so this chapter focuses on distinctive aspects of humanoid robotics with liberal cross-referencing.

This chapter examines what motivates researchers to pursue humanoid robotics, and provides a taste of the evolution of this field over time. It summarizes work on legged humanoid locomotion, whole-body activities, and approaches to human–robot communication. It concludes with a brief discussion of factors that may influence the future of humanoid robots.

Whole-body "pivoting" manipulation

Author  Eiichi Yoshida

Video ID : 595

The humanoid robot performs "pivoting" manipulation to carry a bulky object without lifting it. A coarse path of the object towards its goal position is first planned to compute the trajectory of the hands which perform the manipulation. Then foot positions are determined along the object path, from which the center-of-mass (COM) trajectory is derived using the dynamic walking-pattern generator. These tasks are then passed to the inverse kinematics to generate the coordinated arm and leg motion for this complex manipulation. The second video shows motion planning combining pivoting manipulation and free walking motion in a more complex environment.

Chapter 47 — Motion Planning and Obstacle Avoidance

Javier Minguez, Florent Lamiraux and Jean-Paul Laumond

This chapter describes motion planning and obstacle avoidance for mobile robots. We will see how the two areas do not share the same modeling background. From its very beginning, motion planning research has been dominated by computer science. Researchers aim at devising well-grounded algorithms with well-understood completeness and exactness properties.

The challenge of this chapter is to present both nonholonomic motion planning (Sects. 47.1–47.6) and obstacle avoidance (Sects. 47.7–47.10) issues. Section 47.11 reviews recent successful approaches that tend to embrace the whole problem of motion planning and motion control. These approaches benefit from both nonholonomic motion planning and obstacle avoidance methods.

A ride in the Google self-driving car

Author  Google Self-Driving Car Project

Video ID : 710

The maturity of the tools developed for mobile-robot navigation and explained in this chapter has enabled Google to integrate them into an experimental vehicle. This video demonstrates Google's self-driving technology on the road.

Chapter 8 — Motion Control

Wan Kyun Chung, Li-Chen Fu and Torsten Kröger

This chapter will focus on the motion control of robotic rigid manipulators. In other words, this chapter does not treat the motion control of mobile robots, flexible manipulators, and manipulators with elastic joints. The main challenge in the motion control problem of rigid manipulators is the complexity of their dynamics and uncertainties. The former results from nonlinearity and coupling in the robot manipulators. The latter is twofold: structured and unstructured. Structured uncertainty means imprecise knowledge of the dynamic parameters and will be touched upon in this chapter, whereas unstructured uncertainty results from joint and link flexibility, actuator dynamics, friction, sensor noise, and unknown environment dynamics, and will be treated in other chapters. In this chapter, we begin with an introduction to motion control of robot manipulators from a fundamental viewpoint, followed by a survey and brief review of the relevant advanced materials. Specifically, the dynamic model and useful properties of robot manipulators are recalled in Sect. 8.1. The joint and operational space control approaches, two different viewpoints on control of robot manipulators, are compared in Sect. 8.2. Independent joint control and proportional–integral–derivative (PID) control, widely adopted in the field of industrial robots, are presented in Sects. 8.3 and 8.4, respectively. Tracking control, based on feedback linearization, is introduced in Sect. 8.5. The computed-torque control and its variants are described in Sect. 8.6. Adaptive control is introduced in Sect. 8.7 to solve the problem of structural uncertainty, whereas the optimality and robustness issues are covered in Sect. 8.8. To compute suitable set point signals as input values for these motion controllers, Sect. 8.9 introduces reference trajectory planning concepts. Since most controllers of robot manipulators are implemented by using microprocessors, the issues of digital implementation are discussed in Sect. 8.10. Finally, learning control, one popular approach to intelligent control, is illustrated in Sect. 8.11.
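As a minimal illustration of the independent-joint PID scheme surveyed above, a single control step for one joint can be sketched as follows. The set-point form with velocity feedback as damping is standard; the gains, the dictionary-based integrator state, and the unit-inertia joint used in the example are illustrative assumptions, not values from the chapter.

```python
def pid_step(q, q_dot, q_des, state, kp, ki, kd, dt):
    """One step of an independent-joint PID set-point law:
        tau = Kp*e + Ki*integral(e) - Kd*q_dot
    where e = q_des - q; `state` carries the error integral between calls."""
    e = q_des - q
    state["integral"] += e * dt
    return kp * e + ki * state["integral"] - kd * q_dot
```

Driving a unit-inertia joint (q_ddot = tau) with kp = 25, kd = 10 gives a critically damped response that settles at the set point; a nonzero ki would remove the steady-state error introduced by constant disturbances such as gravity.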

JediBot - Experiments in human-robot sword-fighting

Author  Torsten Kröger, Ken Oslund, Tim Jenkins, Dan Torczynski, Nicholas Hippenmeyer, Radu Bogdan Rusu, Oussama Khatib

Video ID : 759

Real-world sword-fighting between human opponents requires extreme agility, fast reaction time, and dynamic perception capabilities. This video shows experimental results achieved with a 3-D vision system and a highly reactive control architecture which allows a robot to sword-fight against human opponents. An online trajectory generator is used as an intermediate layer between low-level trajectory-following controllers and high-level visual perception. This architecture enables robots to react nearly instantaneously to the unpredictable human motions perceived by the vision system, as well as to sudden sword contacts detected by force and torque sensors. Results show how smooth and highly dynamic motions are generated on-the-fly while using the vision and force/torque sensor signals in the feedback loops of the robot-motion controller. Reference: T. Kröger, K. Oslund, T. Jenkins, D. Torczynski, N. Hippenmeyer, R. B. Rusu, O. Khatib: JediBot - Experiments in human-robot sword-fighting, Proc. Int. Symp. Exp. Robot., Québec City (2012)