
Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: when, what, who and how to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Probabilistic encoding of motion in a subspace of reduced dimensionality

Author  Keith Grochow, Steven Martin, Aaron Hertzmann, Zoran Popovic

Video ID : 102

Probabilistic encoding of motion in a subspace of reduced dimensionality. Reference: K. Grochow, S.L. Martin, A. Hertzmann, Z. Popovic: Style-based inverse kinematics, Proc. ACM Int. Conf. Comput. Graphics Interact. Tech. (SIGGRAPH), 522–531 (2004); URL: http://grail.cs.washington.edu/projects/styleik/
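
As a rough illustration of the idea of probabilistic encoding in a reduced-dimensionality subspace (the referenced paper itself uses a scaled Gaussian process latent variable model, not the method below), the following sketch projects recorded poses into a low-dimensional subspace with PCA and fits a Gaussian mixture there. The demonstration data are a random stand-in.

```python
# Minimal sketch, not the paper's method: PCA for dimensionality reduction
# followed by a Gaussian mixture model in the latent space. The array `demos`
# is a random stand-in for recorded joint configurations (n_samples, n_joints).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

demos = np.random.default_rng(0).normal(size=(500, 12))   # placeholder demonstration data

pca = PCA(n_components=3).fit(demos)                       # reduced subspace of dimension 3
latent = pca.transform(demos)

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(latent)

query = demos[:1]                                          # score one pose under the learned model
print("log-likelihood in the latent space:", gmm.score_samples(pca.transform(query))[0])
```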

Chapter 37 — Contact Modeling and Manipulation

Imin Kao, Kevin M. Lynch and Joel W. Burdick

Robotic manipulators use contact forces to grasp and manipulate objects in their environments. Fixtures rely on contacts to immobilize workpieces. Mobile robots and humanoids use wheels or feet to generate the contact forces that allow them to locomote. Modeling of the contact interface, therefore, is fundamental to analysis, design, planning, and control of many robotic tasks.

This chapter presents an overview of the modeling of contact interfaces, with a particular focus on their use in manipulation tasks, including graspless or nonprehensile manipulation modes such as pushing. Analysis and design of grasps and fixtures also depend on contact modeling, and these topics are discussed in more detail in Chap. 38. Sections 37.2–37.5 focus on rigid-body models of contact. Section 37.2 describes the kinematic constraints caused by contact, and Sect. 37.3 describes the contact forces that may arise with Coulomb friction. Section 37.4 provides examples of the analysis of multicontact manipulation tasks with rigid bodies and Coulomb friction. Section 37.5 extends the analysis to manipulation by pushing. Section 37.6 introduces the modeling of contact interfaces, kinematic duality, and pressure distribution at soft contact interfaces. Section 37.7 describes the concept of the friction limit surface and illustrates it with an example demonstrating the construction of a limit surface for a soft contact. Finally, Sect. 37.8 discusses how these more accurate models can be used in fixture analysis and design.
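
To make the friction-limit-surface construction of Sect. 37.7 concrete, the sketch below numerically integrates the Coulomb friction traction over a soft circular patch with a uniform pressure distribution for several candidate centers of rotation. It is a rough numerical sketch rather than the chapter's analytical treatment, and all parameter values are illustrative.

```python
# Hedged numerical sketch of a friction limit surface for a soft, circular
# contact patch with uniform pressure. Sweeping the center of rotation (COR)
# along the x-axis and integrating the Coulomb traction traces out pairs
# (tangential force, normal moment) on the limit surface.
import numpy as np

mu, N, R = 0.5, 10.0, 0.01                    # friction coefficient, normal load [N], patch radius [m]
n = 200
xs = np.linspace(-R, R, n)
X, Y = np.meshgrid(xs, xs)
inside = X**2 + Y**2 <= R**2                  # grid cells belonging to the circular patch
p = N / (np.pi * R**2)                        # uniform pressure distribution
dA = (xs[1] - xs[0])**2                       # area element

for c in (0.0, 0.5*R, 2.0*R, 100.0*R):        # COR at (c, 0)
    vx, vy = -Y, X - c                        # sliding velocity direction for rotation about the COR
    s = np.hypot(vx, vy) + 1e-12
    fx, fy = -mu*p*vx/s*dA, -mu*p*vy/s*dA     # Coulomb traction opposes local sliding
    F = np.hypot(fx[inside].sum(), fy[inside].sum())
    M = (X*fy - Y*fx)[inside].sum()
    print(f"COR at {c/R:6.1f} R:  |f_t| = {F:5.2f} N,  m_z = {M*1000:7.2f} N*mm")

# Limiting cases: pure rotation (c = 0) gives |f_t| -> 0 and |m_z| -> (2/3) mu N R,
# while a distant COR approaches pure translation with |f_t| -> mu N and m_z -> 0.
```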

Horizontal transport by 2-DOF vibration

Author  Kevin M. Lynch, Paul Umbanhowar

Video ID : 803

This video demonstrates the use of vertical and horizontal vibration of a supporting bar to cause the object on top to slide one way or the other. Upward acceleration of the bar increases the normal force, thereby increasing the tangential friction force during sliding. With periodic vibration, the object settles into a limit-cycle motion. By choosing the phasing of the vertical and horizontal vibration, the net motion during a limit cycle can be directed to the left or to the right. The video is shown at 1/20 of actual speed. It is related to the example shown in Fig. 37.9 in Sect. 37.4.3 of the Springer Handbook of Robotics, 2nd edn. (2016).
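
A minimal simulation can reproduce the mechanism described above. The sketch below is not the authors' code and all parameters are illustrative: it integrates a point mass on a sinusoidally vibrating bar with Coulomb friction and shows that flipping the phase between the vertical and horizontal vibration reverses the direction of the net drift.

```python
# Point mass on a bar vibrating horizontally and vertically with phase offset `phi`.
# The phase-dependent modulation of the normal force biases the friction force,
# producing net horizontal drift whose sign depends on `phi`.
import numpy as np

m, g, mu = 0.1, 9.81, 0.3                # mass [kg], gravity, kinetic friction coefficient
A_h, A_v, w = 0.002, 0.0002, 2*np.pi*30  # horizontal/vertical amplitudes [m], frequency [rad/s]
dt, T = 1e-5, 1.0                        # integration step and simulated time [s]

for phi in (np.pi/2, -np.pi/2):          # phase between vertical and horizontal vibration
    x, v = 0.0, 0.0                      # object position/velocity in the world frame
    for k in range(int(T/dt)):
        t = k*dt
        v_bar = A_h*w*np.cos(w*t)                # bar horizontal velocity
        a_bar = -A_h*w**2*np.sin(w*t)            # bar horizontal acceleration
        a_ver = -A_v*w**2*np.sin(w*t + phi)      # bar vertical acceleration
        N = max(m*(g + a_ver), 0.0)              # normal force (amplitude small enough to keep contact)
        v_rel = v - v_bar
        if abs(v_rel) < 1e-6 and abs(m*a_bar) <= mu*N:
            v = v_bar                            # sticking: object follows the bar
        else:
            v += (-mu*N*np.sign(v_rel)/m)*dt     # sliding: kinetic Coulomb friction
        x += v*dt
    print(f"phase {phi:+.2f} rad -> net drift {x*1000:+.2f} mm after {T:.0f} s")
```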

Chapter 18 — Parallel Mechanisms

Jean-Pierre Merlet, Clément Gosselin and Tian Huang

This chapter presents an introduction to the kinematics and dynamics of parallel mechanisms, also referred to as parallel robots. As opposed to classical serial manipulators, the kinematic architecture of parallel robots includes closed-loop kinematic chains. As a consequence, their analysis differs considerably from that of their serial counterparts. This chapter aims at presenting the fundamental formulations and techniques used in their analysis.
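
As a small illustration of why the analysis of closed-loop architectures differs from the serial case, the sketch below computes the inverse kinematics of a planar 3-RPR parallel mechanism with hypothetical geometry: each leg length follows directly from the platform pose, whereas the forward problem is coupled and generally admits multiple solutions.

```python
# Minimal sketch (hypothetical geometry): inverse kinematics of a planar 3-RPR
# parallel mechanism. Each prismatic leg length is the distance between a fixed
# base anchor and the corresponding platform anchor at the commanded pose.
import numpy as np

A = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]])      # fixed base anchor points
B = np.array([[-0.1, -0.1], [0.1, -0.1], [0.0, 0.1]])   # platform anchors (platform frame)

def inverse_kinematics(x, y, phi):
    """Leg lengths rho_i for a given platform pose (x, y, phi)."""
    c, s = np.cos(phi), np.sin(phi)
    Rm = np.array([[c, -s], [s, c]])
    P = np.array([x, y]) + B @ Rm.T          # platform anchors expressed in the base frame
    return np.linalg.norm(P - A, axis=1)

print(inverse_kinematics(0.5, 0.4, 0.1))     # leg lengths for one reachable pose
```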

Quadrupteron robot

Author  Clément Gosselin

Video ID : 52

This video demonstrates a 4-DOF partially decoupled SCARA-type parallel robot (the Quadrupteron). References: 1. P.L. Richard, C. Gosselin, X. Kong: Kinematic analysis and prototyping of a partially decoupled 4-DOF 3T1R parallel manipulator, ASME J. Mech. Des. 129(6), 611–616 (2007); 2. X. Kong, C. Gosselin: Forward displacement analysis of a quadratic 4-DOF 3T1R parallel manipulator: The Quadrupteron, Meccanica 46(1), 147–154 (2011); 3. C. Gosselin: Compact dynamic models for the tripteron and quadrupteron parallel manipulators, J. Syst. Control Eng. 223(I1), 1–11 (2009)

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or as a continuous incoming video, the robot path can be computed, and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.
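
The two-view step described above can be sketched with standard OpenCV calls. The snippet assumes matched pixel coordinates `pts1`, `pts2` (N x 2 float arrays) and the intrinsic matrix `K` are already available, and it recovers the relative motion only up to the scale of the translation.

```python
# Hedged sketch of a two-view reconstruction pipeline: essential-matrix
# estimation, relative-pose recovery, and triangulation of a sparse point cloud.
import numpy as np
import cv2

def two_view_reconstruction(pts1, pts2, K):
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)        # rotation and translation direction
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])     # first camera at the origin
    P2 = K @ np.hstack([R, t])
    Xh = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)    # homogeneous 4xN points
    return R, t, (Xh[:3] / Xh[3]).T                       # 3-D points, scale ambiguous
```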

3-D models from 2-D video - automatically

Author  Marc Pollefeys

Video ID : 125

We show how a video is automatically converted into a 3-D model using computer-vision techniques. More details on this approach can be found in: M. Pollefeys, L. Van Gool, M. Vergauwen, F. Verbiest, K. Cornelis, J. Tops, R. Koch: Visual modeling with a hand-held camera, Int. J. Comput. Vis. 59(3), 207–232 (2004).

Chapter 0 — Preface

Bruno Siciliano, Oussama Khatib and Torsten Kröger

The preface of the Second Edition of the Springer Handbook of Robotics contains three videos about the creation of the book and the use of its multimedia app on mobile devices.

The handbook — The story continues

Author  Bruno Siciliano

Video ID : 845

This video illustrates the joyful mood of the big team of the Springer Handbook of Robotics at the completion of the Second Edition.

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or as a continuous incoming video, the robot path can be computed, and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.

Parallel tracking and mapping for small AR workspaces (PTAM)

Author  Georg Klein, David Murray

Video ID : 123

Video results for an augmented-reality tracking system. The system tracks a hand-held camera and builds a map of the environment in real time, and the estimated camera pose can be used to overlay virtual graphics. Presented at the ISMAR 2007 conference.

Chapter 58 — Robotics in Hazardous Applications

James Trevelyan, William R. Hamel and Sung-Chul Kang

Robotics researchers have worked hard to realize a long-awaited vision: machines that can eliminate the need for people to work in hazardous environments. Chapter 60 is framed by the vision of disaster response: search-and-rescue robots carrying people from burning buildings or tunneling through collapsed rock falls to reach trapped miners. In this chapter we review tangible progress towards robots that perform routine work in places too dangerous for humans. Researchers still have many challenges ahead of them, but there has been remarkable progress in some areas. Hazardous environments present special challenges for the accomplishment of desired tasks, depending on the nature and magnitude of the hazards. Hazards may be present in the form of radiation, toxic contamination, falling objects, or potential explosions. Technology that specialized engineering companies can develop and sell without active help from researchers marks the frontier of commercial feasibility. Just inside this border lie teleoperated robots for explosive ordnance disposal (EOD) and for underwater engineering work. Even with the typical tenfold disadvantage in manipulation performance imposed by the limits of today's telepresence and teleoperation technology, in terms of human dexterity and speed, robots can often offer a more cost-effective solution. However, most routine applications in hazardous environments still lie far beyond the feasibility frontier. Fire fighting, remediating nuclear contamination, reactor decommissioning, tunneling, underwater engineering, underground mining, and clearance of landmines and unexploded ordnance still present many unsolved problems.

“Sakura” robot developed for reconnaissance missions inside nuclear reactor buildings

Author  James P. Trevelyan

Video ID : 584

This video shows the robot "Sakura" ("Cherry Blossom"), developed by researchers at the Chiba Institute of Technology, the creators of the successful "Quince" robot.

Chapter 64 — Rehabilitation and Health Care Robotics

H.F. Machiel Van der Loos, David J. Reinkensmeyer and Eugenio Guglielmelli

The field of rehabilitation robotics considers robotic systems that 1) provide therapy for persons seeking to recover their physical, social, communication, or cognitive function, and/or that 2) assist persons who have a chronic disability to accomplish activities of daily living. This chapter discusses these two main domains, describes the major achievements of the field over its short history, and charts the challenges to come. Specifically, after providing background information on the demographics (Sect. 64.1.2) and history (Sect. 64.1.3) of the field, Sect. 64.2 describes physical therapy and exercise training robots, and Sect. 64.3 describes robotic aids for people with disabilities. Section 64.4 then presents recent advances in smart prostheses and orthoses that are related to rehabilitation robotics. Finally, Sect. 64.5 provides an overview of recent work in diagnosis and monitoring for rehabilitation as well as other health-care issues. The reader is referred to Chap. 73 for cognitive rehabilitation robotics and to Chap. 65 for robotic smart-home technologies, which are often considered assistive technologies for persons with disabilities. At the conclusion of the present chapter, the reader will be familiar with the history of rehabilitation robotics and its primary accomplishments, and will understand the challenges the field may face in the future as it seeks to improve health care and the well-being of persons with disabilities.

Indego

Author  Parker Hannifin

Video ID : 510

Indego is a powered orthosis, developed at Vanderbilt University and commercialized by Parker Hannifin, that is designed to help individuals with paralysis walk.

Chapter 79 — Robotics for Education

David P. Miller and Illah Nourbakhsh

Educational robotics programs have become popular in most developed countries and are becoming more and more prevalent in the developing world as well. Robotics is used to teach problem solving, programming, design, physics, math, and even music and art to students at all levels of their education. This chapter provides an overview of some of the major robotics programs, along with the robot platforms and programming environments commonly used. As with robot systems used in research, the hardware and software are constantly being developed and upgraded, so this chapter provides a snapshot of the technologies in use at this time. The chapter concludes with a review of the assessment strategies that can be used to determine whether a particular robotics program is benefiting students in the intended ways.

New Mexico Elementary Botball 2014 - Teagan's first-ever run.

Author  Jtlboys3

Video ID : 635

This video shows some elementary-school students running their line-following code (written in C) on a robot at the local Junior Botball Challenge event. Details: https://www.juniorbotballchallenge.org

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Adaptive force/velocity control for opening unknown doors

Author  Yiannis Karayiannidis, Colin Smith, Francisco E. Vina, Petter Ogren, Danica Kragic

Video ID : 675

We propose a method that can open doors without prior knowledge of the door's kinematics. The method consists of a velocity controller that uses force measurements and estimates of the radial direction based on adaptive estimates of the position of the door hinge. The control action is decomposed into an estimated radial and tangential direction, following the concept of hybrid force/motion control.
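
A hedged, planar sketch of the decomposition described above is given below. It is not the authors' control or adaptation law; the gains and the hinge-estimate update are illustrative placeholders only.

```python
# One control step for planar door opening: command a velocity along the
# estimated tangential direction while driving the radial force component
# toward zero, and nudge the hinge-position estimate using the measured force.
import numpy as np

def door_opening_step(x_ee, f_meas, c_hat, v_des=0.05, k_f=0.002, gamma=0.01):
    """Return the commanded end-effector velocity and the updated hinge estimate.

    x_ee   -- 2-D end-effector position
    f_meas -- 2-D measured contact force at the handle
    c_hat  -- current estimate of the hinge position
    """
    r = x_ee - c_hat
    r_hat = r / np.linalg.norm(r)                    # estimated radial direction
    t_hat = np.array([-r_hat[1], r_hat[0]])          # tangential = radial rotated by 90 deg
    f_radial = f_meas @ r_hat                        # force component along the estimated radius
    v_cmd = v_des * t_hat - k_f * f_radial * r_hat   # follow the arc, relax the radial force
    c_hat = c_hat + gamma * f_radial * r_hat         # crude adaptive update of the hinge estimate
    return v_cmd, c_hat
```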