
Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses about biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operational features. We consider both simulated and physical robots, with special consideration of the transfer between the two worlds.
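The evolutionary loop behind most work in this field can be summarized in a few lines. The sketch below is a minimal illustration, not any specific system from the chapter: the controller encoding, population size, mutation rate, and the stub fitness function are all assumptions.

```python
# Minimal evolutionary-robotics loop: evolve the weights of a tiny
# neural controller against a fitness function. All parameters and the
# stub fitness below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def controller(weights, sensors):
    # Single-layer controller mapping 4 sensor readings to 2 motor commands.
    return np.tanh(weights.reshape(2, 4) @ sensors)

def fitness(genotype):
    # Stub: in practice this would run the controller in a robot
    # simulator (or on hardware) and score the resulting behavior.
    sensors = rng.uniform(-1.0, 1.0, size=4)
    return float(controller(genotype, sensors).sum())

pop = rng.normal(size=(20, 8))                 # 20 genotypes, 8 weights each
for generation in range(50):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-5:]]       # keep the best 5
    mutants = elite[rng.integers(0, 5, size=15)] \
        + rng.normal(scale=0.1, size=(15, 8))  # mutated copies of the elite
    pop = np.vstack([elite, mutants])
print("best fitness in final generation:", scores.max())
```

In real experiments the fitness evaluation dominates the cost, which is why the transfer between simulated and physical evaluations, discussed throughout the chapter, matters so much.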

Evolved walking in an octopod

Author  Phil Husbands

Video ID : 372

Evolved walking behaviors on an octopod robot. Multiple gaits and obstacle avoidance can be observed. The behavior was evolved in a minimal simulation by Nick Jakobi at Sussex University and, as the video shows, transfers successfully to the real world.

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation techniques to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs, including materials and fabrication.

Snake robot climbs a tree

Author  Cornell Wright, Austin Buchan, Ben Brown, Jason Geist, Michael Schwerin, David Rollinson, Matthew Tesch, Howie Choset

Video ID : 393

From the Biorobotics Lab at Carnegie Mellon University, a snake robot (Snakebot) demonstrates how it can climb a tree and look around. Please keep in mind that this robot climbed a specific tree with a specific trunk width to a height of about 1 meter off the ground. The researchers who design, build, and program these robots still have much work to do before the robots can climb taller trees of various sizes and navigate over branches and wires.

Chapter 51 — Modeling and Control of Underwater Robots

Gianluca Antonelli, Thor I. Fossen and Dana R. Yoerger

This chapter deals with modeling and control of underwater robots. First, a brief introduction showing the constantly expanding role of marine robotics in oceanic engineering is given; this section also contains some historical background. Most of the following sections strongly overlap with the corresponding chapters presented in this handbook; hence, to avoid useless repetition, only those aspects peculiar to the underwater environment are discussed, assuming that the reader is already familiar with concepts such as fault-detection systems when discussing the corresponding underwater implementation. The modeling section focuses on a coefficient-based approach capturing the most relevant underwater dynamic effects. Two sections describing the sensing and actuation systems are then given. Autonomous underwater vehicles require the implementation of a mission control system as well as guidance and control algorithms. Underwater localization is also discussed. Underwater manipulation is then briefly approached. Fault detection and fault tolerance, together with the coordination control of multiple underwater vehicles, conclude the theoretical part of the chapter. Two final sections, reporting some successful applications and discussing future perspectives, close the chapter. The reader is referred to Chap. 25 for the design issues.
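As a rough illustration of what a coefficient-based model looks like, the sketch below simulates a single heave (depth) degree of freedom with added mass and linear-plus-quadratic damping. All coefficient values are assumptions made up for the example, not values from the chapter.

```python
# One-DOF heave model in coefficient form: rigid-body mass plus added
# mass, linear + quadratic damping, and a residual buoyancy force.
# Every numeric value here is an assumed, illustrative coefficient.
m, m_added = 5.0, 2.0        # rigid-body and added mass [kg]
d_lin, d_quad = 1.0, 8.0     # damping coefficients
buoyancy = 0.5               # residual buoyancy force along depth [N]

def heave_accel(w, u):
    """Vertical acceleration from heave velocity w and thrust u."""
    drag = d_lin * w + d_quad * w * abs(w)
    return (u + buoyancy - drag) / (m + m_added)

# Forward-Euler rollout of depth z under constant thrust.
z, w, dt = 0.0, 0.0, 0.01
for _ in range(1000):
    w += heave_accel(w, u=2.0) * dt
    z += w * dt
print(f"depth after 10 s: {z:.2f} m")
```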

Adaptive L1 depth control of an ROV

Author  Divine Maalouf, Vincent Creuze, Ahmed Chemori

Video ID : 267

This video illustrates the ability of the L1 adaptive controller to deal with parameter changes (buoyancy) and to reject disturbances (impacts, tether movements, etc.). This controller is implemented on a modified version of the AC-ROV underwater vehicle to perform depth regulation. This work was conducted at LIRMM (University Montpellier 2 / CNRS) in collaboration with Tecnalia France.
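The video's L1 architecture is not reproduced here; as a much simpler illustration of the same idea (absorbing an unknown buoyancy offset online), the sketch below combines a PD depth controller with a gradient-style adaptive bias estimate on an assumed one-DOF heave model. All gains and plant coefficients are made-up assumptions.

```python
# PD depth control plus an adaptive estimate of an unknown constant
# buoyancy force. This is NOT the L1 adaptive controller from the
# video, just a minimal certainty-equivalence sketch of adaptation.
m_total, d = 7.0, 3.0            # assumed mass (incl. added mass), damping
true_buoyancy = 1.5              # unknown to the controller [N]
kp, kd, gamma = 20.0, 8.0, 5.0   # PD gains and adaptation rate (assumed)

z, w, z_ref = 0.0, 0.0, 2.0      # depth, heave rate, depth setpoint [m]
b_hat, dt = 0.0, 0.005           # buoyancy estimate, time step
for _ in range(4000):            # 20 s of simulated regulation
    e = z_ref - z
    u = kp * e - kd * w - b_hat  # PD action plus adaptive compensation
    b_hat += -gamma * e * dt     # adaptation law drives e to zero
    w += (u + true_buoyancy - d * w) / m_total * dt
    z += w * dt
print(f"depth {z:.3f} m, buoyancy estimate {b_hat:.3f} N")
```

At steady state the tracking error satisfies e = (b_hat - b)/kp, so the adaptation law pushes b_hat toward the true offset and the error toward zero.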

Chapter 64 — Rehabilitation and Health Care Robotics

H.F. Machiel Van der Loos, David J. Reinkensmeyer and Eugenio Guglielmelli

The field of rehabilitation robotics considers robotic systems that 1) provide therapy for persons seeking to recover their physical, social, communication, or cognitive function, and/or that 2) assist persons who have a chronic disability to accomplish activities of daily living. This chapter will discuss these two main domains and provide descriptions of the major achievements of the field over its short history and chart out the challenges to come. Specifically, after providing background information on demographics (Sect. 64.1.2) and history (Sect. 64.1.3) of the field, Sect. 64.2 describes physical therapy and exercise training robots, and Sect. 64.3 describes robotic aids for people with disabilities. Section 64.4 then presents recent advances in smart prostheses and orthoses that are related to rehabilitation robotics. Finally, Sect. 64.5 provides an overview of recent work in diagnosis and monitoring for rehabilitation as well as other health-care issues. The reader is referred to Chap. 73 for cognitive rehabilitation robotics and to Chap. 65 for robotic smart home technologies, which are often considered assistive technologies for persons with disabilities. At the conclusion of the present chapter, the reader will be familiar with the history of rehabilitation robotics and its primary accomplishments, and will understand the challenges the field may face in the future as it seeks to improve health care and the well-being of persons with disabilities.

Lokomat

Author  Hocoma AG

Video ID : 503

The Lokomat was one of the first robotic gait-training devices and is now one of the most widely used robotic therapy devices.

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known in its abbreviated form as SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment. While navigating the environment, the robot seeks to acquire a map thereof, and at the same time it wishes to localize itself using its map. The SLAM problem can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot’s location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red-green-blue-depth (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.
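To make the first paradigm concrete, here is a minimal two-dimensional EKF-SLAM sketch: the state vector stacks the robot pose with the landmark positions, odometry drives the prediction, and range-bearing measurements drive the correction. The noise levels and the single hand-initialized landmark are assumptions for illustration.

```python
# Minimal 2-D EKF-SLAM with one range-bearing landmark. State is
# [rx, ry, rtheta, lx, ly]; noise levels are assumed for illustration.
import numpy as np

x = np.array([0.0, 0.0, 0.0, 2.0, 1.0])   # robot pose + landmark guess
P = np.diag([0.0, 0.0, 0.0, 1.0, 1.0])    # landmark initially uncertain
Q = np.diag([0.01, 0.01, 0.005])          # motion noise (assumed)
R = np.diag([0.04, 0.01])                 # range/bearing noise (assumed)

def predict(x, P, v, w, dt=0.1):
    rx, ry, th = x[:3]
    x = x.copy()
    x[0] += v * np.cos(th) * dt           # unicycle motion model
    x[1] += v * np.sin(th) * dt
    x[2] += w * dt
    F = np.eye(len(x))                    # motion Jacobian
    F[0, 2] = -v * np.sin(th) * dt
    F[1, 2] =  v * np.cos(th) * dt
    P = F @ P @ F.T
    P[:3, :3] += Q                        # noise only enters the pose block
    return x, P

def update(x, P, z):
    dx, dy = x[3] - x[0], x[4] - x[1]
    q = dx**2 + dy**2
    h = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])  # predicted meas.
    H = np.array([                        # measurement Jacobian
        [-dx/np.sqrt(q), -dy/np.sqrt(q),  0,  dx/np.sqrt(q), dy/np.sqrt(q)],
        [ dy/q,          -dx/q,          -1, -dy/q,          dx/q        ],
    ])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    innov = z - h
    innov[1] = (innov[1] + np.pi) % (2*np.pi) - np.pi  # wrap the bearing
    return x + K @ innov, (np.eye(len(x)) - K @ H) @ P

x, P = predict(x, P, v=1.0, w=0.1)
x, P = update(x, P, z=np.array([2.2, 0.45]))
print(np.round(x, 3))
```

The quadratic growth of P with the number of landmarks is part of what motivates the particle-filtering and graph-optimization paradigms.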

Extended Kalman-filter SLAM

Author  John Leonard

Video ID : 455

This video illustrates extended Kalman filter SLAM, as described in Sect. 46.3.1 of the Springer Handbook of Robotics, 2nd edn (2016). Reference: J.J. Leonard, H. Feder: A computationally efficient method for large-scale concurrent mapping and localization, Proc. Int. Symp. Robot. Res. (ISRR), Salt Lake City (2000), pp. 169–176.

Chapter 36 — Motion for Manipulation Tasks

James Kuffner and Jing Xiao

This chapter serves as an introduction to Part D by giving an overview of motion generation and control strategies in the context of robotic manipulation tasks. Control strategies ranging from abstract, high-level task specification down to fine-grained feedback at the task interface are considered. Important issues include modeling the interfaces between the robot and the environment at the different time scales of motion and incorporating sensing and feedback. Manipulation planning is introduced as an extension of the basic motion planning problem, which can be modeled as a hybrid system of continuous configuration spaces arising from the act of grasping and moving parts in the environment. The important example of assembly motion is discussed through the analysis of contact states and compliant motion control. Finally, methods aimed at integrating global planning with state feedback control are summarized.
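The hybrid structure mentioned above can be illustrated with a toy search over discrete modes, where transit motions regrasp a stationary object and transfer motions move the object under a fixed grasp. The placements, grasps, and feasibility table below are assumptions; real planners search continuous configuration spaces within each mode.

```python
# Toy manipulation graph: states are (placement, grasp) pairs, edges
# alternate transit (regrasp) and transfer (move object) motions.
# Placements, grasps, and the feasibility set are all assumed.
from collections import deque

placements = ["table", "shelf", "bin"]
grasps = ["top", "side"]
feasible = {("table", "top"), ("table", "side"),
            ("shelf", "side"), ("bin", "top")}

def neighbors(state):
    placement, grasp = state
    for g in grasps:                      # transit: object fixed, new grasp
        if g != grasp and (placement, g) in feasible:
            yield (placement, g)
    for p in placements:                  # transfer: grasp fixed, new place
        if p != placement and (p, grasp) in feasible:
            yield (p, grasp)

def plan(start, goal_placement):
    queue, parents = deque([start]), {start: None}
    while queue:
        s = queue.popleft()
        if s[0] == goal_placement:        # reconstruct the mode sequence
            path = []
            while s is not None:
                path.append(s)
                s = parents[s]
            return path[::-1]
        for n in neighbors(s):
            if n not in parents:
                parents[n] = s
                queue.append(n)
    return None

# Moving the object from the table to the bin forces a regrasp first.
print(plan(("table", "side"), "bin"))
```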

Autonomous continuum grasping

Author  Jing Xiao et al.

Video ID : 357

The video shows three example tasks: (1) autonomous grasping and lifting operation of an object, (2) autonomous obstacle avoidance operation, and (3) autonomous operation of grasping and lifting an object while avoiding another object. Note that the grasped object was lifted about 2 inches off the table.

Learning to place new objects

Author  Yun Jiang et al.

Video ID : 370

The video shows how a robot learns to place objects stably in preferred locations. Four different tasks are performed: 1) loading a refrigerator, 2) loading a bookshelf, 3) cleaning a table, and 4) loading dish racks.

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks that are necessary to guarantee a quality crop and that, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first one explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second one discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion on specific improvements to current technology and paths to commercialization.

A robot for harvesting sweet peppers in greenhouses

Author  Jochen Hemming, Wouter Bac, Bart van Tuijl, Ruud Barth, Eldert van Henten, Jan Bontsema, Erik Pekkeriet

Video ID : 304

This video shows robotic harvesting of sweet-pepper fruits in a commercial Dutch greenhouse in June 2014. The base of the robot consists of two carrier modules. The first carries the nine-degree-of-freedom manipulator developed specifically for this project, the control electronics, and the computers. On the sensor carrier module, two 5-megapixel color cameras (forming a small-baseline stereo setup) and a time-of-flight (TOF) camera are installed. A light grid is placed around the sensors to illuminate the scene. The sensor system is mounted on a motorized linear slide and can be moved horizontally into and out of the workspace of the manipulator. Machine-vision software localizes ripe fruits and obstacles in 3-D. Two different types of end-effectors were designed and tested. The fin-ray gripper features a combined grip-and-cut mechanism: it first grips the fruit and then cuts its peduncle. The lip-type end-effector first stabilizes the fruit with a suction cup, after which two rings enclose the fruit and cut the peduncle. Both end-effectors carry a miniature RGB camera and a TOF camera for refining the fruit position and determining the fruit pose. This robot demonstrator is one of the results of the EU project CROPS, Clever Robots for Crops (www.crops-robots.eu).
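The CROPS vision software itself is not public in this caption; purely as an assumed illustration of the first step such a pipeline might take, the sketch below segments red (ripe) fruit candidates by color thresholding before any 3-D localization. The file name, thresholds, and minimum blob size are all hypothetical.

```python
# Generic red-fruit segmentation sketch (NOT the CROPS pipeline):
# threshold in HSV, then keep large connected blobs as candidates.
import cv2

img = cv2.imread("greenhouse.jpg")            # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis, so combine two hue bands (assumed values).
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 500:              # minimum blob area (assumed)
        x, y, w, h = cv2.boundingRect(c)
        print(f"fruit candidate at ({x}, {y}), size {w}x{h}")
```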

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or a continuous incoming video, then the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.
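The triangulation step mentioned above reduces to a small linear system. The sketch below implements standard direct-linear-transform (DLT) triangulation; the camera matrices and the test point are assumed values for illustration.

```python
# Two-view DLT triangulation: each pixel observation contributes two
# linear constraints on the homogeneous 3-D point; solve by SVD.
import numpy as np

def triangulate(P1, P2, x1, x2):
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]      # null vector of the constraint matrix
    return X[:3] / X[3]              # dehomogenize

# Assumed intrinsics; second camera translated 1 m along x.
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0, 1.0])     # ground-truth test point
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]    # project into both views
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))           # ~ [0.2, -0.1, 4.0]
```

When the relative motion between the views is itself estimated from image data, its translation is only known up to scale, which is why the reconstruction is recovered up to a scale factor, as the abstract notes.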

LIBVISO: Visual odometry for intelligent vehicles

Author  Andreas Geiger

Video ID : 122

This video demonstrates the performance of a visual-odometry algorithm on the vehicle Annieway (a VW Passat). Visual odometry is the estimation of a video camera's 3-D motion and orientation, based purely on stereo vision in this case. The blue trajectory is the motion estimated by visual odometry, and the red trajectory is the ground truth provided by a high-precision OXTS RT3000 GPS+IMU system. The software is available from http://www.cvlibs.net/
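LIBVISO's stereo pipeline is not reproduced here; as a hedged monocular sketch of the same frame-to-frame idea, the snippet below tracks features, estimates the essential matrix with RANSAC, and recovers the camera rotation and up-to-scale translation. The intrinsics and file names are assumptions; stereo, as used in the video, is what resolves the translation scale.

```python
# Monocular frame-to-frame egomotion sketch (not LIBVISO): track
# corners, fit the essential matrix, recover R and unit translation t.
import cv2
import numpy as np

K = np.array([[718.9, 0.0, 607.2],     # assumed pinhole intrinsics
              [0.0, 718.9, 185.2],
              [0.0, 0.0, 1.0]])

img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Pyramidal Lucas-Kanade tracking of Shi-Tomasi corners.
p0 = cv2.goodFeaturesToTrack(img0, maxCorners=500,
                             qualityLevel=0.01, minDistance=10)
p1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None)
p0, p1 = p0[status.ravel() == 1], p1[status.ravel() == 1]

# Essential matrix with RANSAC, then cheirality check to choose R, t.
E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                  threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
print("R:\n", R, "\nunit t:", t.ravel())
```

Chaining such per-frame motions, with the scale fixed by stereo, is what produces a trajectory like the blue one in the video.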

Chapter 6 — Model Identification

John Hollerbach, Wisama Khalil and Maxime Gautier

This chapter discusses how to determine the kinematic parameters and the inertial parameters of robot manipulators. Both instances of model identification are cast into a common framework of least-squares parameter estimation, and are shown to have common numerical issues relating to the identifiability of parameters, adequacy of the measurement sets, and numerical robustness. These discussions are generic to any parameter estimation problem, and can be applied in other contexts.

For kinematic calibration, the main aim is to identify the geometric Denavit–Hartenberg (DH) parameters, although joint-based parameters relating to the sensing and transmission elements can also be identified. Endpoint sensing or endpoint constraints can provide equivalent calibration equations. By casting all calibration methods as closed-loop calibration, the calibration index categorizes methods in terms of how many equations per pose are generated.

Inertial parameters may be estimated through the execution of a trajectory while sensing one or more components of force/torque at a joint. Load estimation of a handheld object is simplest because of full mobility and full wrist force-torque sensing. For link inertial parameter estimation, restricted mobility of links nearer the base as well as sensing only the joint torque means that not all inertial parameters can be identified. Those that can be identified are those that affect joint torque, although they may appear in complicated linear combinations.
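Because joint torque is linear in the inertial and friction parameters, the estimation reduces to ordinary least squares over a stacked regressor. The one-joint sketch below illustrates this; the trajectory, the regressor columns, and the true parameter values are assumptions for the example.

```python
# Least-squares identification on a one-joint toy model: torque is
# linear in [inertia, gravity lever, viscous, Coulomb] parameters, so
# tau = W @ phi and phi follows from ordinary least squares.
import numpy as np

rng = np.random.default_rng(1)
g = 9.81
t = np.linspace(0.0, 8.0, 800)             # assumed 8 s exciting trajectory
q = 0.8 * np.sin(2 * t) + 0.3 * np.sin(5 * t)
qd = np.gradient(q, t)                     # numerical differentiation
qdd = np.gradient(qd, t)

# Regressor columns: inertia, gravity, viscous and Coulomb friction.
W = np.column_stack([qdd, g * np.sin(q), qd, np.sign(qd)])
phi_true = np.array([0.12, 0.45, 0.06, 0.2])          # assumed ground truth
tau = W @ phi_true + 0.01 * rng.normal(size=len(t))   # noisy "measurements"

phi_hat, *_ = np.linalg.lstsq(W, tau, rcond=None)
print("identified parameters:", np.round(phi_hat, 3))
```

For a full manipulator the regressor is built from the robot's dynamic model, and only an identifiable base-parameter subset (or linear combinations, as noted above) can be recovered.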

Dynamic identification of Staubli TX40: Trajectory with load

Author  Maxime Gautier

Video ID : 481

This video shows a trajectory with a known payload mass of 4.5 kg attached to the end effector of an industrial Staubli TX40 manipulator. Joint position and current reference data are collected on this short (8 s) trajectory and used, together with data collected on a trajectory without load, to identify all the dynamic parameters of the links, load, and joint drive chains in a single global least-squares (LS) procedure. Details and results are given in: M. Gautier, S. Briot: Global identification of joint drive gains and dynamic parameters of robots, ASME J. Dyn. Syst. Meas. Control 136(5), 051025 (2014); doi:10.1115/1.4027506