Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation methods to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs, including materials and fabrication.

Snake robot in the water

Author  Shigeo Hirose

Video ID : 394

A snake-like robot swims in the water. Thanks to dust sealing and waterproofing, the robot can crawl on land with snake-like locomotion and sinuously swim in water. The robot is composed of compact modules with small passive wheels along the outer edges of their fins.

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known by its abbreviation SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment. While navigating, the robot seeks to acquire a map of its environment and, at the same time, to localize itself using that map. The SLAM problem can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot's location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red-green-blue-depth (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.
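To make the graph-optimization paradigm concrete, the following is a minimal sketch of a one-dimensional pose graph solved by linear least squares. This is an illustrative toy, not a method from the chapter; real SLAM systems optimize SE(2) or SE(3) poses with sparse nonlinear solvers.

```python
# Minimal sketch: a 1-D pose-graph SLAM example.
# Poses x0..x3 on a line, odometry constraints between consecutive poses,
# and one loop-closure constraint between x3 and x0. Solved by linear
# least squares with x0 fixed as the reference frame.
import numpy as np

# measurement list: (i, j, z) meaning x_j - x_i should equal z
edges = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9), (3, 0, -3.05)]
n = 4

A = np.zeros((len(edges) + 1, n))
b = np.zeros(len(edges) + 1)
for k, (i, j, z) in enumerate(edges):
    A[k, i] = -1.0
    A[k, j] = 1.0
    b[k] = z
A[-1, 0] = 1.0          # prior fixing x0 = 0 (removes the gauge freedom)
b[-1] = 0.0

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)                 # maximum-likelihood pose estimates under Gaussian noise
```

The loop closure (the last edge) slightly disagrees with the accumulated odometry, and the least-squares solution spreads that error over all poses, which is the essence of the graph-based formulation.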

Pose graph compression for laser-based SLAM

Author  Cyrill Stachniss

Video ID : 449

This video illustrates pose graph compression, a technique for achieving long-term SLAM, as discussed in Sect. 46.5, Springer Handbook of Robotics, 2nd edn (2016). Reference: H. Kretzschmar, C. Stachniss: Information-theoretic compression of pose graphs for laser-based SLAM, Int. J. Robot. Res. 31(11), 1219-1230 (2012).

Chapter 19 — Robot Hands

Claudio Melchiorri and Makoto Kaneko

Multifingered robot hands have the potential to achieve dexterous manipulation of objects by using rolling and sliding motions. This chapter addresses the design, actuation, sensing, and control of multifingered robot hands. From the design viewpoint, actuator implementation is strongly constrained by the limited space available at each joint. After a brief overview of anthropomorphic end-effectors and their dexterity in Sect. 19.1, various approaches to actuation are presented with their advantages and disadvantages in Sect. 19.2. The key classification is (1) remote versus built-in actuation and (2) the relationship between the number of joints and the number of actuators. In Sect. 19.3, actuators and sensors used for multifingered hands are described. In Sect. 19.4, modeling and control are introduced by considering both dynamic effects and friction. Applications and trends are given in Sect. 19.5. Finally, the chapter closes with conclusions and further reading.
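As an illustration of classification (2), the sketch below shows how a single tendon actuator can drive two joints through a moment-arm (coupling) matrix. The mapping tau = R^T f and all numerical values are assumptions chosen for illustration, not the chapter's model.

```python
# Illustrative sketch: a tendon-driven finger where one actuator drives two
# joints. Joint torques follow tau = R^T * f, with R the tendon moment-arm
# matrix (mapping joint velocities to tendon velocity).
import numpy as np

R = np.array([[0.010, 0.008]])   # assumed moment arms (m) of one tendon at joints 1 and 2
f = np.array([20.0])             # assumed tendon tension (N) from the single actuator

tau = R.T @ f                    # resulting joint torques (N*m)
print(tau)                       # fewer actuators than joints -> the torques are coupled
```

With fewer actuators than joints the torque directions are fixed by R, which is why underactuated hands rely on passive elements and contact to shape the grasp.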

DLR hand

Author  DLR Robotics and Mechatronics Center

Video ID : 768

A DLR hand

Chapter 60 — Disaster Robotics

Robin R. Murphy, Satoshi Tadokoro and Alexander Kleiner

Rescue robots have been used in at least 28 disasters in six countries since the first deployment to the 9/11 World Trade Center collapse. All types of robots have been used (land, sea, and aerial) and for all phases of a disaster (prevention, response, and recovery). This chapter will cover the basic characteristics of disasters and their impact on robotic design, and describe the robots actually used in disasters to date, with a special focus on Fukushima Daiichi, which is providing a rich proving ground for robotics. The chapter covers promising robot designs (e.g., snakes, legged locomotion) and concepts (e.g., robot teams or swarms, sensor networks), as well as progress and open issues in autonomy. Methods of evaluation and benchmarking for rescue robotics are discussed, and the chapter concludes with a discussion of the fundamental problems and open issues facing rescue robotics and its evolution from an interesting idea to widespread adoption.

Assistive mapping during teleoperation

Author  Alexander Kleiner, Christian Dornhege, Andreas Ciossek

Video ID : 140

This video shows a commercial mapping system that was developed by the University of Freiburg (A. Kleiner and C. Dornhege) and telerob GmbH (A. Ciossek) in Germany. The video first shows the physical integration of the mapping system on the telemax bomb-disposal robot. Then, the real-time output of the mapping system, superimposed on the video from the robot's camera, is shown.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

State-representation learning for robotics

Author  Rico Jonschkowski, Oliver Brock

Video ID : 670

State-representation learning for robotics using prior knowledge about interacting with the physical world.

Chapter 13 — Behavior-Based Systems

François Michaud and Monica Nicolescu

Nature is filled with examples of autonomous creatures capable of dealing with the diversity, unpredictability, and rapidly changing conditions of the real world. Such creatures must make decisions and take actions based on incomplete perception, under time constraints, with limited knowledge about the world, limited cognition, reasoning, and physical capabilities, in uncontrolled conditions and with very limited cues about the intent of others. Consequently, one way of evaluating intelligence is based on the creature’s ability to make the most of what it has available to handle the complexities of the real world. The main objective of this chapter is to explain behavior-based systems and their use in autonomous control problems and applications. The chapter is organized as follows. Section 13.1 overviews robot control, introducing behavior-based systems in relation to other established approaches to robot control. Section 13.2 follows by outlining the basic principles of behavior-based systems that make them distinct from other types of robot control architectures. The concept of basis behaviors, the means of modularizing behavior-based systems, is presented in Sect. 13.3. Section 13.4 describes how behaviors are used as building blocks for creating representations for use by behavior-based systems, enabling the robot to reason about the world and about itself in that world. Section 13.5 presents several different classes of learning methods for behavior-based systems, validated on single-robot and multirobot systems. Section 13.6 provides an overview of various robotics problems and application domains that have successfully been addressed or are currently being studied with behavior-based control. Finally, Sect. 13.7 concludes the chapter.
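The toy sketch below illustrates one common arbitration scheme for behavior-based control: priority-based selection among concurrently defined behaviors. The behavior names and sensor fields are hypothetical and not taken from the chapter.

```python
# Minimal sketch of priority-based behavior arbitration (illustrative only).
def avoid_obstacle(sensors):
    if sensors["front_distance"] < 0.3:            # obstacle close ahead
        return {"forward": 0.0, "turn": 1.0}       # stop and turn away
    return None                                    # behavior not applicable

def seek_goal(sensors):
    return {"forward": 0.5, "turn": 0.2 * sensors["goal_bearing"]}

# Behaviors ordered from highest to lowest priority.
behaviors = [avoid_obstacle, seek_goal]

def arbitrate(sensors):
    for behavior in behaviors:                     # first applicable behavior wins
        command = behavior(sensors)
        if command is not None:
            return command
    return {"forward": 0.0, "turn": 0.0}           # default: stay still

print(arbitrate({"front_distance": 1.2, "goal_bearing": -0.4}))
```

Each behavior maps current sensing directly to a command, and arbitration, rather than a central planner, decides which one controls the robot at any instant.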

Natural interaction design of a humanoid robot

Author  François Michaud

Video ID : 418

Demonstration of the use of HBBA, a hybrid behavior-based architecture, to implement three interactional capabilities on IRL-1. Reference: F. Ferland, D. Létourneau, M.-A. Legault, M. Lauria, F. Michaud: Natural interaction design of a humanoid robot, J. Human-Robot Interact. 1(2), 118-134 (2012).

Chapter 55 — Space Robotics

Kazuya Yoshida, Brian Wilcox, Gerd Hirzinger and Roberto Lampariello

In the space community, any unmanned spacecraft can be called a robotic spacecraft. However, space robots are considered to be more capable devices that can facilitate manipulation, assembly, or servicing functions in orbit as assistants to astronauts, or extend the areas and abilities of exploration on remote planets as surrogates for human explorers.

In this chapter, a concise digest of the historical overview and technical advances of two distinct types of space robotic systems, orbital robots and surface robots, is provided. In particular, Sect. 55.1 describes orbital robots, and Sect. 55.2 describes surface robots. In Sect. 55.3, the mathematical modeling of the dynamics and control using reference equations is discussed. Finally, advanced topics for future space exploration missions are addressed in Sect. 55.4.
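As a flavor of the free-floating dynamics treated in Sect. 55.3, the sketch below uses conservation of linear momentum in an assumed two-body model to estimate how far an unactuated base drifts when the arm is moved. The masses and displacement are illustrative values, not figures from the chapter.

```python
# Back-of-the-envelope sketch (assumed two-body model): with the base
# unactuated, total linear momentum is conserved, so moving the manipulator
# shifts the base in the opposite direction about the common center of mass.
m_base, m_arm = 900.0, 100.0           # kg (assumed masses)
arm_displacement = 1.2                 # m, arm center of mass moved relative to the base

# Zero total momentum => the system center of mass stays fixed:
#   m_base * dx_base + m_arm * (dx_base + arm_displacement) = 0
dx_base = -m_arm * arm_displacement / (m_base + m_arm)
print(dx_base)                         # base drifts about -0.12 m
```

This coupling between arm motion and base motion is what the generalized, free-floating formulations in the chapter account for when planning and controlling orbital manipulators.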

DLR DEOS demonstration mission simulation

Author  Roberto Lampariello, Gerd Hirzinger

Video ID : 339

This video simulation shows an intended task in DLR's DEOS project for grasping an uncooperative, tumbling target satellite (left) by means of a free-flying robot (right, servicer satellite and robot manipulator). The task consists of approaching a predefined point on the target with the robot end-effector, tracking the same point while homing in on it, closing the grasp, and stabilizing the relative motion between the two spacecraft. Following this, the robot performs a berthing task to secure the target in a dedicated docking port on the servicer. The servicer's GNC system is switched off during the entire grasping maneuver, giving rise to free-floating dynamic behavior of the manipulator. The complete robot trajectory is provided by a motion planner in order to guarantee feasibility with respect to motion constraints, such as the field of view of the end-effector camera.

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operation features. We consider both simulated and physical robots with special consideration to the transfer between the two worlds.
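As a minimal illustration of the evolutionary loop underlying such methods, the sketch below evolves a small parameter vector (standing in for neural-network controller weights) with truncation selection and Gaussian mutation. The fitness function is a placeholder, not an example from the chapter.

```python
# Minimal sketch of an evolutionary loop over controller parameters.
import numpy as np

rng = np.random.default_rng(0)

def fitness(weights):
    # Placeholder for "run the controller in simulation and score the behavior".
    return -np.sum((weights - 0.5) ** 2)

population = rng.normal(size=(20, 8))          # 20 genotypes, 8 parameters each
for generation in range(100):
    scores = np.array([fitness(w) for w in population])
    parents = population[np.argsort(scores)[-5:]]          # keep the 5 best
    children = np.repeat(parents, 4, axis=0)               # 5 parents x 4 copies
    children += 0.1 * rng.normal(size=children.shape)      # Gaussian mutation
    population = children

best = population[np.argmax([fitness(w) for w in population])]
print(best)   # converges toward the optimum of the stand-in fitness
```

In practice the fitness evaluation runs the controller on a simulated or physical robot, which is where the transfer question discussed in the chapter arises.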

Evolved group coordination

Author  Phil Husbands

Video ID : 376

Identical evolved robots are required to coordinate by coming together and moving off in the same direction. No roles are pre-assigned: the robots must evolve to coordinate such that one robot takes on the role of leader and the others follow. Only minimal sensing (IR proximity sensing) is available, and there are no dedicated communication channels. The robot neural-network controllers are evolved using a minimal simulation and, as can be seen, they successfully transfer to reality. Work by Matt Quinn, Giles Mayley, Linc Smith, and Phil Husbands at Sussex University.

Chapter 6 — Model Identification

John Hollerbach, Wisama Khalil and Maxime Gautier

This chapter discusses how to determine the kinematic parameters and the inertial parameters of robot manipulators. Both instances of model identification are cast into a common framework of least-squares parameter estimation, and are shown to have common numerical issues relating to the identifiability of parameters, adequacy of the measurement sets, and numerical robustness. These discussions are generic to any parameter estimation problem, and can be applied in other contexts.

For kinematic calibration, the main aim is to identify the geometric Denavit–Hartenberg (DH) parameters, although joint-based parameters relating to the sensing and transmission elements can also be identified. Endpoint sensing or endpoint constraints can provide equivalent calibration equations. By casting all calibration methods as closed-loop calibration, the calibration index categorizes methods in terms of how many equations per pose are generated.
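The least-squares structure of kinematic calibration can be illustrated with a planar two-link arm whose link-length errors are identified from end-point measurements. The example below is an assumed toy, not the chapter's DH formulation.

```python
# Hedged sketch of least-squares kinematic calibration for a planar 2-link arm:
# identify errors in the two nominal link lengths from measured end-point
# positions at several joint configurations.
import numpy as np

l_nominal = np.array([0.30, 0.25])             # nominal link lengths (m)
l_true = l_nominal + np.array([0.004, -0.003]) # "unknown" true lengths

def endpoint(lengths, q):
    q1, q2 = q
    return np.array([lengths[0]*np.cos(q1) + lengths[1]*np.cos(q1+q2),
                     lengths[0]*np.sin(q1) + lengths[1]*np.sin(q1+q2)])

poses = [np.array([0.2, 0.4]), np.array([1.0, -0.6]), np.array([-0.8, 1.2])]

# Stack the regressor: the end-point position is linear in the link lengths.
A, b = [], []
for q in poses:
    q1, q2 = q
    A.append([[np.cos(q1), np.cos(q1+q2)],
              [np.sin(q1), np.sin(q1+q2)]])
    b.append(endpoint(l_true, q) - endpoint(l_nominal, q))
A = np.vstack(A); b = np.hstack(b)

delta_l, *_ = np.linalg.lstsq(A, b, rcond=None)
print(l_nominal + delta_l)   # recovers the true link lengths
```

Each measured pose contributes two equations here, which is the kind of counting the calibration index formalizes for general closed-loop calibration methods.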

Inertial parameters may be estimated through the execution of a trajectory while sensing one or more components of force/torque at a joint. Load estimation of a handheld object is simplest because of full mobility and full wrist force-torque sensing. For link inertial parameter estimation, restricted mobility of links nearer the base as well as sensing only the joint torque means that not all inertial parameters can be identified. Those that can be identified are those that affect joint torque, although they may appear in complicated linear combinations.
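Because joint torque is linear in the inertial parameters, identification reduces to stacking a regressor over an exciting trajectory and solving a least-squares problem. The sketch below does this for an assumed single-link example; the parameters and trajectory are illustrative, not from the chapter.

```python
# Hedged sketch (assumed 1-DOF example): joint torque is linear in the
# inertial parameters, tau = Y(q, qd, qdd) * theta. For a single link
# rotating about a horizontal joint axis,
#   tau = J*qdd + (m*c)*g*cos(q),
# so theta = [J, m*c] (inertia about the joint and first moment) is
# identifiable from torque measurements along an exciting trajectory.
import numpy as np

g = 9.81
theta_true = np.array([0.12, 0.35])            # [J (kg m^2), m*c (kg m)]

t = np.linspace(0.0, 2.0, 200)
q = 0.8*np.sin(2.0*t) + 0.3*np.sin(5.0*t)      # exciting trajectory
qdd = -0.8*4.0*np.sin(2.0*t) - 0.3*25.0*np.sin(5.0*t)

Y = np.column_stack([qdd, g*np.cos(q)])        # regressor matrix
tau = Y @ theta_true + 0.01*np.random.default_rng(1).normal(size=t.size)

theta_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)
print(theta_hat)                               # close to [0.12, 0.35]
```

For multi-link arms the same construction applies, but rank deficiency of the stacked regressor is what limits identification to the linear combinations of parameters that actually affect joint torque.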

Dynamic identification of Kuka KR270: Trajectory without load

Author  Maxime Gautier

Video ID : 486

This video shows a trajectory without load used to identify the dynamic parameters of the links, load, joint drive gains, and gravity compensator of a heavy industrial Kuka KR 270 manipulator. Details and results are given in the paper: A. Jubien, M. Gautier: Global identification of spring balancer, dynamic parameters and drive gains of heavy industrial robots, IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS), Tokyo (2013) pp. 1355-1360.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Mobile robot helper

Author  Kazuhiro Kosuge, Manabu Sato, Norihide Kazamura

Video ID : 788

The mobile robot helper, named Mr. Helper, has two 7-DOF arms equipped with force/torque sensors. It helps people move objects using force/torque sensing and an impedance control system.
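As a rough illustration of the impedance-control idea behind such helper robots, the sketch below renders the arm as a virtual mass-damper so that a measured human force produces compliant motion. The gains and force profile are assumptions, not values from the video.

```python
# Minimal impedance-control sketch along one axis: the commanded motion x
# satisfies M*xdd + D*xd = F_human (a virtual mass-damper, no stiffness term).
M, D = 5.0, 20.0            # assumed virtual mass (kg) and damping (N s/m)
dt = 0.01
x, xd = 0.0, 0.0

for k in range(300):
    f_human = 10.0 if k < 150 else 0.0        # person pushes for 1.5 s, then releases
    xdd = (f_human - D*xd) / M                # impedance law
    xd += xdd*dt
    x += xd*dt

print(x, xd)   # the object moves while pushed and comes to rest when released
```

Shaping M and D lets the robot feel light and compliant to the human partner while still damping out unwanted oscillations.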