
Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs, including materials and fabrication.

Ichthus

Author  Gi-Hun Yang, Kyung-Sik Kim, Sang-Hyo Lee, Chullhee Cho, Youngsun Ryuh

Video ID : 432

This video captures a stage in the development of ‘Ichthus’, a robotic fish developed at KITECH for use in water-quality sensing systems. Ichthus uses a 3-DOF serial-link mechanism for its propulsion.
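
As a rough illustration of how such a serial-link tail can be driven, the sketch below generates a head-to-tail traveling wave across three joints. The amplitudes, tail-beat frequency, and phase lag are hypothetical and are not taken from the Ichthus controller.

```python
import numpy as np

# Hypothetical parameters for a 3-joint undulating tail (not Ichthus values).
amplitudes = np.radians([10.0, 15.0, 20.0])   # per-joint amplitude (rad)
freq_hz = 1.5                                 # tail-beat frequency (Hz)
phase_lag = np.radians(60.0)                  # phase lag between adjacent joints

def joint_angles(t):
    """Joint angles producing a head-to-tail traveling wave at time t (s)."""
    k = np.arange(3)                          # joint index, head to tail
    return amplitudes * np.sin(2 * np.pi * freq_hz * t - k * phase_lag)

print(joint_angles(0.1))                      # commanded angles at t = 0.1 s
```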

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output by several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks necessary to guarantee a quality crop; these tasks, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first one explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second one discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion on specific improvements to current technology and paths to commercialization.

A robot for harvesting sweet peppers in greenhouses

Author  Jochen Hemming, Wouter Bac, Bart van Tuijl, Ruud Barth, Eldert van Henten, Jan Bontsema, Erik Pekkeriet

Video ID : 304

This video shows robotic harvesting of sweet-pepper fruits in a commercial Dutch greenhouse in June 2014. The base of the robot consists of two carrier modules. The first carries the manipulator (nine degrees of freedom), specifically developed for this project, along with the control electronics and the computers. On the sensor carrier module, two 5-megapixel color cameras (forming a small-baseline stereo setup) and a time-of-flight (TOF) camera are installed. A light grid is placed around the sensors to illuminate the scene. The sensor system is mounted on a motorized linear slide and can be moved horizontally in and out of the workspace of the manipulator. Machine-vision software localizes ripe fruits and obstacles in 3-D. Two different types of end-effectors were designed and tested. The fin-ray gripper features a combined grip-and-cut mechanism: it first grips the fruit and then cuts its peduncle. The lip-type end-effector first stabilizes the fruit with a suction cup, after which two rings enclose the fruit and cut its peduncle. Both end-effectors carry a miniature RGB camera and a TOF camera for refining the fruit position and determining the fruit pose. This robot demonstrator is one of the results of the EU project CROPS, Clever Robots for Crops (www.crops-robots.eu).
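
As a rough illustration of the geometry behind localizing a fruit in 3-D from a small-baseline stereo pair, the sketch below back-projects a matched fruit centroid from a rectified image pair. The focal length, baseline, principal point, and pixel coordinates are hypothetical; the CROPS vision software itself is not reproduced here.

```python
import numpy as np

# Assumed intrinsics/extrinsics for a rectified small-baseline stereo rig.
f_px = 1400.0        # focal length in pixels (hypothetical)
baseline_m = 0.05    # stereo baseline in meters (hypothetical)

def triangulate(u_left, u_right, v, cx=1296.0, cy=972.0):
    """Back-project a matched fruit centroid into the left camera frame."""
    disparity = u_left - u_right            # horizontal disparity (pixels)
    z = f_px * baseline_m / disparity       # depth along the optical axis (m)
    x = (u_left - cx) * z / f_px            # lateral offset (m)
    y = (v - cy) * z / f_px                 # vertical offset (m)
    return np.array([x, y, z])

print(triangulate(1400.0, 1330.0, 1000.0))  # 3-D fruit position estimate
```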

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: what, how, when and who to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Demonstrations and reproduction of the task of juicing an orange

Author  Florent D'Halluin, Aude Billard

Video ID : 29

Human demonstrations of the task of juicing an orange, and reproductions by the robot in new situations where the objects are located in positions not seen in the demonstrations. URL: http://www.scholarpedia.org/article/Robot_learning_by_demonstration
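
A minimal sketch of the generalization idea visible in the video: express the demonstrated end-effector path in a frame attached to the object, then replay it relative to the object's new position. The waypoints and object positions below are hypothetical, and the actual system learns a statistical model from several demonstrations rather than replaying a single one.

```python
import numpy as np

# Hypothetical demonstrated waypoints (meters) and object positions.
demo_path = np.array([[0.40, 0.10, 0.30],
                      [0.45, 0.12, 0.20],
                      [0.50, 0.15, 0.10]])
orange_demo = np.array([0.50, 0.15, 0.05])    # object position during teaching
orange_new = np.array([0.30, -0.10, 0.05])    # object position at reproduction

relative_path = demo_path - orange_demo       # path in a frame attached to the object
reproduced_path = relative_path + orange_new  # same relative motion, new object location
print(reproduced_path)
```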

Chapter 43 — Telerobotics

Günter Niemeyer, Carsten Preusche, Stefano Stramigioli and Dongjun Lee

In this chapter we present an overview of the field of telerobotics with a focus on control aspects. To acknowledge some of the earliest contributions and motivations the field has provided to robotics in general, we begin with a brief historical perspective and discuss some of the challenging applications. Then, after introducing and classifying the various system architectures and control strategies, we emphasize bilateral control and force feedback. This particular area has seen intense research work in the pursuit of telepresence. We also examine some of the emerging efforts, extending telerobotic concepts to unconventional systems and applications. Finally, we suggest some further reading for a closer engagement with the field.

Semi-autonomous teleoperation of multiple UAVs: Passing a narrow gap

Author  Antonio Franchi, Paolo Robuffo Giordano

Video ID : 71

This video shows the bilateral teleoperation of a group of four quadrotor UAVs navigating in a cluttered environment. The human operator provides velocity-level motion commands and receives force-feedback information on the UAVs' interaction with the environment (e.g., the presence of obstacles and external disturbances).
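
A minimal sketch of the bilateral loop described above, assuming a simple proportional mapping: the operator's device displacement becomes a velocity command, and obstacle proximity is rendered back as a repulsive force. The gains, safety distance, and force law are hypothetical and do not reproduce the controller used in the video.

```python
import numpy as np

# Hypothetical gains and thresholds for a simple bilateral teleoperation loop.
k_vel = 0.8          # device displacement -> commanded velocity gain (1/s)
k_force = 2.0        # obstacle-proximity -> repulsive-force gain (assumed)
d_safe = 1.5         # distance (m) below which obstacles generate feedback

def teleop_step(device_disp, nearest_obstacle_dist):
    """Return (velocity command to the UAV group, force magnitude on the device)."""
    v_cmd = k_vel * device_disp
    if nearest_obstacle_dist < d_safe:
        # Force grows as the nearest obstacle gets closer, zero at d_safe.
        f_back = k_force * (1.0 / nearest_obstacle_dist - 1.0 / d_safe)
    else:
        f_back = 0.0
    return v_cmd, f_back

print(teleop_step(np.array([0.1, 0.0, 0.0]), nearest_obstacle_dist=0.8))
```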

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known in its abbreviated form as SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment. While navigating the environment, the robot seeks to acquire a map thereof, and at the same time it wishes to localize itself using its map. The SLAM problem can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot’s location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red-green-blue-depth (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.
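
As a minimal illustration of the graph-optimization paradigm, the sketch below builds a 1-D pose graph with three odometry edges and one loop closure and solves it by weighted least squares. The measurements and weights are made up for illustration; a real SLAM back-end works on 2-D or 3-D poses and relinearizes iteratively.

```python
import numpy as np

# Hypothetical 1-D pose graph: 4 poses, odometry edges between consecutive
# poses, and one loop-closure edge 0 -> 3 that contradicts the odometry chain.
edges = [            # (i, j, measured displacement x_j - x_i, information weight)
    (0, 1, 1.1, 1.0),
    (1, 2, 1.0, 1.0),
    (2, 3, 1.2, 1.0),
    (0, 3, 3.0, 2.0),                   # loop closure
]
n_poses = 4
A = np.zeros((len(edges) + 1, n_poses))
b = np.zeros(len(edges) + 1)
for k, (i, j, z, w) in enumerate(edges):
    # Each edge contributes one weighted linear constraint x_j - x_i = z.
    A[k, i], A[k, j], b[k] = -np.sqrt(w), np.sqrt(w), np.sqrt(w) * z
A[-1, 0], b[-1] = 1.0, 0.0              # anchor the first pose at the origin
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)                                # optimized poses, pulled toward the loop closure
```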

Treemap: An O(log n) algorithm for indoor simultaneous localization and mapping

Author  Udo Frese

Video ID : 441

This video provides an illustration of graph-based SLAM, described in Sect. 46.3.3, Springer Handbook of Robotics, 2nd edn (2016). Reference: U. Frese: Treemap: An O(log n) algorithm for indoor simultaneous localization and mapping, Auton. Robot. 21(2), 103–122 (2006).

Chapter 62 — Intelligent Vehicles

Alberto Broggi, Alex Zelinsky, Ümit Özgüner and Christian Laugier

This chapter describes the emerging robotics application field of intelligent vehicles – motor vehicles that have autonomous functions and capabilities. The chapter is organized as follows. Section 62.1 provides a motivation for why the development of intelligent vehicles is important, a brief history of the field, and the potential benefits of the technology. Section 62.2 describes the technologies that enable intelligent vehicles to sense vehicle, environment, and driver state, work with digital maps and satellite navigation, and communicate with intelligent transportation infrastructure. Section 62.3 describes the challenges and solutions associated with road scene understanding – a key capability for all intelligent vehicles. Section 62.4 describes advanced driver assistance systems, which use the robotics and sensing technologies described earlier to create new safety and convenience systems for motor vehicles, such as collision avoidance, lane keeping, and parking assistance. Section 62.5 describes driver monitoring technologies that are being developed to mitigate driver fatigue, inattention, and impairment. Section 62.6 describes fully autonomous intelligent vehicle systems that have been developed and deployed. The chapter is concluded in Sect. 62.7 with a discussion of future prospects, while Sect. 62.8 provides references to further reading and additional resources.

Motion prediction using the Bayesian-occupancy-filter approach (Inria)

Author  Christian Laugier, E-Motion Team

Video ID : 420

This video illustrates the prediction capabilities of the Bayesian-occupancy-filter approach, which maintains an updated estimate of the relative positions and velocities of an autonomous vehicle and of a detected-and-tracked moving obstacle (e.g., a pedestrian in the video). The approach still works despite temporary obstructions. The method was patented in 2005 and has been commercialized since then. More details are given in [62.60].
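
A minimal 1-D sketch of the predict/update cycle behind such a filter, under strongly simplifying assumptions: occupancy probabilities are shifted according to an estimated obstacle velocity and then corrected with a single detection via a per-cell Bayes update. The grid size, velocity, and sensor model are hypothetical; the filter referenced in [62.60] is considerably richer, operating on a 2-D grid and estimating cell velocities as well.

```python
import numpy as np

# Hypothetical 1-D occupancy grid with one tracked obstacle.
n_cells = 20
occ = np.full(n_cells, 0.05)        # prior occupancy probability per cell
occ[5] = 0.9                        # obstacle currently believed near cell 5
v_cells_per_step = 1                # estimated obstacle velocity (cells/step)

def predict(occ, v):
    """Shift the occupancy belief according to the estimated velocity."""
    pred = np.roll(occ, v)
    pred[:v] = 0.05                 # cells entering the grid get the prior
    return pred

def update(occ, hit_cell, p_hit_occ=0.9, p_hit_free=0.1):
    """Per-cell Bayes update for one detection at hit_cell."""
    idx = np.arange(len(occ))
    l_occ = np.where(idx == hit_cell, p_hit_occ, 1 - p_hit_occ)    # P(z | occupied)
    l_free = np.where(idx == hit_cell, p_hit_free, 1 - p_hit_free) # P(z | free)
    post = l_occ * occ
    return post / (post + l_free * (1 - occ))

occ = update(predict(occ, v_cells_per_step), hit_cell=6)
print(np.argmax(occ), occ.max())    # the belief follows the moving obstacle
```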

Chapter 20 — Snake-Like and Continuum Robots

Ian D. Walker, Howie Choset and Gregory S. Chirikjian

This chapter provides an overview of the state of the art of snake-like (backbones comprised of many small links) and continuum (continuous backbone) robots. The history of each of these classes of robot is reviewed, focusing on key hardware developments. A review of the existing theory and algorithms for kinematics for both types of robot is presented, followed by a summary of modeling of locomotion for snake-like and continuum mechanisms.

Bimanual dissection

Author  Pierre Dupont

Video ID : 249

This 2011 video demonstrates bimanual, teleoperated tissue dissection using a CO2 laser and 1 mm-wide forceps at the Pediatric Cardiac Bioengineering Lab at Boston Children's Hospital.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Yale Aerial Manipulator - Dollar Grasp Lab

Author  Paul E. I. Pounds, Daniel R. Bersak, Aaron M. Dollar

Video ID : 656

Aaron Dollar's Aerial Manipulator integrates a gripper that is able to directly grasp and transport objects.

Chapter 19 — Robot Hands

Claudio Melchiorri and Makoto Kaneko

Multifingered robot hands have a potential capability for achieving dexterous manipulation of objects by using rolling and sliding motions. This chapter addresses design, actuation, sensing and control of multifingered robot hands. From the design viewpoint, they have a strong constraint in actuator implementation due to the space limitation in each joint. After a brief overview of anthropomorphic end-effectors and their dexterity in Sect. 19.1, various approaches for actuation are provided with their advantages and disadvantages in Sect. 19.2. The key classification is (1) remote actuation or built-in actuation and (2) the relationship between the number of joints and the number of actuators. In Sect. 19.3, actuators and sensors used for multifingered hands are described. In Sect. 19.4, modeling and control are introduced by considering both dynamic effects and friction. Applications and trends are given in Sect. 19.5. Finally, the chapter closes with conclusions and further reading.

A high-speed hand

Author  Ishikawa Komuro Lab

Video ID : 755

Ishikawa Komuro Lab's high-speed robot hand performing impressive acts of dexterity and skillful manipulation.

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known in its abbreviated form as SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment. While navigating the environment, the robot seeks to acquire a map thereof, and at the same time it wishes to localize itself using its map. The SLAM problem can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot’s location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red-green-blue-depth (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.

MonoSLAM: Real-time single camera SLAM

Author  Andrew Davison

Video ID : 453

This video describes MonoSLAM, an influential early real-time, single-camera, visual SLAM system, described in Sect. 46.4, Springer Handbook of Robotics, 2nd edn (2016). Reference: A.J. Davison, I. Reid, N. Molton, O. Stasse: MonoSLAM: Real-time single camera SLAM, IEEE Trans. Pattern Anal. Mach. Intel. 29(6), 1052–1067 (2007).