
Chapter 64 — Rehabilitation and Health Care Robotics

H.F. Machiel Van der Loos, David J. Reinkensmeyer and Eugenio Guglielmelli

The field of rehabilitation robotics considers robotic systems that 1) provide therapy for persons seeking to recover their physical, social, communication, or cognitive function, and/or that 2) assist persons who have a chronic disability to accomplish activities of daily living. This chapter will discuss these two main domains, describe the major achievements of the field over its short history, and chart out the challenges to come. Specifically, after providing background information on demographics (Sect. 64.1.2) and history (Sect. 64.1.3) of the field, Sect. 64.2 describes physical therapy and exercise training robots, and Sect. 64.3 describes robotic aids for people with disabilities. Section 64.4 then presents recent advances in smart prostheses and orthoses that are related to rehabilitation robotics. Finally, Sect. 64.5 provides an overview of recent work in diagnosis and monitoring for rehabilitation as well as other health-care issues. The reader is referred to Chap. 73 for cognitive rehabilitation robotics and to Chap. 65 for robotic smart home technologies, which are often considered assistive technologies for persons with disabilities. At the conclusion of the present chapter, the reader will be familiar with the history of rehabilitation robotics and its primary accomplishments, and will understand the challenges the field may face in the future as it seeks to improve health care and the well-being of persons with disabilities.

HandSOME exoskeleton

Author  Peter Lum

Video ID : 568

A stroke patient's ability to pick up objects is immediately improved after donning the HandSOME orthosis. Springs provide a customized assistance profile that increases the active range of motion with only minimal decreases in grip force.

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs including material and fabrication.

A single-motor-actuated, miniature, steerable jumping robot

Author  Jianguo Zhao, Jing Xu, Bingtuan Gao, Ning Xi, Fernando J. Cintron, Matt W. Mutka, Li Xiao

Video ID : 280

The contents of the video are divided into three parts. The first part illustrates the individual functions of the robot, such as jumping, self-righting, and steering. The second part demonstrates the robot's locomotion capability in indoor environments; scenarios such as jumping from the floor, jumping in an office, and jumping over stairs are included. The third part shows the robot's locomotion capability in outdoor environments, with experiments on uneven ground, ground with small gravel, and ground with grass.

Smooth vertical surface climbing with directional adhesion

Author  Sangbae Kim, Mark R. Cutkosky

Video ID : 389

Stickybot is a bioinspired robot that climbs smooth vertical surfaces such as those made of glass, plastic, and ceramic tile at 4 cm/s. The robot employs several design principles adapted from the gecko, including a hierarchy of compliant structures and directional adhesion. At the finest scale, the undersides of Stickybot’s toes are covered with arrays of small, angled polymer stalks.

Chapter 41 — Active Manipulation for Perception

Anna Petrovskaya and Kaijen Hsiao

This chapter covers perceptual methods in which manipulation is an integral part of perception. These methods face special challenges due to data sparsity and high costs of sensing actions. However, they can also succeed where other perceptual methods fail, for example, in poor-visibility conditions or for learning the physical properties of a scene.

The chapter focuses on specialized methods that have been developed for object localization, inference, planning, recognition, and modeling in active manipulation approaches. We conclude with a discussion of real-life applications and directions for future research.

Tactile exploration and modeling using shape primitives

Author  Francesco Mazzini

Video ID : 76

This video shows a robot performing tactile exploration and modeling of a lab-constructed scene that was designed to be similar to those found in interventions for underwater oil spills (leaking pipe). Representing the scene with geometric primitives enables the surface to be described using only sparse tactile data from joint encoders. The robot's movements are chosen to maximize the expected increase in knowledge about the scene.
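The last sentence describes a greedy information-gain criterion for choosing touches. A minimal sketch of that rule, using an invented discrete set of shape hypotheses and an assumed noise-free binary contact model (not Mazzini's actual system), is:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete belief."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def info_gain(belief, predicted):
    """Expected entropy reduction of a candidate touch.
    predicted[h] is the contact outcome (0/1) the touch would produce
    if hypothesis h were true, assuming a noise-free sensor."""
    gain = entropy(belief)
    for o in (0, 1):
        mass = sum(b for b, y in zip(belief, predicted) if y == o)
        if mass > 0:
            post = [b / mass for b, y in zip(belief, predicted) if y == o]
            gain -= mass * entropy(post)  # expected posterior entropy
    return gain

# Three shape hypotheses, equally likely; two candidate touches.
belief = [1/3, 1/3, 1/3]
touch_a = [1, 1, 1]   # all hypotheses predict contact: uninformative
touch_b = [1, 0, 0]   # separates hypothesis 0 from the others
best = max([touch_a, touch_b], key=lambda t: info_gain(belief, t))
print(best is touch_b)  # True: the discriminating touch is chosen
```

The same greedy argmax generalizes to continuous beliefs over the parameters of geometric primitives, where the expectation is taken over the predicted sensor readings.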

Chapter 53 — Multiple Mobile Robot Systems

Lynne E. Parker, Daniela Rus and Gaurav S. Sukhatme

Within the context of multiple mobile and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.

Distributed manipulation with mobile robots

Author  Bruce Donald, Jim Jennings, Daniela Rus

Video ID : 208

This video demonstrates cooperative robot pushing without explicit communication.

Chapter 36 — Motion for Manipulation Tasks

James Kuffner and Jing Xiao

This chapter serves as an introduction to Part D by giving an overview of motion generation and control strategies in the context of robotic manipulation tasks. Automatic control ranging from the abstract, high-level task specification down to fine-grained feedback at the task interface is considered. Some of the important issues include modeling of the interfaces between the robot and the environment at the different time scales of motion and incorporating sensing and feedback. Manipulation planning is introduced as an extension to the basic motion planning problem, which can be modeled as a hybrid system of continuous configuration spaces arising from the act of grasping and moving parts in the environment. The important example of assembly motion is discussed through the analysis of contact states and compliant motion control. Finally, methods aimed at integrating global planning with state feedback control are summarized.

Control pre-imaging for multifingered grasp synthesis

Author  Jefferson A. Coelho Jr. et al.

Video ID : 363

The video demonstrates sensory-motor control for multifingered manipulation. The first part of the video shows a top and a lateral grasp of rectangular blocks synthesized by the proposed controller. The second part shows dexterous manipulation tests, in which multiple fingers are controlled to walk stably over the surface of an object while maintaining the grasp.

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks necessary to guarantee a quality crop and, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first one explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second one discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion on specific improvements to current technology and paths to commercialization.

An automated mobile platform for orchard scanning and for soil, yield, and flower mapping

Author  James Underwood, Calvin Hung, Suchet Bargoti, Mark Calleija, Robert Fitch, Juan Nieto, Salah Sukkarieh

Video ID : 306

This video shows an end-to-end system for acquiring high-resolution information to support precision agriculture in almond orchards. The robot drives along the orchard rows autonomously, gathering LIDAR and camera data while passing the trees. Each tree is automatically identified and photographed. Image classification is performed on the photos to estimate flower and fruit densities per tree. The information can be stored in a database, compared throughout the season and from one year to the next, and mapped and displayed visually to assist growers in managing and optimizing production.

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or a continuous incoming video, then the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.
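As a small illustration of the two-view triangulation mentioned above, the following sketch recovers a 3-D point from two known camera projection matrices via the standard linear (DLT) method; the camera poses and the point are invented for the example. When the relative motion is itself estimated from images, the baseline, and hence the reconstruction, is known only up to scale.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: 2-D image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3-D point is the null vector of A,
    # i.e., the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize

def project(P, X):
    """Perspective projection of a 3-D point to image coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical calibrated cameras: identity pose, and a 1 m baseline.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true))  # noise-free points: exact recovery
```

With noisy image points the same least-squares formulation returns the algebraic best fit, which is typically refined by minimizing reprojection error.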

DTAM: Dense tracking and mapping in real-time

Author  Richard A. Newcombe, Steven J. Lovegrove, Andrew J. Davison

Video ID : 124

This video demonstrates the system described in the paper, "DTAM: Dense Tracking and Mapping in Real-Time" by Richard Newcombe, Steven Lovegrove and Andrew Davison for ICCV 2011.

Chapter 15 — Robot Learning

Jan Peters, Daniel D. Lee, Jens Kober, Duy Nguyen-Tuong, J. Andrew Bagnell and Stefan Schaal

Machine learning offers robotics a framework and set of tools for the design of sophisticated and hard-to-engineer behaviors; conversely, the challenges of robotic problems provide inspiration, impact, and validation for developments in robot learning. The relationship between disciplines has sufficient promise to be likened to that between physics and mathematics. In this chapter, we attempt to strengthen the links between the two research communities by providing a survey of work in robot learning for learning control and behavior generation in robots. We highlight both key challenges in robot learning as well as notable successes. We discuss how contributions tamed the complexity of the domain and study the role of algorithms, representations, and prior knowledge in achieving these successes. As a result, a particular focus of our chapter lies on model learning for control and robot reinforcement learning. We demonstrate how machine learning approaches may be profitably applied, and we note throughout open questions and the tremendous potential for future research.

Learning motor primitives

Author  Jens Kober, Jan Peters

Video ID : 355

The video shows recent success in robot learning for two basic motor tasks, namely, ball-in-a-cup and ball paddling. The video illustrates Section 15.3.5 (Policy Search) of the Springer Handbook of Robotics, 2nd edn (2016). Reference: J. Kober, J. Peters: Imitation and reinforcement learning - Practical algorithms for motor primitive learning in robotics, IEEE Robot. Autom. Mag. 17(2), 55-62 (2010)

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Learning dexterous grasps that generalize to novel objects by combining hand and contact models

Author  Marek Kopicki, Renaud Detry, Florian Schmidt, Christoph Borst, Rustam Stolkin, Jeremy Wyatt

Video ID : 650

We show how a robot learns grasps for high-DOF hands that generalize to novel objects, given as little as one demonstrated grasp. During grasp learning two types of probability density are learned that model the demonstrated grasp. The first density type (the contact model) models the relationship of an individual finger part to local surface features at its contact point. The second density type (the hand configuration model) models the whole hand configuration during the approach to grasp.
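The contact model described above is a probability density over local surface features observed at a finger part's contact point. As a hedged sketch of that idea (not the authors' actual implementation), such a density can be approximated by a Gaussian kernel density estimate over demonstration samples; the feature space and training data below are invented:

```python
import numpy as np

def kde_logpdf(x, samples, bandwidth=0.1):
    """Log-density of x under an isotropic Gaussian KDE on `samples`."""
    d = samples.shape[1]
    diffs = (samples - x) / bandwidth
    sq = np.sum(diffs**2, axis=1)
    # Log of each Gaussian kernel, combined stably with log-sum-exp.
    log_k = -0.5 * sq - 0.5 * d * np.log(2 * np.pi) - d * np.log(bandwidth)
    return np.logaddexp.reduce(log_k) - np.log(len(samples))

# Invented "contact model" training data: 2-D local surface features
# (e.g., curvature and normal angle) seen at a fingertip's contact points
# during a single demonstrated grasp.
rng = np.random.default_rng(0)
contacts = rng.normal(loc=[0.2, 1.0], scale=0.05, size=(200, 2))

near = kde_logpdf(np.array([0.2, 1.0]), contacts)   # demonstration-like
far = kde_logpdf(np.array([2.0, -1.0]), contacts)   # unlike anything seen
print(near > far)  # True: familiar local geometry scores higher
```

On a novel object, candidate finger placements can then be ranked by evaluating such densities at the object's local surface features, combined with the hand configuration model.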