Chapter 61 — Robot Surveillance and Security

Wendell H. Chun and Nikolaos Papanikolopoulos

This chapter introduces the foundations of surveillance and security robots for multiple military and civilian applications. The key environmental domains are mobile robots for ground, aerial, surface water, and underwater applications. Surveillance literally means to watch from above, while surveillance robots are used to monitor the behavior, activities, and other changing information that are gathered for the general purpose of managing, directing, or protecting one’s assets or position. In a practical sense, the term surveillance is taken to mean the act of observation from a distance, and security robots are commonly used to protect and safeguard a location, valuable assets, or personnel against danger, damage, loss, and crime. Surveillance is a proactive operation, while security is a defensive one. The construction of each type of robot is similar in nature, with a mobility component, sensor payload, communication system, and an operator control station.

After introducing the major robot components, this chapter focuses on the various applications. More specifically, Sect. 61.3 discusses the enabling technologies of mobile robot navigation, the various payload sensors used for surveillance or security applications, target detection and tracking algorithms, and the operator’s robot control console for the human–machine interface (HMI). Section 61.4 presents selected research activities relevant to surveillance and security, including automatic data processing of the payload sensors, automatic monitoring of human activities, facial recognition, and collaborative automatic target recognition (ATR). Finally, Sect. 61.5 discusses future directions in robot surveillance and security, gives some conclusions, and is followed by references.

Camera control from gaze

Author  Fabien Spindler

Video ID : 702

Visual-servoing techniques consist of using the data provided by one or several cameras in order to control the motion of a robotic security or surveillance system. A large variety of positioning or target tracking tasks can be implemented by controlling from one to all degrees of freedom of the system.
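As a rough illustration of the underlying control idea (not of the actual software shown in the video), the sketch below implements a minimal image-based visual-servoing law in Python/NumPy: the camera twist is computed as v = -lambda * pinv(L) * (s - s*), where L stacks the interaction matrices of a few point features. The feature coordinates, depths, and gain are invented for the example.

```python
# Minimal image-based visual-servoing (IBVS) sketch; all values are illustrative.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) observed at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) that drives the feature error to zero."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Four point features slightly displaced from their desired image locations.
current = [(0.12, 0.10), (-0.11, 0.09), (-0.10, -0.12), (0.11, -0.10)]
desired = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
print(ibvs_velocity(current, desired, depths=[1.0] * 4))
```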

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning, and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: when, what, who, and how to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.
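One representative family of techniques for modeling skills from several demonstrations encodes the data with a Gaussian mixture model and reproduces the motion by Gaussian mixture regression (GMR). The sketch below is a minimal, self-contained illustration of that idea with synthetic 1-D demonstrations; it is not the specific implementation discussed in the chapter, and the model size and data are invented.

```python
# GMM encoding of demonstrations and GMR reproduction; data are synthetic.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Five noisy 1-D demonstrations of a reaching profile, indexed by time.
t = np.linspace(0.0, 1.0, 100)
demos = [np.sin(np.pi * t / 2) + 0.02 * np.random.randn(t.size) for _ in range(5)]
data = np.column_stack([np.tile(t, len(demos)), np.concatenate(demos)])

# Encode the joint distribution p(t, x) with a Gaussian mixture model.
gmm = GaussianMixture(n_components=5, covariance_type="full").fit(data)

def gmr(query_t):
    """Gaussian mixture regression: E[x | t] under the fitted mixture."""
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    h = np.array([w[k] * norm.pdf(query_t, means[k, 0], np.sqrt(covs[k, 0, 0]))
                  for k in range(len(w))])
    h /= h.sum()
    cond = [means[k, 1] + covs[k, 1, 0] / covs[k, 0, 0] * (query_t - means[k, 0])
            for k in range(len(w))]
    return float(np.dot(h, cond))

reproduction = [gmr(ti) for ti in t]       # smooth reproduction of the demonstrated skill
print(reproduction[0], reproduction[-1])   # roughly 0 at the start, roughly 1 at the end
```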

Demonstrations and reproduction of the task of juicing an orange

Author  Florent D'Halluin, Aude Billard

Video ID : 29

Human demonstrations of the task of juicing an orange, and reproductions by the robot in new situations where the objects are located in positions not seen in the demonstrations. URL: http://www.scholarpedia.org/article/Robot_learning_by_demonstration

Chapter 9 — Force Control

Luigi Villani and Joris De Schutter

A fundamental requirement for the success of a manipulation task is the capability to handle the physical contact between a robot and the environment. Pure motion control turns out to be inadequate because the unavoidable modeling errors and uncertainties may cause a rise of the contact force, ultimately leading to an unstable behavior during the interaction, especially in the presence of rigid environments. Force feedback and force control become mandatory to achieve a robust and versatile behavior of a robotic system in poorly structured environments, as well as safe and dependable operation in the presence of humans. This chapter starts from the analysis of indirect force control strategies, conceived to keep the contact forces limited by ensuring a suitably compliant behavior of the end effector, without requiring an accurate model of the environment. Then the problem of modeling interaction tasks is analyzed, considering both the case of a rigid environment and the case of a compliant environment. For the specification of an interaction task, natural constraints set by the task geometry and artificial constraints set by the control strategy are established, with respect to suitable task frames. This formulation is the essential premise to the synthesis of hybrid force/motion control schemes.
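As a minimal illustration of the indirect force-control idea, the sketch below simulates a 1-DOF impedance law in contact with a stiff wall: the commanded position lies inside the wall, yet the resulting contact force stays bounded because the end effector behaves compliantly. All numerical values (mass, damping, stiffness, wall location) are illustrative, not taken from the chapter.

```python
# 1-DOF impedance control against a stiff wall; all parameters are illustrative.

# Desired impedance parameters (mass, damping, stiffness) of the end effector.
M, D, K = 1.0, 40.0, 400.0
k_env, x_wall = 5000.0, 0.10   # stiff environment (wall) located at x = 0.10 m
x_d = 0.12                     # commanded position deliberately inside the wall

x, dx, dt = 0.0, 0.0, 0.001
for _ in range(3000):          # simulate 3 s with semi-implicit Euler integration
    f_ext = -k_env * (x - x_wall) if x > x_wall else 0.0   # wall reaction force
    # Impedance dynamics: M*ddx + D*dx + K*(x - x_d) = f_ext  (x_d constant)
    ddx = (f_ext - D * dx - K * (x - x_d)) / M
    dx += ddx * dt
    x += dx * dt

print(f"steady-state position: {x:.4f} m")
print(f"contact force magnitude: {k_env * max(0.0, x - x_wall):.1f} N")
```

The steady-state force is set by the ratio of the desired stiffness K to the environment stiffness, which is exactly the bounded, compliant behavior the indirect force-control strategies above aim for.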

Experiments of spatial impedance control

Author  Fabrizio Caccavale, Ciro Natale, Bruno Siciliano, Luigi Villani

Video ID : 686

The video presents the results of an experimental study of impedance control schemes for a robot manipulator in contact with the environment. Six-DOF interaction tasks are considered that require the implementation of a spatial impedance described in terms of both its translational and its rotational parts. Two representations of end-effector orientation are adopted, namely, Euler angles and quaternions, and the implications of the choice of different orientation displacements are discussed. The controllers are tested on an industrial robot with an open control architecture in a number of case studies. This work was published in A. Casals, A.T. de Almeida (Eds.): Experimental Robotics V, Lect. Notes Control Inform. Sci. 232 (Springer, Berlin, Heidelberg 1998).

Chapter 54 — Industrial Robotics

Martin Hägele, Klas Nilsson, J. Norberto Pires and Rainer Bischoff

Much of the technology that makes robots reliable, human friendly, and adaptable for numerous applications has emerged from manufacturers of industrial robots. With an estimated installation base in 2014 of about 1.5 million units, some 171 000 new installations in that year, and an annual turnover of the robotics industry estimated at US$ 32 billion, industrial robots are by far the largest commercial application of robotics technology today.

The foundations for robot motion planning and control were initially developed with industrial applications in mind. These applications deserve special attention in order to understand the origin of robotics science and to appreciate the many unsolved problems that still prevent the wider use of robots in today’s agile manufacturing environments. In this chapter, we present a brief history and descriptions of typical industrial robotics applications, and at the same time we address current critical state-of-the-art technological developments. We show how robots with different mechanisms fit different applications and how applications are further enabled by the latest technologies, often adopted from technological fields outside manufacturing automation.

We will first present a brief historical introduction to industrial robotics with a selection of contemporary application examples which at the same time refer to a critical key technology. Then, the basic principles that are used in industrial robotics and a review of programming methods will be presented. We will also introduce the topic of system integration particularly from a data integration point of view. The chapter will be closed with an outlook based on a presentation of some unsolved problems that currently inhibit wider use of industrial robots.

SMErobotics Demonstrator D3 assembly with sensitive compliant robot arms

Author  Martin Haegele, Thilo Zimmermann, Björn Kahl

Video ID : 382

SMErobotics: Europe's leading robot manufacturers and research institutes have teamed up with the European Robotics Initiative for Strengthening the Competitiveness of SMEs in Manufacturing to make the vision of cognitive robotics a reality in a key segment of EU manufacturing. Funded by the European Union 7th Framework Programme under GA number 287787. Project runtime: 01.01.2012 - 30.06.2016. For a general introduction, please also watch the general SMErobotics project video (ID 260).

About this video:
Chapter 1: Introduction (0:00)
Chapter 2: Work cell description and configuration (00:29)
Chapter 3: Selection of the job (00:50)
Chapter 4: Preparation step (01:09)
Chapter 5: Riveting (01:44)
Chapter 6: Error handling with automatic solution (02:17)
Chapter 7: Finalise workflow (02:34)
Chapter 8: Statement (03:09)
Chapter 9: Outro (03:40)
Chapter 10: The Consortium (03:54)

For details, please visit: http://www.smerobotics.org/project/video-of-demonstrator-d3.html

Chapter 63 — Medical Robotics and Computer-Integrated Surgery

Russell H. Taylor, Arianna Menciassi, Gabor Fichtinger, Paolo Fiorini and Paolo Dario

The growth of medical robotics since the mid-1980s has been striking. From a few initial efforts in stereotactic brain surgery, orthopaedics, endoscopic surgery, microsurgery, and other areas, the field has expanded to include commercially marketed, clinically deployed systems, and a robust and exponentially expanding research community. This chapter will discuss some major themes and illustrate them with examples from current and past research. Further reading providing a more comprehensive review of this rapidly expanding field is suggested in Sect. 63.4.

Medical robots may be classified in many ways: by manipulator design (e.g., kinematics, actuation); by level of autonomy (e.g., preprogrammed versus teleoperation versus constrained cooperative control); by targeted anatomy or technique (e.g., cardiac, intravascular, percutaneous, laparoscopic, microsurgical); or by intended operating environment (e.g., in-scanner, conventional operating room). In this chapter, we have chosen to focus on the role of medical robots within the context of larger computer-integrated systems including presurgical planning, intraoperative execution, and postoperative assessment and follow-up.

First, we introduce basic concepts of computer-integrated surgery, discuss critical factors affecting the eventual deployment and acceptance of medical robots, and introduce the basic system paradigms of surgical computer-assisted planning, execution, monitoring, and assessment (surgical CAD/CAM) and surgical assistance. In subsequent sections, we provide an overview of the technology of medical robot systems and discuss examples of our basic system paradigms, with brief additional discussion topics of remote telesurgery and robotic surgical simulators. We conclude with some thoughts on future research directions and provide suggested further reading.

Da Vinci surgery on a grape

Author  Edward Hospital, Naperville, Illinois

Video ID : 823

The movie shows the peeling of a grape by using the robotic tools of the Da Vinci robot: Precision, dexterity and motion scaling are impressive.

Chapter 68 — Human Motion Reconstruction

Katsu Yamane and Wataru Takano

This chapter presents a set of techniques for reconstructing and understanding human motions measured using current motion capture technologies. We first review modeling and computation techniques for obtaining motion and force information from human motion data (Sect. 68.2). Here we show that kinematics and dynamics algorithms for articulated rigid bodies can be applied to human motion data processing, with help from models based on knowledge in anatomy and physiology. We then describe methods for analyzing human motions so that robots can segment and categorize different behaviors and use them as the basis for human motion understanding and communication (Sect. 68.3). These methods are based on statistical techniques widely used in linguistics. The two fields share the common goal of converting continuous and noisy signals into discrete symbols, and therefore it is natural to apply similar techniques. Finally, we introduce some application examples of human motion and models, ranging from simulated human control to humanoid robot motion synthesis.
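As a toy illustration of the statistical segmentation idea (assuming the hmmlearn package is available), the sketch below fits a hidden Markov model to a synthetic joint-angle stream and reads the decoded state sequence as discrete motion symbols. Real motion-capture data and the models discussed in the chapter are of course richer; the data, state count, and features here are invented.

```python
# HMM-based segmentation of a synthetic joint-angle stream; purely illustrative.
import numpy as np
from hmmlearn import hmm

# Synthetic 2-joint angle stream: idle, reach, and wave phases concatenated.
rng = np.random.default_rng(0)
idle = rng.normal([0.0, 0.0], 0.02, size=(100, 2))
reach = np.column_stack([np.linspace(0.0, 1.2, 100), np.linspace(0.0, 0.5, 100)])
reach += rng.normal(0.0, 0.02, size=(100, 2))
wave = np.column_stack([1.2 + 0.3 * np.sin(np.linspace(0, 6 * np.pi, 100)),
                        np.full(100, 0.5)])
wave += rng.normal(0.0, 0.02, size=(100, 2))
motion = np.vstack([idle, reach, wave])

# Fit a 3-state Gaussian HMM and decode one discrete symbol per frame;
# changes in the decoded state sequence mark candidate segment boundaries.
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(motion)
states = model.predict(motion)
print(states[:5], states[150:155], states[250:255])
```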

The Crystal Ball: Predicting future motions

Author  Katsu Yamane

Video ID : 764

This video shows a demonstration of The Crystal Ball, a system that predicts future motions based on a graphical motion model. The rightmost figure represents the current motion, while the other figures represent the predicted motions.

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite a challenging task. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies are informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Social learning applied to task execution

Author  Cynthia Breazeal

Video ID : 562

This is a video demonstration of the Leonardo robot integrating learning via tutelage, self-motivated learning, and preference learning to perform a tangram-like task. First, the robot learns a policy for how to operate a remote-control box to reveal key shapes needed for the next task, integrating self-motivated exploration with tutelage. The human can shape what the robot learns through a variety of social means. Once Leo has learned a policy, the robot begins the tangram task, which is to make a sailboat figure out of the colored blocks on the virtual workspace. During this interaction, the person has a preference for which block colors to use (yellow and blue), which he conveys through nonverbal means. The robot learns this preference rule from observing these nonverbal cues. During the task, the robot needs blocks of a certain shape and color that are not readily available on the workspace but can be accessed by operating the remote-control box to reveal those shapes. Leo invokes those recently learned policies to access those shapes and achieve the goal of making the sailboat figure.

Chapter 30 — Sonar Sensing

Lindsay Kleeman and Roman Kuc

Sonar or ultrasonic sensing uses the propagation of acoustic energy at frequencies higher than normal hearing to extract information from the environment. This chapter presents the fundamentals and physics of sonar sensing for object localization, landmark measurement, and classification in robotics applications. The sources of sonar artifacts are explained, along with how they can be dealt with. Different ultrasonic transducer technologies are outlined with their main characteristics highlighted.

Sonar systems are described that range in sophistication from low-cost threshold-based ranging modules to multitransducer multipulse configurations with associated signal processing requirements capable of accurate range and bearing measurement, interference rejection, motion compensation, and target classification. Continuous-transmission frequency-modulated (CTFM) systems are introduced and their ability to improve target sensitivity in the presence of noise is discussed. Various sonar ring designs that provide rapid surrounding environmental coverage are described in conjunction with mapping results. Finally, the chapter ends with a discussion of biomimetic sonar, which draws inspiration from animals such as bats and dolphins.
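As a minimal illustration of the simplest class of sonar systems mentioned above, the sketch below performs threshold-based ranging on a synthetic echo: the first received sample exceeding a threshold gives the time of flight, and the range follows from R = c t / 2. The sampling rate, threshold, and waveform are invented for the example.

```python
# Threshold-based sonar ranging on a synthetic echo; parameters are illustrative.
import numpy as np

C_AIR = 343.0            # approximate speed of sound in air at 20 degC, m/s
FS = 1_000_000           # assumed receiver sampling rate, Hz

def range_from_echo(echo, threshold):
    """Range (m) to the first echo sample whose magnitude exceeds the threshold."""
    hits = np.flatnonzero(np.abs(echo) > threshold)
    if hits.size == 0:
        return None                        # no target detected
    time_of_flight = hits[0] / FS          # seconds since pulse emission
    return C_AIR * time_of_flight / 2.0    # halve: sound travels out and back

# Synthetic echo: low-level noise plus a 40 kHz burst from a target at ~1 m.
t = np.arange(0, 0.01, 1.0 / FS)
echo = 0.01 * np.random.randn(t.size)
arrival = int(2 * 1.0 / C_AIR * FS)        # sample index of the 1 m echo
echo[arrival:arrival + 200] += 0.5 * np.sin(2 * np.pi * 40_000 * t[:200])

print(f"estimated range: {range_from_echo(echo, threshold=0.2):.3f} m")
```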

Side-looking sonar system traveling down a hallway (camera view)

Author  Roman Kuc

Video ID : 314

A camera view from a mobile robot sonar traveling down a hallway past a cinder-block wall and then along the wall, passing a doorway and a window. When scanned with side-looking sonar, the door jamb and window jamb form retro-reflectors that produce echo waveforms that are distinguishable from the cinder block surface.

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite a challenging task. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies are informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Playing triadic games with KASPAR

Author  Kerstin Dautenhahn

Video ID : 220

The video illustrates (using researchers taking the roles of children) the system developed by Joshua Wainer as part of his PhD research at University of Hertfordshire. In this study, KASPAR was developed to fully autonomously play games with pairs of children with autism. The robot provides encouragement, motivation and feedback, and 'joins in the game'. The system was evaluated in long-term studies with children with autism (J. Wainer et al. 2014). Results show that KASPAR encourages collaborative skills in children with autism.

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks that are necessary to guarantee a quality crop and that, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first one explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second one discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion on specific improvements to current technology and paths to commercialization.

VisualGPS – High accuracy localization for forestry machinery

Author  Juergen Rossmann, Michael Schluse, Arno Buecken, Christian Schlette, Markus Emde

Video ID : 96

Developments in space robotics continue to find their way into our everyday lives. These advances include, for instance, novel methods that determine one's position with higher localization accuracy than conventional GPS systems. The example here is the "VisualGPS" approach, which helps to estimate the position of forestry machinery, such as harvesters in the woods, with high accuracy. For "VisualGPS", harvesters are equipped with laser scanners. The sensors scan the surrounding area to generate landmarks from the tree positions. The tree positions are combined into a local, single-tree map. By comparing the local, single-tree map with a map generated from aerial survey data, the current machine position can be calculated with an accuracy of 0.5 m.
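As a rough sketch of the map-matching step (not the actual VisualGPS implementation), the code below estimates the 2-D rigid transform that aligns locally observed tree positions with the corresponding trees in a global, aerial-survey map using a least-squares (SVD-based) fit. The tree coordinates are invented, and the tree-to-tree correspondences are assumed known; a real system must first establish them.

```python
# 2-D rigid alignment of a local tree map to a global tree map; data are invented.
import numpy as np

def align_2d(local_pts, global_pts):
    """Least-squares rotation R and translation t with global ~= R @ local + t."""
    mu_l, mu_g = local_pts.mean(axis=0), global_pts.mean(axis=0)
    H = (local_pts - mu_l).T @ (global_pts - mu_g)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # keep a proper rotation (no reflection)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_g - R @ mu_l
    return R, t

# Trees in the global map, and the same trees as seen from the harvester
# (expressed in the machine frame: rotated, translated, slightly noisy).
global_trees = np.array([[10.0, 5.0], [12.0, 9.0], [15.0, 4.0], [18.0, 8.0]])
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
local_trees = (global_trees - [14.0, 6.5]) @ R_true + 0.05 * np.random.randn(4, 2)

R, t = align_2d(local_trees, global_trees)
print("estimated machine position in the global map:", t)  # ~[14.0, 6.5] up to noise
```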