Nov 18, 2015
Banatao Auditorium, UC Berkeley
Welcome to the Algorithms for HRI Workshop! This full-day event will bring together researchers interested in the computational aspects of enabling robots to interact with and learn from humans. The tentative program is below, with links to speaker abstracts and bios, followed by information on parking and getting to the workshop, wifi, and nearby restaurant recommendations.
Attendance is free of charge, but please register online.
Program:
9:00-9:10 Opening remarks from John Canny and Anca Dragan
9:10-10:20 Session I: Learning
9:10-9:20 Stuart Russell (UC Berkeley): Cooperative inverse reinforcement learning
9:20-9:40 Yisong Yue (Caltech): The Dueling Bandits Problem
9:40-10:00 Manuel Lopes (INRIA): Interactive Learning for Cooperative Human-Robot Tasks
10:00-10:10 Michael Laskey (UC Berkeley): Reducing Human Burden in Robot Learning From Demonstration
10:10-10:20 Animesh Garg, Sanjay Krishnan (UC Berkeley): Unsupervised Task Segmentation For Learning from Demonstrations
10:20-10:50 Coffee break
10:50-11:50 Session II: Vision
10:50-11:10 Jean Ponce (INRIA/ENS): Weakly-supervised image and video interpretation
11:10-11:30 Cordelia Schmid (INRIA): Human action recognition
11:30-11:40 Trevor Darrell (UC Berkeley): Adaptive, Articulate, and Actionable Deep Learning
11:40-11:50 Chelsea Finn (UC Berkeley): Learning visuomotor skills
11:50-13:15 Lunch
Lunch will be on your own. See recommended nearby restaurants.
13:20-15:20 Session III: Collaboration & Planning
13:20-13:40 Kirk Nichols (Stanford): A Multilateral Manipulation Framework for Human-Robot Collaboration in Surgical Tasks
13:40-13:50 Jessica Hamrick (UC Berkeley): Mental Simulation in Humans and Robots
13:50-14:00 Jaime Fisac (UC Berkeley): Safety for Robots and Humans: Learning to Work and Play Together
14:00-14:10 Claire Tomlin (UC Berkeley): Human-Centered Automation in UAV systems
14:10-14:20 Aaron Bestick (UC Berkeley): Personalized Modeling for Human-Robot Collaboration
14:20-14:30 Katie Driggs-Campbell (UC Berkeley): Driver Modeling for Autonomous Vehicles in Mixed Environments
14:30-14:50 Brenna Argall (Northwestern): Adaptive Interactions with Assistive Robots that Replace Lost Function
14:50-15:00 Anca Dragan (UC Berkeley): Intent inference and adaptation
15:00-15:20 Laurel Riek (Notre Dame): Synchronous Coordination Mechanisms for Human Robot Teams
15:20-15:50 Coffee Break
15:50-17:30 Session IV: Interaction Channels
15:50-16:10 Karen Kaushansky (Zoox): Designing for Human Vehicle Interactions
16:10-16:30 Masayoshi Tomizuka (UC Berkeley): Algorithmic Safety Measures for Intelligent Industrial Co-Robots
16:30-16:40 Sean Trott (ICSI): Natural Language Understanding and Communication for Multi-Agent Systems
16:40-16:50 Jacob Andreas (UC Berkeley): Language Understanding as Guided Planning
16:50-17:10 James Landay (Stanford): From On Body to Out of Body User Experience
17:10-17:30 Nik Martelaro, David Sirkin (Stanford): Empirical Studies of HRI in the Lab and the Field
Parking and Directions:
The workshop will be held in the Banatao Auditorium in Sutardja Dai Hall. For directions and parking information, see SDH Public Parking.
Wifi:
Network ID: CalVisitor (no password or login required). More information.
Organizers:
John Canny, Lauren Miller, Anca Dragan
Speaker Abstracts and Bios:
Yisong Yue: The Dueling Bandits Problem
In this talk, I will present the Dueling Bandits Problem, an online learning framework tailored to real-time learning from subjective human feedback. In particular, the Dueling Bandits Problem only requires pairwise comparisons, which have been shown to be reliably elicited in a variety of subjective feedback settings such as information retrieval and recommender systems. I will provide an overview of the Dueling Bandits Problem along with basic algorithmic results. I will conclude by briefly discussing recent results in ongoing research directions, including applications to personalized medicine and theoretical connections to algorithmic game theory. This is joint work with Josef Broder, Bobby Kleinberg, and Thorsten Joachims.
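To make the interaction protocol concrete, the sketch below simulates a dueling-bandit loop in which the learner only ever observes noisy pairwise comparisons. It uses a generic optimistic (UCB-style) pairing rule and recommends the empirical Copeland winner; this is an illustration of the problem setting under assumed preference probabilities, not the specific algorithms covered in the talk, and all names (`duel`, `true_pref`, etc.) are hypothetical.

```python
import numpy as np

def duel(i, j, true_pref):
    """Simulate one noisy pairwise comparison: True if arm i beats arm j.
    true_pref[i, j] is the win probability, unknown to the learner."""
    return np.random.rand() < true_pref[i, j]

def dueling_bandit(true_pref, horizon):
    n = true_pref.shape[0]
    wins = np.ones((n, n))          # optimistic pseudo-counts
    plays = 2.0 * np.ones((n, n))
    for t in range(horizon):
        # Optimistic estimate of P(row beats column).
        ucb = wins / plays + np.sqrt(np.log(t + 2) / plays)
        np.fill_diagonal(ucb, 0.5)
        # Candidate: the arm that optimistically beats the most opponents.
        i = int(np.argmax((ucb >= 0.5).sum(axis=1)))
        # Opponent: the arm most plausibly able to beat the candidate.
        j = int(np.argmax(ucb[:, i] * (np.arange(n) != i)))
        winner, loser = (i, j) if duel(i, j, true_pref) else (j, i)
        wins[winner, loser] += 1
        plays[i, j] += 1
        plays[j, i] += 1
    est = wins / plays
    return int(np.argmax((est >= 0.5).sum(axis=1)))  # empirical Copeland winner

# Example: four arms where arm 0 is the Condorcet winner.
P = np.array([[0.5, 0.7, 0.8, 0.9],
              [0.3, 0.5, 0.6, 0.7],
              [0.2, 0.4, 0.5, 0.6],
              [0.1, 0.3, 0.4, 0.5]])
print(dueling_bandit(P, horizon=2000))   # typically prints 0
```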
Bio:
Yisong Yue is an assistant professor in the Computing and Mathematical Sciences Department at the California Institute of Technology. He was previously a research scientist at Disney Research. Before that, he was a postdoctoral researcher in the Machine Learning Department and the iLab at Carnegie Mellon University. He received a Ph.D. from Cornell University and a B.S. from the University of Illinois at Urbana-Champaign.
Yisong’s research interests lie primarily in the theory and application of statistical machine learning. He is particularly interested in developing novel methods for spatiotemporal reasoning, structured prediction, interactive learning systems, and learning with humans in the loop. In the past, his research has been applied to information retrieval, recommender systems, text classification, learning from rich user interfaces, analyzing implicit human feedback, data-driven animation, sports analytics, policy learning in robotics, and adaptive routing & allocation problems.
Stuart Russell: Cooperative inverse reinforcement learning
I will briefly describe ways in which a machine can learn, from observation of human behavior, the system of values that drives that behavior. This is a cooperative game-theoretic generalization of the inverse reinforcement learning problem, with possibly significant implications for current debates over the long-term future of AI.
Bio:
Stuart Russell received his B.A. with first-class honours in physics from Oxford University in 1982 and his Ph.D. in computer science from Stanford in 1986. He then joined the faculty of the University of California at Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences and holder of the Smith-Zadeh Chair in Engineering. He is also an Adjunct Professor of Neurological Surgery at UC San Francisco and Vice-Chair of the World Economic Forum’s Council on AI and Robotics. He is a recipient of the Presidential Young Investigator Award of the National Science Foundation, the IJCAI Computers and Thought Award, the Mitchell Prize of the American Statistical Association and the International Society for Bayesian Analysis, and the ACM Karlstrom Outstanding Educator Award. In 1998, he gave the Forsythe Memorial Lectures at Stanford University and from 2012 to 2014 he held the Chaire Blaise Pascal in Paris. He is a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. He has published over 150 papers on a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, and global seismic monitoring. His books include “The Use of Knowledge in Analogy and Induction”, “Do the Right Thing: Studies in Limited Rationality” (with Eric Wefald), and “Artificial Intelligence: A Modern Approach” (with Peter Norvig).
Manuel Lopes: Interactive Learning for Cooperative Human-Robot Tasks
We want to improve how a collaborative task is programmed, learned, and executed. Efficient and intuitive collaboration will require flexible protocols, multiple sources of information, interactive learning, and team awareness. We argue that robot programming can be made more efficient, precise, and intuitive by leveraging the advantages of complementary approaches such as learning from demonstration, learning from feedback, and knowledge transfer. We further extend the approach to collaborative tasks that include concurrent actions by the robot and the operator.
Bio:
Dr. Manuel Lopes received the Ph.D. degree in computer science and systems engineering from the Technical University of Lisbon, Portugal. He was a post-doctoral researcher at VTT, Finland, and a lecturer at the University of Plymouth. He is currently a permanent research scientist at Inria. His main research interests are in the area of autonomous learning systems, with the goal of understanding the fundamental mechanisms of learning in animals and machines. He has made important contributions in human-robot collaboration, social learning, robotics, and intelligent tutoring systems. He has published several international journal and conference papers in the main venues of robotics, AI, and machine learning. He has participated in several national and European projects such as ROBOTCUB, HANDLE, MIRROR, and CONTACT. He is currently coordinating the European project 3rdhand.
Michael Laskey: Reducing Human Burden in Robot Learning From Demonstration
Online learning from demonstration algorithms, such as DAgger, can learn policies for problems where the system dynamics and the cost function are unknown. However, during learning, they impose a burden on supervisors to respond to queries each time the robot encounters new states while executing its current best policy. Algorithms such as MMD-IL reduce supervisor burden by filtering queries with insufficient discrepancy in distribution and maintaining multiple policies. We introduce the SHIV algorithm (Svm-based reduction in Human InterVention), which converges to a single policy and reduces supervisor burden in non-stationary, high-dimensional state distributions. To facilitate scaling and outlier rejection, filtering is based on distance to an approximate level set boundary defined by a One-Class support vector machine. We report on experiments in three contexts: 1) a driving simulator with a 27,936-dimensional visual feature space, 2) a push-grasping in clutter simulation with a 22-dimensional state space, and 3) physical surgical needle insertion with a 16-dimensional state space. Results suggest that SHIV can efficiently learn policies with equivalent performance while requiring up to 70% fewer queries.
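To illustrate the query-filtering idea in the abstract, here is a minimal sketch (not the authors' SHIV implementation) that fits an off-the-shelf One-Class SVM to previously labeled states and only queries the supervisor when the current state falls outside, or too close to, the learned level-set boundary. The class name, thresholds, and supervisor hook are hypothetical.

```python
import numpy as np
from sklearn.svm import OneClassSVM

class QueryFilter:
    """Decide whether to ask the human supervisor for a label, based on the
    distance of the current state to a One-Class SVM level-set boundary."""

    def __init__(self, nu=0.1, margin=0.0, min_data=10):
        self.svm = OneClassSVM(nu=nu, kernel="rbf", gamma="scale")
        self.states = []
        self.margin = margin      # states this close to the boundary still trigger a query
        self.min_data = min_data

    def update(self, state):
        self.states.append(np.asarray(state, dtype=float))
        self.svm.fit(np.vstack(self.states))

    def should_query(self, state):
        if len(self.states) < self.min_data:
            return True           # too little data: always ask
        x = np.asarray(state, dtype=float).reshape(1, -1)
        score = self.svm.decision_function(x)[0]
        return score < self.margin  # negative scores lie outside the level set

# Hypothetical use inside an online learning-from-demonstration loop:
# filt = QueryFilter()
# for state in rollout(policy):
#     if filt.should_query(state):
#         dataset.add(state, supervisor_label(state))
#         filt.update(state)
```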
Bio:
Michael is a Graduate Research Fellow at the University of California, Berkeley, pursuing a PhD in Electrical Engineering and Computer Science. He researches sequential decision making applied to robotics, in collaboration with the Automation Science Lab. Michael previously worked on fabrication of nano-fluidic devices with femto-second lasers at the University of Michigan, Ann Arbor, where he received a B.S. in Electrical Engineering. His other research interests include human-robot collaboration, learning from demonstration, reinforcement learning, and the application of Bayesian non-parametric techniques to robotics.
Animesh Garg & Sanjay Krishnan: Unsupervised Task Segmentation For Learning from Demonstrations
The growth of robot-assisted minimally invasive surgery has led to sizeable datasets of fixed-camera video and kinematic recordings of surgical subtasks. A key step is to segment these multi-modal trajectories into meaningful contiguous sections in the presence of significant variations in spatial and temporal motion, noise, and looping (repetitive attempts). Manual, or supervised, segmentation can be prone to error and impractical for large datasets. We present Transition State Clustering (TSC), a new unsupervised algorithm that leverages video and kinematic data for task-level segmentation and finds regions of the feature space that mark transition events. Our results suggest that when demonstrations are corrupted with noise and temporal variations, TSC finds segmentations up to 20% more accurate than GMM-based alternatives. On 67 recordings of surgical needle passing and suturing tasks from the Johns Hopkins JIGSAWS surgical training dataset, TSC finds 83% of the needle passing segments and 73% of the suturing segments found by human experts.
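A rough sketch of the transition-state idea, not the authors' TSC pipeline: model local dynamics with a Gaussian mixture over (state, state-change) pairs, mark timesteps where the most likely mixture component switches as candidate transition events, and cluster those transition states across demonstrations. The component counts and feature choices below are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def transition_state_clusters(demos, n_dynamics=5, n_clusters=4, seed=0):
    """demos: list of (T_i, d) arrays of kinematic (or fused video/kinematic) features.
    Returns the candidate transition states and a cluster label for each."""
    # 1) Fit a GMM over (x_t, x_{t+1} - x_t) pairs to model local dynamics regimes.
    pairs = np.vstack([np.hstack([x[:-1], np.diff(x, axis=0)]) for x in demos])
    dyn = GaussianMixture(n_components=n_dynamics, random_state=seed).fit(pairs)

    # 2) A transition event is a timestep where the most likely regime switches.
    events = []
    for x in demos:
        z = dyn.predict(np.hstack([x[:-1], np.diff(x, axis=0)]))
        switches = np.flatnonzero(np.diff(z) != 0) + 1
        events.append(x[switches])
    ts = np.vstack(events)

    # 3) Cluster the transition states to find segment boundaries shared across demos.
    clusters = GaussianMixture(n_components=n_clusters, random_state=seed).fit(ts)
    return ts, clusters.predict(ts)
```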
Jean Ponce: Weakly-supervised image and video interpretation
This talk addresses the problem of understanding the visual content of images and videos using a weak form of supervision, such as the fact that multiple images contain instances of the same objects, or the textual information available in television or film scripts. I will discuss several instances of this problem, including multi-class image cosegmentation, the joint localization and identification of movie characters and their actions, and the assignment of action labels to video frames using temporal ordering constraints. All these problems can be tackled using a discriminative clustering framework, and I will present the underlying models, appropriate convex relaxations of the combinatorial optimization problems associated with learning them, and efficient algorithms for solving the resulting convex optimization problems. I will also present experimental results on standard image benchmarks and feature-length films.
Bio:
Jean Ponce is a professor at Ecole Normale Superieure, where he heads the Department of Computer Science. He received the Doctorat de Troisieme Cycle and Doctorat d’Etat degrees in Computer Science from the University of Paris Orsay. Dr. Ponce is an IEEE Fellow and the recipient of two US patents, as well as an Advanced ERC grant. He is the co-author of Computer Vision: A Modern Approach, a textbook that has been translated into Chinese, Japanese, and Russian.
Cordelia Schmid: Human action recognition
In this talk we present recent results on human action recognition in videos. We start by presenting a recent approach for human pose estimation in videos. We then show how to use the estimated pose for action recognition. To this end we propose a new pose-based convolutional neural network descriptor for action recognition, which aggregates motion and appearance information along tracks of human body parts. Finally, we present an approach for spatio-temporal action localization in realistic videos. The approach first detects proposals at the frame level and then tracks high-scoring proposals in the video. Our tracker relies simultaneously on instance-level and class-level detectors. Actions are localized in time with a sliding-window approach at the track level.
Bio:
Cordelia Schmid is a research director at Inria Grenoble, where she directs the Inria team called LEAR for LEArning and Recognition in Vision. She is an IEEE fellow and a recipient of an ERC advanced grant. In 2006 and 2014, she was awarded the Longuet-Higgins prize for fundamental contributions in computer vision that have withstood the test of time. She holds a M.S. degree in Computer Science from the University of Karlsruhe and a Doctorate from the Institut National Polytechnique de Grenoble.
Trevor Darrell: Adaptive, Articulate, and Actionable Deep Learning
Bio:
Prof. Darrell is on the faculty of the CS Division of the EECS Department at UC Berkeley and he is also appointed at the UC-affiliated International Computer Science Institute (ICSI). Darrell’s group develops algorithms for large-scale perceptual learning, including object and activity recognition and detection, for a variety of applications including multimodal interaction with robots and mobile devices. His interests include computer vision, machine learning, computer graphics, and perception-based human computer interfaces. Prof. Darrell was previously on the faculty of the MIT EECS department from 1999-2008, where he directed the Vision Interface Group. He was a member of the research staff at Interval Research Corporation from 1996-1999, and received the S.M. and Ph.D. degrees from MIT in 1992 and 1996, respectively. He obtained the B.S.E. degree from the University of Pennsylvania in 1988, having started his career in computer vision as an undergraduate researcher in Ruzena Bajcsy’s GRASP lab.
Chelsea Finn: Learning visuomotor skills
Reinforcement learning provides a powerful and flexible framework for acquiring robotic motor skills. However, using such techniques requires a detailed representation of the state and a reward function encompassing the goal of the task. In this talk, I will present an approach that automates state-space construction and reward function learning by learning a state representation directly from camera images. Our method uses a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of free-moving objects, and then learns visuomotor skills with these feature points using an efficient reinforcement learning method. The resulting controller reacts continuously to the learned feature points, allowing the robot to dynamically manipulate objects in the world with closed-loop control. I will demonstrate the method by showing a PR2 robot learning a number of visuomotor manipulation skills, including pushing a free-standing toy block, picking up a bag of rice using a spatula, and hanging a loop of rope on a hook at various positions. I will also discuss ongoing work on learning more complex reward functions from human demonstration.
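The feature-point state mentioned above is commonly obtained with a spatial soft-argmax over convolutional activations: each channel is turned into an expected (x, y) image location, giving a low-dimensional state for the reinforcement learner. The NumPy sketch below illustrates that operation only; it is not the talk's actual autoencoder, and the shapes are assumed for illustration.

```python
import numpy as np

def spatial_soft_argmax(activations, temperature=1.0):
    """activations: (C, H, W) convolutional feature maps.
    Returns a (C, 2) array of expected (x, y) feature-point positions in [-1, 1]."""
    C, H, W = activations.shape
    flat = activations.reshape(C, -1) / temperature
    probs = np.exp(flat - flat.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)              # per-channel softmax over pixels
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1)     # (H*W, 2) pixel coordinates
    return probs @ coords                                   # expected feature-point positions

# e.g. state = spatial_soft_argmax(conv_output).ravel()   # conv_output: (32, 109, 109), assumed
```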
Bio:
Chelsea Finn is a PhD student in Computer Science at UC Berkeley. Her research is at the intersection of machine learning, perception, and control for robotics. In particular, she is interested in how learning algorithms can enable robots to autonomously acquire complex sensorimotor skills. Previously she received her B.S. in Electrical Engineering and Computer Science at MIT.
Kirk Nichols: A Multilateral Manipulation Framework for Human-Robot Collaboration in Surgical Tasks
In robot-assisted surgery, exploration and manipulation tasks can be achieved through collaboration among robotic and human agents. Collaboration can potentially include multiple agents working towards a shared objective — a scenario referred to as multilateral manipulation. In a joint project with University of California, Berkeley, Johns Hopkins University, University of Washington, and University of California, Los Angeles, we are developing a new methodology for human-robot collaboration that provides design guidelines and safety and stability guarantees, and allows human and robot skills to be combined conveniently and effectively. The talk focuses on a flexible software framework, called the Multilateral Manipulation Software Framework (MMSF), which expedites the development of various multilateral manipulation strategies. We demonstrate the effectiveness of the MMSF in mock surgical tasks. We built autonomous agents capable of completing these tasks, and developed human-robot collaboration models using these autonomous agents. Example human-robot collaboration models tested include (1) fully autonomous task execution, (2) shared control between a human and robotic agent, (3) supervised control where the operator dictates commands to the robot, (4) traded control between the two agents, and (5) bilateral teleoperation. I will describe a user study that compares the performance of these collaboration models in an inclusion segmentation task, in which the system is used to palpate artificial tissue and find the boundaries of a hard lump (inclusion) that simulates a tumor. We implemented the MMSF on the RAVEN-II Surgical Robot and the da Vinci Research Kit.
Bio:
Kirk A. Nichols received the B.S. and M.S. degrees in electrical and computer engineering and the B.S. degree in applied mathematics from the University of Colorado at Boulder in 2010. He is currently a PhD candidate in mechanical engineering at Stanford University. He was also recently a contractor at Intuitive Surgical, working in the developing technologies division. His research focus is developing strategies for multilateral manipulation between autonomous and human agents in a surgical setting.
Jessica Hamrick: Mental Simulation in Humans and Robots
“Mental simulation” is the ability of humans to run hypothetical simulations of the world in our minds. There is evidence that people use mental simulation in a wide variety of scenarios, ranging from predictions of physical events to inferences about the behavior of social agents. In this talk, I will discuss how people use mental simulation, how robots can benefit from taking mental simulation into account when interacting with people, and how mental simulation strategies can inform robot computational strategies.
Bio:
Jessica Hamrick is a Ph.D. student in Tom Griffiths’ lab in the Department of Psychology at the University of California, Berkeley. She is interested in how generative and simulation-based computations are utilized by the mind to make predictions and inferences about the world. Previously, Jessica obtained her B.S. and M.Eng. in Computer Science from MIT.
Jaime Fisac: Safety for Robots and Humans: Learning to Work and Play Together
A key to introducing robots into the human space is being able to guarantee their safe operation around people. This necessarily requires the ability to reliably predict and adapt to human actions. Safe learning combines the benefits of data-driven methods with guarantees from control theory, allowing robots to improve their performance within a “safe set” while additionally refining their notion of safety online. We will be looking at how this approach can be widely used to improve both performance and safety in human-robot interaction.
Bio:
Jaime F. Fisac is a third-year PhD student in the area of Control, Intelligent Systems and Robotics at UC Berkeley, working with Prof. Shankar Sastry, and additionally collaborating with Profs. Claire Tomlin (EECS) and Tom Griffiths (Psych). He is interested in multiagent systems, autonomous vehicles, and the problem of providing safety guarantees for human-automation systems. Before coming to Berkeley, he received a BS/MS in Engineering at the Universidad Politécnica de Madrid (Spain) and an MSc in Autonomous Vehicles at Cranfield University (UK). He is a recipient of the La Caixa Foundation fellowship.
Claire Tomlin: Human-Centered Automation in UAV systems
A great deal of research in recent years has focused on the synthesis of controllers for hybrid systems. For safety specifications on the hybrid system, namely designing a controller that steers the system away from unsafe states, we will present a synthesis and computation technique based on optimal control and game theory that can incorporate human behavior as an input to the system. We demonstrate our methods on problems in UAV systems and traffic design.
Bio:
Claire Tomlin is a Professor of Electrical Engineering and Computer Sciences at Berkeley, where she holds the Charles A. Desoer Chair in Engineering. She held the positions of Assistant, Associate, and Full Professor at Stanford from 1998-2007, and in 2005 joined Berkeley. She has been an Affiliate at LBL in the Life Sciences Division since January 2012. Claire is an IEEE Fellow, and she received the Erlander Professorship of the Swedish Research Council in 2010, a MacArthur Fellowship in 2006, and the Eckman Award of the American Automatic Control Council in 2003. She works in hybrid systems and control, with applications to robotics and automation, biology, and air traffic systems.
Aaron Bestick: Personalized Modeling for Human-Robot Collaboration
When two humans perform a collaborative manipulation task, they use numerous signals to coordinate the interaction: explicit verbal communication, implicit visual communication, interaction forces, and, most interesting for us, an intuitive understanding of which motions are natural and safe for their interaction partner. Ultimately, we’d like to use personalized mechanical models of humans to endow robots with this same “intuition” about human ergonomics and safety. I’ll describe a few of our past results using such models to plan human-robot object handoffs, as well as our current work, where we’re considering not just the ergonomics of the handoff itself, but the ease with which the human can accomplish the ultimate goal of the task.
Bio:
Aaron Bestick is an Electrical Engineering and Computer Sciences PhD student at Berkeley working with Ruzena Bajcsy. His work is in human-robot interaction, with a focus on the control of direct, physical collaboration tasks to optimize the safety and comfort of individual human collaborators.
Katie Driggs-Campbell: Driver Modeling for Autonomous Vehicles in Mixed Environments
Recently, multiple car companies have announced that autonomous vehicles will be available to the public in the next few years. While a great deal of progress has been made in autonomous systems, they often lack the flexibility and the realism that safe drivers exhibit. To address this, there has been an increased focus on identifying human-inspired approaches for driver assistance and autonomous systems to understand, predict, and/or mimic human behavior. We present our work on driver modeling algorithms that identify likely driver behavior and intent and aim to capture how humans interpret and interact with dynamic environments. The resulting models identify behaviors precisely and accurately, and, by applying a hybrid control framework, the resulting (semi-)autonomous system can be shown to be minimally invasive while capturing the flexibility and adaptability of adept drivers.
Bio:
Katie Driggs-Campbell was born and raised in Phoenix, Arizona, and attended Arizona State University, graduating with a B.S.E. in Electrical Engineering with honors in 2012. She is now pursuing a PhD in Electrical Engineering and Computer Science under the guidance of Professor Ruzena Bajcsy. Her current research focuses on developing experimental testbeds, models of human behaviors, and control algorithms for robotic systems that safely interact with humans in everyday life. Specifically, she considers the interaction between drivers and autonomous vehicles by developing driver models and analyzing networks of heterogeneous vehicles. Outside of work, she enjoys fun facts and being involved in the EE Graduate Student Association and Women in Computer Science and Electrical Engineering organization.
Brenna Argall: Adaptive Interactions with Assistive Robots that Replace Lost Function
For decades, the potential for automation to aid those with motor, or cognitive, impairments has been recognized. It is a paradox that often the more severe a person’s motor impairment, the more challenging it is for them to operate the very assistive machines which might enhance their quality of life. My lab addresses this paradox by incorporating robotics autonomy and intelligence into assistive machines—turning the machine into a kind of robot, and offloading some of the control burden from the user. The human-robot team in this case is a very particular one: the robot is physically supporting the human, and replacing or enhancing lost or diminished function. The dynamics within this human-robot team accordingly are also very particular—and getting the control sharing between them right is crucial. A fundamental question that arises time and again in my lab’s work is how exactly to share control between the robot and the human user. We believe that appropriate control sharing, which adapts over time to the user’s preferences and abilities, will be fundamental to the adoption of assistive robots in larger society.
Bio:
Brenna Argall is the June and Donald Brewer Junior Professor of Electrical Engineering & Computer Science at Northwestern University, and an assistant professor in the Department of Physical Medicine & Rehabilitation. Her research lies at the intersection of robotics, machine learning, and human rehabilitation. She is director of the assistive & rehabilitation robotics laboratory (argallab) at the Rehabilitation Institute of Chicago (RIC), the nation’s premier rehabilitation hospital. The mission of the argallab is to advance human ability by leveraging robotics autonomy. Prior to joining Northwestern and RIC, she was a postdoctoral fellow (2009-2011) at the École Polytechnique Fédérale de Lausanne (EPFL), and prior to graduate school she held a Computational Biology position at the National Institutes of Health (NIH). She received her Ph.D. in Robotics (2009) and M.S. in Robotics (2006) from the Robotics Institute at Carnegie Mellon University, where she also earned her B.S. in Mathematics (2002).
Laurel Riek: Synchronous Coordination Mechanisms for Human Robot Teams
Robots operating in human environments need the ability to dynamically and quickly interpret human activities, understand context, and act appropriately. The goal of our research is to build robots that can work fluently and contingently with human teams. In this talk, I will describe several projects we are exploring in this domain of modeling and synthesizing coordination dynamics. We have designed new nonlinear dynamical methods to automatically model and detect synchronous coordinated action in human teams, and synthesize coordinated behaviors on robots. We are also designing new algorithms for robots to understand context, which can work robustly across highly-noisy human environments, to enable contingent activity. This work has applications for multi-robot systems and computer vision, as well as health care and manufacturing.
Bio:
Dr. Laurel Riek is the Clare Boothe Luce Assistant Professor of Computer Science and Engineering at the University of Notre Dame, and holds an affiliate appointment in Bioengineering. Riek received a Ph.D. in Computer Science from the University of Cambridge, and B.S. in Logic and Computation from Carnegie Mellon University. From 2000-2007, she worked as a Senior Artificial Intelligence Engineer and Roboticist at The MITRE Corporation. Riek’s research interests include robotics, social signal processing, and intelligent health technology. She focuses on designing autonomous robots able to sense, respond, and adapt to human behavior. Her work also tackles real-world problems in healthcare, by designing robotics and sensing technology to improve patient safety. Prof. Riek has received the NSF CAREER Award, several best paper awards, and in 2014 was named as one of ASEE’s 20 Faculty under 40. She serves on the editorial boards of IEEE Transactions on Human Machine Systems and IEEE Access, the Steering Committee of the ACM/IEEE Conference on Human-Robot Interaction (HRI), and numerous conference program committees.
Karen Kaushansky (Zoox): Designing for Human Vehicle Interactions
Complex algorithms for perception, machine vision, path planning, controls, and an enormous amount of data are all needed to make autonomous vehicles possible. Regardless of our technology, autonomous vehicles will be judged based on the experience they create. How do we ensure we reach our end goal: the acceptance of these robots into society and into our lives? This talk will look at Human Vehicle Interactions from a user’s perspective, examining the experiences we’ll need to create.
Bio:
Karen Kaushansky is Director of Experience for Zoox, building an autonomous vehicle for the future. Named one of the top 75 Designers in Technology by Business Insider, Karen creates meaningful and connected experiences in the physical world spanning hardware and software. She worked for Jawbone designing interactive and audio experiences on devices such as Big Jambox, Mini Jambox, Icon, and UP, and has worked on other smart connected products such as the Cinder Sensing Cooker and Sensilk Smart Clothing. Karen worked for many years as a voice user interface designer, building complex speech recognition systems such as Ford Sync and Tellme for Windows Mobile.
Masayoshi Tomizuka: Algorithmic Safety Measures for Intelligent Industrial Co-Robots
In factories of the future, humans and robots are expected to be co-workers and co-inhabitants in flexible production lines. It is important to ensure that humans and robots do not harm each other. This talk is concerned with functional issues to ensure safe and efficient interactions among human workers and the next generation of intelligent industrial co-robots. The robot motion planning and control problem in a human-involved environment is posed as a constrained optimal control problem. A modularized parallel controller structure is proposed to solve the problem online, which includes a baseline controller that ensures efficiency, and a safety controller that addresses real-time safety by making a safe set invariant. The design considerations of each module are discussed. Simulation studies which reproduce realistic scenarios are performed on a planar robot arm and a 6 DoF robot arm. The simulation results confirm the effectiveness of the method.
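One way to picture the parallel controller structure is a baseline command passed through a safety module that keeps a safe set invariant, here a minimum robot-human separation. The sketch below is a toy illustration of that structure under assumed single-integrator dynamics, not the controller presented in the talk; all thresholds and dynamics are hypothetical.

```python
import numpy as np

def safe_control(x_robot, x_human, u_baseline, dt=0.01, d_min=0.3, u_max=1.0):
    """Parallel structure: keep the efficient baseline command if the predicted
    next state stays in the safe set {||x_robot - x_human|| >= d_min}; otherwise
    a safety controller overrides it and steers away from the human.
    Assumes single-integrator robot dynamics x_{t+1} = x_t + dt * u."""
    x_next = x_robot + dt * u_baseline
    if np.linalg.norm(x_next - x_human) >= d_min:
        return u_baseline                       # baseline is safe: keep efficiency
    away = x_robot - x_human
    away = away / (np.linalg.norm(away) + 1e-9)
    return u_max * away                         # safety controller: retreat at max speed

# e.g. u = safe_control(np.zeros(2), np.array([0.05, 0.0]), np.array([1.0, 0.0]))
```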
Bio:
Masayoshi Tomizuka received his B.S. and M.S. degrees in Mechanical Engineering from Keio University, Japan, and his Ph.D. degree in Mechanical Engineering from MIT in 1974. He joined the Department of Mechanical Engineering at the University of California at Berkeley in 1974, where he currently is the Cheryl and John Neerhout, Jr., Distinguished Professor. He teaches courses in dynamic systems and controls and conducts research on optimal and adaptive control, digital control, motion control, and their applications to robotics, manufacturing, information storage devices, and vehicles. He served as Program Director of the Dynamic Systems and Control Program of the Civil and Mechanical Systems Division of NSF (2002-2004). He was Technical Editor of the ASME Journal of Dynamic Systems, Measurement and Control (1988-93), and Editor-in-Chief of the IEEE/ASME Transactions on Mechatronics (1997-99). He is a Fellow of ASME, IEEE, the International Federation of Automatic Control (IFAC), and the Society of Manufacturing Engineers. He is the recipient of the Charles Russ Richards Memorial Award (ASME, 1997), the Rufus Oldenburger Medal (ASME, 2002), and the John R. Ragazzini Award (American Automatic Control Council, 2006).
Sean Trott: Natural Language Understanding and Communication for Multi-Agent Systems
Natural Language Understanding (NLU) studies machine language comprehension and action without human intervention. We describe an implemented system that supports deep semantic NLU for controlling systems with multiple simulated robot agents. The system supports bidirectional communication for both human-agent and agent-agent interaction. This interaction is achieved with the use of N-tuples, a novel form of Agent Communication Language using shared protocols with content expressing actions or intentions. The system’s portability and flexibility are facilitated by its division into unchanging “core” and “application-specific” components.
Jacob Andreas: Language Understanding as Guided Planning
We often want to use natural language to instruct a robot or virtual agent. Language might provide a full specification of desired behavior (“drive forward three meters”), a goal (“get to the end of the hall”) or perhaps just a constraint on a pre-specified plan (“avoid the table to your left”). In this talk, I will describe a general framework for learning to map from sequences of instructions to time-localized scoring potentials that can be used to guide a planner. I’ll present recent work on using this framework to achieve state-of-the-art results on a diverse set of instruction-following tasks in simple virtual environments, and outline extensions of this work to more challenging visuomotor control problems.
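As a concrete (and purely illustrative) picture of scoring potentials that guide a planner, the sketch below runs a Dijkstra-style search in which the cost of each step at plan depth t includes a time-localized penalty. In the talk's setting those penalties would come from a learned language model; here they are arbitrary callables, and every name is a hypothetical stand-in.

```python
import heapq

def plan_with_potentials(start, goal, neighbors, potentials):
    """Uniform-cost search where the cost of entering a state at depth t is
    1 + potentials[t](state). `neighbors(state)` yields successor states;
    `potentials` is a list of callables, one per timestep (e.g. derived from an
    instruction such as "avoid the table to your left")."""
    counter = 0                                   # tie-breaker so states are never compared
    frontier = [(0.0, 0, counter, start, [start])]
    settled = {}
    while frontier:
        cost, t, _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if settled.get((state, t), float("inf")) <= cost:
            continue
        settled[(state, t)] = cost
        if t + 1 >= len(potentials):
            continue                              # planning horizon exhausted
        for nxt in neighbors(state):
            step = 1.0 + potentials[t + 1](nxt)
            counter += 1
            heapq.heappush(frontier, (cost + step, t + 1, counter, nxt, path + [nxt]))
    return None
```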
Bio:
Jacob Andreas is a PhD student in the Computer Science Division at UC Berkeley. His research focuses on natural language semantics, particularly on learning models of sentence and discourse meaning grounded in perception and control. Jacob received a BS in computer science from Columbia and an MPhil from Cambridge.
James Landay: From On Body to Out of Body User Experience
There are many urgent problems facing the planet: a degrading environment, a healthcare system in crisis, and educational systems that are failing to produce creative, innovative thinkers to solve tomorrow’s problems. I will illustrate how we are addressing these grand challenges in our research by building systems that balance innovative on-body user interfaces with novel activity inference technology. These systems have helped individuals stay fit, led families to be more sustainable in their everyday lives, and will support learners in developing their curiosity. I will also show how new user interfaces we are designing take a radically different approach, moving the interface off of the human body and into the space around them.
Bio:
James Landay is a Professor of Computer Science at Stanford University specializing in human-computer interaction. Previously, James was a Professor of Information Science at Cornell Tech in New York City and prior to that he was a Professor of Computer Science & Engineering at the University of Washington. His current research interests include Technology to Support Behavior Change, Demonstrational Interfaces, Mobile & Ubiquitous Computing, and User Interface Design Tools. He is the founder and co-director of the World Lab, a joint research and educational effort with Tsinghua University in Beijing.
Landay received his BS in EECS from UC Berkeley in 1990 and MS and PhD in Computer Science from Carnegie Mellon University in 1993 and 1996, respectively. His PhD dissertation was the first to demonstrate the use of sketching in user interface design tools. From 2003 through 2006 he was the Laboratory Director of Intel Labs Seattle, a university-affiliated research lab that explored new usage models, applications, and technology for ubiquitous computing. He was also the chief scientist and co-founder of NetRaker, which was acquired by KeyNote Systems in 2004. From 1997 through 2003 he was a professor in EECS at UC Berkeley. He was named to the ACM SIGCHI Academy in 2011. He currently serves on the NSF CISE Advisory Committee.
Nikolas Martelaro & David Sirkin: Empirical Studies of HRI in the Lab and the Field
Developing appropriate interactions for HRI requires robot designers to understand a wide set of potential uses, settings and users’ mental models. We employ exploratory lab and field experiments to elicit and capture such interactions, live. In this talk, we present studies of robotic furniture, autonomous cars, tutors and radios. We present our work as one path toward informing models that underlie future robot behavior.
Bios:
David Sirkin is a Postdoctoral Researcher at Stanford’s Center for Design Research, where he focuses on designing interactions with robotic everyday objects, as well as autonomous cars and their interfaces. He is also a Lecturer in Electrical Engineering, where he teaches interactive device design. David holds a Ph.D. from Stanford in Mechanical Engineering Design, and Master’s degrees in Management Science and in Electrical Engineering and Computer Science from MIT.
Nikolas is a PhD student in Mechanical Engineering at Stanford’s Center for Design Research DesignX Group. His current work focuses on how computationally-aware, physical products can elicit meaningful interactions with users, and how to record and present these interactions back to designers and engineers to help develop better experiences. Nik holds a Master’s degree in Mechanical Engineering: Design from Stanford University and a Bachelor of Science in Engineering Design from Olin College.