ICRA 2014 | 2014 IEEE International Conference on Robotics and Automation | Hong Kong, May 31 - June 7, 2014

Pecha Kucha Night of Robotics

In addition to the traditional plenary talks at ICRA 2014, we are going to open things up, so to speak, with a special PechaKucha session during the conference.
As explained below, PechaKucha presentations are short, fast-paced, and entertaining (20 slides/pictures shown for 20 seconds each), intended not only to inform but also to have fun. Presenters have the freedom to entertain any way they wish, for example by sharing those unusual, unexpected, and humorous laboratory results we have all experienced.

Date: June 2, 2014 (Mon)
Time: 17:30-18:30
Venue: Convention Hall
Chips, beer, and soft drinks will be provided during the event.

Exploring Soft Robots: “Tales from the RoboSoft Community”

Dr. Laura Margheri

The BioRobotics Institute
Scuola Superiore Sant’Anna
Pisa, Italy
ABSTRACT
Soft robotics, understood as the use of soft materials in robotics, is a young research field that aims to overcome the basic assumptions of conventional rigid robotics and the solid theories and techniques developed over the last 50 years. Using soft materials to apply forces on the environment, as expected of a soft robot able to locomote, grasp, and perform other tasks, poses new problems at the level of the individual components as well as at the whole-system level. Theories and technologies have not yet been defined in a general form, and research activities are still exploring new approaches, as well as their potential for the development of a radically new generation of robots. RoboSoft (http://www.robosoftca.eu/) is a Coordination Action for Soft Robotics with the objective of creating and consolidating the scientific community in the field of soft robotics, with common places and events for gathering and for the exchange of ideas and experiences. The presentation will show, through videos and pictures, major experiments, curious tests, and non-conventional approaches and techniques contributed by Soft Robotics Community members during their studies at the frontiers of soft robotics. The experience will let people learn more about the soft robotics research community and the risks involved in developing new theories and techniques, counter-balanced by their very high potential impact.
BIOGRAPHY
Laura Margheri is a post-doctoral fellow at The BioRobotics Institute of the Scuola Superiore Sant’Anna (Pisa, Italy). She received her Master’s degree in Biomedical Engineering (with Honours) from the University of Pisa in July 2008 and her PhD in Biorobotics (with Honours) from The BioRobotics Institute in April 2012. Her research interests are in the fields of robotics and biorobotics, soft robotics, biomimetics, and biology. She has worked in the OCTOPUS Integrating Project (FP7, ICT 2007.8.5, FET, Embodied Intelligence, http://www.octopus-project.eu/) on scientific research, design, and management activities, and she now handles the scientific secretariat and project management of the FET Open Coordination Action RoboSoft (http://www.robosoftca.eu/). She served as an IEEE-RAS (Robotics and Automation Society) AdCom Student Member and as IEEE-RAS Student Activities Committee Chair in 2012-2013, and she is currently the IEEE-RAS Women in Engineering (WIE) liaison. Website and contact: http://sssa.bioroboticsinstitute.it/user/91, laura.margheri@sssup.it.
   

Robotic Pick-Place at Small Scales

Professor Yu Sun

Department of Mechanical and Industrial Engineering
Institute of Biomaterials and Biomedical Engineering
Department of Electrical and Computer Engineering
Faculty of Applied Science & Engineering
University of Toronto, Toronto, Canada
ABSTRACT
Robotic pick and place has revolutionized the manufacturing process in the automotive and pharmaceutical industries, where robots move parts from one location to another with pinpoint accuracy. Compared to pick-place at the macro scale, robotic pick-place of small objects (< 50 micrometers) is challenging due to complex force interactions at micro-nanometer scales and high precision and accuracy requirements. This Pecha Kucha presentation will touch upon some of these challenges and solutions. Robotic pick-place using micrograsping tools will be presented as an example demonstrating how gravity is dominated by surface forces (e.g., the van der Waals force) at small scales and how object release can be achieved. The presentation will also introduce robotic pick-place of a single biological cell and the precision extraction of a single chromosome from the cell nucleus. These tasks will illustrate high-precision control at the sub-micrometer, sub-nanoNewton, and picoliter levels.
BIOGRAPHY
Yu Sun is a Professor in the Department of Mechanical and Industrial Engineering, with joint appointments in the Institute of Biomaterials and Biomedical Engineering and the Department of Electrical and Computer Engineering at the University of Toronto (UofT).

His Advanced Micro and Nanosystems Laboratory develops MEMS devices and micro-nanorobotic systems to manipulate and characterize cells, molecules, and nanomaterials under optical and electron microscopes.

Sun obtained his Ph.D. in mechanical engineering from the University of Minnesota in 2003. He did postdoctoral research at the Swiss Federal Institute of Technology (ETH Zürich) before joining the University of Toronto in 2004. He is presently a McLean Senior Faculty Fellow at UofT and the Canada Research Chair in Micro and Nano Engineering Systems. During 2012 and 2013, he directed the University’s Nanofabrication Center, which hosts $50 million of micro-nanofabrication equipment. He is a fellow of the ASME (American Society of Mechanical Engineers) and a fellow of the CAE (Canadian Academy of Engineering).
   

Superhuman Navigation with a Frankenstein Model

Dr. Michael Milford

Queensland University of Technology
Science and Engineering Faculty,
Electrical Engineering, Computer Science,
Robotics and Aerospace Systems
Brisbane, Australia
ABSTRACT
Current robotic and personal navigation systems leave much to be desired: GPS only works in open outdoor areas, lasers are expensive, and cameras are highly sensitive to changing environmental conditions. In contrast, nature has evolved superb navigation systems. We are attempting to solve the challenging problem of place recognition, a key component of navigation, by modelling and combining the visual recognition skills of humans and the rodent spatial memory system. Humans possess a hierarchical vision processing system that enables robust recognition of places despite large changes in environmental conditions and viewing pose. However, we know only broad details of how the human brain learns and encodes these places, such as the key role that degradations in spatial memory play in neurodegenerative diseases like Alzheimer’s. The opposite is true of rodents, which have poor vision but a sophisticated and very well understood spatial memory system, known from decades of intensive behavioural and neural experiments. This spatial memory system enables rodents to navigate effectively and reliably in almost any environment on earth. Our approach combines the best understood and most capable components of place recognition in nature to create a whole more capable than its parts. Our aim is to produce advances in robotic and personal navigation technology, make breakthroughs in our understanding of the brain, and surpass human performance in the visual navigation domain.
BIOGRAPHY
I hold a PhD in Electrical Engineering and a Bachelor of Mechanical and Space Engineering from the University of Queensland (UQ), awarded in 2006 and 2002 respectively. After a brief postdoc in robotics at UQ, I worked for three years at the Queensland Brain Institute as a Research Fellow on the Thinking Systems Project. In 2010 I moved to the Queensland University of Technology (QUT) to finish off my Thinking Systems postdoc, and then was appointed as a Lecturer in 2011. In 2012 I was awarded an inaugural Australian Research Council Discovery Early Career Researcher Award, which provides me with a research-intensive fellowship salary and extra funding support for 3 years. In 2013 I became a Microsoft Faculty Fellow and lived in Boston on sabbatical working with Harvard and Boston University. I am currently a Senior Lecturer at QUT with a research focus, although I continue to teach Introduction to Robotics every year. From 2014 to 2020 I am a Chief Investigator on the Australian Research Council Centre of Excellence for Robotic Vision. My research interests include vision-based mapping and navigation, computational modelling of the rodent hippocampus and entorhinal cortex, especially with respect to mapping and navigation, computational modelling of human visual recognition, biologically inspired robot navigation and computer vision and Simultaneous Localisation And Mapping (SLAM).
 

Virtual Reality for Everyone!

Professor Steven M. LaValle

Professor (currently on leave at Oculus VR)
Department of Computer Science
University of Illinois, Urbana, IL, USA
ABSTRACT
Virtual reality is back, and this time the technology has advanced enough for it to thrive. We recently introduced the Oculus Rift, a lightweight virtual reality headset that creates a highly immersive experience. Users strap the device onto their heads, and graphical images are rendered on high-resolution screens with a wide field of view, while their head movements are efficiently tracked so that the virtual world is carefully synchronized to sufficiently fool their brains. Over the past year, tens of thousands of developers, researchers, and hobbyists have applied it to a wide variety of projects. Some of these have serious potential to change the world. Others are simply entertaining. A few of them are quite disturbing. This talk will show some of the craziest things people have done with the Rift while also highlighting some of the serious applications. Whether with the Rift, Sony’s Project Morpheus, or other emerging devices, exciting times lie ahead as the ecosystem rapidly grows around next-generation virtual reality platforms.
BIOGRAPHY
Steven M. LaValle is Principal Scientist at Oculus VR, Inc. He is also a roboticist and a Professor of Computer Science at the University of Illinois, Urbana-Champaign. He is best known for his introduction of rapidly exploring random tree (RRT) algorithms, and his book on Planning Algorithms, one of the most highly cited texts in the field.
   

The Biggest Robots on Earth are about to become Unmanned

Dr. Jonas Ohr

Tech. Manager Motion Control & Automation
ABB Crane Systems
Vasteras, Sweden
ABSTRACT
Container shipping companies buy larger and larger ships, which in turn require larger and larger ship-to-shore cranes. But ship-to-shore cranes do not only grow taller, wider, longer, and heavier. To increase production rates, modern cranes often have more than one trolley and hoist, as well as additional degrees of freedom for fine-adjustment motions. This means more motors and drives, more actuators, more sensors, and more advanced control. The way a crane is operated is also about to change drastically, from having a driver in the cabin on the trolley of each crane to remote operation, where one driver can operate several cranes from a control station in a nearby control room. For this to work efficiently, the automation must be well designed all the way from the smallest sensors to the largest movable mechanical devices, and from the simplest line of code to the optimization of the whole operation.

The first part of the presentation gives an overview of container shipping, container terminals, and cranes. The second part focuses on automation and motion control of cranes, and on how some specific sensing and control challenges find their counterparts in the field of robot motion control.
BIOGRAPHY
Jonas Ohr received his PhD in Automatic Control in 2003 and his MSc in Engineering Physics in 1995, both from Uppsala University, Sweden. He took his first job in industrial automation in 1984 and has combined studies, research, and industrial work ever since. He has worked for ABB Corporate Research and ABB Robotics, among others, and currently works for ABB Crane Systems as Technical Manager for Motion Control and Automation. His practical experience working on “live” (moving) machines amounts to about 14,000 hours in total, roughly half on automation in general and the other half on modeling and control engineering. Since entering full-time industrial employment in 2003, he has published about 10 conference and journal papers together with researchers from Linköping University and ABB. He lectures occasionally at Uppsala University, serves on the board of the Engineering Physics programme at Uppsala University, and is a beta tester of MapleSim for Maplesoft, Canada.
   

Angry Darwin Expedition: Robot Learners and Interaction Studies

Dr. Hae Won Park

Post-Doctoral Fellow, Inventor of TabAccess
Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, GA, USA
ABSTRACT
Programming a robot to perform tasks requires training that is beyond the skill level of most individuals. In this talk, we highlight our research effort in designing an interactive instance-based robot learner that generalizes task behaviors from an accumulation of non-expert user demonstrations. We will discuss the fun results from our recent Angry Darwin Expedition, in which our robot, Darwin, learned to play the strategy game “Angry Birds” from various users. During a six-month period, over 130 people interacted with our robot learner, including 90 children, of whom 33 participated in the formal experiment. Our motivation was to combine a robot learner with a source that provides personalized context for interaction. Here, we propose integrating a touchscreen tablet and a robot learner for engaging the user during human-robot interaction scenarios; in particular, we measure how the system’s learning models change based on the participant’s engagement level. Through a tablet environment, the user teaches a task to the robot in a shared workspace and intuitively monitors the robot’s behavior and progress in real time. In this setting, the user is able to interrupt the robot and provide necessary demonstrations at the moment learning is taking place, thus providing a means to continuously engage both the participant and the robot in the learning cycle.
BIOGRAPHY

Hae Won Park is a post-doctoral fellow in Electrical and Computer Engineering at the Georgia Institute of Technology. She earned her Ph.D. and M.S. in Electrical and Computer Engineering from the Georgia Institute of Technology in 2014 and 2009, respectively, and a B.S. in Electrical Engineering from POSTECH, Korea, in 2006. Before joining Georgia Tech, she was a research scientist in the Robot Vision group at the Korea Institute of Science and Technology. Her research interest is in making technology more accessible to all, including learning and improving robot skills through accumulated experience from daily interaction with humans and designing assistive devices. She is now doubling as a technical consultant to Zyrobotics, a spin-off from Georgia Tech that is licensing the two technologies generated from her Ph.D. research. During graduate school, she authored 17 peer-reviewed publications, filed two patents, and received awards in various robotics and innovative technology competitions.