The Illinois Robotics Group is proud to host the Robotics Seminar @ Illinois series. These seminars provide a diverse lineup of speakers reflecting the interdisciplinary nature of the field of robotics.
We host speakers conducting research across robotics; talks are given by professors and students each week, with occasional demonstrations afterwards in the Center for Autonomy Labs housed in the CSL Studio.
Talks are held at 1pm on Fridays, virtually through Zoom, with some in-person talks viewed in the CSL Studio conference room (1232), just west of the Center for Autonomy Lab facilities.
Please Feel Free to Recommend Speakers for Future Talks
If you have suggestions or questions about the Robotics Seminar @ Illinois series, please feel free to contact John M. Hart, Manager & Coordinator of the Center for Autonomy (CfA) Shared Robotics Laboratories.
Fall 2025 Schedule
The full playlist of the Fall 2025 talks can be found through this link (requires U of I Mediaspace login); you can also expand the detailed talk info below for individual recording links.
1. Sep 19th — Prof. John Reid (Illinois)
Digital Agriculture and the Journey Towards Autonomous Machines
► Talk details
In-person event in CSL Studio 1232, no recording
Abstract:
Agriculture is one of the most complex and robotics-intensive domains you can imagine: large machines operating in unstructured environments, dynamic obstacles like people, animals, and weather, and the need for centimeter-level precision across miles of terrain. Unlike factory floors, farms are unbounded, safety-critical, and demand scalable autonomy. This seminar will show how digital technologies are driving the transformation of agriculture from automation to autonomy, and why this environment offers challenging problems in robotics.
Speaker Bio:
Dr. John F. Reid is Executive Director of the Center for Digital Agriculture and Research Professor at the University of Illinois Urbana-Champaign, with joint appointments in the Siebel School of Computing & Data Science, the Department of Agricultural & Biological Engineering, and Electrical and Computer Engineering. A member of the National Academy of Engineering, he has more than 35 years of experience spanning academia and industry, including 19 years at John Deere, where he led enterprise initiatives in robotics, autonomy, and advanced technology. His research focuses on the digital transformation of agriculture, with emphasis on robotics, AI-enabled autonomy, and safe self-learning systems for production agriculture. At Illinois, he leads multidisciplinary efforts to develop intelligent, sustainable, and scalable solutions that connect computing, engineering, and agricultural sciences into practice.
2. Sep 26th — Prof. Fabio Ramos (University of Sydney & NVIDIA)
Diversity in Motion Planning via Parallel Probabilistic Inference
► Talk details
Recording link: https://mediaspace.illinois.edu/media/t/1_ja4ifct1/386332862
Abstract:
Much has been said about the need for diversity in robotics. From diverse datasets for training large vision-action models to diverse motion planners that can infer multi-modal trajectories, the word “diversity” has been a common theme in the last few years of robotics research. But how do we define or even measure diversity? In this talk, I will provide a probabilistic interpretation for diversity and show that tools designed for deep learning such as differentiable programming languages and parallel computation in GPUs can be conveniently utilized for large-scale probabilistic inference that naturally captures the notion of diversity. I will describe a powerful nonparametric inference method that uses both differentiability and parallelism to provide nonparametric posterior approximations for model predictive control, motion planning, and state estimation. Finally, I will define diversity in trajectory planning in terms of a new mathematical tool–signature transforms–and how it can lead to novel planning methods in the future.
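The abstract frames planning and control as large-scale probabilistic inference run in parallel on GPUs. As a rough, generic illustration of that viewpoint only (not the speaker's nonparametric method), the sketch below samples many control sequences in parallel, scores them on an assumed point-mass model, and treats exponentiated negative costs as importance weights over trajectories; the dynamics, cost, and parameter values are all illustrative assumptions.

```python
# Hypothetical sketch: model predictive control viewed as probabilistic inference,
# with many control sequences sampled and weighted in parallel. Not the speaker's method;
# the point-mass system, cost, and parameters below are illustrative assumptions.
import numpy as np

def rollout(x0, U, dt=0.1):
    """Simulate a batch of control sequences on a 2D point-mass and return total costs."""
    K, T, _ = U.shape
    x = np.tile(x0, (K, 1))               # state per sample: [px, py, vx, vy]
    cost = np.zeros(K)
    goal = np.array([5.0, 5.0])
    for t in range(T):
        x[:, 2:] += dt * U[:, t, :]        # acceleration control
        x[:, :2] += dt * x[:, 2:]
        cost += np.sum((x[:, :2] - goal) ** 2, axis=1) + 0.01 * np.sum(U[:, t, :] ** 2, axis=1)
    return cost

def mpc_as_inference(x0, T=20, K=512, iters=5, temperature=10.0, seed=0):
    """Iteratively refine a Gaussian over control sequences using importance weights,
    i.e., approximate posterior inference over 'good' trajectories."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros((T, 2)), np.ones((T, 2))
    for _ in range(iters):
        U = mean + std * rng.standard_normal((K, T, 2))      # parallel samples
        costs = rollout(x0, U)
        w = np.exp(-(costs - costs.min()) / temperature)      # softmax-style weights
        w /= w.sum()
        mean = np.einsum("k,ktd->td", w, U)                   # weighted posterior mean
        std = np.sqrt(np.einsum("k,ktd->td", w, (U - mean) ** 2) + 1e-6)
    return mean

if __name__ == "__main__":
    print("first planned acceleration:", mpc_as_inference(np.zeros(4))[0])
```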
Speaker Bio:
Fabio Ramos is a Professor in robotics and machine learning at the School of Computer Science at the University of Sydney and a Principal Research Scientist at NVIDIA. His research focuses on statistical machine learning techniques for large-scale Bayesian inference and decision making with applications in robotics, mining, environmental monitoring, and healthcare. Between 2008 and 2011 he led the research team that designed the first autonomous open-pit iron mine in the world. He has over 200 peer-reviewed publications and has received Best Paper Awards and Student Best Paper Awards at several conferences, including the International Conference on Intelligent Robots and Systems (IROS), the Australasian Conference on Robotics and Automation (ACRA), the European Conference on Machine Learning (ECML), and Robotics: Science and Systems (RSS).
3. Oct. 3rd — Prof. Dinesh Jayaraman (UPenn)
Engineering Better Robot Learners: Exploration and Exploitation
► Talk details
Recording link: https://mediaspace.illinois.edu/media/t/1_mf62q8z9/386332862
Abstract:
Industry is placing big bets on “brute forcing” robotic control, but such approaches ignore the centrality of resource constraints in robotics on power, compute, time, data, etc. Towards realizing a true engineering discipline of robotics, my research group has been “exploiting and exploring” robot learning: exploiting to push the limits of what can be achieved with today’s prevalent approaches, and “exploring” better design principles for masterful and minimalist robots in the future. As examples of “exploit”, we have trained quadruped robots to perform circus tricks on yoga balls and robot arms to perform household tasks in entirely unseen scenes with unseen objects. As examples of “explore”, we are studying the sensory requirements of robot learners: what sensors do they need and when do they need them during training and task execution? In this talk, I will highlight these examples and discuss some lessons we have learned in our research towards better-engineered robot learners.
Speaker Bio:
Dinesh Jayaraman is an assistant professor at the University of Pennsylvania’s CIS department and GRASP lab. He leads the Perception, Action, and Learning (Penn PAL) research group, which works at the intersections of computer vision, robotics, and machine learning. Dinesh received his PhD (2017) from UT Austin, before becoming a postdoctoral scholar at UC Berkeley (2017-19). Dinesh’s research has received a Best Paper Award at CORL ’22, a Best Paper Runner-Up Award at ICRA ’18, a Best Application Paper Award at ACCV ’16, the NSF CAREER award ’23, an Amazon Research Award ’21, and been covered in The Economist, TechCrunch, and several other press outlets.
4. Oct. 10th — Prof. Farshid Alambeigi (UT Austin)
Surgineering Using Intelligent and Flexible Robotic Systems
► Talk details
Recording link: https://mediaspace.illinois.edu/media/t/1_p3hx5f7d/386332862
Abstract:
Recent advances in surgical robotics have enabled innovative techniques that reduce patient trauma, shorten hospital stays, and improve both diagnostic accuracy and therapeutic outcomes. However, despite the many benefits of robot-assisted minimally invasive and endoscopic procedures, significant limitations remain, particularly in the dexterity, intelligence, and autonomy of current robotic systems, as well as in the biomechanical design of medical devices and implants.
One of the most pressing challenges is inadequate dexterity, driven by limited access to target anatomy and insufficient control over rigid instruments and implants. These constraints highlight the need for specialized instrumentation, intelligent sensing, and adaptive control paradigms capable of navigating complex anatomical environments. Advancing autonomy in surgical systems also requires more than robotic intelligence alone; it demands synergistic human–robot interaction, where the system can both assist and adapt to the surgeon’s decision-making process in real time.
This talk will introduce our lab’s efforts in advancing the engineering of surgery (or surgineering) to address these challenges. I will present translational research across several clinical applications, including:
• Spinal fixation using steerable drills and flexible pedicle screws
• Colorectal cancer screening with vision-based tactile sensors and complementary AI algorithms
• In vivo bioprinting for treating volumetric muscle loss via robotic delivery systems
By integrating continuum manipulators, stretchable soft sensors, intelligent implants, and semi/autonomous control strategies, our work aims to fundamentally transform the paradigm of minimally invasive and endoscopic interventions, bringing true dexterity and autonomy to surgical robotics.
Speaker Bio:
Dr. Farshid Alambeigi is the Leland Barclay Fellowship in Engineering Associate Professor in the Walker Department of Mechanical Engineering at The University of Texas at Austin, where he has served since August 2025. He is also a core faculty member of Texas Robotics. Dr. Alambeigi earned his Ph.D. in Mechanical Engineering (2019) and M.Sc. in Robotics (2017) from Johns Hopkins University. In 2018, he was awarded the 2019 Siebel Scholarship in recognition of his academic excellence and leadership. He is the recipient of the NIH NIBIB Trailblazer Award (2020) for his work on flexible implants and robotic systems for minimally invasive spinal fixation surgery and the NIH Director’s New Innovator Award (2022) for pioneering in vivo bioprinting surgical robotics for the treatment of volumetric muscle loss. His contributions have also been recognized with the UT Austin Faculty Innovation Award, the Outstanding Research Award by an Assistant Professor, the Walker Scholar Award, and several best paper awards and recognitions. He serves as an Associate Editor for the IEEE Transactions on Robotics (TRO), IEEE Robotics and Automation Letters (RAL), and the IEEE Robotics and Automation Magazine (RAM).
At UT Austin, Dr. Alambeigi directs the Advanced Robotic Technologies for Surgery (ARTS) Lab. In collaboration with the UT Dell Medical School, the ARTS Lab advances the concept of Surgineering, engineering the surgery, by developing dexterous, intelligent robotic systems designed to partner with surgeons. The ultimate goal of this work is to enhance surgical precision, improve clinician performance, and advance patient safety and outcomes.
5. Oct. 17th — Prof. Yu She (Purdue)
Toward Intelligent Manipulation: From Multimodal Sensing to Neural-Symbolic Control
► Talk details
In-person event in CSL Studio 1232, Recording link: https://mediaspace.illinois.edu/media/t/1_31klklvi/386332862
Abstract:
Achieving intelligent robotic manipulation requires bridging the gap between multimodal perception and interpretable, safe control. This talk presents a unified research journey toward neural-symbolic control, where robots reason and act through the integration of multimodal sensing, differentiable optimization, and data-driven learning. I will begin with ManiFeel, a comprehensive benchmark for visuotactile manipulation, which establishes a scalable framework to study how tactile sensing enhances policy learning under visually degraded or contact-rich conditions. ManiFeel systematically evaluates sensing modalities, tactile representations, and policy architectures, revealing when and how multimodal feedback improves manipulation robustness. Building on these insights, I will introduce LeTac-MPC, a learning-based model predictive control (MPC) framework that couples high-resolution tactile sensing with differentiable MPC. LeTac-MPC enables tactile-reactive grasping across diverse object properties, achieving robust, adaptive control in dynamic and force-interactive environments. Finally, I will present LeTO, a method for learning constrained visuomotor policies that embeds differentiable trajectory optimization into neural networks, combining the interpretability and safety of model-based control with the adaptability of deep learning. LeTO enables robots to generate smoother, constraint-compliant, and dynamically feasible trajectories, ensuring safer and more reliable interactions in real-world manipulation tasks. Together, these works chart a pathway toward intelligent, physics-grounded, and neural-symbolic manipulation driven by multimodal sensing and optimization-based learning.
Speaker Bio:
Dr. Yu She is an assistant professor in the Edwardson School of Industrial Engineering at Purdue University. Prior to that, he was a postdoctoral researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT from 2018 to 2021. He earned his PhD in Mechanical Engineering at the Ohio State University in 2018. His research is at the intersection of mechanism design, tactile sensing, intelligent control, and robot learning. He is a recipient of the ASME Freudenstein Young Investigator Award (2025), the Showalter Early Investigator Award (2024), the Google Research Scholar Award (2022), and multiple best paper recognitions.
6. Oct. 31st — Illinois Student Talks
Medical and Surgical Robotic Systems for Increased Access to Healthcare
Adaptive Stress Testing Black-Box LLM Planners
► Talk details
In-person event in CSL Studio 1232, Recording links:
https://mediaspace.illinois.edu/media/t/1_0t8tjx3p/386332862
https://mediaspace.illinois.edu/media/t/1_ixwkqv5f/386332862
Abstract:
Talk 1:
Diagnostic and interventional clinical systems require novel combinations of technology to meet clinical needs. In this talk, we’ll explore two medical problems: access to vision screening and access to traumatic brain injury treatment, and how we might approach these problems using novel systems. In the first, a fully automated retinal imaging system is shown to produce screening-quality retinal images that can be used to assess the health of the eye, serving as the skeleton for future eye examination techniques to be built upon. The second is a novel and compact image-guided surgery system designed to improve the workflow for bedside neurosurgical guidance.
Talk 2:
Large language models (LLMs) have recently demonstrated success in generalizing across decision-making tasks including planning, control, and prediction, but their tendency to hallucinate unsafe and undesired outputs poses risks. We argue that detecting such failures is necessary, especially in safety-critical scenarios. Existing methods for black-box models often detect hallucinations by identifying inconsistencies across multiple samples. Many of these approaches typically introduce prompt perturbations like randomizing detail order or generating adversarial inputs, with the intuition that a confident model should produce stable outputs. We first perform a manual case study showing that other forms of perturbations (e.g., adding noise, removing sensor details) cause LLMs to hallucinate in a multi-agent driving environment. We then propose a novel method for efficiently searching the space of prompt perturbations using adaptive stress testing (AST) with Monte-Carlo tree search (MCTS). Our AST formulation enables discovery of scenarios and prompts that cause language models to act with high uncertainty or even crash. By generating MCTS prompt perturbation trees across diverse scenarios, we show through extensive experiments that offline analyses can be used at runtime to automatically generate prompts that influence model uncertainty, and to inform real-time trust assessments of an LLM. We further characterize LLMs deployed as planners in a single-agent lunar lander environment and in a multi-agent robot crowd navigation simulation. Overall, ours is one of the first hallucination intervention algorithms to pave a path towards rigorous characterization of black-box LLM planners.
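For readers curious about the mechanics, the core idea of searching prompt perturbations with adaptive stress testing and Monte-Carlo tree search can be sketched generically as follows. This is a simplified, hypothetical illustration, not the speakers' implementation: the perturbation list and the perturb and failure_score functions are stand-ins for a real LLM planner and an uncertainty or failure metric.

```python
# Hypothetical sketch: adaptive stress testing of a black-box planner by searching
# sequences of prompt perturbations with a minimal Monte-Carlo tree search.
import math, random

PERTURBATIONS = ["add_noise", "drop_sensor_detail", "shuffle_details", "inject_typo"]

def perturb(prompt, action):
    """Stand-in for applying one perturbation to a scenario prompt."""
    return prompt + f" [{action}]"

def failure_score(prompt):
    """Stand-in for querying the black-box planner and measuring inconsistency/uncertainty."""
    return random.Random(hash(prompt)).random()

class Node:
    def __init__(self, prompt, depth=0):
        self.prompt, self.depth = prompt, depth
        self.children, self.visits, self.value = {}, 0, 0.0

def mcts_stress_test(base_prompt, budget=200, max_depth=3, c=1.4):
    root, best = Node(base_prompt), (0.0, base_prompt)
    for _ in range(budget):
        node, path = root, [root]
        # Selection/expansion: follow UCB until an untried perturbation or max depth.
        while node.depth < max_depth:
            untried = [a for a in PERTURBATIONS if a not in node.children]
            if untried:
                a = random.choice(untried)
                child = Node(perturb(node.prompt, a), node.depth + 1)
                node.children[a] = child
                path.append(child)
                node = child
                break
            node = max(node.children.values(),
                       key=lambda n: n.value / n.visits
                       + c * math.sqrt(math.log(node.visits) / n.visits))
            path.append(node)
        # Rollout: random perturbations to max depth, then score the stressed prompt.
        prompt = node.prompt
        for _ in range(max_depth - node.depth):
            prompt = perturb(prompt, random.choice(PERTURBATIONS))
        reward = failure_score(prompt)
        best = max(best, (reward, prompt))
        for n in path:                       # Backpropagation
            n.visits += 1
            n.value += reward
    return best

if __name__ == "__main__":
    print(mcts_stress_test("Ego vehicle approaches a crowded intersection."))
```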
Speaker Bio:
Speaker 1:
Alexander Smith is a sixth-year MD-PhD student at the Carle Illinois College of Medicine and the Siebel School of Computing and Data Science at the University of Illinois at Urbana-Champaign. He received his B.S. degree with a dual major in Biomedical Engineering and Computer Science from Saint Louis University in 2019. From 2019 to 2020, he worked as a researcher at Johns Hopkins University in the Carnegie Center for Surgical Innovation.
Speaker 2:
Neeloy Chakraborty is a fifth-year PhD candidate at the University of Illinois working in the Human-Centered Autonomy Lab. Prior to that, he completed his M.S. in 2023 and B.S. in 2021 at the University of Illinois in electrical and computer engineering. His primary research revolves around developing reliable technologies to enable safer interactions between humans and automation. Throughout his time at Illinois, he has studied modular approaches for instruction-following embodied AI robots, developed real-time video generation models to allow users to remotely control robots under challenging constraints, designed scalable multi-modal algorithms to detect anomalies in multi-agent settings, and identified hallucinations in large language model generations to proactively avoid failures in decision-making. Neeloy has also conducted research through internships at Dolby Labs, Ford, and Brunswick, tackling problems in generative modeling and reinforcement learning.
7. Nov. 7th — Prof. Andrea Bajcsy (CMU)
Towards Open World Robot Safety
► Talk details
Recording link: https://mediaspace.illinois.edu/media/t/1_ndqqu022/386332862
Abstract:
Robot safety is a nuanced concept. We commonly equate safety with collision-avoidance, but in complex, real-world environments (i.e., the “open world”) it can be much more: for example, a mobile manipulator should understand when it is not confident about a requested task, that areas roped off by caution tape should never be breached, and that objects should be gently manipulated to prevent breaking or spilling. However, designing robots that have such a nuanced safety understanding—and can reliably generate appropriate actions—is an outstanding challenge.
In this talk, I will describe my group’s work on systematically uniting modern machine learning models (such as large vision-language models and latent world models) with classical formulations of safety in the control literature to generalize safe robot decision-making to increasingly open world interactions. Throughout the talk, I will present experimental instantiations of these ideas in domains like vision-based robotic manipulation and navigation.
Speaker Bio:
Andrea Bajcsy is an Assistant Professor in the Robotics Institute at Carnegie Mellon University, where she leads the Interactive and Trustworthy Robotics Lab (Intent Lab). She broadly works at the intersection of robotics, machine learning, control theory, and human-AI interaction. Prior to joining CMU, Andrea received her Ph.D. in Electrical Engineering & Computer Science from the University of California, Berkeley in 2022. She is the recipient of the DARPA Young Faculty Award (2025), NSF CAREER Award (2025), Amazon Research Award (2025), Google Research Scholar Award (2024), Rising Stars in EECS Award (2021), Honorable Mention for the T-RO Best Paper Award (2020), and NSF Graduate Research Fellowship (2016). She has also worked at NVIDIA Research on autonomous driving.
8. Nov. 14th — Prof. Wei Wang (UW-Madison)
Toward Advanced Autonomy in Complex Aquatic Environments
► Talk details
Recording link: TBA
Abstract:
Marine robots have undergone significant growth, driven by recent advances in artificial intelligence, sensing technologies, and decision-making systems. As demands for ocean exploration, exploitation, and conservation continue to rise, there is an increasing necessity for advanced autonomy in marine robots at both the individual and group levels, as well as for frequent and safe deployment of marine robots at an affordable cost. However, the marine robots currently available still fall short of meeting the demands of real-world applications, facing challenges such as robust and safe control in complex environments, self-localization and mapping of their environment, and efficient sensing and coordination within a group. My long-term goal is to enable robots to perform tasks autonomously in challenging marine environments by developing innovative robots and advanced algorithms. In this talk, I will highlight my research in three key areas: 1) learning-based control and navigation of 2D surface vehicles in urban waterways, 2) design and coordination of 2D multi-robot systems on the water surface, and 3) developing biologically inspired robots and sensors to address challenges faced by 3D underwater robots and robotic swarms. I will also briefly discuss open research problems on these topics.
Speaker Bio:
Dr. Wei Wang is an Assistant Professor in the Department of Mechanical Engineering at the University of Wisconsin–Madison. Prior to this appointment, he was a Research Scientist at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology. He earned his Ph.D. in Mechanical Engineering from Peking University and his B.E. in Electrical Engineering from the University of Electronic Science and Technology of China. Dr. Wang has published extensively in top-tier robotics journals and conferences, including Science Robotics, IEEE Transactions on Robotics (TRO), Robotics and Automation Letters (RAL), Bioinspiration & Biomimetics (B&B), ICRA, IROS, and CDC. His research has been widely featured in international media outlets such as Reuters, NBC, CNBC, and MIT News.
9. Nov. 21st — Dr. Faizan Tariq (Honda Research Institute)
Interaction-Aware Planning & Decision Making
► Talk details
Recording link: TBA
Abstract:
The integration of robotic systems alongside human agents introduces significant challenges in decision-making, planning, and control. In particular, navigation algorithms for robots must account for interactions with humans to ensure safety and efficiency. In this talk, I will present an overview of four research directions we’ve explored in our work on Interaction-Aware Planning and Decision Making. First, we will explore interactive path planning in environments with dynamic agents, including approaches based on path-speed decomposition and multi-future planning under optimization-based frameworks. Next, I will discuss sampling-based methods for interactive planning, highlighting how neural networks can be incorporated into these frameworks. Third, we will examine strategies for crash mitigation when formal safety guarantees cannot be established. Finally, I will introduce recent efforts on end-to-end predictive planning approaches that aim to unify perception and decision-making.
Speaker Bio:
Dr. Tariq is a Senior Research Scientist in the Cooperative Intelligence – Mobility Research group at the Honda Research Institute, US. He received his B.Sc. degree in electrical and electronics engineering from the Middle East Technical University, Ankara, Turkey, in 2018. He received his M.S. and Ph.D. degrees in electrical and computer engineering from the University of Maryland, College Park, MD, USA, in 2022 and 2023, respectively. His research interests include the application of control, optimization, and learning techniques to robotic systems, with a focus on real-time decision-making and motion planning for autonomous vehicles. He received the Best Student Paper Award (1st place) at the IEEE International Conference on Intelligent Transportation Systems (ITSC), 2021.
10. Dec. 5th — Prof. Janelle Clark (UMBC)
Unraveling Haptic Acuity in Human-Robot Partnerships
► Talk details
In-person event in CSL Studio 1232, Recording link: TBA
Abstract:
Robots are becoming important enablers for people in their work and daily lives, whether in assistive and performance-enhancing devices such as prosthetics and exoskeletons, or in teleoperated remote agents that complete tasks in distant or dangerous locations. The communication between people and their robotic devices must be intuitive and responsive in order to be effective. My work focuses on utilizing the sense of touch for human-robot communication, acknowledging the important role of haptic information in navigating our environment. In this talk, I will discuss my work in two areas of haptic technologies. The first is the design and integration of haptic devices in human-robot systems. This leads to the second, addressing a bottleneck in device saliency by combining methods from psychophysics and contact mechanics. Through my work, I aim to create intuitive communication with haptic systems, yielding more immersive and efficient interactions for people using robotic support.
Speaker Bio:
Janelle Clark is currently an assistant professor in the Mechanical Engineering Department at the University of Maryland, Baltimore County, where she leads the Tactile and Robotic Assistance (TARA) Lab. She received her Ph.D. in Mechanical Engineering from Rice University, Houston, TX, USA in 2022 in the Mechatronics and Haptic Interfaces (MAHI) Lab and completed a postdoctoral fellowship in the Human-Robot Interaction Laboratory in the Miner School of Computer Science at the University of Massachusetts Lowell, Lowell, MA, USA. Her research focuses on user-centric design and haptic interfaces for human-robot interactions in instances of shared embodiment, with application domains such as prosthetics, exoskeletons, teleoperation, and virtual reality. Her work combines principles from psychophysics, contact mechanics, robotics, hardware design, and controls to investigate physiological contributions to haptic perception and the creation of intuitive human-in-the-loop control systems.
Spring 2025 Schedule
1. Jan. 31st — Prof. Phanindra Tallapragada (Clemson University)
Spin is all you need
► Talk details
Recording link: https://uofi.box.com/s/uvzotk69o6kabmc9f8iyzirot7kcssyd
Abstract:
Mobility in robots is usually achieved by a few common means; with a few exceptions these are wheels or legs in ground robots, propellers in flying robots, articulated tails or fins and propellers in swimming robots, or flapping flagella in micro swimmers. Nonlinear dynamics can provide insights into alternative means of generating efficient mobility. The talk will present several examples of locomotion produced by the interplay of variations in the inertia tensor, constraints (holonomic and nonholonomic), and periodic actuation. The actuation of such robots is achieved by means of internal actuators that do not directly interact with the environment. Periodic motion or spin of an internal body such as a rotor can transfer high-frequency reaction forces and moments that in turn can produce oscillations of flexible structures like tails in a fish-like robot and legs or cilia in a soft robot. Further, these spin-generated forces modulate the forces at surfaces, producing discontinuous phenomena like slipping and jumping. In the low Reynolds number regime, spin actuation can produce propulsion of microswimmers, and spinning swimmers can manipulate small particles in a contactless manner. The talk will demonstrate this framework with a spin-driven swimming robot which has a locomotion efficiency approaching that of several species of fish, a spin-driven pipe-crawling robot, a spin-driven jumping robot, and a spin-driven microswimmer and particle manipulator. Spin is all one needs.
Speaker Bio:
Phanindra Tallapragada is an associate professor of mechanical engineering at Clemson University. He obtained his Ph.D. in Engineering Mechanics from Virginia Tech in 2010 and did postdoctoral research at the University of North Carolina at Charlotte. Earlier, he obtained his B.Tech and M.Tech in Civil Engineering from the Indian Institute of Technology, Kharagpur. He joined Clemson University as an assistant professor in 2013. His research interests are in dynamical systems and bioinspired locomotion related to terrestrial motion, fish-like swimming, low Reynolds number swimming, and operator methods for transport and manipulation in dynamical systems.
2. Feb. 7th — Prof. Girish Krishnan (Illinois)
Will Soft Robots Survive the Axe?
► Talk details
Recording link: https://uofi.box.com/s/2vhdeahk5sz2jn2l69hsvlatm3r5sh3a
Abstract:
We are witnessing a fascinating era in the evolution of autonomous robots. The integration of camera vision, AI-based perception, and advanced actuation and sensing technologies with off-the-shelf traditional robotic manipulators featuring multiple rigid degrees of freedom is nearing human-like precision. However, this precision, accuracy, and robustness are often compromised when the robot’s structure deviates from such traditional designs. Interestingly, around 90% of natural organisms are invertebrates, lacking a rigid backbone and exhibiting fluid, non-segmented morphologies. Creatures like snakes, octopus tentacles, and elephant trunks skillfully manipulate delicate objects without causing damage while still striking prey with remarkable precision. This raises a critical question: what fundamental advances in material structure design, actuation, sensing and perception, and controls are necessary to replicate these capabilities in bio-inspired robotic systems? This talk will showcase recent advancements in these areas for soft continuum manipulators, highlighting their potential applications in agriculture and healthcare. But are these advances sufficient to survive the axe, or will soft robots become obsolete before reaching the plateau of productivity?
Speaker Bio:
Girish Krishnan is an Associate Professor at the University of Illinois at Urbana-Champaign. He is also a Health Innovation Professor at the Carle Illinois College of Medicine. He obtained his PhD from the University of Michigan, Ann Arbor, and his master’s from the Indian Institute of Science. His research is at the intersection of design and robotics, particularly in soft robotics with applications in agriculture and healthcare. Prof. Krishnan is the recipient of the 2015 NSF Early Career award (CAREER), the 2016 UIUC council award for excellence in advising, and the 2017 Freudenstein Young Investigator award (ASME). He has published around 40 peer-reviewed journal articles and 50 conference proceedings, and holds four patents.
3. Feb. 14th — Illinois Student Talks: Patrick Naughton and Amartya Purushottam
Learning Effective Mappings for Teleoperated Manipulation
Dynamic Loco-Manipulation of a Wheeled Humanoid Robot via Whole-body Bilateral Teleoperation
► Talk details
Recording link: https://uofi.box.com/s/8eygnt0cs75vedh30jbd2jxf6634zx7q
Abstract:
Talk 1:
Teleoperation allows people to extend their perception and interaction capabilities to remote locations in cases where distance or safety constraints prevent them from physically traveling. While humans are extremely adept at a wide variety of manipulation tasks, transferring these skills through a robot has remained an open challenge for over half a century. Given constraints on an operator’s total cognitive load, high degree-of-freedom systems face a tradeoff between flexibility and precision. Giving the operator direct control over every joint on the robot provides them with the maximum flexibility to complete many kinds of tasks, but the high cognitive load associated with such an interface makes it impossible to properly coordinate the joints for tasks requiring high precision. In contrast, shared control systems that let the operator control the robot at a higher level of abstraction reduce this cognitive load and can improve the operator’s precision, but simultaneously reduce the flexibility of the interface to accomplish tasks for which it was not explicitly designed. Towards the goal of creating truly telepresent systems, I will discuss several learning and optimization techniques to customize robotic teleoperation interfaces to specific users and tasks, allowing operators to more naturally express their intent and enabling greater transfer of their manipulation skills.
Talk 2:
Humanoid robots have the potential to perform physically demanding tasks in industries such as manufacturing, construction, and healthcare. However, achieving precise and dynamic mobile manipulation—coordinating whole-body motion to interact forcefully with the environment—remains a challenge. Teleoperation provides a promising opportunity to bridge this gap by embedding human planning, coordination, and sensorimotor skills into robot control loops while also enabling rich data collection to enhance learning-based control policies. Yet, traditional teleoperation interfaces lack intuitive hands-free control and fail to provide adequate feedback, making it difficult for operators to simultaneously manage contact-rich locomotion and manipulation in real-time. In this talk, I will discuss the development of a bilateral teleoperation framework for controlling a wheeled humanoid robot for dynamic mobile manipulation using motion retargeting strategies, whole-body control, and a human-machine interface. The system allows operators to seamlessly switch between position and force control modes, balancing precision and compliance across different tasks. Additionally, haptic feedback improves operator awareness by synchronizing human-robot motion and conveying contact forces and moments from the robot’s interactions with the environment.
Speaker Bio:
Speaker 1:
Patrick Naughton is a PhD student at UIUC advised by Professor Kris Hauser. His work focuses on designing fluent interfaces for teleoperated manipulation using a combination of machine learning and optimization-based approaches. He previously received his Bachelor’s degree in Electrical Engineering at Washington University in St. Louis.
Speaker 2:
Amartya Purushottam is a PhD student at UIUC advised by Professor Joao Ramos & Professor Katie Driggs-Campbell. His work focuses on bilateral teleoperation, hybrid motion-force control, and developing humanoid robots. He previously received his Bachelor’s and Master’s degrees in Electrical Engineering at UIUC.
4. Feb. 25th (Thursday) — CSL Student Conference Robotics Session
Panelists: Professor Guanya Shi (CMU), Sheng Cheng (Illinois), Nicklas Hansen (UCSD)
Panel Discussion: Using Models in Robotics
5. March 7th — Prof. Wenhao Luo (UIC)
Harmonizing Safety and Resilience for Adaptive Multi-Agent Systems
► Talk details
Recording link: https://uofi.box.com/s/4hj1sk4oyqfuo107atasai79xd0oyduv
Abstract:
Reliable interactions among robots require safety and resilient operational guarantees that adapt to real-world conditions. However, most current algorithm designs are rigidly optimized for well-defined and predictable environments. This not only renders pre-computed guarantees brittle under uncertainties and failures, but also incentivizes overly conservative robot behaviors that compromise primary task performance. In this talk, I will present our recent work harnessing safety and resilience for robust and jointly optimized multi-agent autonomy. I will first discuss a control-theoretic method to enforce minimally disruptive safe robotic behaviors under uncertainties, and how to leverage the expressivity of control-theoretic analysis to inspire risk-aware, decentralized multi-agent coordination in autonomous driving. In the presence of incomplete information, I will then show a sample-efficient reinforcement learning framework that allows a robot to safely explore with certified predictive confidence while simultaneously optimizing task performance. Finally, I will turn to resilient autonomy through the example of communication-aware multi-robot coordination. A set of communication-motion co-design frameworks will be demonstrated that allow robots to (i) optimize low-level motion constraints for system-level property guarantees, (ii) utilize data for compositional constraints learning in a realistic environment, and (iii) endure or recover from unexpected robot failures, leading to co-optimized system resilience and task efficiency.
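As context for the "minimally disruptive" safe behaviors mentioned above, a common control-theoretic pattern is to project a nominal command onto a safety constraint only when that constraint would otherwise be violated. The sketch below is a generic control-barrier-function-style filter under assumed single-integrator dynamics; it is not the speaker's specific formulation, and all numbers are illustrative.

```python
# Illustrative sketch of a generic control-barrier-function filter (not the speaker's
# exact method): the nominal command is changed only when needed, by the smallest amount
# that keeps a barrier condition satisfied.
import numpy as np

OBSTACLE, RADIUS, ALPHA, DT = np.array([1.5, 0.0]), 0.5, 1.0, 0.05

def barrier(x):
    """h(x) >= 0 encodes safety: squared clearance from a circular obstacle."""
    return np.dot(x - OBSTACLE, x - OBSTACLE) - RADIUS**2

def min_intervention(u_nom, x):
    """Closed-form solution of: min ||u - u_nom||^2 s.t. grad_h(x) . u >= -ALPHA * h(x),
    assuming single-integrator dynamics x_dot = u."""
    a = 2.0 * (x - OBSTACLE)                     # gradient of h
    b = -ALPHA * barrier(x)
    slack = np.dot(a, u_nom) - b
    if slack >= 0.0:                             # nominal command already safe: keep it
        return u_nom
    return u_nom + (-slack / np.dot(a, a)) * a   # smallest correction onto the constraint

if __name__ == "__main__":
    x, goal = np.array([0.0, 0.02]), np.array([3.0, 0.0])
    for _ in range(150):
        u_nom = 0.8 * (goal - x)                 # simple go-to-goal nominal controller
        x = x + DT * min_intervention(u_nom, x)
    print("final state:", x, "stayed safe:", barrier(x) >= 0)
```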
Speaker Bio:
Wenhao Luo is an assistant professor in computer science at the University of Illinois Chicago. His research lies at the intersection of robotics, control theory, artificial intelligence, and machine learning. Specifically, his group develops principled methods for robust and interactive autonomy that enable robots to safely and effectively collaborate with each other and with humans in the physical world. Wenhao received his M.S. and Ph.D. in Robotics from Carnegie Mellon University in 2016 and 2021, respectively, and a B.E. degree with honors from Central South University in 2012. Before joining UIC, he was an assistant professor at UNC Charlotte (2021-2024) and a research intern at Microsoft Research in the summer of 2019 and 2020. Wenhao’s research has received best paper awards and nominations at multiple conferences, including AAMAS, IFAC CPHS, and workshops at ICRA, IROS, and IJCAI. He was also selected for AAAI New Faculty Highlights at AAAI 2023 and RSS Pioneers at RSS 2021. His work has been supported by NSF, USDA, DARPA, ONR, and AFOSR.
6. March 14th — Prof. Dave Cappelleri (Purdue)
Microrobotics to Agricultural Robots: Robots at Different Length-Scales Interacting with the Environment
► Talk details
In-person event in CSL Studio 1232, no recording
Abstract:
The Multi-Scale Robotics & Automation Lab (MSRAL) at Purdue University performs cutting-edge research on robotic and automation systems at various length scales: macro-scale (cm to m), meso-scale (~100’s of um to a few mm’s), and micro-scale (10’s of um to 100’s of um). All the developed systems are designed to interact with the environment in unique ways. In this talk, I will discuss some recent MSRAL microrobotics projects on different families of wireless mobile microrobots driven by external magnetic fields that we have developed over the years, as well as some recent work in agricultural robotics and mobile manipulation.
Speaker Bio:
David J. Cappelleri is the Assistant Vice President for Research Innovation in the Office of Research, B.F.S. Schaefer Scholar & Professor in the School of Mechanical Engineering, and Professor in the Weldon School of Biomedical Engineering (by courtesy) at Purdue University. Prof. Cappelleri founded the Multi-Scale Robotics & Automation Lab (MSRAL), which performs cutting-edge research on robotic and automation systems at various length scales. His research interests include mobile microrobotics for biomedical and manufacturing applications, surgical robotics, automated manipulation and assembly, and unmanned aerial and ground robot design for agricultural applications. Prof. Cappelleri is the Purdue site director for the NSF Engineering Research Center on the Internet of Things for Precision Agriculture (IoT4Ag). Prof. Cappelleri has received various awards, such as the NSF CAREER Award, the Harvey N. Davis Distinguished Assistant Professor Teaching Award, and the Association for Lab Automation Young Scientist Award, and is a Fellow of the American Society of Mechanical Engineers (ASME). He received a Bachelor of Mechanical Engineering degree from Villanova University, an M.S. in Mechanical Engineering from The Pennsylvania State University, and a PhD in Mechanical Engineering & Applied Mechanics from the University of Pennsylvania.
7. April 4th — Engineering Open House: Live Robot Demo Event
8. April 11th — External Student Talks: Letian Fu and Yifeng Zhu
What Would GPT for Robots Look Like? A Hypothesis via In-Context Imitation Learning
Learning Robot Manipulation Skills from Single-Video Human Demonstrations
► Talk details
Recording link: https://uofi.box.com/s/r3rher5wbcej7kxjkzxpo8vfggu8a328
Abstract:
Talk 1:
What if roboticists had our own GPT-3 moment? Current Vision-Language-Action (VLA) models for robots have predominantly followed a zero-shot paradigm akin to GPT-2, achieving promising yet limited generalization without task-specific fine-tuning. In this talk, I propose a new hypothesis: by embracing an autoregressive framework that treats robot actions as next-token predictions, we can unlock powerful in-context imitation learning capabilities similar to GPT-3. Inspired by breakthroughs in large language models, this approach leverages demonstration trajectories as prompts, allowing robots to quickly adapt and perform novel tasks in diverse environments without additional training. This talk will focus on ICRT (https://icrt.dev/) and discuss how GPT-style autoregression could help make embodied intelligence more flexible and generalizable.
Talk 2:
Recent advances in imitation learning have significantly improved robots’ ability to acquire diverse manipulation skills. However, the popular imitation learning pipelines typically require extensive teleoperation data and often struggle to generalize across different visual and spatial conditions. In this talk, I will present two of our recent approaches to teaching robots new manipulation skills using only single-video demonstrations. We frame this as ‘open-world imitation from observation,’ where robots can learn and execute new manipulation skills by watching just one video of human demonstration, without requiring additional robot data collection. Our methods enable robots to successfully deploy the derived policies in previously unseen scenarios with diverse visual and spatial conditions.
Speaker Bio:
Speaker 1:
Letian (Max) Fu is a PhD student in Electrical Engineering and Computer Sciences at UC Berkeley, advised by Professor Ken Goldberg. He is also a research intern at NVIDIA, where he works with Jim Fan and Yuke Zhu. Max’s research focuses on building scalable multi-modal robot foundation models that generalize across tasks and environments. His work explores how to leverage the inductive biases of pre-trained vision and language models to accelerate robot learning.
Speaker 2:
Yifeng Zhu is a Ph.D. candidate in Computer Science at the University of Texas at Austin, co-advised by Prof. Peter Stone and Prof. Yuke Zhu. His research focuses on robot learning and robot manipulation. His thesis focuses on how robots can learn generalizable manipulation policies from a small amount of data. Yifeng was selected as an RSS Pioneer in 2023.
9. April 18th — Dr. Shivam Vats (Brown)
Resource-Rational Robot Learning
► Talk details
Recording link: https://uofi.box.com/s/uooo78to7t89cr1wutbrznm93rt75ywk
Abstract:
Many of the recent advances in robot learning have been driven by scaling up computational, hardware, and human resources without a fundamental improvement in the efficiency of learning. Consequently, robots remain prohibitively expensive to train and struggle to adapt to novel situations, a key hallmark of intelligence. In my talk, I will discuss how resource-rational learning enables robots to master complex tasks with minimal experience. I will present metareasoning algorithms that guarantee resource-rationality by deliberating about what to learn and how to allocate resources like time, computation, and human assistance optimally. I will also show how simple algorithmic improvements, such as incorporating uncertainty awareness and state space factorization, can yield significantly more sample-efficient learning, ultimately enabling more scalable and adaptable robot behavior.
Speaker Bio:
Shivam Vats is a Postdoctoral Research Associate in the Department of Computer Science at Brown University, working with George Konidaris. His research focuses on developing efficient decision-making algorithms that improve with minimal experience, enabling robots to continually learn on their own. His work spans robot manipulation, human-robot collaboration, and multi-robot planning, with papers nominated for the Outstanding Interaction Paper Award at ICRA 2022 and selected for a Spotlight presentation at ICLR 2025. He earned his PhD in robotics from Carnegie Mellon University, where he was co-advised by Maxim Likhachev and Oliver Kroemer.
10. April 25th — Prof. Sylvia Herbert (UCSD)
Constructing Neural Safety Filters for Autonomous Systems
► Talk details
Recording link: https://uofi.box.com/s/xpfes3as5p8iw9mu0q3721r335q8sue7
Abstract:
Safety filters are attractive as a modular framework for “robustifying” an autonomous system and have become increasingly popular in an era of black-box nominal control policies. However, constructing valid safety filters is a challenge for many real-world high-dimensional autonomous systems. Learning-based approaches to designing safety filters are one possible solution, though they suffer from learning errors and are particularly sensitive to distribution shifts in online deployment, which is not ideal for safety. In this talk, I will highlight two of the approaches we have taken to improve outcomes for neural safety filters:
For lower-dimensional systems (e.g., 4D-6D), we have introduced tools for refining neural safety filters. This combines the scalability of learning-based approaches with the safety guarantees of control-theoretic methods. Additionally, refinement allows for online adaptation in the face of novel information.
For higher-dimensional systems (7D-50D+), we show how semi-supervised learning using techniques from applied math and control theory can better guide the learning process. This work was recently nominated for best paper at the Learning for Dynamics and Control (L4DC) conference. Additionally, by parameterizing environmental conditions, we can demonstrate adaptability online.
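Both directions build on the same basic safety-filter pattern: monitor a value function and override a nominal (possibly black-box) policy only when safety is about to be lost. The following is a deliberately simplified, generic sketch of that pattern, with a hand-crafted stand-in for what would in practice be a learned (neural) value function; it is not the speaker's method, and all quantities are assumptions for illustration.

```python
# Generic illustration of a least-restrictive safety filter (not the speaker's method).
# A nominal policy is overridden by a safe fallback whenever a value function predicts
# that the next state would leave the safe set. The "value" here is a hand-crafted
# stand-in (signed distance to a circular obstacle) for a learned value function.
import numpy as np

OBSTACLE, RADIUS, DT = np.array([2.0, 0.0]), 1.0, 0.1

def value(x):
    """Stand-in for a (possibly neural) safety value function: positive means safe."""
    return np.linalg.norm(x - OBSTACLE) - RADIUS

def dynamics(x, u):
    """Single-integrator dynamics: state [px, py], control = velocity."""
    return x + DT * u

def nominal_policy(x, goal=np.array([4.0, 0.0])):
    """Black-box nominal controller: head straight for the goal."""
    d = goal - x
    return d / (np.linalg.norm(d) + 1e-9)

def safe_policy(x):
    """Fallback: steer directly away from the obstacle."""
    d = x - OBSTACLE
    return d / (np.linalg.norm(d) + 1e-9)

def filtered_policy(x, margin=0.1):
    """Use the nominal action unless it would drive the predicted value below a margin."""
    u_nom = nominal_policy(x)
    if value(dynamics(x, u_nom)) > margin:
        return u_nom
    return safe_policy(x)

if __name__ == "__main__":
    x = np.array([0.0, 0.05])
    for _ in range(80):
        x = dynamics(x, filtered_policy(x))
    print("final state:", x, "final clearance positive:", value(x) > 0)
```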
Speaker Bio:
Sylvia Herbert is an Assistant Professor of Mechanical and Aerospace Engineering at the University of California, San Diego. Her research focus is to enable efficient and safe decision-making in robots and other complex autonomous systems while reasoning about uncertainty in real-world environments and human interactions. These techniques are backed by both theory and physical testing on robotic platforms.
Prior to joining UCSD, Professor Herbert received her PhD in Electrical Engineering from UC Berkeley, where she studied with Professor Claire Tomlin on safe and efficient control of autonomous systems. Before that, she earned her BS/MS at Drexel University in Mechanical Engineering. She is the recipient of the ONR Young Investigator Award, 2023 IROS Robocup Best Paper Award, 2025 Learning for Dynamics and Control (L4DC) Best Paper Nomination, Hellman Fellowship, UCSD JSOE Early Career Faculty Award, UC Berkeley Chancellor’s Fellowship, UC Berkeley Outstanding Graduate Student Instructor Award, and the Berkeley EECS Demetri Angelakos Memorial Achievement Award for Altruism.
11. May 2nd — Prof. Rika Antonova (Cambridge University)
The Ingredients for Efficient Robot Learning and Exploration
► Talk details
Recording link: https://uofi.box.com/s/4c5b7saf7q7q0ceiev1hd7raf3qfz0nk
Abstract:
In this talk, I will outline the ingredients for enabling efficient robot learning. First, I will demonstrate how large vision-language models can enhance scene understanding and generalization, allowing robots to learn general rules from specific examples for handling everyday objects. Next, I will describe methods for leveraging equivariance to significantly reduce the amount of training data needed for learning from human demonstrations.
Moving beyond demonstrations, I will discuss how simulation can enable robots to learn autonomously. I will describe the challenges and opportunities of aligning differentiable simulators with reality, and also introduce methods for facilitating reinforcement learning with ‘black-box’ simulators. To further expand robot capabilities, we can adapt the hardware. In particular, I will demonstrate how differentiable simulation can be used for learning tool morphology to automatically adapt tools for robots. I will also outline experiments with new affordable and robust sensors. Finally, I will share plans for our new project on co-design of hardware and policy learning, which will leverage global optimization, rapid prototyping, and real-to-sim to jointly search the vast space of hardware designs and reinforcement learning methods.
Speaker Bio:
Rika Antonova is an Associate Professor at the University of Cambridge. Her research interests include data-efficient reinforcement learning algorithms, robotics, and active learning & exploration. Earlier, Rika was a postdoctoral scholar at Stanford University, supported by a Computing Innovation Fellowship from the US National Science Foundation. Rika completed her PhD at KTH, and earlier she obtained a research Master’s degree from the Robotics Institute at Carnegie Mellon University. Before that, Rika was a senior software engineer at Google.
Fall 2024 Schedule
1. Sept. 6th — Illinois Robotics Group Collaboration meeting: Making Connections, in person
2. Sept. 20th — Prof. Negar Mehr (UC Berkeley)
Interactive Autonomy: Game-Theoretic Learning and Control for Multi-Agent Interactions
► Talk details
Recording link: https://uofi.box.com/s/igtm12fbuhfpfe3s2ru25ngy8nj5smrf
Abstract:
To transform our lives, autonomous systems need to interact with other agents in complex shared environments. For example, autonomous cars need to interact with pedestrians, human-driven cars, and other autonomous cars. Autonomous delivery drones need to navigate aerial space shared with other drones, and mobile robots in a warehouse must navigate factory space shared with other robots. The multi-agent nature of such application domains requires us to develop a systematic methodology for enabling efficient interactions of autonomous systems across various applications. In this talk, I will first focus on game-theoretic planning and control for robots. To reach intelligent robotic interactions, robots must account for the dependence of agents’ decisions upon one another. I will discuss how game-theoretic planning and control enables robots to be cognizant of their influence on other agents. I will present our recent results on leveraging the structure that is inherent in interactions to develop efficient motion planning algorithms which are suitable for real-time operation on robot hardware. In the second part of the talk, I will focus on how robots can learn and infer the intentions of their surrounding agents to account for agents’ preferences and objectives. Currently, robots can infer the objectives of isolated agents within the formalism of inverse reinforcement learning; however, in multi-agent domains, agents are not isolated, and the decisions of all agents are mutually coupled. I will discuss a mathematical theory and numerical algorithms for inferring these interrelated preferences from observations of agents’ interactions.
Speaker Bio:
Negar Mehr is an assistant professor in the Department of Mechanical Engineering at the University of California, Berkeley. Before that, she was an assistant professor of Aerospace Engineering at the University of Illinois Urbana-Champaign. She was a postdoctoral scholar at Stanford Aeronautics and Astronautics department from 2019 to 2020. She received her Ph.D. in Mechanical Engineering from UC Berkeley in 2019 and her B.Sc. in Mechanical Engineering from Sharif University of Technology, Tehran, Iran, in 2013. She is a recipient of the NSF CAREER Award. She was awarded the IEEE Intelligent Transportation Systems best Ph.D. dissertation award in 2020.
3. Sept. 27th — Dr. Joe Norby (CMU/Apptronik)
Legged Robotics in Academia and Industry: Finding the Right Problem to Solve
► Talk details
Recording link: No recording
Abstract:
Optimal control problems for dynamic legged locomotion are difficult to solve while respecting real-time constraints. Identifying a technical strategy for a general purpose humanoid product requires careful thinking about state-of-the-art and product needs. Fundamental to both is identifying the correct problem to solve given all the constraints at hand. This talk will explore these topics and provide a perspective on how researchers and industry professionals can learn from one another about how to solve hard problems.
Speaker Bio:
Joe Norby is the lead for the Motion Control and Planning Team and the acting Director of Controls at Apptronik. He received his B.S. degree in mechanical engineering from the University of Notre Dame in 2016 and his Ph.D. degree in mechanical engineering from Carnegie Mellon University in 2022. His graduate work focused on global and local motion planning for autonomous agility in legged robots. At Apptronik, he specializes in navigation, dynamic locomotion, contact-rich manipulation, and real-time control for humanoid robots that perform useful work.
4. Oct. 4th — Prof. Dan Halperin (Tel Aviv University)
Ten Problems in Geobotics
► Talk details
Recording link: https://uofi.box.com/s/xr3ppgywubvs34v8gg54ajs2abadyfgj
Abstract:
Can we optimize the coordinated motion of a fleet of robots, when even for two robots we know so little? Can we quickly decide if one object could cage another? Is there an effective way to explain why we failed to find an assembly plan for a new product design? We review these and other challenging problems at the intersection of robotics and computational geometry—let’s call this intersection Geobotics. What is common to most of these problems is that the prevalent algorithmic techniques used in robotics do not seem suitable for solving them, or at least do not suggest quality guarantees for the solution. Solving some of them, even partially, can shed light on less well-understood aspects of computation in robotics. Joint work with Mikkel Abrahamsen.
Speaker Bio:
Dan Halperin received his Ph.D. in Computer Science from Tel Aviv University, after which he spent three years at the Computer Science Robotics Laboratory at Stanford University. He then joined the Department of Computer Science at Tel Aviv University, where he is currently a full professor and for two years was the department chair. Halperin’s main field of research is Computational Geometry and Its Applications. Application areas he is interested in include robotics, automated manufacturing, algorithmic motion planning, and 3D printing. A major focus of Halperin’s work has been in research and development of robust geometric software, in collaboration with a group of European universities and research institutes: the CGAL project and library, which recently won the SoCG test of time award. Halperin was the program-committee chair/co-chair of several conferences in computational geometry, algorithms and robotics, including SoCG, WAFR, ESA, and ALENEX. Halperin is an ACM Fellow and an IEEE Fellow.
5. Oct. 11th — Prof. Kaiyu Hang (Rice University)
Reducing Wishful Assumptions for Increasing Robustness in Robot Manipulation
► Talk details
Recording link: No recording
Abstract:
Manipulation is an essential skill enabling robots to physically engage with the real world. Being a topic involving multiple sub-problems, including planning, control, learning, design, and estimation, robot manipulation still remains a highly challenging field after decades of extensive research. Among others, one major issue always associated with manipulation systems is that it is hard to transfer most of the “successful” solutions from labs to real-world applications, be they learning-based or analytical approaches, due to physical and perception uncertainties, among other factors. In other words, we have been making too many wishful assumptions in the development of robot systems, such as that we will have enough data for training, we will have good sensors, etc. In this talk, I will focus on the exploration of common wishful assumptions, their impacts, as well as how we can develop robots with fewer such assumptions. Research topics including robot self-identification, interactive perception, manipulation funnels, caging-based manipulation, and end-effector design will be discussed.
Speaker Bio:
Kaiyu Hang is an Assistant Professor of Computer Science at Rice University, where he leads the Robotics and Physical Interactions Lab. He is broadly interested in robotic systems that can physically interact with other robots, people, and the world. By developing algorithms in optimization, planning, learning, estimation, and control, his research is focused on efficient, robust, and generalizable manipulation systems, addressing problems that range from small scale grasping and in-hand manipulation, to large scale dual-arm manipulation, mobile manipulation, and multi-robot manipulation.
6. Oct. 25th — Dr. Lukas Kaul (Toyota Research Institute)
A multi-path approach to making robots useful in the real world
► Talk details
Recording link: https://uofi.box.com/s/8milojak13qu58qtyevtn9unjmb8dpjy
Abstract:
The rapid commoditization of high-performance hardware for compute, sensing, and actuation, paired with recent incredible breakthroughs in data-driven methods for robot control, makes a wave of new robot applications seem imminent. Yet the long-standing vision of widespread real-world use of cutting-edge robotics in areas where it is desperately needed is still far from being fulfilled. Why is that, and how can we bridge the gap from what seems possible in the lab today to deployments in actual use cases that can have a positive impact on society? This talk describes TRI’s answer to this question: a multi-path approach that spans various technologies and time horizons. It also highlights several of TRI’s robotics research achievements and recent progress in autonomous mobile manipulation.
Speaker Bio:
Dr. Lukas Kaul is a Senior Research Scientist with the robotics division at the Toyota Research Institute in Los Altos, California. His current work revolves around hardware system design and algorithms for autonomous mobile manipulators, with a focus on the challenges of real-world deployability. He received his PhD in computer science from the Karlsruhe Institute of Technology (KIT) in 2019 for a thesis on efficient methods for humanoid balancing. Prior to that, he received his B.Sc. and M.Sc. in mechanical engineering from KIT, where his work spanned topics ranging from control circuits for high-voltage micro vibratory conveyors to large-scale mapping of underground caves with drones. Passionate about making robotics ever more accessible, he published his book “Practical Arduino Robotics”, a guide to getting started with building robots, in 2023.
7. Nov. 1st — In-Person Event on the Robotics Industry
Panel Discussion: Experiences in Robotics Startup
► Details
Recording link: https://uofi.box.com/s/2l2m8pmvij78d8j6qrm9hbpwvazopzrd
8. Nov. 8th — Prof. Hiroyasu Tsukamoto (Illinois)
The Power of Contraction in Control, Learning, and Beyond: Toward Trustworthy Aerospace and Robotic Autonomy
► Talk details
Recording link: https://uofi.box.com/s/tr1l89du3fzx55xji281pj55fpaaqpcg
Abstract:
Contraction theory provides an analytical tool for studying differential dynamics of nonlinear systems under a contraction metric, the existence of which results in a necessary and sufficient characterization of the incremental exponential stability of multiple solution trajectories with respect to each other. More intuitively, it defines a systematic way to measure the distance of the systems’ current performance to their ideal, exponentially decreasing in time. It is increasingly recognized that the concept of contraction is fundamental in driving systems to their ideal counterparts both in physics-informed and data-driven systems. This talk gives a brief mathematical overview of contraction theory, with some recent efforts in generalizing the ideas to broader problem settings. Its practical impact is demonstrated through applications in several joint projects with NASA-JPL.
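For readers less familiar with the terminology, one standard statement of the contraction condition is sketched below; the notation is generic background, not the talk’s own formulation.
```latex
% Sketch: contraction of \dot{x} = f(x,t) in a uniformly positive-definite
% metric M(x,t). If, for some \lambda > 0,
\dot{M} + M\,\frac{\partial f}{\partial x} + \frac{\partial f}{\partial x}^{\!\top} M \;\preceq\; -2\lambda M,
% then any two solution trajectories converge to each other exponentially,
\|x_1(t) - x_2(t)\| \;\le\; C\,e^{-\lambda t}\,\|x_1(0) - x_2(0)\|,
% where C depends on the condition number of M. This is the incremental
% exponential stability referred to in the abstract.
```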
Speaker Bio:
Hiroyasu Tsukamoto is an Assistant Professor of Aerospace Engineering at the University of Illinois at Urbana-Champaign and the director of the ACXIS Laboratory (Autonomous Control, Exploration, Intelligence, and Systems). Prior to joining Illinois, he was a Postdoctoral Research Affiliate in Robotics at the NASA Jet Propulsion Laboratory, where he contributed to the Science-Infused Spacecraft Autonomy for Interstellar Object Exploration and Multi-Spacecraft Autonomy Technology Demonstration projects. He received his M.S. and Ph.D. in Space Engineering (Autonomous Robotics and Control) from Caltech in 2018 and 2023, respectively, and his B.S. degree in Aeronautics and Astronautics from Kyoto University, Japan, in 2017. He is the recipient of several awards, including the William F. Ballhaus Prize for the Best Doctoral Dissertation in Space Engineering at Caltech and the Innovators Under 35 Japan Award from MIT Technology Review. More info: https://hirotsukamoto.com
9. Nov. 15th — Prof. Leia Stirling (U-M)
Moving, thinking, and wearing an exoskeleton: Examinations of exoskeleton interactions with human information processing
► Talk details
Recording link: https://uofi.box.com/s/pcadg90t8c63qbq8h2oyphyt234l5tra
Slides link: https://uofi.box.com/s/pcadg90t8c63qbq8h2oyphyt234l5tra
Abstract:
Exoskeletons have been proposed to augment, assist, and rehabilitate motion. The efficacy of an exoskeleton in supporting the designed goals is affected by how a person moves with and uses the exoskeleton. This fluency between the human operator and exoskeleton is affected by the alignment between dimensions of the person and exoskeleton, but is also affected by the manner in which the exoskeleton is integrated in the perception-cognition-action decision process of the operator. In this talk, I will highlight studies that examine the interactions between sensory perception, executive function, and motor action selection while wearing a lower-body active exoskeleton during goal-oriented tasks.
Speaker Bio:
Leia Stirling is an Associate Professor in the Industrial and Operations Engineering Department and the Robotics Department, an Affiliate Faculty member of the Space Institute, and Director of Occupational Safety Engineering and Ergonomics in the University of Michigan Center for Occupational Health and Safety Engineering (COHSE). Her research group brings together methods from human factors, biomechanics, and robotics to understand the physical and cognitive interactions for goal-oriented human task performance and to support operational decision making that relies on manual task performance. These goals may include reducing musculoskeletal injury risks, supporting telehealth, and improving technology usability. Her research has been funded by the National Science Foundation, NASA, NIOSH, Boeing, and the U.S. Army.
10. Nov. 22nd — Prof. Siyi Xu (Illinois)
Electrical actuation and control of soft fluidic robots: towards wearable and implantable soft robotic systems for daily purposes
► Talk details
Recording link: https://uofi.box.com/s/ptc2h7k9r27dl957dw2g3e8woeeenkrh
Abstract:
Soft robots made of compliant materials are adaptive to their environment, robust to impact, and lightweight, making them safe for human-centered applications. Fluidic robots, one of the most commonly used classes of soft robots, have been widely applied in wearable technologies, including assistive devices, therapeutic tools, and human-machine interfaces. However, the power and control systems for fluidic robots are generally not only rigid but also ten times heavier than the actuators, significantly limiting the robots’ mobility, flexibility, and safety around humans. In this talk, I will introduce compliant, lightweight, and compact transducing systems that enable actuation and control of fluidic actuators. I will start by introducing a power-dense, electrically responsive soft actuator that is centimeter-scale in size and weighs only 0.1 g. Then I will present centimeter-scale soft valves and peristaltic pumps that employ these soft actuators as core components. I will discuss the working mechanism and design criteria of these valves and pumps, and demonstrate their capabilities to generate and control fluidic flows. Lastly, I will demonstrate closed-loop control of a bending hydraulic actuator with the aforementioned soft valves and pumps. These soft actuators and transducing systems will serve as the foundation for my future work on untethered soft robotic systems for everyday applications, paving the way for advancements in wearable and implantable technologies.
Speaker Bio:
Siyi Xu is an Assistant Professor of Mechanical Science and Engineering at the University of Illinois at Urbana-Champaign and the director of the WISER Laboratory (Wearable and Implantable Soft Electronics and Robotics Lab). Prior to joining UIUC, she was a postdoctoral fellow at the Querrey Simpson Institute for Bioelectronics at Northwestern University, where she worked on implantable sensors for cardiac output flow and drug delivery monitoring for chemotherapy patients. She received her PhD in Mechanical Engineering at Harvard University in 2022. Her research focuses on developing soft robotic platforms equipped with compliant, lightweight, and compact sensing and transducing systems for human-centered applications. Siyi received her bachelor’s degree in Materials Science and Engineering from the University of Illinois at Urbana-Champaign. She was selected as a Rising Star in Mechanical Engineering in 2021. She is also a co-recipient of the IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award.
11. Dec. 6th — External Student Panel
► Details
Recording link: https://uofi.box.com/s/84g0o3mh73j9llcd5d77p2lv5yoj17nf
Spring 2024 Schedule
1. Jan. 26th — Prof. Timothy Bretl (Illinois)
Mechanics, Manipulation, and Perception of Deformable Objects
► Talk details
Recording link: https://uofi.box.com/s/rlw8msdrgosqtmzu433gtwn2a3yr31ji
2. Feb. 2nd — Illinois Robotics Group Icebreaker Introduction, in person
3. Feb. 9th — Prof. Rob Platt (Northeastern)
Leveraging Symmetries to Improve Robotic Policy Learning
► Talk details
Recording link: No recording
Abstract:
Many robotics problems have transition dynamics that are symmetric in SE(2) and SE(3) with respect to rotation, translation, scaling, reflection, and other transformations. In these situations, any optimal policy will also be symmetric over these transformations. In this talk, we leverage this insight to improve the sample efficiency of policy learning by encoding the symmetries directly into the neural network model using group-invariant and equivariant layers. The result is that we can learn non-trivial visuomotor control policies with very little experience. In some cases, we can learn good policies from scratch by training directly on real robotic hardware in real time. We apply this idea both to imitation learning and reinforcement learning and achieve state-of-the-art results in both cases.
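As illustrative background only: the snippet below shows the simplest way to make a network’s output exactly invariant to 90-degree rotations of an image observation, by averaging over the group (symmetrization). The talk’s methods use dedicated equivariant layers rather than this averaging trick, and all names in the sketch are hypothetical.
```python
# Hedged sketch: a C4-invariant critic via group averaging. This is a toy
# stand-in for the invariant/equivariant layers mentioned in the abstract.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))          # a toy "network": one linear layer

def value(obs):
    """Plain (non-symmetric) value estimate of a 64x64 image observation."""
    return float(np.sum(W * obs))

def value_c4_invariant(obs):
    """Average the plain value over all four 90-degree rotations of obs."""
    return float(np.mean([value(np.rot90(obs, k)) for k in range(4)]))

obs = rng.standard_normal((64, 64))
print(value(obs), value(np.rot90(obs)))                            # differ
print(value_c4_invariant(obs), value_c4_invariant(np.rot90(obs)))  # equal
```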
Speaker Bio:
Rob Platt is an Associate Professor in the Khoury College of Computer Sciences at Northeastern University and a Faculty Fellow at BDAI. He is interested in developing robots that can perform complex manipulation tasks alongside humans in the uncertain everyday world. Much of his work is at the intersection of robotic policy learning, planning, and perception. Prior to coming to Northeastern and BDAI, he was a Research Scientist at MIT and a technical lead at NASA Johnson Space Center.
4. Feb. 15th (Thursday) — CSL Student Conference Robotics Session
Panelists: Dr. Andy Zeng (DeepMind), Jiayuan Mao (MIT), Professor Yunzhu Li (Illinois)
Panel Discussion: Foundation Models for Robot Learning
5. Feb. 23rd — Prof. Mark Plecnik (Notre Dame)
The Purposeful Placement of Singularities
► Talk details
Recording link: https://uofi.box.com/s/rt3f9q3dohedfr99sad62p1ijsoog505
Abstract:
Kinematic singularities in robots are generally problematic. There are two main types of singularities. For the first type, a robot loses its ability to exert forces in a certain direction. For the second type, a robot loses its ability to move in a certain direction. Therefore, it is usually best that singularities be avoided. This presentation takes a backward approach. Singularities are embraced, and we explore what extra functionalities might be obtained by purposefully designing singularities into the configuration space. This act of design is nonelementary. We take a computational approach, one based in root-finding. The root-finding problems posed can be huge, so we have pushed the bounds in developing more capable root-finding algorithms. Results are packaged into visualizations suitable for design space exploration.
Speaker Bio:
Mark Plecnik is an assistant professor in the Department of Aerospace and Mechanical Engineering at the University of Notre Dame. His research is grounded in the computational design of robot geometries. His current focus is in forming the prevailing dynamics of a robot by shaping its configuration space rather than depending purely on motor control to achieve desired motions. Plecnik received the NSF CAREER Award in 2022. He is a senior member of IEEE.
6. March 1st — Internal Research lightning talk, in person
► Details
This event will consist of short faculty and student presentations sharing research insights. Registration link: https://docs.google.com/forms/d/e/1FAIpQLSdNLdfT4BjZgmL5jl2jUjIB6MBriWhgCrHavkk25HLn9m1Lgg/viewform
7. March 8th — Prof. Henny Admoni (CMU)
Robots that Learn From and Collaborate with People
► Talk details
Recording link: https://uofi.box.com/s/ib5f5v9dxhv4rgg7inpjynkqs6tpttv4
Abstract:
Human-robot collaboration has the potential to improve people’s lives by enabling robots to provide timely assistance. For collaborative robots to live up to their potential, they must be able to learn from people and adapt to their human partners’ needs. While learning from demonstration has been successful for learning robot skills, demonstrations are only one way that people teach each other. In this work, we present an extended learning framework that formalizes four teaching archetypes. We empirically show that the type of feedback people provide has different benefits to the learner and costs to the teacher. We also describe how robots can actively learn by selecting the best feedback type at any given moment. Finally, we discuss how robots should communicate their own models to help people teach more effectively.
Speaker Bio:
Dr. Henny Admoni is an Associate Professor in the Robotics Institute at Carnegie Mellon University, where she leads the Human And Robot Partners (HARP) Lab. Dr. Admoni’s research interests include human-robot interaction, assistive robotics, and nonverbal communication. Dr. Admoni holds a PhD in Computer Science from Yale University, and a BA/MA joint degree in Computer Science from Wesleyan University.
8. March 22nd — Prof. Michael Posa (UPenn)
Do we really need all that data? Learning and control for contact-rich manipulation
► Talk details
Recording link: https://uofi.box.com/s/7zsydpq40v2nx5wlc1u9ugmzgj1trn44
Abstract:
For all the promise of big-data machine learning, what will happen when robots deploy to our homes and workplaces and inevitably encounter new objects, new tasks, and new environments? If a solution to every problem cannot be pre-trained, then robots will need to adapt to this novelty. Can a robot, instead, spend a few seconds to a few minutes gathering information and then accomplish a complex task? Why does it seem that so much data is required, anyway? I will first argue that the hybrid, contact-driven aspects of manipulation clash with the inductive biases inherent in standard learning methods, driving this current need for large data. I will then show how contact-inspired implicit learning, embedding convex optimization, can reshape the loss landscape and enable more accurate training, better generalization, and ultimately data efficiency. Finally, I will present our latest results on how these learned models can be deployed via real-time multi-contact MPC for dexterous robotic manipulation, where the robot must autonomously make and break contact and initiate stick-slip transitions.
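For context on why contact makes these dynamics hybrid, the standard complementarity condition between contact force and separation distance is sketched below; this is generic background notation, not the specific formulation used in the talk.
```latex
% Sketch: normal contact force \lambda_n and signed gap \phi(q) satisfy
0 \;\le\; \lambda_n \;\perp\; \phi(q) \;\ge\; 0,
% i.e., \lambda_n \ge 0, \phi(q) \ge 0, and \lambda_n\,\phi(q) = 0: force can
% act only when the gap is closed. This nonsmooth, mode-dependent structure is
% what clashes with the smooth inductive biases of standard learning methods.
```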
Speaker Bio:
Michael Posa is an Assistant Professor in Mechanical Engineering and Applied Mechanics at the University of Pennsylvania. He leads the Dynamic Autonomy and Intelligent Robotics (DAIR) lab, a group within the Penn GRASP laboratory. His group focuses on developing computationally tractable algorithms to enable robots to operate both dynamically and safely as they interact with their environments. Michael received his Ph.D. in Electrical Engineering and Computer Science from MIT in 2017 and his B.S. in Mechanical Engineering from Stanford University in 2007. Before his doctoral studies, he worked as an engineer at Vecna Robotics. He received the NSF CAREER Award in 2023, the RSS Early Career Spotlight in 2023, a Google Faculty Research Award, and a Young Faculty Researcher Award from the Toyota Research Institute. His work has also received award recognition at T-RO, ICRA, Humanoids, and HSCC.
9. March 29th — External Student Talks: Haozhi Qi (UC Berkeley) and Hyung Ju Terry Suh (MIT)
Manipulation and Perception with Multimodal Robot Hands
Exposing the Real Difficulties of Contact-rich Manipulation
► Talk details
Recording link: https://uofi.app.box.com/s/9rdz2y3wdoheet9e214r5wnd0a05o4m6
Abstract: TBA
Speaker Bio: TBA
10. April 12th — Faculty Panel Discussion, in person
11. April 19th — Prof. Yan Gu (Purdue)
Modeling, Estimation, and Control of Robot Locomotion in Non-inertial Shipboard Environment
► Talk details
Recording link: https://uofi.box.com/s/g600o7q3d2mmcmjsojgytsutnhm7y4sa
Abstract:
Legged robots have the potential to assist humans with a wide range of real-world tasks in dynamic, unstructured environments. While legged robot operation in inertial environments has been extensively studied, legged locomotion in non-inertial settings (e.g., ships and oil platforms) remains a new robot functionality that has not been solved. This new functionality will empower legged robots to perform critical, high-risk tasks such as shipboard maintenance, inspection, firefighting, and fire suppression, as well as surveillance and disinfection on moving public transportation vehicles. Yet enabling reliable locomotion in a non-inertial environment presents substantial fundamental challenges in robot control due to the high complexity of the hybrid, time-varying physical interaction between the robot and the environment. Dr. Gu will present the current progress from her research group in creating new approaches to robot modeling, state estimation, and control that achieve provably robust quadrupedal and humanoid locomotion in non-inertial environments. She will also report in-lab and shipboard test results that reveal major performance gaps of commercial legged robot systems in addressing dynamic shipboard environments.
Speaker Bio:
Yan Gu received the B.S. degree from Zhejiang University, China, in 2011 and the Ph.D. degree from Purdue University in 2017, both in Mechanical Engineering. She joined the School of Mechanical Engineering at Purdue University as an Associate Professor in Fall 2022. Prior to joining Purdue, she was an Assistant Professor in the Department of Mechanical Engineering at the University of Massachusetts Lowell. Her research focuses on modeling, state estimation, planning, and control of legged locomotion and manipulation in highly dynamic and complex environments, including non-inertial settings. Her research draws on nonlinear control theory, the theory of hybrid systems, dynamics, and optimization to advance robot modeling, state estimation, and control. Dr. Gu is an Associate Editor for the IEEE/ASME Transactions on Mechatronics and IEEE Robotics and Automation Letters, as well as a Guest Editor for the IEEE Transactions on Robotics. She received the Young Investigator Program Award from the Office of Naval Research in 2023, the Faculty Early Career Development Program (CAREER) Award from the National Science Foundation in 2021, and Verizon’s 5G Robotics Challenge Award in 2019. Her research has been reported by various media outlets such as the Boston Globe, CNET, Robotics Business Review, and NPR’s WBUR.
12. April 26th — Dr. Zachary Serlin (MIT)
Real-time Tasking and Coordination Algorithms for Autonomous Systems from High-level Specifications
► Talk details
Recording link: https://uofi.box.com/s/yn0r8r61lnslt3fvqq6qvqqiwmpyl9xv
Abstract:
Deploying a fleet of autonomous vehicles effectively is a challenging problem. A team of robots working together has the potential to outperform a large monolithic system because the team offers intrinsic resilience to individual failures and can inhabit large geographic areas while lowering the costs of individual robots. The team can also expand its capabilities by using heterogeneous robots with different abilities (e.g., sensors, effectors, dynamics, or computation). However, in order to bring these advantages to fruition, a user needs the ability to specify desired collective behaviors for the swarm while maintaining mechanisms for self-monitoring, self-reconfiguration, and self-repair. Using existing technologies, the job of tasking and controlling large numbers of systems often overwhelms human operators, as the number of variables in their decision-making process grows exponentially. Existing automated approaches for coordinating heterogeneous teams of robots either consider small numbers of agents, are application-specific, or do not adequately address common real-world requirements, e.g., strict deadlines or inter-task dependencies.
This talk will address these challenges by introducing scalable and robust algorithms for task-based coordination of heterogeneous teams from high-level specifications. I will talk about a formal specification language, capability temporal logic (CaTL), used to describe rich temporal properties involving tasks that require the participation of multiple agents with multiple capabilities, e.g., sensors or end effectors. Arbitrary missions and team dynamics can be jointly encoded as constraints in a mixed integer linear program and solved efficiently using commercial off-the-shelf solvers. I will then walk through several hardware demonstrations and some computational results from deploying a planning and control system based on CaTL.
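As a rough illustration of the "encode everything as a MILP" idea (greatly simplified, with hypothetical numbers and variable names, and omitting CaTL’s temporal operators and team dynamics), a toy capability-based assignment problem can be posed and solved as follows:
```python
# Hedged sketch: a toy capability-based task-assignment MILP. Names and
# numbers are illustrative, not the encoding from the talk.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

n_agents, n_tasks = 4, 2
capability = np.array([1, 1, 0, 1])   # 1 if agent i carries the needed sensor
demand = np.array([2, 1])             # capable agents required by each task
rng = np.random.default_rng(0)
cost = rng.uniform(1.0, 5.0, size=(n_agents, n_tasks))  # e.g., travel times

# Binary decision variables x[i, j], flattened row-major.
c = cost.ravel()
A_rows, lb, ub = [], [], []
# Each agent takes at most one task.
for i in range(n_agents):
    row = np.zeros(n_agents * n_tasks)
    row[i * n_tasks:(i + 1) * n_tasks] = 1.0
    A_rows.append(row); lb.append(0.0); ub.append(1.0)
# Each task receives at least demand[j] capable agents.
for j in range(n_tasks):
    row = np.zeros(n_agents * n_tasks)
    for i in range(n_agents):
        row[i * n_tasks + j] = capability[i]
    A_rows.append(row); lb.append(float(demand[j])); ub.append(np.inf)

res = milp(
    c=c,
    constraints=LinearConstraint(np.vstack(A_rows), lb, ub),
    integrality=np.ones(c.size),   # all variables integer...
    bounds=Bounds(0, 1),           # ...and binary
)
print("assignment:\n", res.x.reshape(n_agents, n_tasks).round())
```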
Speaker Bio:
Zachary Serlin is currently a technical staff research scientist in the Air, Missile, and Maritime Defense Technology Division at MIT Lincoln Laboratory. His research focuses on formal guarantees for artificial intelligence-based multi-agent autonomous systems. He has worked on large-scale logistics and command and control systems, automated strategy generation systems for heterogeneous multi-agent coordination, and verifiable and resilient uncrewed platform planning and control systems. His work in coordinated heterogeneous autonomy was a finalist for the 2022 R&D 100 Awards. He holds a BS and MS in mechanical engineering from Tufts University and a Ph.D. in mechanical engineering from Boston University.
13. April 26th — External Student Panel Discussion
Panelists: Cheng Chi (Columbia), Binhao Huang (Illinois), Liyiming Ke (UW), Yuzhe Qin (UCSD)
Panel Discussion: How to Learn Dexterous Manipulation Policy: Teleoperation vs. Simulation
► Details
Recording link: https://drive.google.com/drive/folders/1plW17-VtiBq75SIMcXztdiNzfmaJtP1A?usp=sharing
Abstract:
In the era of data-driven foundation models, dexterous manipulation has a chance of being revolutionized by learning from data. However, as with all robotics tasks, we face a barrier: there is not much robotics data on hand, and there is no clear way to get it. Efforts to collect very large datasets have required tremendous amounts of time and money, and the great variety of robot embodiments and tasks poses a further challenge.
Recently, techniques like Imitation Learning and Reinforcement Learning have produced promising results in a number of dexterous manipulation tasks. In this seminar, we will discuss and debate two primary methods for acquiring dexterous manipulation data in IL and RL: teleoperation and simulation. Teleoperation offers environment realism and human expertise but relies heavily on human involvement. Simulation provides a safe, cost-effective environment for training but faces challenges in accurately representing the physical world. The seminar aims to delve into the strengths and limitations of each approach and discuss promising future research directions.
14. May 3rd — Dr. Aadeel Akhtar (Psyonic)
PSYONIC – Advances in Commercial Bionic Limbs
► Talk details
Recording link: TBA
Abstract:
Commercially available bionic limbs have lagged far behind the state-of-the-art research developed at academic institutions around the world. PSYONIC’s Ability Hand was developed to take advances in soft robotics and sensorimotor prostheses and make them accessible to humans and robots. The Ability Hand is an FDA-registered multiarticulated bionic hand that is the fastest on the market, robust to impacts, and gives users touch feedback. It is also covered by Medicare in the US. As a robotic manipulator, it is currently in use by NASA, Meta, Apptronik, Mercedes-Benz, and some of the top automakers, manufacturers, and brain-machine interface companies globally. This talk will detail the development of the Ability Hand, its current capabilities, and further advancements in bionic limbs that will be coming in the near future.
Speaker Bio:
Dr. Akhtar received his Ph.D. in Neuroscience and M.S. in Electrical & Computer Engineering from the University of Illinois at Urbana-Champaign in 2016. He received a B.S. in Biology in 2007 and M.S. in Computer Science in 2008 at Loyola University Chicago. His research is on motor control and sensory feedback for prosthetic limbs, and he has collaborations with the Center for Bionic Medicine at the Shirley Ryan AbilityLab, the John Rogers Research Group at Northwestern University, and the Range of Motion Project in Guatemala and Ecuador. In 2021, he was named as one of MIT Technology Review’s top 35 Innovators Under 35 and America’s Top 50 Disruptors in Newsweek.
Fall 2023 Schedule
1. Sept. 1st — Prof. David Fridovich-Keil (UT Austin)
Dynamic Game Models for Multi-Agent Interactions: The Role of Information in Designing Efficient Algorithms
► Talk details
Recording link: https://uofi.box.com/s/wtov1k9fb3x7p5zicy10wy8d25cu82bu
Abstract:
This talk introduces dynamic game theory as a natural modeling tool for multi-agent interactions ranging from large, abstract systems such as ride-hailing networks to more concrete, physically-embodied robotic settings such as collision-avoidance in traffic. We present the key theoretical underpinnings of dynamic game models for these varied situations and draw attention to the subtleties of information structure, i.e., what information is implicitly made available to each agent in a game. Thus equipped, the talk presents a state-of-the-art technique for solving these games, as well as a set of “dual” techniques for the inverse problem of identifying players’ objectives based on observations of strategic behavior.
Speaker Bio:
David Fridovich-Keil is an assistant professor at the University of Texas at Austin. David’s research spans optimal control, dynamic game theory, learning for control, and robot safety. While he has also worked on problems in distributed control, reinforcement learning, and active search, he is currently investigating the role of dynamic game theory in multi-agent interactive settings such as traffic. David’s work also focuses on the interplay between machine learning and classical ideas from robust, adaptive, and geometric control theory.
2. Sept. 8th — Dr. Laura Treers (Georgia Tech)
Models and Mechanisms for Robotic Capability in Unstructured Environments
► Talk details
Recording link: TBA
Abstract:
Modern robots are highly capable in predictable environments, but often struggle in unstructured environments like granular media. However, loose terrains and granular environments are ubiquitous in our world, and making robots agile in these environments would open doors to applications in marine and space exploration, geotechnics, disaster response, and agriculture. In this presentation, I introduce our mole crab-inspired robot EMBUR (EMerita BUrrowing Robot), a legged robot capable of self-burrowing in granular media. I discuss current limitations for modeling such systems, and present my prior work on extending Granular Resistive Force Theory (RFT) to three dimensions. Lastly, I present demonstrations of tethered robotic teams in the field, and show how leveraging contact between the tether and natural objects can enable forceful maneuvers, even in complex terrains. I conclude my talk with a discussion of how the modeling techniques introduced can be extended to better inform robot design and control.
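For background, the basic form of granular Resistive Force Theory is sketched below in generic notation; this is standard background, not necessarily the exact formulation extended in the speaker’s work.
```latex
% Sketch: in granular RFT, the net resistive force on an intruder is the
% integral of empirically fitted, depth-scaled stresses over its surface S:
\mathbf{F} \;=\; \int_{S} \boldsymbol{\alpha}(\beta, \gamma)\,|z|\;\mathrm{d}A,
% where \boldsymbol{\alpha} collects the stresses per unit depth measured for
% the granular medium, \beta is the surface element's orientation, \gamma its
% direction of motion, and |z| its depth below the free surface.
```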
Speaker Bio:
Laura Treers is a Postdoctoral Researcher in the Schools of Physics and Biological Sciences at Georgia Tech, working with Profs. Dan Goldman and Mike Goodisman. She received her PhD in Mechanical Engineering from UC Berkeley in summer 2023, working with Professor Hannah Stuart. She completed her undergraduate degree at MIT in 2018, also majoring in Mechanical Engineering. Her doctoral work focused on improving robotic capability in complex environments such as granular media through novel mechanisms and modeling techniques. Her postdoctoral work is centered on manipulation and construction using cohesive granular materials, in both biological collectives (fire ants) and robots. She mentored undergraduate, master’s, and junior PhD students throughout her time at Berkeley, is an active mentor for youth FIRST robotics programs, and feels strongly about helping students from all backgrounds and walks of life feel like they belong in STEM fields. In fall 2024, she will start as an Assistant Professor of Mechanical Engineering at the University of Vermont (UVM).
3. Sept. 15th — Prof. Ryan Truby (Northwestern University)
From Monomer to Machine: Materializing Autonomy in 3D Printed Soft Robots
► Talk details
Recording link: https://uofi.app.box.com/s/bdxeq2jpphz4xg43g77bz1jlwp526ah2
Abstract:
Recent advances in soft robotics motivate the design of multifunctional composites with actuation and perception capabilities. These functionalities are required for addressing long-standing challenges in soft robot control and achieving more sophisticated bioinspired behaviors. However, continued progress towards this vision is stymied by limitations in current materials and manufacturing methods. With these challenges in mind, I will present approaches for designing and fabricating soft robots from architected materials with distributed sensing capabilities. First, I will introduce 3D printed architectures called handed shearing auxetics for use as motorized soft actuators. I will then introduce approaches to sensorizing these actuators through fluidic innervation, where networks of empty, air-filled cavities provide distributed sensing through pressure measurements. Finally, I will discuss opportunities for using these systems for untethered locomotion and other embodiments of robotic materials.
Speaker Bio:
Ryan Truby is the June and Donald Brewer Junior Professor and an Assistant Professor of Materials Science and Engineering and Mechanical Engineering at Northwestern University. His research broadly aims to advance machine intelligence by material design. He and his team in the Robotic Matter Lab are currently developing novel soft actuators and sensors, rapid multimaterial 3D printing methods, and machine learning-based control strategies for soft and bioinspired robots. Ryan’s research also includes work in 3D printing vascularized tissue constructs, soft electronics, artificial muscles, and architected materials. Prior to Northwestern, Ryan was a Postdoctoral Associate at MIT’s Computer Science and Artificial Intelligence Lab, and he received his Ph.D. in Applied Physics from Harvard University. Ryan is the recipient of Young Investigator Program Awards from the Office of Naval Research and the Air Force Office of Scientific Research, the Outstanding Paper Award at the 2019 IEEE International Conference on Soft Robotics, and the Gold Award for Graduate Students from the Materials Research Society. His work at the materials-robotics interface has been supported by a Schmidt Science Fellowship and an NSF Graduate Research Fellowship.
4. Sept. 22nd — Professor Yunzhu Li (Illinois)
Learning Structured World Models From and For Physical Interactions
► Talk details
Recording link: https://uofi.app.box.com/s/odlsgifpk52txkjje4oixq1kop2l3rcm
Abstract:
Humans have a strong intuitive understanding of the physical world. Through observations and interactions with the environment, we build a mental model that predicts how the world would change if we applied a specific action (i.e., intuitive physics). My research draws on insights from humans and develops model-based reinforcement learning (RL) agents that learn from their interactions and build predictive models that generalize widely across a range of objects made with different materials. The core idea behind my research is to introduce novel representations and integrate structural priors into the learning systems to model the dynamics at different levels of abstraction. I will discuss how such structures can make model-based planning algorithms more effective and help robots accomplish complicated manipulation tasks (e.g., manipulating an object pile, shaping deformable foam into a target configuration, and making a dumpling from the dough using various tools). Furthermore, I will demonstrate how structured scene representation facilitates the integration of large language models (LLMs), granting robots the ability to perform a large variety of everyday tasks specified in free-form natural language.
Speaker Bio:
Yunzhu Li is an Assistant Professor of Computer Science at the University of Illinois Urbana-Champaign (UIUC). Before joining UIUC, he collaborated with Fei-Fei Li and Jiajun Wu during his Postdoc at Stanford. Yunzhu earned his PhD from MIT under the guidance of Antonio Torralba and Russ Tedrake. His work stands at the intersection of robotics, computer vision, and machine learning, with the goal of helping robots perceive and interact with the physical world as dexterously and effectively as humans do. Yunzhu’s work has been recognized through the Best Systems Paper Award and the Finalist for Best Paper Award at the Conference on Robot Learning (CoRL). Yunzhu is also the recipient of the Adobe Research Fellowship and was selected as the First Place Recipient of the Ernst A. Guillemin Master’s Thesis Award in Artificial Intelligence and Decision Making at MIT. His research has been published in top journals and conferences, including Nature, NeurIPS, CVPR, and RSS, and featured by major media outlets, including CNN, BBC, The Wall Street Journal, Forbes, The Economist, and MIT Technology Review.
5. Sept. 29th — Professor Van Anh Ho (JAIST)
Enhancing Embodied Intelligence through Adaptive Morphology
► Talk details
Recording link: https://uofi.app.box.com/s/31vwj904sbk7zthc62x3mi1imkrfwydi
Abstract:
My research philosophy revolves around elucidating the fundamental physics behind intriguing natural phenomena, leveraging cutting-edge technologies to engineer these principles, and applying them to develop innovative mechanisms that foster the safe, intelligent, and resilient coexistence of robotic systems with humans. This approach encompasses both the scientific and technological dimensions of robotics, emphasizing translational research. In this presentation, I will concentrate on our endeavors to design systems with adaptive morphology and embodied intelligence. These systems are adept at swiftly adapting to the ever-changing environmental conditions without imposing excessive computational demands on a central control unit. This also implies that such systems are capable of decentralizing certain calculations, entrusting them to the body itself. Additionally, I will introduce various topics within soft robotics, including soft underwater robots, soft-flying and locomotive robots, morphological designs for soft haptic interfaces, encompassing haptic sensing and haptic display, as well as safety measures and controls for drones and robot arms that rely on adaptive morphology and multimodal sensing.
Speaker Bio:
Van Anh Ho (Senior Member, IEEE) received the Ph.D. degree in robotics from Ritsumeikan University, Kyoto, Japan, in 2012. Before that, he obtained the Bachelor’s degree in Electrical Engineering at Hanoi University of Science and Technology, Vietnam, in 2007, and the Master’s degree in Mechanical Engineering at Ritsumeikan University in 2009. He completed the JSPS Postdoctoral Fellowship in 2013 before joining the Advanced Research Center of Mitsubishi Electric Corp., Japan, as a research scientist. From 2015 to 2017, he worked as an Assistant Professor at Ryukoku University, Kyoto, Japan, where he led a laboratory on soft haptics and soft modeling. In 2017, he joined the Japan Advanced Institute of Science and Technology (JAIST) to set up a laboratory on soft robotics. His current research interests are soft robotics, soft haptic interaction, tactile sensing, grasping and manipulation, and bio-inspired robots. He was the recipient of the prestigious Japan Society for the Promotion of Science (JSPS) Research Fellowship for Young Scientists for his PhD course (DC) and postdoctoral fellowship. He received the 2019 IEEE Nagoya Chapter Young Researcher Award and was a finalist for Best System Paper at RSS 2023 and for Best Paper at IEEE SII (2016) and IEEE RoboSoft (2020). He is a member of the Robotics Society of Japan (RSJ) and a Senior Member of the IEEE. He serves as an Associate Editor for international conferences such as RSS, ICRA, IROS, and RoboSoft, as well as for journals including IEEE Transactions on Robotics (T-RO), IEEE Robotics and Automation Letters (RA-L), and Advanced Robotics. He is General Co-Chair of the 2023 IEEE/SICE International Symposium on System Integration (SII) and General Chair of SII 2024.
Oct. 6th — Pause for IROS event
6. Oct. 13th — Professor Ian Abraham (Yale University)
Optimality and Guarantees in Robotic Exploration
► Talk details
Recording link: https://uofi.app.box.com/s/o0m0gn2i52h4l5hykpyb61zq8bp2gp1u
Abstract:
Guaranteeing effective exploration is a vital component in the success of remote robotic applications in ocean and space exploration, environmental monitoring, and search and rescue tasks. This talk presents a novel formulation of exploration that permits optimality criteria and performance guarantees for robotic exploration tasks. We define the problem of exploration as a coverage problem on continuous (infinite-dimensional) spaces based on ergodic theory and derive control methods that satisfy optimality and guarantees such as asymptotic coverage, set-invariance, time-optimality, and reachability in exploration tasks. Last, we demonstrate successful execution on a range of robotic systems and provide an outlook on applications to robot learning problems.
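For readers unfamiliar with ergodic exploration, one commonly used spectral form of the ergodic metric is sketched below in generic notation; the talk’s exact definitions may differ.
```latex
% Sketch: the ergodic metric compares the Fourier coefficients of a
% trajectory's time-averaged statistics with those of a target coverage
% distribution over the search domain:
\mathcal{E}\big(x(\cdot)\big) \;=\; \sum_{k} \Lambda_k \,\big(c_k - \xi_k\big)^2,
\qquad
c_k \;=\; \frac{1}{T}\int_0^T F_k\big(x(t)\big)\,\mathrm{d}t,
% where F_k are Fourier basis functions, \xi_k are the coefficients of the
% target distribution, and \Lambda_k are weights that decay with spatial
% frequency. Driving \mathcal{E} toward zero makes the time spent in each
% region proportional to the target density.
```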
Speaker Bio:
Ian Abraham is an Assistant Professor in Mechanical Engineering with a courtesy appointment in the Computer Science Department at Yale University. His research group is focused on developing real-time optimal control methods for data-efficient robotic learning and exploration. Before joining Yale, he was a postdoctoral researcher at the Robotics Institute at Carnegie Mellon University in the Biorobotics Lab. He received his Ph.D. and M.S. degrees in Mechanical Engineering from Northwestern University and his B.S. degree in Mechanical and Aerospace Engineering from Rutgers University. During his Ph.D. he also worked at the NVIDIA Seattle Robotics Lab on robust model-based control under large parameter uncertainty. His research interests lie at the intersection of robotics, optimal control, and machine learning, with a focus on developing real-time embedded software for exploration and learning. He is the recipient of the 2023 Best Paper Award at the Robotics: Science and Systems conference, the 2019 King-Sun Fu IEEE Transactions on Robotics Best Paper Award, the Northwestern Belytschko Outstanding Research Award for his dissertation, and the 2023 NSF CAREER Award.
7. Oct. 20th — Professor Rachel Gehlhar Humann (UCLA)
Generalizable Nonlinear Control Methods for Lower-Limb Powered Prostheses
► Talk details
Recording link: https://uofi.app.box.com/s/7gvov5udxtbqeiqpbdsirhlx7cq5qo2u
Abstract:
Powered prostheses are capable of improving quality of life for 600,000 people with lower-limb amputations by enabling mobility with less human energy than traditional prostheses. However, current prosthesis control methods require many hours of heuristic tuning for every user and for every behavior. To develop a control method that applies across users, we construct model-based prosthesis control methods that rely solely on local information by translating formal nonlinear bipedal walking control methods to prostheses through a separable subsystem framework. On hardware, we realized the first model-dependent prosthesis controller that utilizes real-time force sensing to complete the model, improving tracking performance across terrains and subjects without additional tuning. Going forward, we will examine how assistive devices impact human biomechanical performance to guide the design of lower-limb assistive devices.
Speaker Bio:
Rachel Gehlhar Humann will be joining the UMN ME faculty in Spring 2024 as an assistant professor. Currently she is conducting her postdoctoral research in the Anatomics Lab at UCLA where she is studying how ankle stiffness impacts human biomechanics. She recently completed her PhD in mechanical engineering at the California Institute of Technology where she investigated nonlinear control methods for powered prostheses. Before Caltech, she graduated with a B.S. in mechanical engineering from the University of St. Thomas in St. Paul, MN.
8. Oct. 27th — Professor Yanran Ding (U-M)
Model Predictive Control for Highly Dynamic Legged Robots
► Talk details
Recording link: https://uofi.app.box.com/s/4y1vub1wauzznjw2ue5dslgeky2ombmf
Abstract:
Legged robots possess the unique advantage of negotiating unstructured terrains and cluttered environments by using their extremities to make discrete contact. Real-world applications of legged robots include disaster response, smart factories, home service, and elderly care. My research uses model-based optimization methods to endow legged robots with the agility of their animal counterparts. This presentation provides an overview of the development of two custom robot platforms with highly dynamic capabilities. The first part focuses on the formulation of a model predictive control (MPC) framework that enables a quadruped robot to achieve dynamic maneuvers with large angular excursions without the singularity issues associated with Euler angles. The second part highlights contributions to the MIT Humanoid project, encompassing research thrusts in footstep adaptation, self-collision avoidance, and hardware experimentation.
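For readers unfamiliar with the structure of such controllers, a generic convex MPC template over linearized (e.g., single-rigid-body) dynamics is sketched below; this is standard background, not the specific formulation presented in the talk.
```latex
% Sketch: finite-horizon convex MPC over linearized dynamics.
\min_{x_{1:N},\,u_{0:N-1}} \;\; \sum_{k=0}^{N-1} \big\| x_{k+1} - x^{\mathrm{ref}}_{k+1} \big\|_{Q}^{2} + \big\| u_{k} \big\|_{R}^{2}
\quad \text{s.t.} \quad
x_{k+1} = A_k x_k + B_k u_k, \qquad u_k \in \mathcal{U}_k,
% where x_k is the (linearized) body state, u_k stacks the ground reaction
% forces of the feet in contact, and \mathcal{U}_k encodes friction-cone and
% contact-schedule constraints. The problem is re-solved at every control step.
```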
Speaker Bio:
Yanran Ding is an assistant professor in the Robotics Department at the University of Michigan. His research goal is to realize agile motions on robot hardware that enable robots to better provide physical services. Prior to joining Michigan, he worked as a postdoctoral associate at the MIT Biomimetic Robotics Lab. He earned his Ph.D. in the Mechanical Science and Engineering Department at the University of Illinois at Urbana-Champaign in 2021. He obtained his bachelor’s degree from Shanghai Jiao Tong University in 2015. He was a best student paper finalist at IROS 2017 and a best paper finalist at the Technical Committee on Model-Based Optimization for Robotics workshop in 2021.
9. Nov. 3rd — Professor Wenzhen Yuan (Illinois)
Towards efficient tactile perception
► Talk details
Recording link: TBA
Abstract:
The significance of tactile sensing for enabling more intelligent interactions between robots and their environment is widely recognized; however, the rate of growth in the application of tactile sensing within the field of robotics has not met our expectations. The reasons vary. In this talk, I will talk about our effort in two directions: simulating tactile sensors and using an active action-perception loop to explore the environment through touch. I will introduce how we build the simulators and the algorithms, and discuss what we have learned from them so far. While most of the work I present will be about “what we see as a possible way to push forward” rather than providing an answer, I hope the talk will inspire more thoughts about robotic tactile perception and encourage more collaborations.
10. Nov. 10th — Professor Hao Zhang (UMass)
Lifelong Collaborative Autonomy
► Talk details
Recording link: https://uofi.box.com/s/njuzotzy99a16jqrlwm73cs4p0golboo
Abstract:
To materialize the vision of intelligent robots in the real world, it is imperative that robots exhibit the ability to operate over extended durations and engage in collaborative endeavors across diverse scenarios, often alongside human counterparts within a shared environment. This presentation will elucidate our ongoing research within the field of lifelong collaborative autonomy, which serves as a pivotal step towards realizing these robotic capabilities. Firstly, I will present our work on context-aware self-reflective adaptation, which empowers lifelong robots to dynamically adapt not only to changes in their external surroundings but also to modifications in their own functionalities. Following this, I will delve into our research on adaptive peer-to-peer teaming within scenarios involving multi-robot formation control, tightly-coupled cooperation, and multimodal human-robot collaboration. Finally, the presentation will conclude with a discussion of potential open questions and unmet capabilities in lifelong collaborative robotics.
Speaker Bio:
Hao Zhang is an Associate Professor of Computer Sciences at the University of Massachusetts Amherst, where he leads the Human-Centered Robotics Laboratory. His research is centered around lifelong collaborative autonomy with a particular emphasis on robot adaptation, multirobot collaboration, and human-robot teaming. He is the recipient of an NSF CAREER Award, a DARPA Young Faculty Award (YFA), and a DARPA Director’s Fellowship. His research program benefits from the generous support by a diverse spectrum of sponsors, including NSF, DARPA, ARL, ONR, DOE, DOT, and industry partners. He consistently publishes in top robotics and AI conferences and journals, and has received multiple best paper awards and nominations.
11. Nov. 10th — Round Table Discussion on Safety in Robotics
12. Dec. 1st — Jason Ma (UPenn)
► Talk details
Recording link: TBA
Abstract: TBA
Speaker Bio: TBA
Spring 2023 Schedule
1. Jan. 27th — Prof. Matthew Gombolay (Georgia Tech)
Democratizing Robot Learning and Teaming
► Talk details
Recording link: https://uofi.box.com/s/2gh8byltdvx5vyxihdqc9lie4kxnvv1q
Abstract:
New advances in robotics and autonomy offer a promise of revitalizing final assembly manufacturing, assisting in personalized at-home healthcare, and even scaling the power of earth-bound scientists for robotic space exploration. Yet, in real-world applications, autonomy is often run in the O-F-F mode because researchers fail to understand the human in human-in-the-loop systems. In this talk, I will share exciting research we are conducting at the nexus of human factors engineering and cognitive robotics to inform the design of human-robot interaction. In my talk, I will focus on our recent work on 1) enabling machines to learn skills from and model heterogeneous, suboptimal human decision-makers, 2) “white-box” that knowledge through explainable Artificial Intelligence (XAI) techniques, and 3) scale to coordinated control of stochastic human-robot teams. The goal of this research is to inform the design of autonomous teammates so that users want to turn – and benefit from turning – to the O-N mode.
Speaker Bio:
Dr. Matthew Gombolay is an Assistant Professor of Interactive Computing at the Georgia Institute of Technology. He was named the Anne and Alan Taetle Early-Career Assistant Professor in 2018. He received a B.S. in Mechanical Engineering from the Johns Hopkins University in 2011, an S.M. in Aeronautics and Astronautics from MIT in 2013, and a Ph.D. in Autonomous Systems from MIT in 2017. Gombolay’s research interests span robotics, AI/ML, human-robot interaction, and operations research. Between defending his dissertation and joining the faculty at Georgia Tech, Dr. Gombolay served as technical staff at MIT Lincoln Laboratory, transitioning his research to the U.S. Navy and earning an R&D 100 Award. His publication record includes best paper awards from the American Institute for Aeronautics and Astronautics and the ACM/IEEE Conference on Human-Robot Interaction (HRI’22), as well as finalist awards for best paper at the Conference on Robot Learning (CoRL’20) and best student paper at the American Controls Conference (ACC’20). Dr. Gombolay was selected as a DARPA Riser in 2018, received the Early Career Award from the National Fire Control Symposium, and was awarded a NASA Early Career Fellowship.
2. Feb. 3rd — Prof. Justin Yim (Illinois)
Salto-1P, Small Walkers, Spirit 40, and Soon-to-be Robots
► Talk details
Recording link: https://uofi.box.com/s/584lcogbawqfsysk5mraz9uks5w1nbh6
Abstract:
Legged robots show great promise in navigating environments that are difficult for conventional platforms, but they do not yet have the agility, endurance, and related physical ability to deliver on this potential. This talk presents an overview of three robots that address different aspects of legged mobility (organized in increasing numbers of legs). The monoped Salto-1P explores agile leaping to enable a small robot to clear large obstacles, simple bipeds investigate how walking scales to smaller sizes, and perception and control for quadruped Spirit-40 tackle walking through entanglements. Discussion of future directions (and potential avenues for collaboration) concludes the presentation.
Speaker Bio: Dr. Justin Yim is a new assistant professor at UIUC in the Mechanical Science and Engineering department. He received his Ph.D. in Electrical Engineering from the University of California, Berkeley and his B.S.E. and M.S.E. from the University of Pennsylvania. Prior to starting at UIUC, he was a Computing Research Association CIFellow 2020 postdoc with Aaron Johnson at Carnegie Mellon University. Justin Yim’s research interests are in the design and control of legged robots to improve performance and understand locomotion principles. For his dissertation work developing the jumping monopod robot Salto-1P, he received best paper and best student paper awards at the IEEE/RSJ IROS and IEEE ICRA conferences.
3. Feb. 10th — Prof. Shuran Song (Columbia University)
Learning Meets Gravity: Robots that Embrace Dynamics from Pixels
► Talk details
Recording link: https://uofi.box.com/s/jagw0mjnez37aiqfk00iqgueemw1ir7r
Abstract:
Despite the incredible capabilities (speed, repeatability) of their hardware, most robot manipulators today are deliberately programmed to avoid dynamics, moving slowly enough that they can adhere to quasi-static assumptions about the world. In contrast, people frequently (and subconsciously) make use of dynamic phenomena to manipulate everyday objects, from unfurling blankets to tossing trash, to improve efficiency and physical reach range. These abilities are made possible by an intuition of physics, a cornerstone of intelligence. How do we impart the same to robots? In this talk, I will discuss how we might enable robots to leverage dynamics for manipulation in unstructured environments. Modeling the complex dynamics of unseen objects from pixels is challenging. However, by tightly integrating perception and action, we show it is possible to relax the need for accurate dynamical models, thereby allowing robots to (i) learn dynamic skills for complex objects, (ii) adapt to new scenarios using visual feedback, and (iii) use their dynamic interactions to improve their understanding of the world. By changing the way we think about dynamics, from avoiding it to embracing it, we can simplify a number of classically challenging problems, leading to new robot capabilities.
Speaker Bio:
Shuran Song is an Assistant Professor in the Department of Computer Science at Columbia University. Before that, she received her Ph.D. in Computer Science at Princeton University and her B.Eng. at HKUST. Her research interests lie at the intersection of computer vision and robotics. Song’s research has been recognized through several awards, including Best Paper Awards at RSS’22 and T-RO’20, Best System Paper Awards at CoRL’21 and RSS’19, and finalist nominations at RSS, ICRA, CVPR, and IROS. She is also a recipient of the NSF CAREER Award, as well as research awards from Microsoft, Toyota Research, Google, Amazon, JP Morgan, and the Sloan Foundation. To learn more about Shuran’s work, please visit: https://www.cs.columbia.edu/~shurans/
4. Feb. 17th — Dr. Andrea Bajcsy (UC Berkeley)
Bridging Safety and Learning in Human-Robot Interaction
► Talk details
Recording link: https://uofi.box.com/s/vrgcq42u5t3avqmwxsf28w27z23wyfov
Abstract:
From autonomous cars in cities to mobile manipulators at home, robots must interact with people. What makes this hard is that human behavior—especially when interacting with other agents—is vastly complex, varying between individuals, environments, and over time. Thus, robots rely on data and machine learning throughout the design process and during deployment to build and refine models of humans. However, by blindly trusting their data-driven human models, today’s robots confidently plan unsafe behaviors around people, resulting in anything from miscoordination to dangerous collisions. My research aims to ensure safety in human-robot interaction, particularly when robots learn from and about people. In this talk, I will discuss how treating robot learning algorithms as dynamical systems driven by human data enables safe human-robot interaction. I will first introduce a Bayesian monitor which infers online if the robot’s learned human model can evolve to well-explain observed human data. I will then discuss how control-theoretic tools enable us to formally quantify what the robot could learn online from human data and how quickly it could learn it. Coupling these ideas with robot motion planning algorithms, I will demonstrate how robots can safely and automatically adapt their behavior based on how trustworthy their learned human models are. I will end this talk by taking a step back and raising the question: “What is the ‘right’ notion of safety when robots interact with people?” and discussing opportunities for how rethinking our notions of safety can capture more subtle aspects of human-robot interaction.
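As illustrative background only, the sketch below shows a minimal version of the idea of monitoring how well a learned human model explains observed actions: the robot maintains a Bayesian belief over a model-confidence parameter and lowers it when the human’s actions are poorly explained. The variable names, action model, and numbers are hypothetical simplifications, not the talk’s formulation.
```python
# Hedged sketch: a minimal Bayesian "model-confidence" monitor.
import numpy as np

betas = np.linspace(0.0, 10.0, 101)      # candidate confidence levels
belief = np.ones_like(betas) / betas.size

def likelihood(u_human, q_values, beta):
    """Boltzmann-rational likelihood of the observed human action."""
    logits = beta * q_values
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return p[u_human]

def update(belief, u_human, q_values):
    """One Bayes update of the belief over the confidence parameter."""
    post = belief * np.array([likelihood(u_human, q_values, b) for b in betas])
    return post / post.sum()

# Example: the learned model strongly prefers action 0, but the human picks 2,
# so the posterior shifts toward low confidence in the model.
q_values = np.array([1.0, 0.2, -0.5])
belief = update(belief, u_human=2, q_values=q_values)
print("expected confidence:", float(belief @ betas))
```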
Speaker Bio:
Andrea Bajcsy is a postdoctoral scholar at UC Berkeley in the Electrical Engineering and Computer Science Department and an incoming Assistant Professor at the Robotics Institute at CMU (starting Fall 2023). She studies safe human-robot interaction, particularly when robots learn from and learn about people. Andrea received her Ph.D. in Electrical Engineering & Computer Science from UC Berkeley and B.S. in Computer Science at the University of Maryland, College Park.
5. Feb. 24th — Prof. Amanda Prorok (Cambridge)
Learning-Based Methods for Multi-Agent Navigation
► Talk details
Recording link: https://uofi.box.com/s/hlzxrcpbj66npra85tnahnr9ewlzf0gh
Abstract:
In this talk, I discuss our work on using Graph Neural Networks (GNNs) to solve multi-agent coordination problems. I begin by describing how we use GNNs to find a decentralized solution by learning what the agents need to communicate to one another. This communication-based policy is able to achieve near-optimal performance; moreover, when combined with an attention mechanism, we can drastically improve generalization to very-large-scale systems. Next, I consider the inverse problem: instead of optimizing agent policies, what if we could modify the navigation environment, instead? Towards that end, I introduce an environment optimization approach that guarantees the existence of complete solutions, improving agent navigation success rates over heuristic methods. Finally, I discuss challenges in the transfer of learned policies to the real world.
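As a rough illustration of the decentralized, communication-based policies described above, the sketch below runs one round of GNN-style message passing in which each agent aggregates messages only from its graph neighbors before choosing a local action. The architecture, weights, and dimensions are hypothetical placeholders, not the models from the talk.
```python
# Hedged sketch: one round of decentralized message passing over a
# communication graph (a single graph-convolution-style aggregation).
import numpy as np

rng = np.random.default_rng(1)
n_agents, feat_dim, msg_dim = 5, 8, 4

# Undirected communication graph: who can exchange messages with whom.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

X = rng.standard_normal((n_agents, feat_dim))         # local observations
W_msg = rng.standard_normal((feat_dim, msg_dim))      # shared message encoder
W_pol = rng.standard_normal((feat_dim + msg_dim, 2))  # shared policy head

messages = X @ W_msg                                  # each agent encodes a message
deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
aggregated = (A @ messages) / deg                     # mean over neighbors only
actions = np.tanh(np.hstack([X, aggregated]) @ W_pol) # decentralized local actions
print(actions.shape)   # (5, 2): one 2-D action per agent
```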
Speaker Bio:
Amanda Prorok is Professor of Collective Intelligence and Robotics in the Department of Computer Science and Technology at Cambridge University, and a Fellow of Pembroke College. She has been honoured by numerous research awards, including an ERC Starting Grant, an Amazon Research Award, the EPSRC New Investigator Award, the Isaac Newton Trust Early Career Award, and several Best Paper awards. Her PhD thesis was awarded the Asea Brown Boveri (ABB) prize for the best thesis at EPFL in Computer Science. She serves as Associate Editor for IEEE Robotics and Automation Letters (R-AL) and Associate Editor for Autonomous Robots (AURO). Prior to joining Cambridge, Amanda was a postdoctoral researcher at the General Robotics, Automation, Sensing and Perception (GRASP) Laboratory at the University of Pennsylvania, USA, where she worked with Prof. Vijay Kumar. She completed her PhD at EPFL, Switzerland, with Prof. Alcherio Martinoli.
6. Mar. 3rd — Dr. Glen Chou (MIT)
Toward Safe Learning-based Autonomy with Integrated Perception, Planning, and Control
► Talk details
Recording link: https://uofi.box.com/s/oaij8jo0oon7yye77rpn8x9rx464tx1a
Abstract:
To deploy robots in unstructured, human-centric environments, we must guarantee their ability to safely and reliably complete tasks. In such environments, uncertainty runs rampant and robots invariably need data to refine their autonomy stack. While machine learning can leverage data to obtain components of this stack, e.g., task constraints, dynamics, and perception modules, blindly trusting these potentially unreliable models can compromise safety. Determining how to use these learned components while retaining unified, system-level guarantees on safety and robustness remains an urgent open problem. In this talk, I will present two lines of research towards achieving safe learning-based autonomy. First, I will discuss how to use human task demonstrations to learn hard constraints which must be satisfied to safely complete that task, and how we can guarantee safety by planning with the learned constraints in an uncertainty-aware fashion. Second, I will discuss how to determine where learned perception and dynamics modules can be trusted, and to what extent. We imbue a motion planner with this knowledge to guarantee safe goal reachability when controlling from high-dimensional observations (e.g., images). We demonstrate that these theoretical guarantees translate to empirical success, in simulation and on hardware.
Speaker Bio:
Glen Chou is a postdoctoral associate at MIT CSAIL, advised by Prof. Russ Tedrake. His research focuses on end-to-end safety and robustness guarantees for learning-enabled robots. Previously, Glen received an MS and PhD in Electrical and Computer Engineering from the University of Michigan in 2022, and dual B.S. degrees in Electrical Engineering and Computer Science and Mechanical Engineering from UC Berkeley in 2017. He is a recipient of the National Defense Science and Engineering Graduate (NDSEG) fellowship and is an R:SS Pioneer.
7. Mar. 10th — Dr. Kyungseo Park (Illinois)
Whole-body Robot Skins for Safe and Contact-rich Human-Robot Collaboration
► Talk details
Recording link: https://uofi.box.com/s/wri23ddc58du8fzuhundk5gmbjh36nmj
Abstract:
Collaborative robots coexist with humans in unstructured environments and engage in various tasks involving physical interaction through the entire body. To ensure the safety and versatility of these robots, it is desirable to utilize soft tactile sensors that provide mechanical compliance and tactile data simultaneously. The mechanical properties of the soft material effectively mitigate the risks of physical contact, while the tactile data enable active compliance or social interaction. For this reason, many studies have been conducted to develop soft tactile sensors, but their extension to whole-body robot skin is partially hindered by practical limitations such as low scalability and poor durability. Thus, it is worthwhile to devise an optimal approach to implementing whole-body robot skin. In this talk, I will present two works on a soft whole-body robot skin for safe and contact-rich human-robot collaboration. First, I will introduce a biomimetic robot skin that imitates the features of human skin, such as protection and multimodal tactile sensation. Then, I will discuss the methods used to implement the biomimetic robot skin (i.e., tomography) and demonstrate its capabilities to sense multimodal tactile data over a large area. Second, a soft pneumatic robot skin will be presented along with its working principle. This robot skin has a simple structure and functionality but has been seamlessly integrated into the robot arm and used to demonstrate safe and intuitive physical human-robot interaction. Finally, I will examine the significance and limitations of these works and discuss how they can be improved.
Speaker Bio:
Kyungseo Park is a postdoctoral researcher at the UIUC Coordinated Science Laboratory, advised by Prof. Joohyung Kim. His research interests include robotics, physical human-robot interaction, and soft robotics. Kyungseo received his B.S., M.S., and Ph.D. in Mechanical Engineering from the Korea Advanced Institute of Science and Technology (KAIST), South Korea, in 2016, 2018, and 2022, respectively.
8. Mar. 24th — Dr. Maria Santos (Princeton)
Multi-robot Spatial Coordination: Heterogeneity, Learning, and Artistic Expression
► Talk details
Recording link: https://uofi.box.com/s/1vh26s5lpit3s077zbnj0v3opsv08ge9
Abstract:
Multi-robot teams’ inherent features of redundancy, increased spatial coverage, flexible reconfigurability, and fusion of distributed sensors and actuators make these systems particularly suitable for applications such as precision agriculture, search-and-rescue operations, or environmental monitoring. In such scenarios, coverage control constitutes an attractive coordination strategy for a swarm, since it allows the robots in a team to spread over a domain according to the importance of its regions: the higher the relevance of an area for the objective of the application, the higher the concentration of robots will be. The coverage paradigm typically assumes that all the robots can contribute equally to the coverage task and that the coverage objective is fully known prior to deployment of the team. In this talk, we consider realistic scenarios where swarms need to monitor multiple types of features (e.g., radiation, humidity, temperature) simultaneously at different locations, which requires a mixture of sensing capabilities too extensive to be designed into every individual robot. This challenge is addressed by considering heterogeneous multi-robot teams, where each robot is equipped with a subset of those sensors as long as, collectively, the team has all the sensor modalities needed to monitor the collection of features in the domain. Furthermore, we dive into the scenario where robots need to monitor an environment without prior knowledge of its spatial distribution of features. To achieve this, we present an approach where the team simultaneously learns the coverage objectives and optimizes its spatial allocation accordingly, all via local interactions within the team and with the environment. Towards the end of the talk, we move away from conventional applications of robotic swarms to touch upon how coverage can serve as an interaction modality for artists to effectively utilize robotic swarms for artistic expression. In particular, we focus on the heterogeneous variation of coverage as the means to interactively control desired concentrations of color throughout a canvas for the purpose of artistic multi-robot painting.
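A minimal single-sensor version of the coverage behavior described above is Lloyd-style coverage control: robots repeatedly move toward the importance-weighted centroids of their Voronoi cells. The grid, density function, and gain below are illustrative assumptions, not the heterogeneous formulation from the talk.

```python
import numpy as np

def coverage_step(robots, grid, density, gain=0.5):
    """Move each robot toward the density-weighted centroid of its Voronoi cell."""
    # Assign every grid point to its nearest robot (Voronoi partition).
    dists = np.linalg.norm(grid[:, None, :] - robots[None, :, :], axis=2)
    owner = np.argmin(dists, axis=1)
    new_robots = robots.copy()
    for i in range(len(robots)):
        cell, w = grid[owner == i], density[owner == i]
        if w.sum() > 0:
            centroid = (cell * w[:, None]).sum(axis=0) / w.sum()
            new_robots[i] += gain * (centroid - robots[i])
    return new_robots

# Toy domain: unit square with importance concentrated near (0.8, 0.8).
xs = np.linspace(0, 1, 30)
grid = np.array([[x, y] for x in xs for y in xs])
density = np.exp(-20 * np.sum((grid - np.array([0.8, 0.8])) ** 2, axis=1))
robots = np.random.default_rng(1).uniform(0, 1, size=(5, 2))
for _ in range(50):
    robots = coverage_step(robots, grid, density)
print(robots)   # robots concentrate around the high-importance region
```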
Speaker Bio:
María Santos is a Postdoctoral Research Associate in the Department of Mechanical and Aerospace Engineering at Princeton University, where she works with Dr. Naomi Leonard. María completed her PhD in Electrical and Computer Engineering at the Georgia Institute of Technology in 2020, advised by Dr. Magnus Egerstedt. Prior to that, she received an M.S. degree in Industrial Engineering (Ingeniera Industrial) in 2013 from the University of Vigo, Vigo, Spain, and an M.S. degree in Electrical and Computer Engineering from the Georgia Institute of Technology, Atlanta, GA, USA, in 2016 as a Fulbright scholar. María’s research deals with the distributed coordination of large multi-agent and multi-robot systems, with a particular focus on modeling heterogeneous teams and the execution of dynamic or unknown tasks. She is also very interested in exploring how to use swarm robotics in various forms of artistic expression, research for which she was awarded a La Caixa Fellowship for Graduate Studies in North America during her doctoral studies.
9. Mar. 31st — Dr. Sheng Cheng (Illinois)
Safe Learning and Control: An L1 Adaptive Control Approach
► Talk details
Recording link: https://uofi.box.com/s/acsn0l0dznb8nz48y8et7hrskyhe8xb4
Abstract:
In recent years, learning-based control paradigms have seen many success stories on various systems and robots. However, as these robots prepare to enter the real world, operating safely in the presence of imperfect model knowledge and external disturbances will be vital to ensure mission success. In the first part of the talk, we present an overview of L1 adaptive control, how it enables safety in autonomous robots, and discuss some of its success stories in the aerospace industry. In the second part of the talk, we present some of our recent results that explore controller tuning using machine learning tools while preserving the controller structure and stability properties. An overview of different projects at our lab that build upon this framework will be demonstrated to show different applications.
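The following is a toy scalar sketch of the three ingredients of an L1 adaptive architecture (state predictor, fast piecewise-constant adaptation, and a low-pass-filtered cancellation signal). The plant, gains, bandwidth, and disturbance are illustrative assumptions and not the systems discussed in the talk.

```python
import numpy as np

# Toy scalar plant: x_dot = a_m*x + u + sigma(t), with sigma unknown to the controller.
a_m, dt, T = -1.0, 0.001, 5.0
omega_c = 20.0                          # bandwidth of the low-pass filter (illustrative)

x, x_hat, sigma_hat, u = 0.0, 0.0, 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    sigma = 1.0 + 0.5 * np.sin(2 * t)   # unknown disturbance / model error
    # True plant and state predictor (Euler integration).
    x += dt * (a_m * x + u + sigma)
    x_hat += dt * (a_m * x_hat + u + sigma_hat)
    # Piecewise-constant adaptation driven by the prediction error.
    x_tilde = x_hat - x
    sigma_hat = -(a_m / (np.exp(a_m * dt) - 1.0)) * x_tilde
    # Low-pass filter the cancellation signal before applying it as control.
    u += dt * omega_c * (-sigma_hat - u)

print(f"final |x| = {abs(x):.3f}")      # state remains small despite the disturbance
```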
Speaker Bio:
Sheng Cheng received the B.Eng. degree in Control Science and Engineering from the Harbin Institute of Technology in China and the M.S. and Ph.D. degrees in Electrical Engineering from the University of Maryland. He is a Postdoctoral Research Associate with the University of Illinois Urbana–Champaign. His current research interests include aerial robotics, learning-enabled control, adaptive control, and distributed parameter systems.
10. Apr. 7th — Prof. Talia Moore (U-M)
Robots for Evolutionary Biology
► Talk details
Recording link: https://uofi.box.com/s/8den6asi11r4fzhmxykab4b11jdth5r7
Abstract:
How can biomimetic robots help us in our quest to understand the natural world? And how can examining the evolution and diversity of animal systems help us design better robots? Using a case study of snake-mimicking soft robots, I describe several categories of bio-inspired robotics that serve multiple distinct research goals. By introducing this categorization, I invite us all to consider the many ways in which robotics and biology can serve each other.
Speaker Bio:
Talia Y. Moore is an Assistant Professor of Robotics and of Mechanical Engineering at the University of Michigan, where she is also affiliated with the Department of Ecology and Evolutionary Biology, and the Museum of Zoology. She examines the biomechanics, evolution, and ecology of animals to create bio-inspired robotic systems and uses these physical models to evaluate biological hypotheses.
11. Apr. 14th — Prof. Abhishek Gupta (UW)
How to Train Your Robot: Techniques for Enabling Robotic Learning in the Real World
► Talk details
Recording link: https://uofi.box.com/s/040luxx1hq176kbucc6qobxxg8shmvuh
Abstract:
Reinforcement learning has been a powerful tool for building continuously improving systems in domains like video games and animated character control, but has proven relatively more challenging to apply to problems in real-world robotics. In this talk, I will argue that this challenge can be attributed to a mismatch in assumptions between typical RL algorithms and what the real world actually provides, making data collection and utilization difficult. I will then discuss how to build algorithms and systems that bridge these assumptions and allow robotic learning systems to operate under the conditions of the real world. In particular, I will describe how we can develop algorithms to ensure easily scalable supervision from humans, perform safe, directed exploration in practical time scales by leveraging prior data, and enable uninterrupted autonomous data collection at scale. I will show how these techniques can be applied to real-world robotic systems and discuss how they have the potential to be applicable more broadly across a variety of machine learning applications. Lastly, I will provide some perspectives on how this opens the door towards future deployment of robots into unstructured human-centric environments.
Speaker Bio:
Abhishek Gupta is an assistant professor in the Paul G. Allen School for Computer Science and Engineering at the University of Washington. He was formerly a postdoctoral fellow at MIT, working with Pulkit Agrawal and Russ Tedrake. He completed his PhD at UC Berkeley working with Pieter Abbeel and Sergey Levine, building systems that can leverage reinforcement learning algorithms to solve robotics problems. He is interested in research directions that enable performing reinforcement learning directly in the real world: reward supervision in reinforcement learning, large-scale real-world data collection, learning from demonstrations, and multi-task reinforcement learning. He has also spent time at Google Brain and is a recipient of the NDSEG and NSF graduate research fellowships. A more detailed description can be found at https://homes.cs.washington.edu/~abhgupta/
13. Apr. 28th — Prof. Zackory Erickson (CMU)
Robot Learning, Sensing, and Teleoperation in Pursuit of Robotic Caregivers
► Talk details
Recording link: TBA
Abstract:
Designing safe and reliable robotic assistance for caregiving is a grand challenge in robotics. A sixth of the United States population is over the age of 65 and in 2014 more than a quarter of the population had a disability. Robotic caregivers could positively benefit society; yet, physical robotic assistance presents several challenges and open research questions relating to teleoperation, active sensing, and autonomous control. In this talk, I will present recent techniques and technology that my group has developed towards addressing core challenges in robotic caregiving. First, I will introduce a head-worn interface that enables people with loss of hand function (due to spinal cord injury or neurodegenerative diseases) to teleoperate assistive mobile manipulators. I will then describe capacitive servoing, a new sensing technique for robotic manipulators to sense the human body and track trajectories along the body. Finally, I will present our recent work in robot learning, including policy learning and dynamics modeling, to perform complex manipulation of deformable garments and blankets around the human body.
Speaker Bio:
Zackory Erickson is an Assistant Professor in The Robotics Institute at Carnegie Mellon University, where he leads the Robotic Caregiving and Human Interaction (RCHI) Lab. His research focuses on developing new robot learning, mobile manipulation, and sensing methods for physical human-robot interaction and healthcare. Zackory received his PhD in Robotics and M.S. in Computer Science from Georgia Tech and B.S. in Computer Science at the University of Wisconsin–La Crosse. His work has won the Best Student Paper Award at ICORR 2019 and a Best Paper in Service Robotics finalist at ICRA 2019.
Fall 2022 Schedule
Video Recordings: https://uofi.box.com/s/nqmm8xml2t81kajmjjhuxr414ue3t3ii
1. Sept. 2nd — Prof. Hannah Stuart (UC Berkeley)
Embodying Dexterity: Designing for contact in robotic grasping and manipulation systems
► Talk details
Abstract:
For robots to perform helpful manual tasks, they must be able to physically interact with the real world. The ability of robots to grasp and manipulate often depends on the strength and reliability of contact conditions, e.g., friction. In this talk, I will introduce how my lab is developing tools for “messy” or adversarial contact conditions to support the design of more capable systems. Coupled with prototyping and experimental exploration, we generate new systems that better embody desired capabilities. I will draw upon recent examples including how we are (1) harnessing fluid flow in soft grippers to improve and monitor grasp state in unique ways, (2) modeling granular interaction forces to support new capabilities in sand, and (3) exploring assistive wearable device topologies for collaborative grasping.
Speaker Bio:
Dr. Hannah Stuart is the Don M. Cunningham Assistant Professor in the Department of Mechanical Engineering at the University of California at Berkeley. She received her BS in Mechanical Engineering at the George Washington University in 2011, and her MS and PhD in Mechanical Engineering at Stanford University in 2013 and 2018, respectively. Recent awards include the NASA Early Career Faculty grant and Johnson & Johnson Women in STEM2D grant.
2. Sept. 9th — Ardalan Tajbakhsh (CMU)
How to Become a Robotics Engineer?
► Talk details
Abstract:
In the past few years, the robotics industry has been growing rapidly due to the intersection of technology maturity and market demand. This exponential growth has given rise to many advanced multidisciplinary roles within the industry that often require a unique combination of skills in mathematics, physics, software engineering, and algorithms. While traditional curricula in robotics broadly cover the foundations of the field, it can be quite challenging for new graduates to effectively focus their preparation towards specific industry roles without feeling overwhelmed. The first part of this talk will focus on providing a clear, actionable, and comprehensive roadmap for becoming a robotics engineer based on the most recent roles in the industry. In the second part, the interview structure for such roles, as well as effective interviewing strategies, will be presented. This talk is targeted towards undergraduate and graduate students in mechanical, electrical, aerospace, and robotics engineering who are looking to enter the robotics industry in the near future.
Speaker Bio:
Ardalan Tajbakhsh is currently a PhD candidate at Carnegie Mellon University, where his research focuses on dynamic multi-agent navigation in unstructured environments for real-world applications like warehouse fulfillment, hospital material transportation, environmental monitoring, and disaster recovery. His background spans a healthy mix of academic research and industry experience in robotics. Prior to his PhD, he was a robotics software engineer at Zebra Technologies, where he led the algorithm development efforts for multi-robot coordination in warehouse fulfillment. He has previously held other industry roles at iRobot and Virgin Hyperloop One. Ardalan received an undergraduate degree in Mechanical Engineering with honors from UIUC and a master’s in Mechanical Engineering with a Robotics concentration from CMU.
3. Sept. 16th — Prof. Sehoon Ha (Georgia Tech)
Learning to walk for real-world missions
► Talk details
Abstract:
Intelligent robot companions have the potential to improve the quality of human life significantly by changing how we live, work, and play. While recent advances in software and hardware opened a new horizon of robotics, state-of-the-art robots are yet far from being blended into our daily lives due to the lack of human-level scene understanding, motion control, safety, and rich interactions. I envision legged robots as intelligent machines beyond simple walking platforms, which can execute a variety of real-world motor tasks in human environments, such as home arrangements, last-mile delivery, and assistive tasks for disabled people. In this talk, I will discuss relevant multi-disciplinary research topics, including deep reinforcement learning, control algorithms, scalable learning pipelines, and sim-to-real techniques.
Speaker Bio:
Sehoon Ha is currently an assistant professor at the Georgia Institute of Technology. Before joining Georgia Tech, he was a research scientist at Google and Disney Research Pittsburgh. He received his Ph.D. degree in Computer Science from the Georgia Institute of Technology. His research interests lie at the intersection between computer graphics and robotics, including physics-based animation, deep reinforcement learning, and computational robot design. His work has been published at top-tier venues including ACM Transactions on Graphics, IEEE Transactions on Robotics, and the International Journal of Robotics Research, nominated as a best conference paper (Top 3) at Robotics: Science and Systems, and featured in popular media outlets such as IEEE Spectrum, MIT Technology Review, PBS NewsHour, and Wired.
4. Sept. 23rd — Prof. Mark Mueller (UC Berkeley)
Small drones in tight spaces: New vehicle designs and fast algorithms
► Talk details
Abstract:
Aerial robots have become ubiquitous, but (like most robots) they still struggle to operate at high speed in unstructured, cramped environments. I will present some of our group’s recent work on pushing vehicles’ capabilities with two distinct approaches. First, I will present algorithmic work aiming to enable motion planning at high speed through unstructured environments, with a specific focus on standard multicopters. The second approach is to modify the vehicle’s physical characteristics, creating a fundamentally different vehicle for which the problem becomes easier because of the changed physics. Specific designs presented will include a highly collision-resilient drone; a passively morphing drone capable of significantly reducing its size; and a preview of a passively morphing system capable of reducing its aerodynamic loads.
Speaker Bio:
Mark W. Mueller is an assistant professor of Mechanical Engineering at UC Berkeley, where he leads the High Performance Robotics Laboratory (HiPeRLab). His research focuses on the design and control of aerial robots. He joined the mechanical engineering department at UC Berkeley in September 2016, after spending some time at Verity Studios working on a drone entertainment system installed in the biggest theater on New York’s Broadway. He completed his PhD studies at ETH Zurich in Switzerland in 2015, and received an MSc there in 2011. He received a bachelor’s degree in mechanical engineering from the University of Pretoria in South Africa.
5. Sept. 30th — Prof. Nikolay Atanasov (UCSD)
Multi-Robot Metric-Semantic Mapping
► Talk details
Abstract:
The ability of autonomous robot systems to perform reliably and effectively in real-world settings depends on precise understanding of the geometry and semantics of their environment based on streaming sensor observations. This talk will present estimation techniques for sparse object-level mapping, dense surface-level mapping, and distributed multi-robot mapping. The talk will highlight object shape models, octree and Gaussian process surface models, and distributed inference in time-varying graphs.
Speaker Bio:
Nikolay Atanasov is an Assistant Professor of Electrical and Computer Engineering at the University of California San Diego, La Jolla, CA, USA. He obtained a B.S. degree in Electrical Engineering from Trinity College, Hartford, CT, USA in 2008 and M.S. and Ph.D. degrees in Electrical and Systems Engineering from University of Pennsylvania, Philadelphia, PA, USA in 2012 and 2015, respectively. His research focuses on robotics, control theory, and machine learning, applied to active perception problems for mobile robots. He works on probabilistic models that unify geometry and semantics in simultaneous localization and mapping (SLAM) and on optimal control and reinforcement learning of robot motion that minimizes uncertainty in these models. Dr. Atanasov’s work has been recognized by the Joseph and Rosaline Wolf award for the best Ph.D. dissertation in Electrical and Systems Engineering at the University of Pennsylvania in 2015, the best conference paper award at the IEEE International Conference on Robotics and Automation (ICRA) in 2017, and an NSF CAREER award in 2021.
6. Oct. 7th — Prof. Nima Fazeli (U-M)
Deformable Object Representations and Tactile Control for Contact Rich Robotic Tool-Use
► Talk details
Abstract:
The next generation of robotic systems will be in our homes and workplaces, working in highly unstructured environments and manipulating complex deformable objects. The success of these systems hinges on their ability to reason over the complex dynamics of these objects and carefully control their interactions using the sense of touch. To this end, I’ll first present our recent advances in multimodal neural implicit representations of deformable objects. These representations integrate sight and touch seamlessly to model object deformations and are uniquely well equipped to handle robotic sensing modalities. Second, I’ll present our recent progress on tactile control with high-resolution and highly deformable tactile sensors. Specifically, I’ll discuss our work leveraging the Soft Bubbles to gracefully manipulate tools, where we handle high-dimensional tactile signatures and the complex dynamics introduced by the sensor compliance. I’ll end the talk with future directions in tactile control and deformable objects and present some of the open challenges in these domains.
Speaker Bio:
Nima Fazeli is an Assistant Professor of Robotics and Assistant Professor of Mechanical Engineering at the University of Michigan and the director of the Manipulation and Machine Intelligence (MMint) Lab. Prof. Fazeli’s primary research interest is enabling intelligent and dexterous robotic manipulation with emphasis on the tight integration of mechanics, perception, controls, learning, and planning. Prof. Fazeli received his PhD from MIT (2019) working with Prof. Alberto Rodriguez, where he also conducted his postdoctoral training. He received his MSc from the University of Maryland at College Park (2014) where he spent most of his time developing models of the human (and, on occasion, swine) arterial tree for cardiovascular disease, diabetes, and cancer diagnoses. His research has been supported by the Rohsenow Fellowship and featured in outlets such as The New York Times, CBS, CNN, and the BBC.
7. Oct. 14th — Prof. Pulkit Agrawal (MIT)
Coming of Age of Robot Learning
► Talk details
Abstract:
I will discuss our progress in building robotic systems that are agile, dexterous, and real-world-ready in their ability to function in diverse scenarios. The key technical challenge of control in contact-rich problems is addressed using machine learning methods and the results will be illustrated via the following case studies:
(i) a dexterous manipulation system capable of re-orienting novel objects.
(ii) an agile quadruped robot operating on diverse natural terrains.
(iii) a system that only requires a few task demonstrations of an object manipulation task to generalize to new object instances in new configurations.
Speaker Bio:
Pulkit is the Steven and Renee Finn Chair Assistant Professor in the Department of Electrical Engineering and Computer Science at MIT, where he directs the Improbable AI Lab. His research interests span robotics, deep learning, computer vision, and reinforcement learning. His work has received the Best Paper Award at the Conference on Robot Learning 2021 and the Best Student Paper Award at the Conference on Computer Supported Collaborative Learning 2011. He is a recipient of the Sony Faculty Research Award, the Salesforce Research Award, the Amazon Research Award, a Fulbright fellowship, and more. He received his Ph.D. from UC Berkeley and his Bachelor’s degree from IIT Kanpur, where he was awarded the Director’s Gold Medal, and he co-founded SafelyYou Inc.
8. Oct. 21st — Prof. Andreea Bobu (UC Berkeley)
Title: Aligning Robot Representations with Humans
► Talk details
Abstract:
Robots deployed in the real world will interact with many different humans to perform many different tasks in their lifetime, which makes it difficult (perhaps even impossible) for designers to specify all the aspects that might matter ahead of time. Instead, robots can extract these aspects implicitly when they learn to perform new tasks from their users’ input. The challenge is that this often results in representations which pick up on spurious correlations in the data and fail to capture the human’s representation of what matters for the task, resulting in behaviors that do not generalize to new scenarios. Consequently, the representation, or abstraction, of the tasks the human hopes for the robot to perform may be misaligned with what the robot knows. In my work, I explore ways in which robots can align their representations with those of the humans they interact with so that they can more effectively learn from their input. In this talk I focus on a divide and conquer approach to the robot learning problem: explicitly focus human input on teaching robots good representations before using them for learning downstream tasks. We accomplish this by investigating how robots can reason about the uncertainty in their current representation, explicitly query humans for feature-specific feedback to improve it, then use task-specific input to learn behaviors on top of the new representation.
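A cartoon of the "query humans where the representation is uncertain" idea is sketched below. It is purely illustrative: the ensemble-disagreement proxy for uncertainty, the toy feature functions, and all names are assumptions rather than the methods presented in the talk.

```python
import numpy as np

def feature_uncertainty(ensemble, states):
    """Disagreement across an ensemble of learned feature functions, per state."""
    preds = np.stack([f(states) for f in ensemble])   # (n_models, n_states)
    return preds.std(axis=0)

def pick_query_state(ensemble, candidate_states):
    """Ask the human about the state where the learned feature is least certain."""
    return candidate_states[np.argmax(feature_uncertainty(ensemble, candidate_states))]

# Toy ensemble: three noisy versions of a 'distance-to-obstacle' style feature.
rng = np.random.default_rng(0)
ensemble = [lambda s, w=rng.normal(1.0, 0.3): w * np.abs(s - 2.0) for _ in range(3)]
candidates = np.linspace(0.0, 5.0, 11)
print("query the human about state:", pick_query_state(ensemble, candidates))
```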
Speaker Bio:
Andreea Bobu is a Ph.D. candidate at the University of California Berkeley in the Electrical Engineering and Computer Science Department advised by Professor Anca Dragan. Her research focuses on aligning robot and human representations for more seamless interaction between them. In particular, Andreea studies how robots can learn more efficiently from human feedback by explicitly focusing on learning good intermediate human-guided representations before using them for task learning. Prior to her Ph.D. she earned her Bachelor’s degree in Computer Science and Engineering from MIT in 2017. She is the recipient of the Apple AI/ML Ph.D. fellowship, is an R:SS and HRI Pioneer, has won best paper award at HRI 2020, and has worked at NVIDIA Research.
9. Oct. 21st — Dr. Preston Culbertson (Caltech)
Embracing uncertainty: Risk-sensitive and adaptive methods for manipulation and robot navigation
► Talk details
Abstract:
As robots continue to move from controlled environments (e.g., assembly lines) into unstructured ones such as roadways, hospitals, and homes, a key open question for roboticists is how to certify the safety of such systems under the wide range of environmental and perceptual conditions robots can encounter in the wild. In this talk, I will argue for a “risk-aware” approach to robot safety, and present methods for robot manipulation and navigation which account for uncertainty through adaptation and risk-awareness. First, I will present a distributed adaptive controller for collaborative manipulation, which allows a team of robots to adapt to parametric uncertainties to move an unknown rigid body along a desired trajectory in SE(3). In the second half of the talk, we will discuss Neural Radiance Fields (NeRFs), a “neural implicit” scene representation that can be generated using only posed RGB images. I will present our recent work leveraging NeRFs for both visual navigation and manipulation, and show how their probabilistic representation of occupancy/object geometry can be used to enable risk-sensitive planning across a variety of problem domains. I will conclude with some broader thoughts on “risk-awareness” and next directions for enabling safety under perceptual uncertainty.
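One way to make the "risk-aware" idea concrete is to score candidate paths by the Conditional Value-at-Risk (CVaR) of collision costs sampled from a probabilistic occupancy model. The sketch below is a simplified stand-in, not the NeRF-based pipeline from the talk; the toy occupancy sampler and all parameters are assumptions.

```python
import numpy as np

def cvar(costs, alpha=0.9):
    """Conditional Value-at-Risk: mean of the worst (1 - alpha) fraction of costs."""
    costs = np.sort(costs)
    tail = costs[int(np.ceil(alpha * len(costs))):]
    return tail.mean() if len(tail) else costs[-1]

def trajectory_risk(waypoints, sample_occupancy, n_samples=200):
    """Risk of a path under a stochastic occupancy model (e.g., a learned scene)."""
    costs = [sample_occupancy(waypoints).max() for _ in range(n_samples)]
    return cvar(np.array(costs))

# Toy occupancy model: an obstacle with uncertain location near (1.0, 0.0).
rng = np.random.default_rng(0)
def sample_occupancy(waypoints):
    center = np.array([1.0, 0.0]) + 0.1 * rng.normal(size=2)
    d2 = np.sum((waypoints - center) ** 2, axis=1)
    return np.exp(-d2 / 0.05)                 # occupancy in [0, 1] along the path

risky_path = np.stack([np.linspace(0, 2, 20), np.zeros(20)], axis=1)
safe_path = np.stack([np.linspace(0, 2, 20), np.full(20, 1.0)], axis=1)
print(trajectory_risk(risky_path, sample_occupancy))   # high risk: passes the obstacle
print(trajectory_risk(safe_path, sample_occupancy))    # low risk: stays clear
```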
Speaker Bio:
Preston Culbertson is a postdoctoral scholar in the AMBER Lab at Caltech, researching safe methods for robot planning and control using onboard vision. Preston completed his PhD at Stanford University, working under Prof. Mac Schwager, where his research focused on collaborative manipulation and assembly with teams of robots. In particular, Preston’s research interests include integrating modern techniques for computer vision with methods for robot control and planning that can provide safety guarantees. Preston received the NASA Space Technology Research Fellowship (NSTRF) and the “Best Manipulation Paper” award at ICRA 2018.
10. Nov. 4th — Prof. Marynel Vazquez (Yale University)
Multi-Party Human-Robot Interaction: Towards Generalizable Data-Driven Models with Graph State Abstractions
► Talk details
Abstract:
Many real-world applications require that robots handle the complexity of multi-party social encounters, e.g., delivery robots may need to navigate through crowds, robots in manufacturing settings may need to coordinate their actions with those of human coworkers, and robots in educational environments may help multiple people practice and improve their skills. How can we enable robots to effectively take part in these social interactions? At first glance, multi-party interactions may be seen as a trivial generalization of one-on-one human-robot interactions, suggesting no special consideration. Unfortunately, this approach is limited in practice because it ignores higher-order effects, like group factors, that often drive human behavior in multi-party Human-Robot Interaction (HRI).
In this talk, I will describe two research directions that we believe are important to advance multi-party HRI. One direction focuses on understanding group dynamics and social group phenomena from an experimental perspective. The other one focuses on leveraging graph state abstractions and structured, data-driven methods for reasoning about individual, interpersonal and group-level factors relevant to these interactions. Examples of these research directions include efforts to motivate prosocial human behavior in HRI, balance human participation in conversations, and improve spatial reasoning for robots in human environments. As part of this talk, I will also describe our recent efforts to scale HRI data collection for early system development and testing via online interactive surveys. We have begun to explore this idea in the context of social robot navigation but, thanks to advances in game development engines, it could be easily applied to other HRI application domains.
Speaker Bio:
Marynel Vázquez is an Assistant Professor in Yale’s Computer Science Department, where she leads the Interactive Machines Group. Her research focuses on Human-Robot Interaction (HRI), especially in multi-party and group settings. Marynel is a recipient of the 2022 NSF CAREER Award and two Amazon Research Awards. Her work has been recognized with nominations to Best Paper awards at HRI 2021, IROS 2018, and RO-MAN 2016, as well as a Best Student Paper award at RO-MAN 2022. Prior to Yale, Marynel was a Post-Doctoral Scholar at the Stanford Vision & Learning Lab and obtained her M.S. and Ph.D. in Robotics from Carnegie Mellon University, where she was a collaborator of Disney Research. Before then, she received her bachelor’s degree in Computer Engineering from Universidad Simón Bolívar in Caracas, Venezuela.
11. Nov. 11th — Prof. Monroe Kennedy III (Stanford University)
DenseTact: Calibrated Optical Tactile Sensing for the Next Generation of Robotic Manipulation
► Talk details
Abstract:
Robotic dexterity stands to be the key challenge to making collaborative robots ubiquitous in home and industry environments, particularly those that require adaptive systems. The last few decades have produced many solutions in this space, including mechanical transducers (pressure sensors) that, while effective, usually suffer from limitations in resolution, cross-talk, and multi-modal sensing at every point. There are passive, soft sensors that, through high friction and form closure, envelop items to be manipulated for stable grasps; while often effective at securing a grasp, such sensors generally do not provide the dexterity needed to re-grasp, perform finger gaiting, or truly quantify the stability of a grasp beyond the basic immobilization observed through action. Finally, optical tactile sensors have opened many new avenues for research, with leading designs being GelSight and GelSlim for surface reconstruction and force estimation. While optical tactile sensors stand to be robotics’ best answer so far to sensing sensitivity that approaches anthropomorphic performance, there is still a noticeable gap in robotics research when it comes to performing manipulation tasks, with end-to-end solutions struggling to extend to new complex manipulation tasks without significant (and often unscalable) training.
In this talk, I will present DenseTact, an optical tactile sensor that provides calibrated surface reconstruction and force estimation for a single fingertip. This calibrated, anthropomorphically inspired fingertip design will allow for modularization of the grasping process and open new avenues of research in robotic manipulation towards collaborative robotic applications.
Speaker Bio:
Monroe Kennedy III is an assistant professor in Mechanical Engineering and, by courtesy, in Computer Science at Stanford University. Prof. Kennedy is the recipient of the NSF Faculty Early Career (CAREER) Award. He leads the Assistive Robotics and Manipulation laboratory (arm.stanford.edu), which develops collaborative robotic assistants by combining modeling and control techniques with machine learning tools. Together, these techniques improve robotic performance on tasks that are highly dynamic, require dexterity, have considerable complexity, and require human-robot collaboration. He received his Ph.D. in Mechanical Engineering and Applied Mechanics and his Master’s in Robotics at the University of Pennsylvania, where he was a member of the GRASP Lab.
12. Nov. 18th — Prof. Nadia Figueroa (UPenn)
Collaborative Robots in the Wild: Challenges and Future Directions from a Human-Centric Perspective
► Talk details
Abstract:
Since the 1960s we have lived with the promise of one day being able to own a robot that would be able to co-exist, collaborate and cooperate with humans in our everyday lives. This promise has motivated a vast amount of research in the last decades on motion planning, machine learning, perception and physical human-robot interaction (pHRI). Nevertheless, we are yet to see a truly collaborative robot navigating and manipulating objects, the environment or physically collaborating with humans and other robots outside of labs and in the human-centric dynamic spaces we inhabit; i.e., “in-the-wild”. This bottleneck is due to a robot-centric set of assumptions of how humans interact and adapt to technology and machines. In this talk, I will introduce a set of more realistic human-centric assumptions and posit that for collaborative robots to be truly adopted in such dynamic, ever-changing environments they must possess human-like characteristics of reactivity, compliance, safety, efficiency and transparency. Combining these objectives is challenging, as providing a single optimal solution can be intractable and even infeasible due to problem complexity and contradicting goals. Hence, I will present possible avenues to achieve these requirements. I will show that by adopting a Dynamical System (DS) based approach for motion planning we can achieve reactive, safe and provably stable robot behaviors while efficiently teaching the robot complex tasks with a handful of demonstrations. Further, I will show that such an approach can be extended to offer task-level reactivity and can be adopted to efficiently and incrementally learn from failures, as humans do. I will also discuss the role of compliance in collaborative robots, the allowance of soft impacts, and the relaxation of the standard definition of safety in pHRI, and how these can be achieved with DS-based and optimization-based approaches. I will then talk about the importance of both end-users and designers having a holistic understanding of their robot’s behaviors, capabilities, and limitations and present an approach that uses Bayesian posterior sampling to achieve this. The talk will end with a discussion of open challenges and future directions to achieve truly collaborative robots in-the-wild.
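As a minimal illustration of the Dynamical System (DS) view of motion generation mentioned above (a toy sketch under assumed gains, not the learned DS formulations from the talk), a globally stable linear DS drives the robot toward a goal and reacts to perturbations simply by re-evaluating the vector field:

```python
import numpy as np

A = -1.5 * np.eye(2)                 # negative-definite gain => globally stable DS
goal = np.array([0.5, 0.8])

def ds_velocity(x):
    """Desired velocity at state x: x_dot = A (x - goal), converging to the goal."""
    return A @ (x - goal)

x, dt = np.array([0.0, 0.0]), 0.01
for step in range(400):
    if step == 150:
        x = x + np.array([0.3, -0.2])   # external perturbation; the DS simply reacts
    x = x + dt * ds_velocity(x)
print(np.round(x, 3))                   # close to the goal despite the perturbation
```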
Speaker Bio:
Nadia Figueroa is the Shalini and Rajeev Misra Presidential Assistant Professor in the Mechanical Engineering and Applied Mechanics (MEAM) Department at the University of Pennsylvania. She holds a secondary appointment in the Computer and Information Science (CIS) department and is a faculty advisor at the General Robotics, Automation, Sensing & Perception (GRASP) laboratory. Before joining the faculty, she was a Postdoctoral Associate in the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT), advised by Prof. Julie A. Shah. She completed a Ph.D. (2019) in Robotics, Control and Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne (EPFL), advised by Prof. Aude Billard. Prior to this, she was a Research Assistant (2012-2013) at the Engineering Department of New York University Abu Dhabi (NYU-AD) and in the Institute of Robotics and Mechatronics (2011-2012) at the German Aerospace Center (DLR). She holds a B.Sc. degree in Mechatronics (2007) from Monterrey Tech (ITESM-Mexico) and an M.Sc. degree in Automation and Robotics (2012) from the Technical University of Dortmund, Germany.
Her main research interest focuses on developing collaborative human-aware robotic systems: robots that can safely and efficiently interact with humans and other robots in the human-centric dynamic spaces we inhabit. This involves research at the intersection of machine learning, control theory, artificial intelligence, perception, and psychology – with a physical human-robot interaction perspective.
13. Dec. 2nd — Prof. Donghyun Kim (UMass)
Dynamic motion control of legged robots
► Talk details
Abstract:
To accomplish human- and animal-level agility in robotic systems, we must holistically understand robot hardware, real-time control, dynamics, perception, and motion planning. Therefore, it is crucial to design the control architecture with both hardware and software fully in mind. In this talk, I will explain our approaches to tackling the challenges in classical control (e.g., bandwidth of feedback control, uncertainty, and robustness) and high-level planning (e.g., step planning, perception, and trajectory optimization), and how hardware limits are reflected in the controller formulation. The tested robots are point-foot bipeds (Hume, Mercury), robots using liquid-cooled viscoelastic actuators (Draco), and quadruped robots using proprioceptive actuators (Mini-Cheetah). I will also present our ongoing research on a new point-foot biped robot (Pat) and a guide dog robot.
Speaker Bio:
Donghyun joined the faculty of the College of Information and Computer Sciences at the University of Massachusetts Amherst as an Assistant Professor in 2021. Before joining UMass, he was a postdoctoral research associate in the Biomimetic Robotics Lab at MIT from 2019 to 2020. Donghyun was a postdoctoral research associate in the Human-Centered Robotics Lab at the University of Texas at Austin in 2018, where he received his Ph.D. degree in 2017. He holds an MS in Mechanical Engineering from Seoul National University and a BS in Mechanical Engineering from KAIST, Korea. His work on a new viscoelastic liquid-cooled actuator received the best paper award in Transactions on Mechatronics in 2020. His work published in Transactions on Robotics in 2016 was selected as a finalist for the best whole-body control paper and video.
04/29/22 – Guest Talk
Title: Distributed Perception and Learning Between Robots and the Cloud
Speaker: Dr. Sandeep Chinchali, University of Texas, Austin
Abstract: Augmenting robotic intelligence with cloud connectivity is considered one of the most promising solutions to cope with growing volumes of rich robotic sensory data and increasingly complex perception and decision-making tasks. While the benefits of cloud robotics have long been envisioned, there is still a lack of flexible methods to trade off the benefits of cloud computing against the end-to-end system costs of network delay, cloud storage, human annotation time, and cloud-computing time. To address this need, I will introduce decision-theoretic algorithms that allow robots to significantly transcend their on-board perception capabilities by using cloud computing, but in a low-cost, fault-tolerant manner. The utility of these algorithms will be demonstrated on months of field data and experiments on state-of-the-art embedded deep learning hardware.
Specifically, for compute-and-power-limited robots, I will present a lightweight model selection algorithm that learns when a robot should exploit low-latency on-board computation, or, when highly uncertain, query a more accurate cloud model. Then, I will present a collaborative learning algorithm that allows a diversity of robots to mine their real-time sensory streams for valuable training examples to send to the cloud for model improvement. I will conclude this talk by describing my group’s research efforts to co-design the representation of rich robotic sensory data with networked inference and control tasks for concise, task-relevant representations.
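A stripped-down version of the on-board-versus-cloud decision described above might look like the sketch below. The confidence threshold, the toy models, and all names are illustrative assumptions, not the speaker’s algorithm.

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def classify(observation, onboard_model, cloud_model, conf_threshold=0.8):
    """Use the fast on-board model unless it is too uncertain, then query the cloud."""
    probs = softmax(onboard_model(observation))
    if probs.max() >= conf_threshold:
        return int(np.argmax(probs)), "onboard"    # low latency, no network cost
    return cloud_model(observation), "cloud"       # more accurate but slower and costly

# Toy models: the on-board model is unsure about observation 2.
onboard_model = lambda obs: (np.array([3.0, 0.1, 0.1]) if obs != 2
                             else np.array([0.4, 0.3, 0.3]))
cloud_model = lambda obs: 1                        # stand-in for an accurate cloud answer
for obs in [0, 1, 2]:
    print(obs, classify(obs, onboard_model, cloud_model))
```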
Speaker’s Bio: Sandeep Chinchali is an assistant professor in UT Austin’s ECE department and Robotics Consortium. He completed his PhD in computer science at Stanford, working on distributed perception and learning between robots and the cloud. Previously, he was the first principal data scientist at Uhana Inc. (acquired by VMWare), a Stanford startup working on data-driven optimization of cellular networks. Prior to Stanford, he graduated from Caltech, where he worked on robotics at NASA’s Jet Propulsion Lab (JPL). His paper on cloud robotics was a finalist for best student paper at Robotics: Science and Systems and his research has been funded by Cisco, NSF, the Office of Naval Research, and Lockheed Martin.
04/22/22 – Guest Talk
Title: Learning to Walk via Rapid Adaptation
Speaker: Ashish Kumar, Ph.D Student University of California, Berkeley
Abstract: Legged locomotion is commonly studied and programmed as a discrete set of structured gait patterns, like walk, trot, and gallop. However, studies of children learning to walk (Adolph et al.) show that real-world locomotion is often quite unstructured and more like “bouts of intermittent steps”. We have developed a general approach to walking which is built on learning on varied terrains in simulation and then fast online adaptation (fractions of a second) in the real world. This is made possible by our Rapid Motor Adaptation (RMA) algorithm. RMA consists of two components: a base policy and an adaptation module, both of which can be trained in simulation. We thus learn walking policies that are much more flexible and adaptable. In our set-up, gaits emerge as a consequence of minimizing energy consumption at different target speeds, consistent with various animal motor studies.
You can see our robot walking here.
The project page is here.
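The two-component structure described in the abstract can be sketched roughly as follows. All dimensions and the random linear maps are invented for illustration; the real RMA base policy and adaptation module are neural networks trained in simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS, ACT, LATENT, HIST = 30, 12, 8, 50          # illustrative dimensions

W_policy = rng.normal(size=(OBS + LATENT, ACT)) * 0.05          # base policy (trained in sim)
W_adapt = rng.normal(size=((OBS + ACT) * HIST, LATENT)) * 0.01  # adaptation module

def base_policy(obs, z):
    """Joint-space command from the current observation and estimated extrinsics latent z."""
    return np.tanh(np.concatenate([obs, z]) @ W_policy)

def adaptation_module(history):
    """Estimate the extrinsics latent from recent proprioceptive state-action history only."""
    return history.reshape(-1) @ W_adapt

history = rng.normal(size=(HIST, OBS + ACT))    # recent states and actions
obs = rng.normal(size=OBS)
z_hat = adaptation_module(history)              # fast online adaptation step
action = base_policy(obs, z_hat)
print(action.shape)                             # (12,) joint-space command
```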
04/15/22 – Guest Talk
Title: Trust in Multi-Robot Systems and Achieving Resilient Coordination
Speaker: Dr. Stephanie Gill, Harvard University
Abstract: Our understanding of multi-robot coordination and control has experienced great advances, to the point where deploying multi-robot systems in the near future seems to be a feasible reality. However, many of these algorithms are vulnerable to non-cooperation and/or malicious attacks that limit their practicality in real-world settings. An example is the consensus problem, where classical results hold that agreement cannot be reached when malicious agents make up more than half of the network connectivity; this quickly leads to limitations in the practicality of many multi-robot coordination tasks. However, with the growing prevalence of cyber-physical systems come novel opportunities for detecting attacks by using cross-validation with physical channels of information. In this talk, we consider the class of problems where the probability of a particular (i,j) link being trustworthy is available as a random variable. We refer to these as “stochastic observations of trust.” We show that under this model, strong performance guarantees such as convergence for the consensus problem can be recovered, even in the case where the number of malicious agents is greater than ½ of the network connectivity and consensus would otherwise fail. Moreover, under this model we can reason about the deviation from the nominal (no attack) consensus value and the rate of achieving consensus. Finally, we make the case for the importance of deriving such stochastic observations of trust for cyber-physical systems, and we demonstrate one such example for the Sybil attack, which uses wireless communication channels to arrive at the desired observations of trust. In this way, our results demonstrate the promise of exploiting trust to provide a novel perspective on achieving resilient coordination in multi-robot systems.
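As a toy illustration of consensus with “stochastic observations of trust” (a heavily simplified sketch: the per-agent trust values, the thresholding rule, and the centralized update are assumptions, not the theoretical results from the talk), legitimate agents aggregate noisy trust observations and down-weight neighbors that appear untrustworthy before averaging:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_malicious = 6, 2
values = rng.uniform(0, 10, n_agents)
values[-n_malicious:] = 50.0                      # malicious agents push a wrong value

# Stochastic trust observations in [0, 1] (e.g., derived from physical channels);
# legitimate links average ~0.8 and malicious links ~0.2 here (illustrative).
true_trust = np.concatenate([np.full(n_agents - n_malicious, 0.8),
                             np.full(n_malicious, 0.2)])
trust_obs = np.clip(true_trust + 0.15 * rng.normal(size=(30, n_agents)), 0, 1)
weights = (trust_obs.mean(axis=0) > 0.5).astype(float)   # aggregate, then trust decision
weights /= weights.sum()

for _ in range(50):
    consensus = np.sum(weights * values)          # trust-weighted average
    values[:n_agents - n_malicious] = consensus   # legitimate agents update; attackers do not
print(round(consensus, 2))                        # near the legitimate agents' average
```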
Speaker’s Bio: Stephanie is an Assistant Professor in the John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University. Her work centers around trust and coordination in multi-robot systems for which she has received the Office of Naval Research Young Investigator award (2021) and the National Science Foundation CAREER award (2019). She has also been selected as a 2020 Sloan Research Fellow for her contributions at the intersection of robotics and communication. She has held a Visiting Assistant Professor position at Stanford University during the summer of 2019, and an Assistant Professorship at Arizona State University from 2018-2020. She completed her Ph.D. work (2014) on multi-robot coordination and control and her M.S. work (2009) on system identification and model learning. At MIT she collaborated extensively with the wireless communications group NetMIT, the result of which were two U.S. patents recently awarded in adaptive heterogeneous networks for multi-robot systems and accurate indoor positioning using Wi-Fi. She completed her B.S. at Cornell University in 2006.
04/08/22 – Student Talks
Title: Underwater Vehicle Navigation and Pipeline Inspection using Fuzzy Logic
Speaker: I-Chen Sang, University of Illinois at Urbana-Champaign
Abstract: Underwater pipeline inspection is becoming a crucial topic in the offshore subsea inspection industry. ROVs (Remotely Operated Vehicles) can play an important role in various fields like the military, ocean science, aquaculture, shipping, and energy. However, using ROVs for inspection is not cost-effective, and the fixed leak-detection sensors mounted along a pipeline have limited precision. Therefore, we proposed a navigation system using an AUV (Autonomous Underwater Vehicle) to increase the positional resolution of leak detection and lower the inspection cost. In a ROS/Gazebo-based simulation environment, we navigated the AUV with a fuzzy controller that takes navigation errors derived from both camera and sonar sensors as input. When released away from the pipeline, the AUV is able to navigate towards the pipeline and then cruise along it. Additionally, with a chemical concentration sensor mounted on the AUV, it showed the capability to complete pipeline inspection and report the leak point.
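A miniature fuzzy controller in the spirit described above is sketched below. The membership functions, rule base, single lateral-error input, and output scale are illustrative assumptions, not the AUV controller from the talk (which fuses camera- and sonar-derived errors).

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_yaw_command(lateral_error):
    """Map a normalized lateral offset from the pipeline to a yaw-rate command."""
    # Fuzzify: degree of membership in 'left of pipe', 'centered', 'right of pipe'.
    left = tri(lateral_error, -2.0, -1.0, 0.0)
    centered = tri(lateral_error, -1.0, 0.0, 1.0)
    right = tri(lateral_error, 0.0, 1.0, 2.0)
    # Rule base: left -> turn right (+1), centered -> hold (0), right -> turn left (-1).
    rules = [(left, +1.0), (centered, 0.0), (right, -1.0)]
    # Defuzzify with a weighted average of rule outputs.
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return num / den

for e in [-0.8, -0.2, 0.0, 0.5, 1.0]:
    print(e, round(fuzzy_yaw_command(e), 2))
```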
Speaker’s Bio: I am a Ph.D. student in the Department of Industrial and Systems Engineering and started working in the AUVSL in January 2021. I hold B.Sc. and M.Sc. degrees in physics and had five years of work experience in the defense industry before joining U of I. My current concentration is in systems design and manufacturing. My research focuses on perception algorithm development for autonomous vehicles. I am currently working on ground-vehicle lane detection using adaptive thresholding algorithms.
Title: A CNN Based Vision-Proprioception Fusion Method for Robust UGV Terrain Classification
Speaker: Yu Chen, University of Illinois at Urbana-Champaign
Abstract: The ability of ground vehicles to identify terrain types and characteristics can help provide more accurate localization and information-rich mapping solutions. Previous studies have shown the possibility of classifying terrain types based on proprioceptive sensors that monitor wheel-terrain interactions. However, most methods only work well when very strict motion restrictions are imposed, such as driving in a straight path at constant speed, making them difficult to deploy on real-world field robotic missions. To lift this restriction, this paper proposes a fast, compact, and motion-robust proprioception-based terrain classification method. The method uses common on-board UGV sensors and a 1D Convolutional Neural Network (CNN) model. Its accuracy was further improved by fusing it with a vision-based CNN that classifies terrain from its appearance. Experimental results indicate that the final fusion models are highly robust, achieving over 93% accuracy under various lighting conditions and motion maneuvers.
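A rough sketch of what such a late-fusion architecture might look like in PyTorch is shown below; the layer sizes, sensor channels, and class count are placeholders rather than the authors' actual model:

```python
# Sketch of a late-fusion terrain classifier: a 1-D CNN over proprioceptive time
# series fused (by concatenation) with a tiny image CNN. Architecture is assumed.
import torch
import torch.nn as nn

class TerrainFusionNet(nn.Module):
    def __init__(self, n_channels=6, n_classes=4):
        super().__init__()
        self.prop = nn.Sequential(                      # 1-D CNN over IMU/wheel signals
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.vision = nn.Sequential(                    # small image branch
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32 + 32, n_classes)       # fuse by concatenation

    def forward(self, prop_seq, image):
        return self.head(torch.cat([self.prop(prop_seq), self.vision(image)], dim=1))

logits = TerrainFusionNet()(torch.randn(2, 6, 200), torch.randn(2, 3, 64, 64))
print(logits.shape)   # torch.Size([2, 4])
```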
Yu Chen’s Bio: I am a Ph.D. student at U of I majoring in Mechanical Science and Engineering, working under the leadership of Prof. William Robert Norris and Prof. Elizabeth T. Hsiao-Wecksler. I gained my fair share of knowledge in manufacturing, mechanical design, and structural analysis during my undergraduate days at Michigan State. For my graduate studies, I am focusing my interest and energy on robot perception and dynamic control. Currently, I am working on developing efficient CNN fusion models to help robots gain higher accuracy and robustness when classifying terrain types and detecting obstacles.
04/01/22 – Guest Talk
Title: Bridging the Gap Between Safety and Real-Time Performance during Trajectory Optimization: Reachability-based Trajectory Design
Speaker: Ram Vasudevan, University of Michigan
Abstract: Autonomous systems offer the promise of providing greater safety and access. However, this positive impact will only be achieved if the underlying algorithms that control such systems can be certified to behave robustly. This talk describes a technique called Reachability-based Trajectory Design, which constructs a parameterized representation of the forward reachable set and then uses it in concert with predictions to enable real-time, certified collision checking. This approach, which is guaranteed to generate not-at-fault behavior, is demonstrated across a variety of real-world platforms including ground vehicles, manipulators, and walking robots.
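The following toy sketch conveys the flavor of the approach with simple circle-based geometry (the unicycle model, parameter grid, and padding radii are illustrative assumptions, not the zonotope-based reachable sets used in this line of work): a reachable set is swept out per trajectory parameter, parameters whose set intersects predicted obstacles are discarded, and a maneuver is chosen from the remainder.

```python
# Toy reachability-based trajectory selection: discard trajectory parameters whose
# padded swept path intersects predicted obstacles, then pick the best survivor.
import numpy as np

def frs_centers(speed, yaw_rate, horizon=20, dt=0.1):
    # Circle centers over-approximating the swept volume of a unicycle maneuver.
    x, y, th, pts = 0.0, 0.0, 0.0, []
    for _ in range(horizon):
        x += dt * speed * np.cos(th)
        y += dt * speed * np.sin(th)
        th += dt * yaw_rate
        pts.append((x, y))
    return np.array(pts)

def is_safe(param, obstacles, robot_r=0.2, margin=0.1):
    pts = frs_centers(*param)
    d = np.linalg.norm(pts[:, None, :] - obstacles[None, :, :], axis=-1)
    return bool(np.all(d > robot_r + margin))   # padded circles avoid all predicted obstacles

obstacles = np.array([[1.0, 0.0], [1.5, 0.4]])  # predicted obstacle positions over the horizon
params = [(1.0, w) for w in np.linspace(-1.0, 1.0, 21)]
safe = [p for p in params if is_safe(p, obstacles)]
best = min(safe, key=lambda p: abs(p[1]))       # prefer the straightest certified-safe maneuver
print("safe maneuvers:", len(safe), "| chosen yaw rate:", round(best[1], 2))
```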
Speaker’s Bio: Ram Vasudevan is an assistant professor in Mechanical Engineering and the Robotics Institute at the University of Michigan. He received a BS in Electrical Engineering and Computer Sciences, an MS in Electrical Engineering, and a PhD in Electrical Engineering, all from the University of California, Berkeley. He is a recipient of the NSF CAREER Award, the ONR Young Investigator Award, and the 1938E Award. His work has received best paper awards at the IEEE Conference on Robotics and Automation, the ASME Dynamic Systems and Control Conference, and the IEEE OCEANS Conference, and has been a finalist for best paper at Robotics: Science and Systems.
03/25/22 – Guest Talk
Title: Toward the Development of Highly Adaptive Legged Robots
Speaker: Quan Nguyen, University of Southern California
Abstract: Deploying legged robots in real-world applications will require fast adaptation to unknown terrain and model uncertainty. Model uncertainty can come from unknown robot dynamics, external disturbances, interaction with other humans or robots, or unknown parameters of contact models or terrain properties. In this talk, I will first present our recent work on adaptive control and adaptive safety-critical control for legged locomotion under substantial model uncertainty. In these results, we focus on the application of legged robots walking on rough terrain while carrying a heavy load. I will then talk about our solution for trajectory optimization that allows legged robots to adapt to a wide variety of challenging terrain. This talk will also discuss the combination of control, trajectory optimization, and reinforcement learning toward achieving long-term adaptation in both control actions and trajectory planning for legged robots.
Speaker’s Bio: Quan Nguyen is an Assistant Professor of Aerospace and Mechanical Engineering at the University of Southern California. Prior to joining USC, he was a Postdoctoral Associate in the Biomimetic Robotics Lab at the Massachusetts Institute of Technology (MIT). He received his Ph.D. from Carnegie Mellon University (CMU) in 2017 with the Best Dissertation Award.
His research interests span different control and optimization approaches for highly dynamic robotics, including nonlinear control, trajectory optimization, real-time optimization-based control, and robust and adaptive control. His work on the bipedal robot ATRIAS walking on stepping stones was featured in IEEE Spectrum, TechCrunch, TechXplore, and Digital Trends. His work on the MIT Cheetah 3 robot leaping onto a desk was featured widely in major media channels, including CNN, BBC, NBC, and ABC. Nguyen won the Best Presentation of the Session award at the 2016 American Control Conference (ACC) and was a Best System Paper finalist at the 2017 Robotics: Science and Systems (RSS) conference. Nguyen is a recipient of the 2020 Charles Lee Powell Foundation Faculty Research Award.
03/11/22 – Guest Talk
Title: Developing and Deploying Situational Awareness in Autonomous Robotic Systems
Speaker: Philip Dames, Temple University
Abstract: Robotic systems must possess sufficient situational awareness in order to successfully operate in complex and dynamic real-world environments, meaning they must be able to perceive objects in their surroundings, comprehend their meaning, and predict the future state of the environment. In this talk, I will first describe how multi-target tracking (MTT) algorithms can provide mobile robots with this awareness, including our recent results that extend classical MTT approaches to include semantic object labels. Next, I will discuss two key applications of MTT to mobile robotics. The first problem is distributed target search and tracking. To solve this, we develop a distributed MTT framework, allowing robots to estimate, in real time, the relative importance of each portion of the environment, and dynamic tessellation schemes, which account for uncertainty in the pose of each robot, provide collision avoidance, and automatically balance task assignment in a heterogeneous team. The second problem is autonomous navigation through crowded, dynamic environments. To solve this, we develop a novel neural network-based control policy that takes as its input the target tracks from an MTT, unlike previous approaches which only rely on raw sensor data. We show that our policy, trained entirely in one simulated environment, generalizes well to new situations, including a real-world robot.
Speaker’s Bio: Philip Dames is an Assistant Professor of Mechanical Engineering at Temple University, where he directs the Temple Robotics and Artificial Intelligence Lab (TRAIL). Prior to joining Temple, he was a Postdoctoral Researcher in Electrical and Systems Engineering at the University of Pennsylvania. He received his PhD in Mechanical Engineering and Applied Mechanics from the University of Pennsylvania in 2015 and his BS and MS degrees in Mechanical Engineering from Northwestern University in 2010.
03/04/22 – Student Talks
Title: GRILC: Gradient-based Reprogrammable Iterative Learning Control for Autonomous Systems
Speaker: Kuan-Yu Tseng, University of Illinois at Urbana-Champaign
Abstract: We propose a novel gradient-based reprogrammable iterative learning control (GRILC) framework for autonomous systems. Trajectory-following performance in autonomous systems is often limited by the mismatch between the complex actual model and the simplified nominal model used in controller design. To overcome this issue, we develop the GRILC framework with offline optimization, using the information of the nominal model and the actual trajectory, followed by online system implementation. In addition, a partial and reprogrammable learning strategy is introduced. The proposed method is applied to an autonomous time-trialing example, and the learned control policies can be stored in a library for future motion planning. Simulation and experimental results illustrate the effectiveness and robustness of the proposed approach.
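As background, a generic gradient-based iterative learning control update (a simplified sketch, not the GRILC algorithm itself) refines a feedforward input using gradients computed through the nominal model while the tracking error is measured on the mismatched real plant:

```python
# Generic gradient-based ILC sketch: iterate a feedforward input against a
# "real" plant that differs from the nominal model by an unknown input gain.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])       # nominal double integrator (dt = 0.1)
B = np.array([[0.0], [0.1]])
T = 50
ref = np.sin(np.linspace(0, np.pi, T))        # position reference

def rollout(u, gain=0.8):                     # "real" plant with input-gain mismatch
    x, ys = np.zeros(2), []
    for k in range(T):
        x = A @ x + (gain * B[:, 0]) * u[k]
        ys.append(x[0])
    return np.asarray(ys)

# Lifted nominal model y = G u, with G[t, k] = [1, 0] A^(t-k) B for k <= t.
G = np.zeros((T, T))
for t in range(T):
    for k in range(t + 1):
        G[t, k] = (np.linalg.matrix_power(A, t - k) @ B)[0, 0]

u = np.zeros(T)
lr = 1.0 / np.linalg.norm(G, 2) ** 2          # step size from the nominal model
print("initial RMS error:", np.sqrt(np.mean((ref - rollout(u)) ** 2)).round(3))
for _ in range(100):
    e = ref - rollout(u)                      # error measured on the real plant
    u += lr * (G.T @ e)                       # gradient step through the nominal model
print("final RMS error:  ", np.sqrt(np.mean((ref - rollout(u)) ** 2)).round(3))
```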
Speaker’s Bio: Kuan-Yu Tseng is a third-year Ph.D. student in Mechanical Engineering at UIUC, advised by Prof. Geir Dullerud. He received M.S. and B.S. degrees in Mechanical Engineering from National Taiwan University in 2019 and 2017, respectively. His research interests include control and motion planning in autonomous vehicles and robots.
Title: Pedestrian trajectory prediction meets social robot navigation
Speakers: Shuijing Liu and Zhe Huang, University of Illinois at Urbana-Champaign
Abstract: Multi-pedestrian trajectory prediction is an indispensable element of safe and socially aware robot navigation among crowded spaces. Previous works assume that positions of all pedestrians are consistently tracked, which leads to biased interaction modeling if the robot has a limited sensor range and can only partially observe the pedestrians. We propose Gumbel Social Transformer, in which an Edge Gumbel Selector samples a sparse interaction graph of partially detected pedestrians at each time step. A Node Transformer Encoder and a Masked LSTM encode pedestrian features with sampled sparse graphs to predict trajectories. We demonstrate that our model overcomes potential problems caused by the assumptions, and our approach outperforms related works in trajectory prediction benchmarks.
Then, we redefine the personal zones of walking pedestrians using their future trajectories. To learn socially aware robot navigation policies, the predicted social zones are incorporated into a reinforcement learning framework to prevent the robot from intruding into them. We propose a novel recurrent graph neural network with attention mechanisms to capture the interactions among agents through space and time. We demonstrate that our method enables the robot to achieve good navigation performance and non-invasiveness in challenging crowd navigation scenarios, both in simulation and in the real world.
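The sparse, differentiable edge selection at the heart of the first part can be sketched with PyTorch's Gumbel-softmax; the scoring network, feature sizes, and masking scheme below are placeholders, not the authors' implementation:

```python
# Sketch: sample one sparse, differentiable interaction edge per pedestrian with
# Gumbel-softmax, masking out pedestrians outside the sensor range.
import torch
import torch.nn.functional as F

n_peds, d = 5, 16
feats = torch.randn(n_peds, d)

# Pairwise edge logits from a tiny scoring MLP (assumed form).
scorer = torch.nn.Sequential(torch.nn.Linear(2 * d, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
pairs = torch.cat([feats.unsqueeze(1).expand(-1, n_peds, -1),
                   feats.unsqueeze(0).expand(n_peds, -1, -1)], dim=-1)
logits = scorer(pairs).squeeze(-1)                      # (n_peds, n_peds)

# Mask undetected pedestrians (partial observability) and self-edges.
detected = torch.tensor([True, True, False, True, True])
mask = detected.unsqueeze(0) & ~torch.eye(n_peds, dtype=torch.bool)
logits = logits.masked_fill(~mask, float("-inf"))

# Straight-through Gumbel-softmax: each row keeps a single, differentiable edge.
adj = F.gumbel_softmax(logits, tau=0.5, hard=True, dim=-1)
print(adj)                                              # sparse one-hot adjacency
```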
Speakers’ Bios: Shuijing Liu is a fourth-year PhD student in the Human-Centered Autonomy Lab in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign, advised by Professor Katherine Driggs-Campbell. Her research interests include learning-based robotics and human-robot interaction. Her research primarily focuses on autonomous navigation in crowded and interactive environments using reinforcement learning.
Zhe Huang is a third-year PhD student in the Human-Centered Autonomy Lab in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign, advised by Professor Katherine Driggs-Campbell. His research focuses on pedestrian trajectory prediction and collaborative manufacturing.
02/25/22 – Guest Talk
Title: Safety and Generalization Guarantees for Learning-Based Control of Robots
Speaker: Prof. Matthew Spenko, Illinois Institute of Technology
Abstract: For the past fifteen years, the RoboticsLab@IIT has focused on creating technologies to enable mobility in challenging environments. This talk highlights the lab’s contributions, from its work in gecko-inspired climbing and perching robots to the evaluation of navigation safety in self-driving cars, drone technology for tree science, and the development of amoeba-like soft robots. The latter of these, soft robots, will compose the majority of the talk. Soft robots can offer many advantages over traditional rigid robots including conformability to different object geometries, shape changing, safer physical interaction with humans, the ability to handle delicate objects, and grasping without the need for high-precision control algorithms. Despite these advantages, soft robots often lack high force capacity, scalability, responsive locomotion and object handling, and a self-contained untethered design, all of which have hindered their adoption. To address these issues, we have developed a series of robots comprised of several rigid robotic subunits that are flexibly connected to each other and contain a granule-filled interior that enables a jamming transition from soft to rigid. The jamming feature allows the robots to exert relatively large forces on objects in the environment. The modular design resolves any scalability issues, and using decentralized robotic subunits allows the robot to configure itself in a variety of shapes and conform to objects, all while locomoting. The result is a compliant, high-degree-of-freedom system with excellent morphability.
Speaker’s Bio: Matthew Spenko is a professor in the Mechanical, Materials, and Aerospace Engineering Department at the Illinois Institute of Technology. Prof. Spenko earned the B.S. degree cum laude in Mechanical Engineering from Northwestern University in 1999 and the M.S. and Ph.D. degrees in Mechanical Engineering from Massachusetts Institute of Technology in 2001 and 2005 respectively. He was an Intelligence Community Postdoctoral Scholar in the Mechanical Engineering Department’s Center for Design Research at Stanford University from 2005 to 2007. He has been a faculty member at the Illinois Institute of Technology since 2007, received tenure in 2013, and was promoted to full professor in 2019. His research is in the general area of robotics with specific attention to mobility in challenging environments. Prof. Spenko is a senior member of IEEE and an associate editor of Field Robotics. His work has been featured in popular media such as the New York Times, CNET, Engadget, and Discovery-News. Examples of his robots are on permanent display in Chicago’s Museum of Science and Industry.
02/18/22 – Guest Talk
Title: Safety and Generalization Guarantees for Learning-Based Control of Robots
Speaker: Anirudha Majumdar, Assistant Professor, Princeton University
Abstract: The ability of machine learning techniques to process rich sensory inputs such as vision makes them highly appealing for use in robotic systems (e.g., micro aerial vehicles and robotic manipulators). However, the increasing adoption of learning-based components in the robotics perception and control pipeline poses an important challenge: how can we guarantee the safety and performance of such systems? As an example, consider a micro aerial vehicle that learns to navigate using a thousand different obstacle environments or a robotic manipulator that learns to grasp using a million objects in a dataset. How likely are these systems to remain safe and perform well on a novel (i.e., previously unseen) environment or object? How can we learn control policies for robotic systems that provably generalize to environments that our robot has not previously encountered? Unfortunately, existing approaches either do not provide such guarantees or do so only under very restrictive assumptions.
In this talk, I will present our group’s work on developing a principled theoretical and algorithmic framework for learning control policies for robotic systems with formal guarantees on generalization to novel environments. The key technical insight is to leverage and extend powerful techniques from generalization theory in theoretical machine learning. We apply our techniques on problems including vision-based navigation and grasping in order to demonstrate the ability to provide strong generalization guarantees on robotic systems with complicated (e.g., nonlinear/hybrid) dynamics, rich sensory inputs (e.g., RGB-D), and neural network-based control policies.
Speaker’s Bio: Anirudha Majumdar is an Assistant Professor at Princeton University in the Mechanical and Aerospace Engineering (MAE) department, and an Associated Faculty in the Computer Science department. He received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2016, and a B.S.E. in Mechanical Engineering and Mathematics from the University of Pennsylvania in 2011. Subsequently, he was a postdoctoral scholar at Stanford University from 2016 to 2017 at the Autonomous Systems Lab in the Aeronautics and Astronautics department. He is a recipient of the NSF CAREER award, the Google Faculty Research Award (twice), the Amazon Research Award (twice), the Young Faculty Researcher Award from the Toyota Research Institute, the Best Conference Paper Award at the International Conference on Robotics and Automation (ICRA), the Paper of the Year Award from the International Journal of Robotics Research (IJRR), the Alfred Rheinstein Faculty Award (Princeton), and the Excellence in Teaching Award from Princeton’s School of Engineering and Applied Science.
Schedule: The seminar will be held every Friday at 1:00 PM CST starting 2/11.
Location: This semester, we will meet only virtually.
02/11/22 – Guest Talk
Title: Numerical Methods for Things That Move
Speaker: Zac Manchester, Assistant Professor, Carnegie Mellon University
Abstract: Recent advances in motion planning and control have led to dramatic successes like SpaceX’s autonomous rocket landings and Boston Dynamics’ humanoid robot acrobatics. However, the underlying numerical methods used in these applications are typically decades old, not tuned for high performance on planning and control problems, and are often unable to cope with the types of optimization problems that arise naturally in modern robotics applications like legged locomotion and autonomous driving. This talk will introduce new numerical optimization tools built to enable robotic systems that move with the same agility, efficiency, and safety as humans and animals. Some target applications include legged locomotion; autonomous driving; distributed control of satellite swarms; and spacecraft entry, descent, and landing. I will also discuss hardware platforms that we have deployed algorithms on, including quadrupeds, teams of quadrotors, and tiny satellites.
Speaker’s Bio: Zac Manchester is an Assistant Professor of Robotics at Carnegie Mellon University, founder of the KickSat project, and member of the Breakthrough Starshot Advisory Committee. He holds a Ph.D. in aerospace engineering and a B.S. in applied physics from Cornell University. Zac was a postdoc in the Agile Robotics Lab at Harvard University and previously worked at Stanford, NASA Ames Research Center and Analytical Graphics, Inc. He received a NASA Early Career Faculty Award in 2018 and has led three satellite missions. His research interests include motion planning, control, and numerical optimization, particularly with application to robotic locomotion and spacecraft guidance, navigation, and control.
12/10/21 – Guest Talk
Title: Towards a Universal Modeling and Control Framework for Soft Robots
Speaker: Daniel Bruder, Harvard University
Abstract: Soft robots have been an active area of research in the robotics community due to their inherent compliance and ability to safely interact with delicate objects and the environment. Despite their suitability for tasks involving physical human-robot interaction, their real-world applications have been limited due to the difficulty involved in modeling and controlling soft robotic systems. In this talk, I’ll describe two modeling approaches aimed at overcoming the limitations of previous methods. The first is a physics-based approach for fluid-driven actuators that offers predictions in terms of tunable geometrical parameters, making it a valuable tool in the design of soft fluid-driven robotic systems. The second is a data-driven approach that leverages Koopman operator theory to construct models that are linear, which enables the utilization of linear control techniques for nonlinear dynamical systems like soft robots. Using this Koopman-based approach, a pneumatically actuated soft continuum manipulator was able to autonomously perform manipulation tasks such as trajectory following and pick-and-place with a variable payload without undergoing any task-specific training. In the future, these approaches could offer a paradigm for designing and controlling all soft robotic systems, leading to their more widespread adoption in real-world applications.
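The Koopman idea can be sketched in a few lines of EDMD-style system identification (the observable dictionary, toy plant, and data collection below are illustrative assumptions, not the speaker's soft-robot models): lift the state with nonlinear observables, then fit a linear model in the lifted space by least squares.

```python
# Minimal EDMD-style sketch of the Koopman approach: lift states with nonlinear
# features and fit a *linear* model in the lifted space, usable with LQR/MPC.
import numpy as np

rng = np.random.default_rng(0)

def lift(x, u):
    # Hypothetical dictionary of observables for a 2-state, 1-input system.
    return np.array([x[0], x[1], x[0]**2, x[0]*x[1], np.sin(x[0]), u])

def true_step(x, u):
    # Unknown nonlinear plant standing in for a soft-robot segment.
    return np.array([x[1], -0.5*np.sin(x[0]) - 0.1*x[1] + u])

# Collect snapshot pairs (z_k, x_{k+1}) under random excitation.
Z, Xn = [], []
x = np.zeros(2)
for _ in range(2000):
    u = rng.uniform(-1, 1)
    Z.append(lift(x, u))
    x = x + 0.05 * true_step(x, u)          # Euler step of the "real" dynamics
    Xn.append(x)
Z, Xn = np.array(Z), np.array(Xn)

# Least-squares fit of a linear predictor x_{k+1} ~ K z_k.
K, *_ = np.linalg.lstsq(Z, Xn, rcond=None)
pred = Z @ K
print("one-step RMSE:", np.sqrt(np.mean((pred - Xn) ** 2)))
```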
Bios: Daniel Bruder received a B.S. degree in engineering sciences from Harvard University in 2013, and a Ph.D. degree in mechanical engineering from the University of Michigan in 2020. He is currently a postdoctoral researcher in the Harvard Microrobotics Lab. He is a recipient of the NSF Graduate Research Fellowship and the Richard and Eleanor Towner Prize for Outstanding Ph.D. Research. His research interests include the design, modeling, and control of robotic systems, especially soft robots.
11/19/21 – Guest Talk
Title: Value function-based methods for safety-critical control
Speaker: Jason Jangho Choi, University of California, Berkeley
Abstract: Many safety-critical control methods leverage a value function that captures knowledge about how a safety constraint can be dynamically satisfied. These value functions appear in many different forms across the literature, for example Hamilton-Jacobi reachability, Control Barrier Functions, and reinforcement learning. The value functions are often computationally heavy to construct; however, once they are computed offline, they can be evaluated quickly for online applications. In the first part of my talk, I will share some recent progress on methods for constructing value functions. Specifically, I will discuss how different notions of value functions can be merged into a unified concept, and I will introduce a new dynamic programming principle that can efficiently compute reachability value functions for hybrid systems like walking robots. In the second part, I will discuss the main issue that arises when value functions computed offline are deployed in online safety control, namely model uncertainty, and how we can address this problem effectively with data-driven methods.
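For readers unfamiliar with the control-barrier-function form of such value functions, here is a minimal safety-filter sketch on a one-dimensional toy system (purely illustrative; the systems and methods in the talk are far more general):

```python
# Minimal CBF "safety filter" for a 1-D single integrator x_dot = u with safe set
# h(x) = x_max - x >= 0. The QP has a closed-form solution in this scalar case.
import numpy as np

x_max, alpha = 1.0, 2.0

def safety_filter(x, u_des):
    # QP: min (u - u_des)^2  s.t.  dh/dt + alpha*h >= 0, i.e. -u + alpha*(x_max - x) >= 0.
    u_bound = alpha * (x_max - x)          # constraint: u <= u_bound
    return min(u_des, u_bound)             # projection of u_des onto the safe set

x, dt = 0.0, 0.01
for _ in range(200):
    u = safety_filter(x, u_des=1.0)        # the nominal controller wants to keep driving forward
    x += dt * u
print("final state:", round(x, 3), "(remains below x_max =", x_max, ")")
```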
Bio: Jason Jangho Choi is a PhD student at the University of California, Berkeley, working with Professor Koushil Sreenath and Professor Claire Tomlin. He completed his undergraduate studies in mechanical engineering at Seoul National University. His research interests center on optimal control theory for nonlinear and hybrid systems, data-driven methods for safe control, and their applications to robot mobility.
11/12/21 – Guest Talk
Title: Planning and Learning for Maneuvering Mobile Robots in Complex Environments
Speaker: Lantao Liu, Assistant Professor in the Department of Intelligent Systems Engineering at Indiana University-Bloomington
Abstract: In the first part of the talk, I will discuss our recent progress on the continuous-state Markov Decision Processes (MDPs) that can be utilized to address autonomous navigation and control in unstructured off-road environments. Our solution integrates a diffusion-type approximation to the robot stochastic transition model and a kernel-type approximation to the robot state values, so that the decision can be efficiently computed for real-time navigation. Results from unmanned ground vehicles demonstrate the applicability in challenging real-world environments. Then I will discuss the decision making with time-varying disturbances, the solution of which can navigate unmanned aerial vehicles disturbed by air turbulence or unmanned underwater vehicles disturbed by ocean currents. We explore the time-varying stochasticity of robot motion and investigate robot state reachability, based on which we design an efficient iterative method that offers a good trade-off between solution optimality and time complexity. Finally, I will present an adaptive sampling (active learning) and informative planning framework for fast modeling (mapping) unknown environments such as large ocean floors or time-varying air/water pollution. We consider real-world constraints such as multiple mission objectives as well as environmental dynamics. Preliminary results from an unmanned surface vehicle also demonstrate high efficiency of the method.
Bios: Lantao Liu is an Assistant Professor in the Department of Intelligent Systems Engineering at Indiana University-Bloomington. His main research interests include robotics and artificial intelligence. He has been working on planning, learning, and coordination techniques for autonomous systems (air, ground, sea) involving single or multiple robots with potential applications in navigation and control, surveillance and security, search and rescue, smart transportation, as well as environmental monitoring. Before joining Indiana University, he was a Research Associate in the Department of Computer Science at the University of Southern California during 2015 – 2017. He also worked as a Postdoctoral Fellow in the Robotics Institute at Carnegie Mellon University during 2013 – 2015. He received a Ph.D. from the Department of Computer Science and Engineering at Texas A&M University in 2013, and a Bachelor’s degree from the Department of Automatic Control at Beijing Institute of Technology in 2007.
11/05/21 – Faculty Talks
Title: Resilience of Autonomous Systems: A Step Beyond Adaptation
Speaker: Melkior Ornik, Assistant Professor in the Department of Aerospace Engineering, UIUC
Abstract: The ability of a system to correctly respond to a sudden adverse event is critical for high-level autonomy in complex, changing, or remote environments. By assuming continuing structural knowledge about the system, classical methods of adaptive or robust control largely attempt to design control laws which enable the system to complete its original task even after an adverse event. However, catastrophic events such as physical system damage may simply render the original task impossible to complete. In that case, design of any control law that attempts to complete the task is doomed to be unsuccessful. Instead, the system should recognize the task as impossible to complete, propose an alternative that is certifiably completable given the current knowledge, and formulate a control law that drives the system to complete this new task. To do so, in this talk I will present the emergent twin frameworks of quantitative resilience and guaranteed reachability. Combining methods of optimal control, online learning, and reachability analysis, these frameworks first compute a set of temporal tasks completable by all systems consistent with the current partial knowledge, possibly within a time budget. These tasks can then be pursued by online learning and adaptation methods. The talk will consider three scenarios: actuator degradation, loss of control authority, and structural change in system dynamics, and will briefly present a number of applications to maritime and aerial vehicles as well as opinion dynamics. Finally, I will identify promising future directions of research, including real-time safety-assured mission planning, resilience of physical infrastructure, and perception-based task assignment.
Bios: Melkior Ornik is an assistant professor in the Department of Aerospace Engineering at the University of Illinois Urbana-Champaign, also affiliated with the Coordinated Science Laboratory, Department of Electrical and Computer Engineering and the Discovery Partners Institute. He received his Ph.D. degree from the University of Toronto in 2017. His research focuses on developing theory and algorithms for control, learning and task planning in autonomous systems that operate in uncertain, changing, or adversarial environments, as well as in scenarios where only limited knowledge of the system is available.
10/29/21 – Student Talks
Title: Semi-Infinite Programming’s Application in Robotics
Speaker: Mengchao Zhang, UIUC
Abstract: In optimization theory, semi-infinite programming (SIP) refers to an optimization problem with a finite number of variables and an infinite number of constraints, or an infinite number of variables and a finite number of constraints. In this talk, I will introduce our work that uses SIP to solve problems in robotics.
In the semi-infinite program with complementarity constraints (SIPCC) work, we use SIP to address the fact that contact is an infinite phenomenon involving continuous regions of interaction. Our method enables a gripper to find a feasible pose to hold (non-)convex objects while ensuring force and torque balance. In the non-penetration iterative closest point work for single-view multi-object 6D pose estimation, we use SIP to resolve penetration between (non-)convex objects. By introducing non-penetration constraints into the iterative closest point (ICP) framework, we improve the accuracy of pose estimates produced by deep neural network-based methods. Our method also outperforms the best reported result on the IC-BIN dataset in the Benchmark for 6D Object Pose Estimation.
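A toy exchange-method loop conveys the SIP mechanics used in this style of work (the specific problem below, maximizing x1 + x2 inside a family of halfspace constraints, is invented for illustration): repeatedly solve a finite subproblem, then add the most violated constraint found by a separation step.

```python
# Toy exchange method for semi-infinite programming:
# maximize x1 + x2 subject to x1*cos(t) + x2*sin(t) <= 1 for all t in [0, pi/2].
import numpy as np
from scipy.optimize import linprog

ts = np.array([0.0, np.pi / 2])                    # initial finite constraint set
for it in range(20):
    A = np.stack([np.cos(ts), np.sin(ts)], axis=1)
    res = linprog(c=[-1.0, -1.0], A_ub=A, b_ub=np.ones(len(ts)), bounds=[(0, None)] * 2)
    x = res.x
    # Separation oracle: find the most violated constraint over the infinite index set.
    grid = np.linspace(0.0, np.pi / 2, 721)
    viol = x[0] * np.cos(grid) + x[1] * np.sin(grid) - 1.0
    k = int(np.argmax(viol))
    if viol[k] < 1e-6:
        break
    ts = np.append(ts, grid[k])                    # add the violated constraint and re-solve
print("solution:", np.round(x, 4), "objective:", round(float(x.sum()), 4))  # ~ (0.7071, 0.7071)
```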
Bio: Mengchao Zhang is a Ph.D. student in the IML laboratory at UIUC. His research interests include motion planning, manipulation, perception, and optimization.
Title: Data-driven MPC: Applications and Tools
Speaker: William Edwards, UIUC
Abstract: Many of the most exciting and challenging applications in robotics involve control of novel systems with unknown nonlinear dynamics. When such systems are infeasible to model analytically or numerically, roboticists often turn to data-driven techniques for modelling and control. This talk will cover two projects relating to this theme. First, I will discuss an application of data-driven modelling and control to needle insertion in deep anterior lamellar keratoplasty (DALK), a challenging problem in surgical robotics. Second, I will introduce a new software library, AutoMPC, created to automate the design of data-driven model predictive controllers and make state-of-the-art algorithms more accessible for a wide range of applications.
Bios: William Edwards is a third-year Computer Science PhD student in the Intelligent Motion Laboratory at UIUC, advised by Dr. Kris Hauser. He received Bachelor’s degrees in Computer Science and Mathematics from the University of South Carolina in 2019. His research interests include motion planning, dynamics learning, and optimization.
10/08/21 – Faculty Talks
Title: Research at the RoboDesign Lab at UIUC
Speaker: Joao Ramos, Assistant Professor at UIUC
Abstract: Research at the RoboDesign Lab spans the design, control, and dynamics of robots in parallel with human-machine interaction. We focus on developing hardware, software, and human-centered approaches to push the physical limits of robots to realize physically demanding tasks. In this talk, I will cover several ongoing research topics in the lab, such as the development of a custom Human-Machine Interface (HMI) that enables bilateral teleoperation of mobile robots, a wheeled humanoid robot for dynamic mobile manipulation, actuation design for dynamic humanoid robots, and assistive devices for individuals with mobility impairments.
Bio: Joao Ramos is an Assistant Professor at the University of Illinois at Urbana-Champaign and the director of the RoboDesign Lab. He previously worked as a Postdoctoral Associate at the Biomimetic Robotics Laboratory at the Massachusetts Institute of Technology. He received a PhD from the Department of Mechanical Engineering at MIT in 2018. He is the recipient of the 2021 NSF CAREER Award. His research focuses on the design and control of dynamic robotic systems, in addition to human-machine interfaces, legged locomotion dynamics, and actuation systems.
10/01/21 – Student Talks
Title: Multi-sensor fusion for agricultural autonomous navigation
Speaker: Mateus Valverde Gasparino, UIUC
Abstract: In most agricultural setups, vehicles rely on accurate GNSS estimates to navigate autonomously. However, for small under-canopy robots, accurate position estimation cannot be guaranteed between crop rows. To address this problem, we describe in this presentation a solution for autonomous navigation in a semi-structured agricultural environment. We demonstrate a navigation system that autonomously chooses between reference modalities to cover long areas of the farm and extend the navigation range beyond crop rows. By choosing the best reference to follow, the robot can accommodate GNSS signal attenuation and use the agricultural structure to navigate autonomously. A low-cost and compact robotic platform, designed to automate the measurement of plant traits, is used to implement and evaluate the system. We show two different perception systems that can be used within this framework: one LiDAR-based and one vision-based. We validate the system in a real agricultural environment and show that it can effectively navigate for 4.5 km with only 6 human interventions.
Bio: Mateus Valverde Gasparino is a second-year Ph.D. student advised by Prof. Girish Chowdhary at the University of Illinois at Urbana-Champaign. He holds an M.Sc. degree in mechanical engineering and a Bachelor’s degree in mechatronics engineering from the University of São Paulo, Brazil. He is currently a graduate research assistant in the Distributed Autonomous Systems Laboratory (DASLab), and his research interests include perception systems, mapping, control, and learning for robots in unstructured and semi-structured outdoor environments.
Title: Learned Visual Navigation for Under-Canopy Agricultural Robots
Speaker: Arun Narenthiran Sivakumar, UIUC
Abstract: We describe a system for visually guided autonomous navigation of under-canopy farm robots. Low-cost under-canopy robots can drive between crop rows under the plant canopy and accomplish tasks that are infeasible for over-the-canopy drones or larger agricultural equipment. However, autonomously navigating them under the canopy presents a number of challenges: unreliable GPS and LiDAR, high cost of sensing, challenging farm terrain, clutter due to leaves and weeds, and large variability in appearance over the season and across crop types. We address these challenges by building a modular system that leverages machine learning for robust and generalizable perception from monocular RGB images from low-cost cameras, and model predictive control for accurate control in challenging terrain. Our system, CropFollow, is able to autonomously drive 485 meters per intervention on average, outperforming a state-of-the-art LiDAR-based system (286 meters per intervention) in extensive field testing spanning over 25 km.
Bio: Arun Narenthiran Sivakumar is a third-year Ph.D. student in the Distributed Autonomous Systems Laboratory (DASLAB) at UIUC, advised by Prof. Girish Chowdhary. He received his Bachelor’s degree in Mechanical Engineering in 2017 from VIT University, India, and his Master’s degree in Agricultural and Biological Systems Engineering with a minor in Computer Science in 2019 from the University of Nebraska-Lincoln. His research interests are applications of vision- and learning-based robotics in agriculture.
9/24/21 – Guest Talk
Title: Hello Robot: Democratizing Mobile Manipulation
Speakers: Aaron Edsinger, CEO and Cofounder, and Charlie Kemp, CTO and Cofounder, Hello Robot
Abstract: Mobile manipulators have the potential to improve life for everyone, yet adoption of this emerging technology has been limited. To encourage an inclusive future, Hello Robot developed the Stretch RE1, a compact and lightweight mobile manipulator for research that achieves a new level of affordability. The Stretch RE1 and Hello Robot’s open approach are inspiring a growing community of researchers to explore the future of mobile manipulation. In this talk, we will present the Stretch RE1 and the growing community and ecosystem around it. We will present our exciting collaboration with Professor Wendy Rogers’ lab and provide a live demonstration of Stretch. Finally, we will be announcing the Stretch Robot Pitch Competition, a collaboration with TechSage and Procter & Gamble, where students have the opportunity to generate novel design concepts for Stretch that address the needs of individuals aging with disabilities at home.
There will also be information during the seminar on a competition where winners will receive a cash prize and be able to work with Hello Robot’s Stretch robot in the McKechnie Family LIFE Home.
Bios:
Aaron Edsinger, CEO and Cofounder: Aaron has a passion for building robots and robot companies. He has founded four companies focused on commercializing human collaborative robots. Two of these companies, Meka Robotics and Redwood Robotics, were acquired by Google in 2013. As Robotics Director at Google, Aaron led the business, product, and technical development of two of Google’s central investments in robotics. Aaron received his Ph.D. from MIT CSAIL in 2007.
Charlie Kemp, CTO and Cofounder: Charlie is a recognized expert on mobile manipulation. In 2007, he founded the Healthcare Robotics Lab at Georgia Tech, where he is an associate professor in the Department of Biomedical Engineering. His lab has conducted extensive research on mobile manipulators to assist older adults and people with disabilities. Charlie earned a B.S., M.Eng., and Ph.D. from MIT. He first met Aaron while conducting research in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
Previous Talks:
See [MORE INFO] for links to speaker webpages:
This Semester’s First Talk:
9/17/21 – Faculty Talk
Title: Introduction of KIMLAB (Kinetic Intelligent Machine LAB)
Speaker: Joohyung Kim, Associate Professor of Electrical and Computer Engineering, UIUC
Abstract: In this talk, I will share what is going on in KIMLAB, the Kinetic Intelligent Machine LAB. I will briefly introduce myself, and then introduce some of the robots, research, and equipment in KIMLAB. I will also explain how our current efforts relate to my previous research and future directions.
Bio: Joohyung Kim is currently an Associate Professor of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign. His research focuses on design and control for humanoid robots, systems for motion learning in robot hardware, and safe human-robot interaction. He received BSE and Ph.D. degrees in Electrical Engineering and Computer Science (EECS) from Seoul National University, Korea. Prior to joining UIUC, he was a Research Scientist at Disney Research, working on animation character robots.
5/07/21 – Student Talks
Title: A Comparison Between Joint Space and Task Space Mappings for Dynamic Teleoperation of an Anthropomorphic Robotic Arm in Reaction Tests
Speaker: Sunyu Wang, UIUC
Abstract: Teleoperation, i.e., controlling a robot with human motion, is promising for enabling a humanoid robot to move as dynamically as a human. But how human motion is mapped to the humanoid matters, because a human and a humanoid robot rarely have identical topologies and dimensions. This work presents an experimental study that uses reaction tests to compare joint space and task space mappings for dynamic teleoperation of an anthropomorphic robotic arm that possesses human-level dynamic motion capabilities. The experimental results suggest that the robot achieved similar and, in some cases, human-level dynamic performance with both mappings for the six participating human subjects. All subjects became proficient at teleoperating the robot with both mappings after practice, even though the subjects and the robot differed in size and link length ratio and the teleoperation required the subjects to move unintuitively. Yet, most subjects developed their teleoperation proficiency more quickly with the task space mapping than with the joint space mapping after similar amounts of practice. This study also indicates the potential value of three-dimensional task space mapping, a teleoperation training simulator, and force feedback to the human pilot for intuitive and dynamic teleoperation of a humanoid robot’s arms.
Title: Safe and Efficient Robot Learning Using Structured Policies
Speaker: Anqi Li, UW
Abstract: Traditionally, modeling and control techniques have been regarded as the fundamental tools for studying robotic systems. Although they can provide theoretical guarantees, these tools make limiting modeling assumptions. Recently, learning-based methods have shown success in tackling problems that are challenging for traditional techniques. Despite their advantages, it is unrealistic to directly apply most learning algorithms to robotic systems due to issues such as sample complexity and safety concerns. In this line of work, we aim to make robot learning explainable, sample-efficient, and safe by construction through encoding structure into policy classes. In particular, we focus on a class of structured policies for robotic problems with multiple objectives. Complex motions are generated by combining simple behaviors given by Riemannian Motion Policies (RMPs). It can be shown that the combined policy is stable if the individual policies satisfy a class of control Lyapunov conditions, which can imply safety. Given such a policy representation, we learn policies with this structure so that formal guarantees are provided. To do so, we keep the safety-critical policies, e.g., collision avoidance and joint-limit policies, fixed during learning. We can also make use of the known robot kinematics. We show that learning with such structure is effective on a number of learning-from-human-demonstration tasks and reinforcement learning tasks.
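For context, the standard RMP combination rule from the literature is easy to state in code: each behavior contributes a desired acceleration and a Riemannian metric, and the structured policy resolves them by a metric-weighted average (the two example behaviors below are made up for illustration).

```python
# RMP combination: each behavior i supplies an acceleration a_i and a metric M_i,
# and the combined policy is a = (sum_i M_i)^+ (sum_i M_i a_i).
import numpy as np

def combine_rmps(accels, metrics):
    M = sum(metrics)
    return np.linalg.pinv(M) @ sum(Mi @ ai for Mi, ai in zip(metrics, accels))

# Example: a goal-reaching behavior plus a collision-avoidance behavior in 2-D.
a_goal  = np.array([1.0, 0.0]);  M_goal  = np.eye(2)
a_avoid = np.array([0.0, 2.0]);  M_avoid = np.diag([0.0, 5.0])   # only cares about y
print(combine_rmps([a_goal, a_avoid], [M_goal, M_avoid]))         # ~ [1.0, 1.667]
```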
4/30/21 – Faculty Talks
Speaker: Prof. Kaiyu Hang, Rice
Abstract: Dexterous manipulation is an integral task involving a number of subproblems, such as perception, planning, and control. Problem representations, which are essential elements of a system that define what problem is actually being considered, determine both the capability of a system and the feasibility of applying such a system to real tasks.
4/23/21 – Student Talks
Title: Optimization-based Control for Highly Dynamic Legged Locomotion
Speaker: Dr. Yanran Ding, UIUC
Abstract: Legged animals in nature can perform highly dynamic movements elegantly and efficiently, whether running down a steep hill or leaping between branches. Transferring part of this animal agility to a legged robot would open countless possibilities in disaster response, transportation, and space exploration. The topic of this talk is motion control for highly dynamic legged locomotion. In this talk, instantaneous control of a small and agile quadruped, Panther, is presented in a squat-jumping experiment in which it reached a maximal height of 0.7 m using a quadratic program (QP)-based reactive controller. Control over a short prediction horizon is achieved in real time with a model predictive control (MPC) framework. We present a representation-free MPC (RF-MPC) formulation that directly uses the rotation matrix to describe orientation, which enables complex 3D acrobatic motions that were previously unachievable using Euler angles due to the presence of singularities. We experimentally validate the motion control methods on Panther.
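A small illustration of the singularity-free orientation error used by rotation-matrix formulations (a generic sketch, not the RF-MPC code): the error between a desired and current attitude is taken from the matrix logarithm of R_des^T R, which stays well defined at attitudes where Euler angles break down.

```python
# Orientation error on SO(3) via the matrix log: no gimbal lock at 90-degree pitch.
import numpy as np
from scipy.spatial.transform import Rotation as R

def orientation_error(R_cur, R_des):
    # so(3) error vector between current and desired rotation matrices.
    return R.from_matrix(R_des.T @ R_cur).as_rotvec()

R_des = R.from_euler("xyz", [0.0, np.pi / 2, 0.0]).as_matrix()   # Euler-singular attitude
R_cur = R.from_euler("xyz", [0.1, np.pi / 2, -0.05]).as_matrix()
print(orientation_error(R_cur, R_des))    # well-defined small error vector
```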
Title: Hand Modeling and Simulation Using Stabilized Magnetic Resonance Imaging
Speaker: Bohan Wang, USC
Abstract: We demonstrate how to acquire complete human hand bone anatomy (meshes) in multiple poses using magnetic resonance imaging (MRI). Such acquisition was previously difficult because MRI scans must be long for high-precision results (over 10 minutes) and because humans cannot hold the hand perfectly still in non-trivial and badly supported poses. We invent a manufacturing process whereby we use lifecasting materials commonly employed in the film special-effects industry to generate hand molds, personalized to the subject and to each pose. These molds are both ergonomic and encasing, and they stabilize the hand during scanning. We also demonstrate how to efficiently segment the MRI scans into individual bone meshes in all poses, and how to correspond each bone’s mesh to the same mesh connectivity across all poses. Next, we interpolate and extrapolate the MRI-acquired bone meshes to the entire range of motion of the hand, producing an accurate data-driven animation-ready rig for bone meshes. We also demonstrate how to acquire not just bone geometry (using MRI) in each pose, but also a matching, highly accurate surface geometry (using optical scanners) in each pose, modeling skin pores and wrinkles. We also provide a soft-tissue Finite Element Method simulation “rig”, consisting of novel tet meshing for stability at the joints, spatially varying geometric and material detail, and quality constraints to the acquired skeleton kinematic rig. Given an animation sequence of hand joint angles, our FEM soft-tissue rig produces quality hand surface shapes in arbitrary poses within the hand’s range of motion. Our results qualitatively reproduce important features seen in photographs of the subject’s hand, such as similar overall organic shape and fold formation.
4/16/21 – Faculty Talks
Speaker: Dr. Rahul Shome, Rice
4/09/21 – Student Talks
Title: Long-Term Pedestrian Trajectory Prediction Using Mutable Intention Filter and Warp LSTM
Speaker: Zhe Huang, UIUC
Abstract: Trajectory prediction is one of the key capabilities for robots to safely navigate and interact with pedestrians. Critical insights from human intention and behavioral patterns need to be integrated to effectively forecast long-term pedestrian behavior. We therefore propose a framework incorporating a mutable intention filter and a Warp LSTM (MIF-WLSTM) to simultaneously estimate human intention and perform trajectory prediction. The mutable intention filter is inspired by particle filtering and genetic algorithms, where particles represent intention hypotheses that can mutate throughout the pedestrian’s motion. Instead of predicting sequential displacement over time, our Warp LSTM learns to generate offsets on a full trajectory predicted by a nominal intention-aware linear model, which considers the intention hypotheses during the filtering process. Through experiments on a publicly available dataset, we show that our method outperforms baseline approaches and demonstrate its robust performance under abnormal intention-changing scenarios.
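A bare-bones particle filter over goal hypotheses, with occasional mutation, captures the spirit of the mutable intention filter (the motion model, noise levels, and mutation rate below are invented for the example, not the MIF-WLSTM implementation):

```python
# Toy intention filter: particles are goal hypotheses, weighted by how well a
# goal-directed step model explains observed motion, with occasional mutation.
import numpy as np

rng = np.random.default_rng(0)
goals = np.array([[5.0, 0.0], [5.0, 5.0], [0.0, 5.0]])    # candidate destinations

def filter_step(particles, weights, pos, step, speed=0.2, mut_rate=0.05):
    # Likelihood: the observed step should point roughly toward the hypothesized goal.
    pred = goals[particles] - pos
    pred = speed * pred / (np.linalg.norm(pred, axis=1, keepdims=True) + 1e-9)
    w = weights * np.exp(-np.sum((pred - step) ** 2, axis=1) / (2 * 0.05 ** 2))
    w /= w.sum() + 1e-12
    # Resample, then mutate a small fraction of particles to a random goal.
    particles = rng.choice(particles, size=len(particles), p=w)
    mutate = rng.random(len(particles)) < mut_rate
    particles[mutate] = rng.integers(0, len(goals), mutate.sum())
    return particles, np.full(len(particles), 1.0 / len(particles))

particles = rng.integers(0, len(goals), 200)
weights = np.full(200, 1.0 / 200)
pos = np.zeros(2)
for _ in range(30):                                        # pedestrian actually walks to goal 1
    step = 0.2 * (goals[1] - pos) / np.linalg.norm(goals[1] - pos)
    particles, weights = filter_step(particles, weights, pos, step)
    pos += step
print("P(goal):", np.bincount(particles, minlength=len(goals)) / len(particles))
```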
Title: Robot Learning through Interactions with Humans
Speaker: Shuijing Liu, UIUC
Abstract: As robots are becoming prevalent in people’s daily lives, it is important for them to learn to make intelligent decisions in interactive environments with humans. In this talk, I will present our recent works on learning-based robot decision making, through different types of human-robot interactions. In one line of work, we study robot navigation in human crowds and propose a novel deep neural network that enables the robot to reason about its spatial and temporal relationships with humans. In addition, we seek to improve human-robot collaboration in crowd navigation through active human intent estimation. In another line of work, we explore the interpretation of sound for robot decision making, inspired by human speech comprehension. Similar to how humans map a sound to meaning, we propose an end-to-end deep neural network that directly interprets sound commands for visual-based decision making. We continue this work by developing robot sensorimotor contingency with sound, sight, and motors through self-supervised learning.
4/02/21 – Faculty Talks
Speaker: Assistant Professor Yuke Zhu, UT Austin
3/26/21 – Student Talks
Title: Control, Estimation and Planning for Coordinated Transport of a Slung Load By a Team of Aerial Robots
Speaker: Junyi Geng, UIUC
Abstract: This talk will discuss the development of a self-contained transportation system that uses multiple autonomous aerial robots to cooperatively transport a single slung load. A “load-leading” concept is proposed and developed for this cooperative transportation problem. Unlike existing approaches, which usually fly a formation and treat the external slung load as a disturbance, thereby ignoring the payload dynamics, this approach attaches sensors onboard the payload so that the payload can sense itself and lead the whole fleet. This unique design leads to a hierarchical load-leading control strategy, which is scalable and allows human-in-the-loop operation in addition to fully autonomous operation. It also enables a strategy for estimating payload parameters so as to improve model accuracy: by manipulating the payload through the cables driven by the drones, the payload inertial parameters can be estimated, and the closed-loop performance is thus improved. The payload-leading design also leads to convenient cooperative trajectory planning, which reduces to a simpler planning problem for the payload. Lastly, a load-distribution-based trajectory planning and control approach is developed to achieve near-equal load distribution among the aerial vehicles for energy efficiency. This whole payload-leading design enables the cooperative transportation team to fly longer, farther, and smarter. Components of this system are tested in simulation and in indoor and outdoor flight experiments, demonstrating the effectiveness of the developed slung-load transportation system.
Title: REDMAX: Efficient & Flexible Approach for Articulated Dynamics
Speaker: Ying Wang, Texas A&M
3/12/21 – Student Talks
Title: Creating Practical Magnetic Indoor Positioning Systems
Speaker: David Hanley, UIUC
Abstract: Steel studs, HVAC systems, rebar, and many other building components produce spatially varying magnetic fields. Magnetometers can measure these fields and can be used in combination with inertial sensors for indoor navigation of robots and handheld devices like smartphones. This talk takes an empirical approach to improving the performance of magnetic field-based navigation systems in practice. In support of this goal, a dataset intended to improve empirical studies of these systems within the research community will be described. Then the impact that a commonly used “planar assumption” has on the accuracy of current magnetic field-based navigation systems will be presented. The lack of robustness shown in this evaluation motivates both new algorithms for this type of navigation and new hardware, progress on both of which will be discussed.
Title: Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis
Speaker: Youngwoo Sim, UIUC
Abstract: Industrial manipulators do not collapse under their own weight when powered off due to the friction in their joints. Although these mechanisms are effective for stiff position control in pick-and-place, they are inappropriate for legged robots, which must rapidly regulate compliant interactions with the environment. However, no metric exists to quantify a robot’s performance degradation due to mechanical losses in the actuators. We provide a novel formulation that describes how the efficiency of individual actuators propagates to the equations of motion of the whole robot. We quantitatively demonstrate the intuitive fact that the apparent inertia of a robot increases in the presence of joint friction. We also reproduce the empirical result that robots employing high gearing and low-efficiency actuators can statically sustain more substantial external loads. We expect that this framework will provide the foundation for designing the next generation of legged robots that can effectively interact with the world.
3/05/21 – Faculty Talks
Speaker: Associate Professor Julia Hockenmaier, UIUC
Abstract: Virtual gaming platforms such as Minecraft allow us to study situated natural language generation and understanding tasks for agents that operate in complex 3D environments. In this talk, I will present work done by my group on defining a collaborative Blocks World construction task in Minecraft. In this task, one player (the Architect) needs to instruct another (the Builder) via a chat interface to construct a given target structure that only the Architect is shown. Although humans easily complete this task (often after lengthy back-and-forth dialogue), creating agents for each of these roles poses a number of challenges for current NLP technologies. To understand these challenges, I will describe the dataset we have collected for this task, as well as the models that we have developed for both roles. I look forward to a discussion of how to adapt this work to natural language communication with actual robots rather than simulated agents.
Bio: Julia Hockenmaier is an associate professor at the University of Illinois at Urbana-Champaign. She has received a CAREER award for her work on CCG-based grammar induction and an IJCAI-JAIR Best Paper Prize for her work on image description. She has served as member and chair of the NAACL board, president of SIGNLL, and as program chair of CoNLL 2013 and EMNLP 2018.
2/19/21 – Student Talks
Title: Optimization-Based Visuotactile Deformable Object Capture
Speaker: Zherong Pan, UIUC
Abstract: Robots interact with deformable objects all the time and rely on the perception system to estimate their state. While the shape of an object can be captured visually, its physical properties must be estimated through tactile interactions. We propose an optimization-based formulation that reconstructs a simulation-ready deformable object from multiple drape shapes under gravity. Starting from a trivial initial guess, our method optimizes both the rest shape and the material parameters to register the mesh with observed multi-view point cloud data, where we derive analytic gradients from the implicit function theorem. We further interleave the optimization with remeshing operators to ensure high mesh quality. Experiments on beam recovery problems show that our optimizer can infer internal anisotropic material distributions and a large variation of rest shapes.
Title: Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis
Speaker: Ye Yuan, CMU
Abstract: Reinforcement learning has shown great promise for synthesizing realistic human behaviors by learning humanoid control policies from motion capture data. However, it is still very challenging to reproduce sophisticated human skills like ballet dance, or to stably imitate long-term human behaviors with complex transitions. The main difficulty lies in the dynamics mismatch between the humanoid model and real humans. That is, motions of real humans may not be physically possible for the humanoid model. To overcome the dynamics mismatch, we propose a novel approach, residual force control (RFC), that augments a humanoid control policy by adding external residual forces into the action space. During training, the RFC-based policy learns to apply residual forces to the humanoid to compensate for the dynamics mismatch and better imitate the reference motion. Experiments on a wide range of dynamic motions demonstrate that our approach outperforms state-of-the-art methods in terms of convergence speed and the quality of learned motions. Notably, we showcase a physics-based virtual character empowered by RFC that can perform highly agile ballet dance moves such as pirouette, arabesque and jeté. Furthermore, we propose a dual-policy control framework, where a kinematic policy and an RFC-based policy work in tandem to synthesize multi-modal infinite-horizon human motions without any task guidance or user input. Our approach is the first humanoid control method that successfully learns from a large-scale human motion dataset (Human3.6M) and generates diverse long-term motions.
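As a rough illustration of the action-space augmentation described above, here is a minimal Python sketch; it is not the authors’ implementation, and the simulator handle `sim`, its `set_joint_torques` / `apply_root_wrench` / `advance` methods, the 6-D residual wrench, and the force scale are all assumptions made for illustration.

import numpy as np

class ResidualForceActionWrapper:
    """Splits an extended action into joint torques plus a residual wrench
    applied to the humanoid's root, in the spirit of RFC. `sim` is a
    hypothetical simulator interface, not a real library API."""

    def __init__(self, sim, num_joints, force_scale=100.0):
        self.sim = sim
        self.num_joints = num_joints
        self.force_scale = force_scale  # assumed bound on residual forces

    def step(self, action):
        torques = action[: self.num_joints]               # ordinary joint torques
        residual = self.force_scale * np.tanh(action[self.num_joints:])
        self.sim.set_joint_torques(torques)               # actuate the humanoid
        self.sim.apply_root_wrench(residual)              # compensate dynamics mismatch
        return self.sim.advance()                         # advance physics one step

During training, one would typically add a penalty on the magnitude of the residual wrench to the imitation reward, so the policy invokes external forces only where the humanoid model cannot physically reproduce the reference motion.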
2/12/21 – Opening Panel Discussion
How Does COVID-19 Affect Your Research?
Abstract: The COVID-19 pandemic has had an unprecedented impact on US academia and research enterprises. On the downside, university revenues have declined due to drops in undergraduate enrollment and unstable external funding sources. Many traditional research activities have been suspended since last spring, especially in the STEM fields, and the disruption has revealed limitations in collaboration and communication facilities and services. On the upside, however, the pandemic has served as a catalyst for increased use of and innovation in robotics, including autonomous devices for infection control, temperature screening, and movement tracking. In addition, social robotics and virtual avatars help people stay connected and reduce anxiety during quarantine. In this seminar, we invite four faculty members, Negar Mehr, Joao Ramos, Geir Dullerud, and Kris Hauser, to share their experiences of the challenges and opportunities brought by the pandemic.
Talks from Inaugural Semester:
The Robots are Coming – to your Farm! Autonomous and Intelligent Robots in Unstructured Field Environments
Girish Chowdhary
Assistant Professor
Agricultural & Biological Engineering
Distributed Autonomous Systems Lab
November 22nd, 2019

Abstract: What if a team of collaborative autonomous robots grew your food for you? In this talk, I will discuss some key advances in robotics, machine learning, and autonomy that will one day enable teams of small robots to grow food for you in your backyard in a fundamentally more sustainable way than modern mega-farms. Teams of small aerial and ground robots could be a potential solution to many of the problems that modern agriculture faces. However, fully autonomous robots that operate without supervision for weeks, months, or an entire growing season are not yet practical. I will discuss my group’s theoretical and practical work on the underlying challenging problems in autonomy, sensing, and learning. I will begin with our lightweight, compact, and highly autonomous field robot TerraSentia and its recent successes in high-throughput phenotyping for agriculture. I will also discuss new algorithms for enabling a team of robots to weed large agricultural farms autonomously under partial observability. These direct applications will then lead up to my group’s more fundamental work in reinforcement learning and adaptive control, which we believe is necessary to usher in the next generation of autonomous field robots that operate in harsh, changing, and dynamic environments.
Bio: Girish Chowdhary is an assistant professor at the University of Illinois at Urbana-Champaign with the Coordinated Science Laboratory, and the director of the Distributed Autonomous Systems Laboratory at UIUC. At UIUC, Girish is affiliated with Agricultural and Biological Engineering, Aerospace Engineering, Computer Science, and Electrical Engineering. He holds a PhD (2010) in Aerospace Engineering from the Georgia Institute of Technology. He was a postdoc at the Laboratory for Information and Decision Systems (LIDS) of the Massachusetts Institute of Technology (2011-2013), and an assistant professor at Oklahoma State University’s Mechanical and Aerospace Engineering department (2013-2016). He also worked with the German Aerospace Center’s (DLR’s) Institute of Flight Systems for around three years (2003-2006). Girish’s ongoing research interest is in theoretical insights and practical algorithms for adaptive autonomy, with a particular focus on field robotics. He has authored over 90 peer-reviewed publications in various areas of adaptive control, robotics, and autonomy. On the practical side, Girish has led the development and flight-testing of over 10 research UAS platforms. UAS autopilots based on Girish’s work have been designed and flight-tested on six UASs, including by independent international institutions. Girish is an investigator on NSF, AFOSR, NASA, ARPA-E, and DOE grants. He is the winner of the Air Force Young Investigator Award, the Aerospace Guidance and Controls Systems Committee Dave Ward Memorial award, and several best paper awards, including a best systems paper award at RSS 2018 for his work on the agricultural robot TerraSentia. He is the co-founder of EarthSense Inc., working to make ultralight outdoor robotics a reality.
Student Talks:
Design and Control of a Quadrupedal Robot for Dynamic Locomotion
Yanran Ding
November 15th, 2019

Abstract: Legged animals have shown versatile mobility, traversing challenging terrains via a variety of well-coordinated dynamic motions. This remarkable mobility has inspired the development of many legged robots and associated research seeking dynamic legged locomotion. This talk explores the design and control of a small-scale quadrupedal robot prototype for dynamic motions. Here we present a hardware-software co-design scheme for the proprioceptive actuator and a model predictive control (MPC) framework for a wide variety of dynamic motions.
Bio: Yanran Ding is a 4th-year Ph.D. student in the Mechanical Science and Engineering Department at the University of Illinois at Urbana-Champaign. He received his B.S. degree in Mechanical Engineering from the UM-SJTU Joint Institute, Shanghai Jiao Tong University, Shanghai, China in 2015. His research interests include the design of agile robotic systems and optimization-based control for legged robots to achieve dynamic motions. He was a Best Student Paper finalist at the International Conference on Intelligent Robots and Systems (IROS) 2017.
Adapt-to-Learn: Policy Transfer in Reinforcement Learning
Girish Joshi
November 15th, 2019

Abstract: Efficient and robust policy transfer remains a key challenge in reinforcement learning. Policy transfer through warm initialization, imitation, or interaction with a large set of agents over randomized instances has been commonly applied to solve a variety of Reinforcement Learning (RL) tasks. However, this is far from how behavior transfer happens in the biological world: humans and animals are able to quickly adapt learned behaviors between similar tasks and learn new skills when presented with new situations. We introduce a principled mechanism that can “Adapt-to-Learn”, that is, adapt the source policy to learn to solve a target task with significant transition differences and uncertainties. We show through theory and experiments that our method leads to a significantly reduced sample complexity when transferring policies between tasks.
Bio: Girish Joshi is a graduate student at the DAS Lab at UIUC working under Dr. Girish Chowdhary. Prior to joining UIUC, he completed his master’s degree at the Indian Institute of Science, Bangalore. His research interests are in sample-efficient policy transfer in RL, cross-domain skill transfer in RL, information-enabled adaptive control for cyber-physical systems, and Bayesian nonparametric approaches to adaptive control and decision making in non-stationary environments.
Designing Robots to Support Successful Aging: Potential and Challenges
Wendy Rogers
Professor, Kinesiology and Community Health
Human Factors and Aging Laboratory
November 8th, 2019

Abstract: There is much potential for robots to support older adults in their goal of successful aging with high quality of life. However, for human-robot interactions to be successful, the robots must be designed with user needs, preferences, and attitudes in mind. The Human Factors and Aging Laboratory (www.hfaging.org) is specifically oriented toward developing a fundamental understanding of aging and bringing that knowledge to bear on design issues important to the enjoyment, quality, and safety of everyday activities of older adults. In this presentation, I will provide an overview of our research with robots: personal, social, telepresence. We focus on the human side of human-robot interaction, answering questions such as, are older adults willing to interact with a robot? What do they want the robot to do? To look like? How do they want to communicate with a robot? Through research examples, I will illustrate the potential for robots to support successful aging as well as the challenges that remain for the design and widespread deployment of robots in this context.
Bio: Wendy A. Rogers, Ph.D. – Shahid and Ann Carlson Khan Professor of Applied Health Sciences at the University of Illinois Urbana-Champaign. Her primary appointment is in the Department of Kinesiology and Community Health. She also has an appointment in the Educational Psychology Department and is an affiliate faculty member of the Beckman Institute and the Illinois Informatics Institute. She received her B.A. from the University of Massachusetts – Dartmouth, and her M.S. (1989) and Ph.D. (1991) from the Georgia Institute of Technology. She is a Certified Human Factors Professional (BCPE Certificate #1539). Her research interests include design for aging; technology acceptance; human-automation interaction; aging-in-place; human-robot interaction; aging with disabilities; cognitive aging; and skill acquisition and training. She is the Director of the Health Technology Graduate Program; Program Director of CHART (Collaborations in Health, Aging, Research, and Technology); and Director of the Human Factors and Aging Laboratory (www.hfaging.org). Her research is funded by: the National Institutes of Health (National Institute on Aging) as part of the Center for Research and Education on Aging and Technology Enhancement (www.create-center.org); and the Department of Health and Human Services (National Institute on Disability, Independent Living, and Rehabilitation Research; NIDILRR) Rehabilitation Engineering Research Center on Technologies to Support Aging-in-Place for People with Long-term Disabilities (www.rercTechSAge.org). She is a fellow of the American Psychological Association, the Gerontological Society of America, and the Human Factors and Ergonomics Society.
Student Talks:
Towards Soft Continuum Arms for Real World Applications
Naveen Kumar Uppalapati
November 1st, 2019

Abstract: Soft robots are gaining significant attention from the robotics community due to their adaptability, safety, lightweight construction, and cost-effective manufacturing. They have found use in manipulation, locomotion, and wearable devices. In manipulation, Soft Continuum Arms (SCAs) are used to explore uneven terrains, handle objects of different sizes, and interact safely with the environment. Current SCAs use a serial combination of multiple segments to achieve higher dexterity and a larger workspace. However, the serial architecture leads to an increase in overall weight, hardware, and power requirements, thus limiting their use in real-world applications. In this talk, I will give insight into the design of compact and lightweight SCAs. The SCAs use pneumatically actuated Fiber Reinforced Elastomeric Enclosures (FREEs) as their building blocks. A single-section BR2 SCA design is shown to have greater dexterity and workspace than current state-of-the-art SCAs. I will present a hybrid of the soft arm and rigid links, known as the Variable Length Nested Soft (VaLeNS) arm, which was designed to obtain the attributes of stiffness modulation and force transfer. Finally, I will present a mobile robot prototype for a berry-picking application.
Bio: Naveen Kumar Uppalapati is a 6th-year Ph.D. student in the Dept. of Industrial and Enterprise Systems Engineering at the University of Illinois. He received his bachelor’s degree in Instrumentation and Control Engineering in 2013 from the National Institute of Technology, Tiruchirappalli, and his master’s degree in Systems Engineering in 2016 from the University of Illinois. His research interests are the design and modeling of soft robots, sensor design, and controls.
Toward Human-like Teleoperated Robot Motion: Performance and Perception of a Choreography-inspired Method in Static and Dynamic Tasks for Rapid Pose Selection of Articulated Robots
Allison Bushman
November 1st, 2019

Abstract: In some applications, operators may want to create fluid, human-like motion on a remotely-operated robot, for example, a device used for remote telepresence in hostage negotiation. This paper examines two methods of controlling the pose of a Baxter robot via an Xbox One controller. The first method is a joint-by-joint (JBJ) method in which one joint of each limb is specified in sequence. The second method of control, named Robot Choreography Center (RCC), utilizes choreographic abstractions in order to simultaneously move multiple joints of the limb of the robot in a predictable manner. Thirty-eight users were asked to perform four tasks with each method. Success rate and duration of successfully completed tasks were used to analyze the performances of the participants. Analysis of the preferences of the users found that the joint-by-joint (JBJ) method was considered to be more precise, easier to use, safer, and more articulate, while the choreography-inspired (RCC) method of control was perceived as faster, more fluid, and more expressive. Moreover, performance data found that while both methods of control were over 80% successful for the two static tasks, the RCC method was an average of 11.85% more successful for the two more difficult, dynamic tasks. Future work will leverage this framework to investigate ideas of fluidity, expressivity, and human-likeness in robotic motion through online user studies with larger participant pools.
Bio: Allison Bushman is a second-year master’s student in the Dept. of Mechanical Science and Engineering at the University of Illinois at Urbana-Champaign. She received her bachelor’s degree in mechanical engineering in 2014 from Yale University. She currently works in the RAD Lab to understand what parameters are necessary in deeming a movement natural or fluid, particularly as it pertains to designing movement in robots.
Dynamic Synchronization of Human Operator and Humanoid Robot via Bilateral Feedback Teleoperation
Joao Ramos
Assistant Professor
Mechanical Science and Engineering

Abstract:
Autonomous humanoid robots are still far from matching the sophistication and adaptability of human perception and motor control. To address this issue, I investigate the utilization of human whole-body motion to command a remote humanoid robot in real time, while providing the operator with physical feedback from the robot’s actions. In this talk, I will present the challenges of virtually connecting the human operator with a remote machine in a way that allows the operator to utilize innate motor intelligence to control the robot’s interaction with the environment. I will present pilot experiments in which an operator controls a humanoid robot to perform power manipulation tasks, such as swinging a firefighter axe to break a wall, and dynamic locomotion behaviors, such as walking and jumping.
Bio:
Joao Ramos is an Assistant Professor at the University of Illinois at Urbana-Champaign. He previously worked as a Postdoctoral Associate at the Biomimetic Robotics Laboratory at the Massachusetts Institute of Technology. He received a PhD from the Department of Mechanical Engineering at MIT in 2018. During his doctoral research, he developed teleoperation systems and strategies to dynamically control a humanoid robot utilizing human whole-body motion via bilateral feedback. His research focuses on the design and control of robotic systems that experience large forces and impacts, such as the MIT HERMES humanoid, a prototype platform for disaster response. Additionally, his research interests include human-machine interfaces, legged locomotion dynamics, and actuation systems.
Some Thoughts on Learning Reward Functions
Bradly Stadie
Post-Doctoral Researcher
Vector Institute in Toronto

Abstract:
In reinforcement learning (RL), agents typically optimize a reward function to learn a desired behavior. In practice, crafting reward functions that produce intended behaviors is fiendishly difficult. Due to the curse of dimensionality, sparse rewards are typically too difficult to optimize without carefully chosen curricula. Meanwhile, dense reward functions often encourage unintended behaviors or present overly cumbersome optimization landscapes. To handle these problems, a vast body of work on reward function design has emerged. In this talk, we will recast the reward function design problem into a learning problem. Specifically, we will consider two new algorithms for automatically learning reward functions. First, in Evolved Policy Gradients (EPG), we will carefully consider the problem of meta-learning reward functions. Given a distribution of tasks, can we meta-learn a parameterized reward function that generalizes to new tasks? Does this learned reward allow the agent to solve new tasks more efficiently than our original hand-designed rewards? Second, in Learning Intrinsic Rewards as a Bi-Level Optimization Problem, we consider the problem of learning a more effective reward function in the single-task setting. By using Self-Tuning Networks and tricks from the hyper-parameter optimization literature, we develop an algorithm that produces a better optimization landscape for the agent to learn against. This better optimization landscape ultimately allows the agent to achieve superior performance on a variety of challenging locomotion tasks, when compared to simply learning against the original hand-designed reward.
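The bi-level structure described above can be illustrated with a deliberately tiny, self-contained toy; this is a hedged sketch of the inner/outer-loop idea only, not EPG or the Self-Tuning Networks method, and the quadratic objectives, step sizes, and iteration counts are made up.

import numpy as np

# Inner loop: an "agent" does gradient ascent on a learned reward r_phi.
# Outer loop: the reward parameter phi is updated so that the agent's
# resulting behaviour scores well on the true (hand-designed) objective.

def true_objective(x):
    return -(x - 3.0) ** 2            # what we actually care about

def inner_loop(phi, x0=0.0, steps=50, lr=0.2):
    """Agent maximizes the learned reward r_phi(x) = -(x - phi)^2."""
    x = x0
    for _ in range(steps):
        x += lr * (-2.0 * (x - phi))  # gradient of r_phi with respect to x
    return x

phi, outer_lr, eps = 0.0, 0.1, 1e-3
for _ in range(200):
    # Finite-difference estimate of d true_objective(inner_loop(phi)) / d phi.
    grad = (true_objective(inner_loop(phi + eps)) -
            true_objective(inner_loop(phi - eps))) / (2 * eps)
    phi += outer_lr * grad

print(f"learned reward parameter phi is about {phi:.2f}")  # drifts toward 3.0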
Bio:
Bradly Stadie is a postdoctoral researcher at the Vector Institute in Toronto, where he works with Jimmy Ba’s group. Bradly’s overarching research goal is to develop algorithms that allow machines to learn as quickly and flexibly as humans do. At Toronto, Bradly has worked on a variety of topics including reward function learning, causal inference, neural network compression, and one-shot imitation learning. Earlier in his career, he provided one of the first algorithms for efficient exploration in deep reinforcement learning. Bradly completed his PhD under Pieter Abbeel at UC Berkeley. He received his undergraduate degree in mathematics from the University of Chicago.
Human-like Robots and Robotic Humans: Who Engineers Who?
Ben Grosser
Associate Professor, School of Art + Design / NCSA

Abstract:
For a while now we’ve watched robots regularly take on new human tasks, especially those that can be made algorithmic such as vacuuming the floor. But the same time frame has also seen growing numbers of experiments with artistic robots, machines made by artists that take on aesthetic tasks of production in art or music. This talk will focus on the complicated relationship between humans and machines by looking at a number of artworks by the author. These will include not only art making robots that many perceive as increasingly human, but also code-based manipulations of popular software systems that reveal how humans are becoming increasingly robotic. In an era when machines act like humans and humans act like machines, who is engineering who?
Bio:
Artist Ben Grosser creates interactive experiences, machines, and systems that examine the cultural, social, and political effects of software. Recent exhibition venues include the Barbican Centre in London, Museum Kesselhaus in Berlin, Museu das Comunicações in Lisbon, and Galerie Charlot in Paris. His works have been featured in The New Yorker, Wired, The Atlantic, The Guardian, The Washington Post, El País, Libération, Süddeutsche Zeitung, and Der Spiegel. The Chicago Tribune called him the “unrivaled king of ominous gibberish.” Slate referred to his work as “creative civil disobedience in the digital age.” His artworks are regularly cited in books investigating the cultural effects of technology, including The Age of Surveillance Capitalism, The Metainterface, Facebook Society, and Technologies of Vision, as well as volumes centered on computational art practices such as Electronic Literature, The New Aesthetic and Art, and Digital Art. Grosser is an associate professor in the School of Art + Design, co-founder of the Critical Technology Studies Lab at NCSA, and a faculty affiliate with the Unit for Criticism and the School of Information Sciences. https://bengrosser.com
Student Talks:
Multi-Contact Humanoid Fall Mitigation In Cluttered Environment
Shihao Wang
October 4, 2019
Abstract:
Humanoid robots are expected to take on critical roles in the real world in the future. However, this vision cannot be achieved until a reliable fall mitigation strategy has been proposed and validated. Due to their high center of mass, humanoid robots are at high risk of falling to the ground. In this case, we would like the robot to utilize nearby environment objects for fall protection. This presentation discusses my past work on robot fall protection, planning multi-contact strategies for fall recovery in cluttered environments. We believe that the capability to make use of the robot’s contacts provides an effective solution for fall protection.
Bio:
Shihao Wang is a 4th-year Ph.D. student in the Department of Mechanical Engineering and Materials Science at Duke University. He is originally from China; he received his bachelor’s degree in Mechanical Engineering from Beihang University in June 2014 and his master’s degree in Mechanical Engineering from Cornell University in June 2015. After one year of research at Penn State University, he joined Duke in Fall 2016 for his Ph.D.; his research focuses on robotics, legged locomotion, dynamic walking, and controls.
Optimal Control Learning by Mixture of Experts
Gao Tang
October 4, 2019
Abstract:
Optimal control problems are critical to solve for task efficiency. However, their nonconvexity limits their application, especially in time-critical tasks. Practical applications often require solving a parametric optimization problem, which is essentially a mapping from problem parameters to problem solutions. We study how to learn this mapping from offline precomputation. Due to the existence of local optima, the mapping may be discontinuous. This presentation discusses how to use a mixture-of-experts model to learn this discontinuous mapping accurately and achieve high reliability in robotic applications.
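To illustrate the idea, a mixture of experts can fit a discontinuous parameter-to-solution map by first assigning precomputed solutions to experts and then training a gating classifier plus one regressor per expert. The sketch below is a hedged toy, not the speaker’s method or data; the one-dimensional mapping, the jump location, the cluster count, and the scikit-learn models are all assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression, Ridge

# Stand-in for "optimal solution as a function of problem parameters",
# with a jump where the optimizer switches between two local optima.
rng = np.random.default_rng(0)
params = rng.uniform(-1.0, 1.0, size=(2000, 1))              # problem parameters
solutions = np.where(params < 0.2, 3.0 * params, 3.0 * params + 2.0).ravel()

# 1) Cluster the precomputed solutions to assign each sample to an "expert".
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    solutions.reshape(-1, 1))

# 2) Gating network: predicts which expert handles a given parameter.
gate = LogisticRegression().fit(params, labels)

# 3) One regressor (expert) per cluster.
experts = {k: Ridge().fit(params[labels == k], solutions[labels == k])
           for k in np.unique(labels)}

def predict(p):
    k = gate.predict(p.reshape(1, -1))[0]
    return experts[k].predict(p.reshape(1, -1))[0]

print(predict(np.array([0.1])), predict(np.array([0.3])))    # branch-aware predictions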
Bio:
Gao Tang is a 4th-year Ph.D. student in the Department of Computer Science at the University of Illinois at Urbana-Champaign. Before coming to UIUC, he spent 3 years as a Ph.D. student at Duke University. He received his bachelor’s and master’s degrees in Aerospace Engineering from Tsinghua University. His research is focused on numerical optimization and motion planning.
Bioinspired Aerial and Terrestrial Locomotion Strategies
Aimy Wissa
Assistant Professor, Mechanical Science and Engineering
Bio-inspired Adaptive Morphology (BAM) Lab
Sept. 27th, 2019

Abstract:
Nature has evolved various locomotion (self-propulsion) and shape adaptation (morphing) strategies to survive and thrive in diverse and uncertain environments. Both in air and on the ground, natural organisms continue to surpass engineered unmanned aerial and ground vehicles. Key strategies that Nature often exploits include local elasticity and adaptiveness to simplify global actuation and control. Unlike engineered systems, which rely heavily on active control, natural structures tend to also rely on reflexive and passive control. This combination of diverse control strategies yields multifunctional structures. Two examples of multifunctional structures will be presented in this talk, namely avian-inspired deployable structures and a click beetle-inspired legless jumping mechanism.
The concept of wings as multifunctional adaptive structures will be discussed, and several flight devices found on birds’ wings will be introduced as a pathway towards revolutionizing the current design of small unmanned air vehicles. Experimental, analytical, and numerical results will be presented to discuss the efficacy of such devices. The discussion of avian-inspired devices will be followed by an introduction of a click beetle-inspired jumping mechanism that exploits distributed springs to circumvent muscle limitations; such a mechanism can bypass the shortcomings of smart actuators, especially in small-scale robotics applications.
Student Talks:
CyPhyHouse: A programming, simulation, and deployment toolchain for heterogeneous distributed coordination
Ritwika Ghosh
September 20th, 2019
Abstract:
Programming languages, libraries, and development tools have transformed the application development processes for mobile computing and machine learning. CyPhyHouse is a toolchain that aims to provide similar programming, debugging, and deployment benefits for distributed mobile robotic applications. Users can develop hardware-agnostic, distributed applications using the high-level, event-driven Koord programming language, without requiring expertise in controller design or distributed network protocols. I will talk about the CyPhyHouse toolchain: its design, implementation, the challenges faced, and the lessons learned in the process.
Bio:
Ritwika Ghosh is a 6th-year PhD student in the Dept. of Computer Science at the University of Illinois. She received her bachelor’s degree in Mathematics and Computer Science from the Chennai Mathematical Institute in India in 2013. Her research interests are formal methods, programming languages, and distributed systems.
Controller Synthesis Made Real: Reach-avoid Specifications and Linear Dynamics
Chuchu Fan
September 20th, 2019
CSL Studio Rm 1232
Abstract:
The controller synthesis question asks whether an input can be generated for a given system (or a plant) so that it achieves a given specification. Algorithms for answering this question hold the promise of automating controller design. They have the potential to yield high-assurance systems that are correct-by-construction, and even negative answers to the question can convey insights about the unrealizability of specifications. There has been a resurgence of interest in controller synthesis, with the rise of powerful tools and compelling applications such as vehicle path planning, motion control, circuits design, and various other engineering areas. In this talk, I will introduce a novel approach relying on symbolic sensitivity analysis to synthesize provably correct controllers efficiently for large linear systems with reach-avoid specifications. Our solution uses a combination of an open-loop controller and a tracking controller, thereby reducing the problem to smaller tractable problems such as satisfiability over quantifier-free linear real arithmetic. I will also present RealSyn, a tool implementing the synthesis algorithm, which has been shown to scale to several high-dimensional systems with complex reach-avoid specifications.
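For readers unfamiliar with the reduction, a toy version of the open-loop part can be posed directly as a quantifier-free linear real arithmetic query, here using the Z3 SMT solver. This is a hedged sketch, not RealSyn; the one-dimensional dynamics, horizon, input bounds, and reach/avoid sets are invented for illustration.

from z3 import Real, Solver, And, Or, sat

# Open-loop reach-avoid synthesis for x[t+1] = x[t] + u[t] as a QF_LRA query.
T = 12
x = [Real(f"x_{t}") for t in range(T + 1)]
u = [Real(f"u_{t}") for t in range(T)]

s = Solver()
s.add(x[0] == 0)                                   # initial state
for t in range(T):
    s.add(x[t + 1] == x[t] + u[t])                 # linear dynamics
    s.add(And(u[t] >= -1, u[t] <= 1))              # input bounds
    s.add(Or(x[t + 1] < 4.2, x[t + 1] > 4.8))      # avoid the unsafe interval
s.add(And(x[T] >= 9, x[T] <= 10))                  # reach the goal at time T

if s.check() == sat:
    m = s.model()
    print([float(m[ui].as_fraction()) for ui in u])  # one feasible open-loop input
else:
    print("specification unrealizable for this horizon")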
Bio:
Chuchu Fan is finishing up her Ph.D. in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. She will join the AeroAstro Department at MIT as an assistant professor in 2020. She received her Bachelor’s degree from Tsinghua University, Department of Automation, in 2013. Her research interests are in the areas of formal methods and control for safe autonomy.
Dancing With Robots: Questions About Composition with Natural and Artificial Bodies
Amy LaViers – Robotics, Automation, and Dance (RAD) Lab
September 13th, 2019

Abstract:
The formulation of questions is a central yet non-specific activity: an answer can be sought through many modes of investigation, such as scientific inquiry, research and development, or the creation of art. This talk will outline guiding questions for the Robotics, Automation, and Dance (RAD) Lab, which are explored via artistic creation alongside research in robotics, each vein of inquiry informing the other, and will then focus on a few initial answers in the form of robot-augmented dances, digital spaces that track the motion of participants, artistic extensions to student engineering theses, and participatory performances that employ the audience’s own personal machines. For example, guiding questions include: By what measure do robots outperform humans? By what measures do humans outperform robots? How many ways can a human walk? Is movement a continuous phenomenon? Does it convey information? What is the utility of dance? What biases do new technologies hold? What structures can reasonably be named “leg”, “arm”, “hand”, “wing” and the like? Why does dancing feel so different than programming? What does it mean for two distinct bodies to move in unison? How does it feel to move alongside a robot? In order to frame these questions in an engineering context, this talk also presents an information-theoretic model of expression through motion, where artificial systems are modeled as a source communicating across a channel to a human receiver.
Bio:
Amy LaViers is an assistant professor in the Mechanical Science and Engineering Department at the University of Illinois at Urbana-Champaign (UIUC) and director of the Robotics, Automation, and Dance (RAD) Lab. She is a recipient of a 2015 DARPA Young Faculty Award (YFA) and 2017 Director’s Fellowship. Her teaching has been recognized on UIUC’s list of Teachers Ranked as Excellent By Their Students, with Outstanding distinction. Her choreography has been presented internationally, including at Merce Cunningham’s studios, Joe’s Pub at the Public Theater, the Ferst Center for the Arts, and the Ammerman Center for Arts and Technology. She is a co-founder of two startup companies: AE Machines, Inc, an automation software company that won Product Design of the Year at the 4th Revolution Awards in Chicago in 2017 and was a finalist for Robot of the Year at Station F in Paris in 2018, and caali, LLC, an embodied media company that is developing an interactive installation at the Microsoft Technology Center in Chicago. She completed a two-year Certification in Movement Analysis (CMA) in 2016 at the Laban/Bartenieff Institute of Movement Studies (LIMS). Prior to UIUC she held a position as an assistant professor in systems and information engineering at the University of Virginia. She completed her Ph.D. in electrical and computer engineering at Georgia Tech with a dissertation that included a live performance exploring stylized motion. Her research began in her undergraduate thesis at Princeton University where she earned a certificate in dance and a degree in mechanical and aerospace engineering.