A Comprehensive and Modular Robotic Control Framework for Model-Less Control Law Development Using Reinforcement Learning for Soft Robotics

Charles Edward Sullivan, University of Texas at El Paso

Abstract

Soft robotics is a growing field of robotics research. Heavily inspired by biological systems, soft robots are built from compliant, non-linear materials such as elastomers and are actuated by a range of novel methods, from fluidic actuation channels to shape-changing materials such as electroactive polymers. The highly non-linear materials make modeling difficult, and suitable sensors remain an area of active research. These issues often render conventional modeling and control techniques inadequate for soft robotics. Reinforcement learning is a branch of machine learning focused on model-less control: it learns a mapping from states to actions that maximizes a reward signal. Reinforcement learning has matured to the point that accessible tools and methods are available to anyone; in robotics, however, these tools are typically applied in simulation, with little work on learning directly on hardware. For the interested researcher, getting started in soft robotics can be a daunting process. Previous attempts to standardize basic control hardware for pneumatically actuated soft robots fall short in areas including, but not limited to, automatic and repeatable control, scalability and modularity, and sensor feedback.

This thesis develops a complete, integrated, and modular framework for soft robot development and control, comprising actuation hardware, sensor data acquisition, ground-truth measurement techniques, and reinforcement learning integration for controls development. The framework accomplishes two things. First, it addresses the shortcomings of existing tools such as the Soft Robotics Toolkit by providing a hardware and software framework for accurate pneumatic control that is inexpensive, accessible, and scalable. Second, it integrates existing reinforcement learning tools and workflows into that framework, using the Robot Operating System (ROS) as a messaging backbone. A focus on modular design principles allows additional capabilities to be integrated easily, and different actuation methods, sensor technologies, ground-truth techniques, or learning algorithms to be exchanged in future work.

Prototype soft robots are developed and used to verify full system functionality. The robots are then characterized using the framework and trained, without a prior model, with Twin Delayed Deep Deterministic Policy Gradient (TD3) in 8000 training steps. The learned policy is compared with a random policy to demonstrate its effectiveness.
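The abstract does not include implementation details, so the following is only a minimal sketch of what hardware-in-the-loop TD3 training along these lines can look like. It assumes the Stable-Baselines3 TD3 implementation and a Gymnasium environment wrapping the pneumatic hardware; the environment class, sensor layout, ROS topic names, and reward function are all hypothetical illustrations. Only the choice of TD3 and the 8000-step training budget come from the abstract itself.

```python
"""Minimal sketch (not the thesis's actual code): training TD3 directly on
hardware by wrapping a ROS-connected pneumatic actuator as a Gym environment."""
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3
from stable_baselines3.common.noise import NormalActionNoise


class PneumaticSoftArmEnv(gym.Env):
    """Hypothetical single-chamber soft actuator: the action is a pressure
    setpoint; the observation is a bend-sensor reading plus a ground-truth
    tip position from an external measurement system."""

    def __init__(self):
        # Normalized pressure command in [-1, 1], mapped to the valve range on hardware.
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        # [bend sensor value, tip x, tip y] -- stand-ins for the real sensor suite.
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(3,), dtype=np.float32)
        self.goal = np.array([0.05, 0.10], dtype=np.float32)  # target tip position (m)

    def _read_sensors(self):
        # Placeholder for data received over ROS (e.g. a std_msgs/Float64MultiArray
        # subscriber on a hypothetical /soft_robot/state topic).
        return np.zeros(3, dtype=np.float32)

    def _send_pressure(self, command):
        # Placeholder for a ROS publisher driving the pneumatic valve controller.
        pass

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._send_pressure(np.zeros(1, dtype=np.float32))  # vent to the rest state
        return self._read_sensors(), {}

    def step(self, action):
        self._send_pressure(action)
        obs = self._read_sensors()
        # Reward: negative distance between the measured tip position and the goal.
        reward = -float(np.linalg.norm(obs[1:] - self.goal))
        return obs, reward, False, False, {}


env = PneumaticSoftArmEnv()
noise = NormalActionNoise(mean=np.zeros(1), sigma=0.1 * np.ones(1))
model = TD3("MlpPolicy", env, action_noise=noise, verbose=1)
model.learn(total_timesteps=8000)  # matches the 8000-step budget reported above
```

An off-policy algorithm such as TD3 is a natural fit for this setting: it reuses every hardware transition through its replay buffer, which helps keep a directly-on-hardware training run within a budget of a few thousand steps.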

Subject Area

Robotics|Artificial intelligence|Polymer chemistry|Electrical engineering|Computer science|Materials science|Mechanical engineering|Remote sensing

Recommended Citation

Sullivan, Charles Edward, "A Comprehensive and Modular Robotic Control Framework for Model-Less Control Law Development Using Reinforcement Learning for Soft Robotics" (2020). ETD Collection for University of Texas, El Paso. AAI28262192.
https://scholarworks.utep.edu/dissertations/AAI28262192
