Welcome to the research division of the University of South Florida's Center for Assistive, Rehabilitation and Robotics Technologies (CARRT). Our research groups and laboratories incorporate innovative theory and state-of-the-art facilities to develop, analyze and test cutting edge assistive and rehabilitation robotics technologies.
Our faculty, staff, graduate and undergraduate students pursue a wide range of projects all focused on maintaining and enhancing the lives of people with disabilities.
You can view videos of CARRT research and service projects through the following link: CARRT YouTube Channel
Abstract
The goal of this research is to create a human-robot collaborative system that learns from sensor-assisted teleoperation to reach the maximum possible autonomy while requiring minimal user input. It is assumed that the user, while unable to perform a task manually, can perform it in teleoperation through a haptic interface. This system will utilize (a) the cognitive and perceptual abilities of an individual in a wheelchair with significantly limited motor skills, and (b) the superior dexterity, range of motion, and grasping power of a robot, along with its intention recognition and learning capability. Our system will seek autonomy by continuously learning from sensor-assisted and imprecise teleoperation by a human. Initially, the workload will be distributed between the robot and the human based on their abilities to perform ADL/IADL-related basic actions in (i) autonomous or (ii) sensor-assisted teleoperation modes. Any tasks or portions of tasks that can be performed autonomously will be done in autonomous mode, and the remaining tasks or subtasks will be done in sensor-assisted teleoperation mode. The human-robot collaborative system will gradually learn to perform these tasks autonomously in the long run.
Motivation
Individuals with diminished physical capabilities must often rely on assistants to perform Activities of Daily Living (ADL) or Instrumental Activities of Daily Living (IADL). Even though their gross and fine motor skills may be lacking or severely limited, such individuals often possess sound cognitive and perceptual abilities. In this work, we focus on individuals with Muscular Dystrophy (MD), Multiple Sclerosis (MS), and Spinal Cord Injuries (SCI levels C5 to C7).
ABSTRACT
The purpose of this project is to utilize an active Steady-State Visual Evoked Potential (SSVEP)-based Brain Machine Interface (BMI) system to extract the signal coming from the pilot's brain and enhance engagement and attention to critical flight commands needed during flight. A method will be developed to read the BMI-SSVEP signal, digitize it, and use it to control the spatial position of a target destination for an active drone to follow. An Emotiv EPOC BMI headset is used to extract the Electroencephalography (EEG) signal from the brain, provide commands to the flight control system, and display visual feedback to the pilot. The visual feedback will be based on real-time flight data and the target location.
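As a rough illustration of the command mapping described above (not the project's actual implementation), the sketch below assumes an SSVEP classifier has already detected which flickering stimulus the pilot is attending to, and nudges the drone's target position accordingly. The stimulus frequencies, struct, function names, and step size are all placeholder assumptions.

```cpp
#include <iostream>

struct Target { double x, y, z; };  // drone target position in meters

// Map a classified stimulus frequency (Hz) to a step of the target
// position; frequencies and step size are illustrative only.
Target stepFromFrequency(double hz, Target t, double step = 0.5) {
    if (hz == 8.0)       t.x += step;  // "right" stimulus
    else if (hz == 10.0) t.x -= step;  // "left" stimulus
    else if (hz == 12.0) t.z += step;  // "up" stimulus
    else if (hz == 15.0) t.z -= step;  // "down" stimulus
    return t;
}

int main() {
    Target t{0.0, 0.0, 1.0};
    t = stepFromFrequency(10.0, t);  // classifier reported the 10 Hz stimulus
    std::cout << t.x << " " << t.y << " " << t.z << "\n";  // -0.5 0 1
}
```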
DESCRIPTION
Walking is a fundamental daily activity, and independent walking is a primary goal for individuals with a stroke. However, less than 22% of people with a stroke regain sufficient functional walking to be considered independent community ambulators. Many individuals with a stroke have asymmetric walking patterns (e.g., different step lengths with each leg) that reduce walking efficiency, decrease walking speed, and increase the likelihood of injuries and falls. To get a feel for asymmetric gait, try wearing a thick-soled, heavy shoe on one foot while going barefoot on the other, then try to walk with the same step length on each foot while keeping a consistent timing between placing each foot on the ground. This simple perturbation will likely modify the gait such that the person has to push harder with one leg during walking. In contrast to the single perturbation in this example, each of the millions of individuals with a stroke has multiple asymmetric changes that compound the detrimental effects. While there are many different therapies to help individuals regain their walking ability, disabilities are unique and often need a solution specific to each person. This project will use a combination of existing therapies applied simultaneously to generate a user-specific therapy that adapts to the individual's needs. The project focuses on gait rehabilitation after a stroke but may also benefit therapies for gait recovery in individuals with lower limb amputations, hemiparetic cerebral palsy, and other gait impairments.
This project addresses research questions such as: (i) what level of symmetry can realistically be targeted for a patient who has inherent asymmetry in functionality; (ii) what factors influence the perception of a symmetric gait; and (iii) how to model the interactions among multiple therapies used for rehabilitation. To answer these questions, the research team will first study how the effects of two different therapies combine. Based on the results of multiple pairs of simultaneous therapies, the second phase will use real-time feedback based on the measured gait to optimize the output of two or more individual therapies. Controlling multiple therapies should allow control of multiple gait parameters that can change the gait pattern in a user-specific way. Since individuals with a stroke inherently have different force and motion capabilities in each leg, perfect symmetry may not be possible. Throughout this project, experiments will determine the bounds of acceptable asymmetry from a visual perspective. This perception work will help clarify what clinical physical therapists perceive about gait and direct their attention to important parameters, particularly those that have a large impact on gait function but are not easily perceived. Although the resulting gait may retain some degree of asymmetry in all measures, the gait pattern will likely be less visually noticeable and meet the functional walking goals of individuals with asymmetric impairments.
ABSTRACT
As spaceflight missions increase in both duration and complexity, responses to environmental changes become a crucial focus for astronaut health and success. For long-term human spaceflight to the Moon or Mars, new, smaller exercise equipment will need to be designed as a countermeasure for issues such as bone density loss, muscle atrophy, and decreased aerobic capacity. Paired with these exercise devices, a vibration isolation system (VIS) will also need to be developed to prevent cyclic exercise forces from impacting the space vehicle. The proposed project seeks to study the human response (kinematics and kinetics) to ground perturbations in order to inform computational modeling of human exercise used to determine VIS parameters and design considerations for long-term human spaceflight.
ABSTRACT
In an environment with gravity levels so low that the general orientation of the body can easily be altered, astronauts face significant challenges, and training is often necessary to prepare for such environments. Past lunar missions have shown that astronauts are at risk of falling because lunar gravity is only about 17% of Earth's, while humans need at least roughly 15% of Earth's gravitational force to orient themselves properly. Due to the increased struggle to maintain balance, in addition to other factors such as inadequate nutrition, muscle and bone loss, a shift in body fluid distribution, and insufficient sleep, astronauts often struggle to complete tasks because of high fatigue levels, even with the countermeasures in place. To assess the feasibility of these lunar tasks, motion and force plate data can be collected using the Vicon motion capture system and analyzed to improve task performance and avoid increased exhaustion. The motion capture system allows a clear view of the body's mechanics as different tasks such as walking, running, hopping, digging, lifting, and climbing stairs are completed.
ABSTRACT
The purpose of this project is to develop a body-as-a-network monitoring and alert system. The project will determine whether measuring changes in biosignals and emotions can be used to create an alert system that helps Service Members with situational awareness and emergency decision making in the field. The study will create combat simulations using the CAREN virtual reality system that require the user to make quick decisions while biosignals are measured, to determine which wearable sensors would be feasible as an alert system and communication device for improving situational awareness. Warnings triggered by changes in biosignals can be sent to other team members via text, auditory cues, tactile cues, or blinking lights, prompting them to heighten their focus.
ABSTRACT
There is a need to better understand how lower limb prosthesis and orthosis users function in the community, outside the lab, to inform clinical decisions. The goal of this project is to verify a portable monitoring system's ability to measure prosthetic and orthotic function in the community and in return-to-duty situations. This project addresses the DOD focus area of prosthetic and orthotic device function by testing a portable monitoring system that can be used in community and military-relevant activities to analyze variables relevant to measuring patient outcomes and prosthetic/orthotic use outside of a laboratory or clinic.
Abstract
This paper presents a low-cost, self-contained robotic gripper that can be integrated into various robotic systems. The gripper can accomplish a wide array of object manipulation tasks, made easier by its design, and it features a wide array of sensors that help accomplish these tasks. Furthermore, the gripper has been made compatible with many robotic systems.
Motivation
Many existing grippers used in robotics today have varying capabilities that only allow them to perform certain tasks or grab certain objects for manipulation. These grippers may lack additional features (e.g., cameras or sensors) that would allow them to perform tasks better. Additionally, many of these robotic grippers only interface with the system they were designed for. To obtain the features we needed for our projects, along with the flexibility to migrate the gripper from system to system, a new gripper had to be made.
Design Features
The gripper features a small 24 V DC motor with an encoder that can be used to read the gripper's position and speed. It also has a slip clutch to prevent overextension of the gripper. The unique cupped design of the gripper emulates a human hand, allowing for more accurate grasping. Furthermore, the gripper contains a camera and a distance sensor for locating an object's position. Finally, the gripper has a current sensor that can be used to shut off the gripper if the force on the gripper exceeds a certain threshold for a given object.
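A minimal sketch of that current-based shutoff logic is shown below. The gripper's actual driver calls are not documented here, so readCurrentAmps() and setMotorSpeed() are stand-in names with fake stub bodies so the example runs.

```cpp
#include <iostream>

// Stubs standing in for the gripper's actual driver calls (assumed names).
static double g_simCurrent = 0.0;
double readCurrentAmps() { return g_simCurrent += 0.3; }  // fake rising current
void setMotorSpeed(double percent) { std::cout << "motor " << percent << "%\n"; }

// Close the jaws until the measured motor current exceeds a per-object
// threshold, then stop -- the shutoff behavior described above.
void closeUntilCurrentLimit(double limitAmps) {
    setMotorSpeed(-50.0);                      // close at half speed
    while (readCurrentAmps() < limitAmps) { }  // grip force rising
    setMotorSpeed(0.0);                        // threshold reached: stop
}

int main() { closeUntilCurrentLimit(2.0); }
```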
Packaging
For this gripper to interface with other robotic systems without modification to its design, it was necessary to bundle all of the required components into a single control box. This control box contains the microcontroller, motor controller, voltage regulators, and current sensing equipment required for the system to function. Additionally, the microcontroller allows for interfacing with the gripper via Ethernet, Wi-Fi, Bluetooth, or USB. The gripper assembly is powered by an external power supply (battery or DC adapter). To interface with the various added components in this gripper, a full software library was written in C++ that allows users to control the gripper, grab sensor information, and view the camera without any additional programming. This library is cross-platform and can be used on mobile devices.
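The library's actual API is not reproduced in this write-up, so the following usage sketch is purely hypothetical: the class, method names, address, and port are illustrative stand-ins showing the kind of connect/command/read workflow the library supports.

```cpp
#include <iostream>
#include <string>

// Stand-in class; the real library's class and method names are not
// shown here, so everything in this sketch is illustrative only.
class Gripper {
public:
    bool connect(const std::string& host, int port) { return true; }  // stub
    void setPosition(double percentOpen) {}                           // stub
    double distanceMm() const { return 42.0; }                        // stub
};

int main() {
    Gripper g;
    if (!g.connect("192.168.1.50", 5000)) return 1;  // e.g., Ethernet/TCP link
    g.setPosition(75.0);                             // open the jaws to 75%
    std::cout << "object at " << g.distanceMm() << " mm\n";
}
```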
Operation
Above: The gripper control overview, showing the flow of control and sensory information when using a TCP/IP-based interface within the gripper system, including communication with the user.
The gripper system has several different interfaces through which it can be controlled. The gripper can be controlled either internally, where control programs are stored and run automatically on the gripper's controller, or remotely, where the user controls the gripper from another device. The gripper system supports remote operation via Ethernet, Wi-Fi, and Bluetooth, with expansion to other communication formats available via the gripper controller's onboard USB ports.
Included with the gripper system is a custom Gripper Operation Program that grants the user easy operation of the gripper from a computer. The Gripper Operation Program allows the user to control how far the gripper is opened or closed via a simple slider bar, as well as the speed at which the gripper opens and closes. Additionally, the Operation Program allows the user to see all of the sensor information coming from the gripper, including the distance sensor, pressure sensors, and camera. The Operation Program also allows the user to set pressure cutoff limits. These cutoff limits force the gripper to stop closing or opening when a pressure threshold has been reached, preventing the gripper from exerting too much force on an object. The Operation Program is available for Windows, Mac OS X, and Linux.
Above: The Gripper Operation Program showing the camera view, gripper position controls, gripper speed adjustment, and distance sensor information
Results
While the all-in-one control system for the WMRA Gripper was still being developed, the gripper itself was used on the Wheelchair Mounted Robotic Arm (WMRA) as the system's primary gripper. Using this gripper, the WMRA has successfully manipulated objects of different classifications, including bottles, cups, and markers. With the addition of the current feedback and shutoff, the gripper should gain the ability to grasp delicate objects that could otherwise be crushed. Finally, the camera's high resolution and high frame rate should allow computer vision algorithms to perform object detection and classification of target objects, enabling more accurate grasping.
Conclusion
This gripper is an effective, fully featured, and inexpensive device that can be used in a wide array of robotic manipulation tasks. Additionally, the gripper can be added to existing robotic systems with little (if any) modification to the existing system or the gripper.
ABSTRACT
This work focuses on enabling individuals with speech impairments to use speech-to-text software to recognize and dictate their speech. Automatic Speech Recognition (ASR) is a challenging problem because of the wide range of speech variability, including different accents, pronunciations, speeds, and volumes. It is very difficult to train an end-to-end speech recognition model on impaired speech due to the lack of sufficiently large datasets and the difficulty of generalizing a speech disorder pattern across all users with speech impediments. This work highlights the different deep learning techniques used to achieve ASR and how they can be modified to recognize and dictate speech from individuals with speech impediments.
NETWORK ARCHITECTURE AND EDIT DISTANCE
The project is split into three consecutive processes: ASR to phonetic transcription, edit distance, and a language model. The ASR is the most challenging due to the complexity of the neural network architecture and the preprocessing involved. We apply Mel-Frequency Cepstrum Coefficients (MFCC) to each audio file, which yields 13 coefficients for each frame. The labels (text matching the audio) are converted to phonemes using the CMU ARPAbet phonetic dictionary. The network is trained using the MFCC coefficients as inputs and phoneme IDs as outputs. The network architecture implemented is a Bidirectional Recurrent Deep Neural Network (BRDNN, fig. 1). It consists of two LSTM cells (one in each direction) with 100 hidden blocks in each direction. The network is made deep by stacking two more layers, resulting in a network three layers deep. Two fully connected layers with 128 hidden units each were attached to the output of the recurrent network. This architecture resulted in a 38.5% label error rate (LER) on the test set.
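To make the label-preparation step concrete, the sketch below shows words being looked up in a pronunciation dictionary and their ARPAbet phonemes mapped to integer training IDs. The two dictionary entries are a tiny illustrative stand-in for the full CMU dictionary, and the ID assignment order is arbitrary rather than the project's actual mapping.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Tiny stand-in for the CMU pronunciation dictionary
    // (illustrative entries only; stress markers dropped).
    std::map<std::string, std::vector<std::string>> dict = {
        {"water",  {"W", "AO", "T", "ER"}},
        {"please", {"P", "L", "IY", "Z"}},
    };
    std::map<std::string, int> phonemeId;  // phoneme -> integer training ID

    for (const std::string& word : {"water", "please"}) {
        for (const std::string& p : dict[word]) {
            auto it = phonemeId.find(p);
            if (it == phonemeId.end())  // first time seen: assign next ID
                it = phonemeId.emplace(p, (int)phonemeId.size()).first;
            std::cout << p << "=" << it->second << " ";
        }
        std::cout << "\n";
    }
}
```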
Figure 1: Deep Bidirectional LSTM
Levenshtein edit distance (fig. 2) is used to generate potential words from phonemes. An edit distance of one means a maximum change of one phoneme is allowed; an edit distance of two means a change of one or two phonemes is allowed when generating the potential words, and so on. These changes can be insertions, deletions, or substitutions. The language model uses the potential words to generate the most semantically plausible sentences. The language model is another recurrent neural network, trained on full sentences, that outputs the probability of a word occurring after a given word or sentence. It is simpler than the main speech recognition model because it is not bidirectional and not as deep. The language model uses beam search decoding to find the best sentences.
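The standard dynamic-programming formulation of Levenshtein distance over phoneme sequences, as in Figure 2, is sketched below: insertions, deletions, and substitutions each cost one, and candidate words whose phoneme sequence falls within the allowed distance of the recognized sequence are kept. The example phoneme sequences are illustrative.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Levenshtein distance between two phoneme sequences via the classic
// dynamic-programming table d[i][j].
int editDistance(const std::vector<std::string>& a,
                 const std::vector<std::string>& b) {
    const size_t m = a.size(), n = b.size();
    std::vector<std::vector<int>> d(m + 1, std::vector<int>(n + 1));
    for (size_t i = 0; i <= m; ++i) d[i][0] = (int)i;  // all deletions
    for (size_t j = 0; j <= n; ++j) d[0][j] = (int)j;  // all insertions
    for (size_t i = 1; i <= m; ++i)
        for (size_t j = 1; j <= n; ++j)
            d[i][j] = std::min({d[i - 1][j] + 1,               // delete
                                d[i][j - 1] + 1,               // insert
                                d[i - 1][j - 1] +
                                    (a[i - 1] != b[j - 1])});  // substitute
    return d[m][n];
}

int main() {
    // Recognized phonemes vs. a dictionary entry (illustrative).
    std::vector<std::string> heard = {"W", "AO", "D", "ER"};
    std::vector<std::string> water = {"W", "AO", "T", "ER"};
    std::cout << editDistance(heard, water) << "\n";  // prints 1
}
```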
Figure 2: A) Edit operations; B) Dynamic programming of edit distance; C) Algorithm (from Wikipedia)
WMRA is a project that combines a wheelchair's mobility control and a 7-joint robotic arm's manipulation control in a single control mechanism, allowing people with disabilities to carry out many activities of daily living (ADL) with minimal or no assistance. Some of these activities and tasks are otherwise hard or impossible for people with disabilities to accomplish.
This project presents a novel method of using laser data to generate trajectories and virtual constraints in real time that assist a user teleoperating a remote arm to execute tasks in an unstructured remote environment.
The laser also helps the user make high-level decisions, such as selecting target objects by pointing the laser at them. The trajectories generated by the laser enable autonomous control of the remote arm, and the virtual constraints enable scaled teleoperation and virtual-fixture-based teleoperation. The assistance to the user in scaled and virtual-fixture-based teleoperation modes is based on either position feedback or force feedback to the master. The user also has the option of a velocity control mode in teleoperation, in which the speed of the remote arm is proportional to the displacement of the master from its initial position. At any point, the user can choose a suitable control mode after locating targets with the laser. The various control modes have been compared with each other, and time- and accuracy-based results have been presented for a 'pick and place' task carried out by three healthy subjects. The system is intended to assist users with disabilities in carrying out their Activities of Daily Living (ADLs) but can also be used for other applications involving teleoperation of a manipulator. The system is PC based, with multithreaded programming strategies for real-time arm control, and the controller is implemented on QNX.
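The velocity control mode amounts to a simple proportional law: commanded arm velocity is the gain times the master's displacement from its initial position. The sketch below illustrates that relationship; the gain value and struct names are assumptions, not the system's actual parameters.

```cpp
#include <iostream>

struct Vec3 { double x, y, z; };

// Remote-arm velocity command proportional to the master device's
// displacement from its initial position (gain is illustrative).
Vec3 velocityCommand(const Vec3& master, const Vec3& masterInitial,
                     double gain = 0.8) {
    return {gain * (master.x - masterInitial.x),
            gain * (master.y - masterInitial.y),
            gain * (master.z - masterInitial.z)};
}

int main() {
    Vec3 v = velocityCommand({0.05, 0.0, 0.02}, {0.0, 0.0, 0.0});
    std::cout << v.x << " " << v.y << " " << v.z << "\n";  // velocity command
}
```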
Through collaboration with the School of Theatre & Dance and the School of Physical Therapy and Rehabilitation Sciences, adaptive recreational devices have been designed and developed to assist people with disabilities and amputees in various recreational activities, including dance and exercise.
A completely hands-free wheelchair that responds to the rider's body motion was developed primarily for use in the performing arts; however, its unique user interface offers endless possibilities in the fields of assistive devices for daily activities and rehabilitation. This powered wheelchair modification provides greater social interaction possibilities, increases the rider's independence, and advances the state of the art of sports and recreation, as well as assistive and rehabilitative technologies overall. Various prototypes have been developed, including a mechanical design and a sensor-based design. A new design is underway that uses an iPod or other handheld device to control the wheelchair through the device's gyroscope.
This project involves the design, development, and testing of a stand-alone omnidirectional mobile dance platform with an independently rotating top. A robust, remote-controlled, compact, transportable, and inexpensive moving platform with a rotating top has been designed. The platform adds a choreographic element that creates a unique style of dancing involving a variety of mobility devices and performers, including dancers with disabilities. It is designed to hold up to five hundred pounds on an independently rotating top while the base moves forward/backward, sideways, or diagonally on omnidirectional wheels. The existing design has a removable top surface, folding wing sections that collapse the unit to fit through an average-sized doorway, and detachable ramp ends for wheelchair access. The top of the platform is driven by a compact gear train designed to deliver maximum torque within the limited space.
Various terminal devices have been developed to assist prosthesis users in their recreational activities. These terminal devices are designed to improve the user's capabilities in golf, kayaking, rock climbing, and other activities.
A driver training system that combines a hand-controlled modified van with a driving simulator has been developed. This system enables individuals to overcome transportation barriers that interfere with employment opportunities or access to daily activities. By combining AEVIT (Advanced Electronic Vehicle Interface Technology) with a virtual reality driving simulator from SSI (Simulator Systems International), an environment is created where a user can try different interfaces while learning to operate a motor vehicle in real time. Various adaptive controls are integrated into the system. Analysis of the controls across users with different abilities can be used to recommend specific devices and to train users in the virtual environment before they train on their own modified vehicle.
Passive dynamic walkers (PDW) are devices that can walk down a slope without any active feedback, using gravity as the only energy source. In this research, we examine asymmetric walking using an approach that is similar to, but distinct from, the Gait Enhancing Mobile Shoe project above. Typically, PDWs have been symmetric (i.e., with the same masses and lengths on each side), which generally results in symmetric gaits. However, individuals with a stroke and individuals who wear a prosthesis do not have physical symmetry between the two sides of their body. By changing one physical parameter on one of the PDW's two legs, we can produce a number of stable asymmetric gait patterns in which one leg has a consistently different step length than the other, as shown on the right. In the figure on the right, the right knee has been moved up the leg. This asymmetric model of walking will enable us to test how different physical changes alter an individual's gait.
Many daily tasks require a person to use both hands simultaneously, such as opening the lid on a jar or moving a large book. Such bimanual tasks are difficult for people who have had a stroke, but the tight neural coupling across the body can potentially allow individuals to self-rehabilitate by physically coupling their hands. To examine potential methods for robot-assisted bimanual rehabilitation, we are performing haptic tracking experiments in which individuals experience a trajectory on one hand and attempt to recreate it with the other. Despite the physical symmetries, the results show that joint-space motions are more difficult to achieve than motions in the visually centered space.
Certain types of central nervous system damage, such as stroke, can cause an asymmetric walking gait. One rehabilitation method uses a split-belt treadmill, which moves each leg at a different speed while it is in contact with the ground, to help rehabilitate impaired individuals. The split-belt treadmill has been shown to help rehabilitate walking-impaired individuals on the treadmill, but it has one distinct drawback: the corrected gait does not transfer well to walking over ground. To increase gait transference to overground walking, I designed and built a passive shoe that produces a motion similar to that felt when walking on a split-belt treadmill. The gait enhancing mobile shoe (GEMS) alters the wearer's gait by moving one foot backward during the stance phase while walking over ground. No external power is required, since the shoe mechanically converts the wearer's downward and horizontal forces into a backward motion. This shoe allows a patient to walk over ground while experiencing the same gait-altering effects as on a split-belt treadmill, which should aid in transferring the corrected gait to natural environments. This work is funded by the Eunice Kennedy Shriver National Institute of Child Health & Human Development (NIH NICHD), award number R21HD066200, and is in collaboration with Amy Bastian at the Kennedy Krieger Institute and Erin Vasudevan at the Moss Rehabilitation Research Institute.
The goal of this project is to improve the effectiveness of vocational rehabilitation services by providing a safe, adaptable, and motivating environment in which to assess and train individuals with severe disabilities and underserved groups. Using virtual reality, simulators, robotics, and feedback interfaces, this project will allow the vocational rehabilitation population to try various jobs, tasks, virtual environments, and assistive technologies prior to entering the actual employment setting. This will aid job evaluators and job coaches in assessing, training, and placing persons with various impairments.
The proposed project will simulate job environments such as a commercial kitchen, an industrial warehouse, a retail store, or other locations where an individual is likely to work. Features of the simulator could include layering of colors, ambient noise, physical reach parameters, and various user interfaces. The complexity of the simulated job tasks could be varied depending on the limitations of the user, allowing a gradual progression to more complex tasks in order to enhance job placement and training.
In collaboration with Draper Laboratories and the Veterans Administration Hospital, wearable sensors research has been conducted in two projects: a balance belt and a portable motion analysis system.
The purpose of this study is to develop a wearable balance belt that alerts patients with abnormal vestibular function, for injury and fall prevention. The user is alerted through four vibrotactile actuators situated around the belt whenever the inertial measurement unit (IMU) senses a high potential for loss of balance.
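A minimal sketch of that alert logic, under stated assumptions, is shown below: if the IMU-estimated trunk tilt exceeds a threshold, the actuator on the side of the lean is driven. The threshold, function names, and the mapping of the four actuators to lean directions are all assumptions, not the belt's actual design.

```cpp
#include <iostream>

enum class Tactor { Front, Back, Left, Right };

void vibrate(Tactor t) { std::cout << "buzz " << (int)t << "\n"; }  // stub

// pitch/roll in degrees from the belt's IMU (orientation estimation
// itself is not shown); the 10-degree limit is an assumed threshold.
void checkBalance(double pitchDeg, double rollDeg, double limitDeg = 10.0) {
    if (pitchDeg >  limitDeg) vibrate(Tactor::Front);  // leaning forward
    if (pitchDeg < -limitDeg) vibrate(Tactor::Back);   // leaning backward
    if (rollDeg  >  limitDeg) vibrate(Tactor::Right);  // leaning right
    if (rollDeg  < -limitDeg) vibrate(Tactor::Left);   // leaning left
}

int main() { checkBalance(12.0, -3.0); }  // forward lean: front tactor fires
```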
The purpose of this study is to develop a wearable motion analysis system (WMAS) using commercially available inertial measurement units (IMUs) working in unison to record and output gait parameters in a clinically relevant way. The WMAS must accurately and reliably output common gait parameters such as gait speed, stride length, torso motion, and head rotation velocities, which are often indicators of TBI. The system's capabilities have been validated against the Vicon optical motion analysis system with healthy subjects during various gait trials, including increasing and decreasing cadence and speed, and turning. A clinically relevant graphical user interface (GUI) will be developed to make the system usable outside of clinical settings.
Through collaboration with the School of Theatre & Dance and the School of Physical Therapy and Rehabilitation Sciences, the biomechanics of human body motion is analyzed for various activities using the Vicon motion analysis system, leading to fewer injuries and better training practices. These activities include upper and lower body motions used by athletes and dancers, as well as by prosthesis users performing recreational or daily activities.
Current upper-limb prosthetic devices offer powered wrist rotation only, making it difficult to grasp and manipulate objects. The wrist and shoulder compensatory motions of people with transradial prostheses have been investigated using the eight-camera infrared Vicon system, which collects and analyzes three-dimensional movement data. This information helps clinicians, researchers, and designers develop more effective and practical prosthetic devices. The intact joints of the upper limb compensate for the limitations of the prosthesis through awkward motions. By analyzing the compensatory motions that the prosthesis's limitations impose on activities of daily living, we hope to improve the design and selection of prostheses.
This project is dedicated to the development of a simulation tool consisting of a robotics-based human body model (RHBM) to predict functional motions, with integrated modules to aid in prescription, training, comparative studies, and the determination of design parameters for upper extremity prostheses. The simulation of human performance of activities of daily living while using various prosthetic devices is refined using data collected in the motion analysis lab.
The current generation of the RHBM has been developed in MATLAB and is a 25-degree-of-freedom robotics-based kinematic model with subject-specific parameters. The model has been trained and validated using motion analysis data from ten control subjects, and data collected from amputee subjects is being integrated as it is collected.
This project concentrates on measuring and predicting motion occurring at the socket-residual limb interface. The current model will be a 4-degree-of-freedom robotics-based kinetic model. Movement between the residual limb and the prosthetic socket will be collected using a motion capture system (socket rotations and translations) and a new optics-based device (relative slip between the internal socket face and the residual limb skin surface).
The goal of this project is to develop a robotics-based human upper body model (RHBM) and associated constraints for the prediction and simulation of human motion in confined spaces and under microgravity conditions, to aid astronaut training. A force-based component with an adjustable gravity term will be added to the current kinematics-based RHBM to allow the simulation of external forces at varying levels of gravity, including lunar gravity and microgravity. Statistically based probability constraints derived from motion capture data will also be incorporated to determine whether a mixed modeling method is more accurate and efficient for studying upper limb movements such as using tools and moving objects. A motion analysis system will be used to collect kinematic data of subjects performing astronaut-based activities of daily living in a confined space similar to the International Space Station, and analysis of this data will be used to derive the model parameters. Functional joint center estimations will be used to find the geometric parameters of the model, and a variety of control methods, including force fields and statistical processes to simulate microgravity, will be used to determine the control parameters.
"Center for Assistive, Rehabilitation & Robotics Technologies"
Research - Education - Service
Center for Assistive, Rehabilitation& Robotics Technologies
Copyright © 2012, College of Engineering, USF, 4202 E. Fowler Avenue, ENB 118 , Tampa, FL 33620
Direct questions or comments about the Web site to www@eng.usf.edu