CNS-2117785, MRI Testbed: Development of High-Confidence Medical Cyber-Physical System Research Instrument with Benchmark Security Software
Introduction
The remarkable precision and dexterity enabled by modern surgical robots have led to myriad patient benefits and growing interest in robotic surgery. However, robotic surgery is still a relatively nascent technology, and many questions remain regarding best practices for safety, security, and usability. For example: What additional channels does robotic teleoperation leave open for security threats? What is the best way to visualize the operation site for the surgeon while providing optimal comfort and site visibility? How can we render haptic feedback to the surgeon to make them more aware of tissue forces without adding harmful hand jitter?
We are addressing these and other pressing questions in the field of medical cyber-physical systems (MCPS) with our surgical robotics testbed. The testbed consists of a Raven II surgical robot, a haptic user interface for surgeon operation, and software for collecting rich data (vision, haptics, IMU, force, etc.). The Raven is typically used to research, for example, autonomous surgical movement, novel visualization interfaces, or novel sensor architectures intended to later translate to surgical robots in clinical use, such as the da Vinci. At present, a local user of the surgeon interface can control the Raven’s arms and grippers with two 6- or 7-DOF haptic stylus input devices to complete basic surgical-style tasks such as suturing practice, needle transfer, or peg-in-hole.
External parties are invited to collaborate through use of our Raven and associated datasets. The robot can be teleoperated remotely to test novel input methods or path-planning AI models. Datasets including video and robot movement are generated automatically and made immediately available to the remote user for further study of human factors and for training AI models.


Remote Access
Our testbed is physically located in Oliver Hall room 103 on UL Lafayette’s campus. Interested parties can request access in person or remotely. Remote access requires connecting through the University’s UConnect VPN (an account must be created first). After connecting, users can download the Surgeon Station software, which is pre-configured to connect to the Raven’s control computer, send inputs, and receive video output. New users are guided through an onboarding process to become familiar with the system. To inquire about getting started with the testbed, please email jason.woodworth1@louisiana.edu.

An undergraduate lab student operating the Raven with two Phantom Omnis to practice a bi-manual object transfer task.

A user operating the Raven with two Force Dimension Omega.7 haptic controllers and viewing the site through a VR camera.
The Surgeon Station is best operated with either two Phantom Omni haptic styluses (Windows 10 and below) or two Force Dimension Omega.x series haptic styluses (Windows 11). To democratize access and expand beyond expensive haptic stylus controllers, we have also added input options including joystick, keyboard, game controller, and 3D mouse. Site imagery can be viewed either on a standard monitor showing streams from two webcams mounted on the Raven’s 80/20 extrusions, or in a VR headset showing imagery from a 180° VR camera. VR viewing allows for additional comfort and free head rotation without the need to rotate the physical site camera.
da Vinci Compatibility
The Raven is also compatible with commercial da Vinci tools, as pictured below, allowing the use of scissors and hooks in addition to the standard Raven graspers. Tools can be switched out on request.

Dataset Features
Data is automatically collected from the Raven Station and Surgeon Station while tasks are performed. Data types and collection rates are listed in the table below. Custom Python scripts merge data from the different sources into a consistent dataset with timestamps matched via NTP synchronization (a minimal merging sketch follows the table). These datasets can be used to study human surgical performance under a variety of conditions, to train AI models to autonomously complete basic surgical tasks, and much more.
Data Type | Source | Rate (Hz) | Characteristics
End effector pose | Raven II | 1000 | Direct record of tool tip position and rotation, plus gripper state
Joint angles | Raven II | 1000 | Rotation of each robotic joint (6 rotational joints plus 1 linear actuator per arm)
Video | Site cameras | 30 | Recorded from 2 site webcams plus the 180° VR camera
Force vectors | Custom force sensor | 1000 | 2D vector describing horizontal forces acting on the Raven’s tool tip
User manipulator pose | Haptic input stylus | 1000 | Position and rotation of the stylus used to control the Raven arms
User physiology | VR headset and peripherals | 4-200 | Heart rate, eye movements, etc. of the user while performing tasks
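As an illustration of the timestamp-matching step, here is a minimal sketch of merging two recorded streams with pandas. The file names and column layout are placeholders for illustration, not the testbed’s actual schema.

```python
# Minimal sketch: align a low-rate stream (e.g., physiology) to a high-rate
# stream (e.g., end-effector pose) on NTP-synchronized timestamps.
# File and column names are illustrative placeholders.
import pandas as pd

pose = pd.read_csv("raven_pose.csv")      # columns: timestamp, x, y, z, ...
physio = pd.read_csv("physio.csv")        # columns: timestamp, heart_rate, ...

pose["timestamp"] = pd.to_datetime(pose["timestamp"], unit="s")
physio["timestamp"] = pd.to_datetime(physio["timestamp"], unit="s")

# Match each pose sample to the nearest physiology sample within a small
# tolerance; rows with no match within 50 ms are left as NaN.
merged = pd.merge_asof(
    pose.sort_values("timestamp"),
    physio.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("50ms"),
)
merged.to_csv("merged_dataset.csv", index=False)
```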
Raven Station
All information published to ROS by the Raven control software is saved via the rosbag command. This includes end-effector poses as well as lower-level data such as the current supplied to each joint’s motor. Video from the operation site is recorded, with tool-tip POV cameras to be added in the near future. Such data has frequently been used to train AI models for autonomous task planning. Force vectors from a custom-built force sensor, described below, are also recorded, allowing trained models to account for and minimize the force exerted on “tissue.”
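For reference, the snippet below is a minimal sketch of reading such a recording back with the ROS 1 rosbag Python API. The bag file name and the /ravenstate topic are illustrative assumptions; actual topic names depend on the recording configuration.

```python
# Minimal sketch: iterate over messages recorded by `rosbag record` (ROS 1).
# The bag path and topic name are placeholders.
import rosbag

with rosbag.Bag("raven_run.bag") as bag:
    for topic, msg, t in bag.read_messages(topics=["/ravenstate"]):
        # Each message carries one sample of robot state; here we just
        # print the recording timestamp in seconds.
        print(t.to_sec(), topic)
```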
Surgeon Station
All information sent to the Raven Station, including movement and rotation of the haptic stylus device, clutch status, and user options, is recorded. In addition, the user can wear a number of supported physiological devices to record associated physiological data throughout the task. Currently supported devices include the Vive Pro Eye (eye tracking), Varjo VR/XR-3 (eye tracking), EmotiBit (heart rate, electrodermal activity, and skin temperature), and Polar Verity/H10 (heart rate). Such physiological data has been associated with factors like surgeon skill and confidence; in future projects we will use this data to automatically detect surgeon skill and improve surgical training modules.
Demo
A user attempting a basic peg transfer task with the Raven II station.
Future Content
Future work aims to improve remote accessibility and make the Raven testbed AI-ready. As described under AI-Ready Accessibility below, guest researchers will be able to record human demonstrations of basic surgical tasks, quickly train AI models using algorithms and architectures of their choice, validate them on a digital twin simulation of the Raven, and, after safety checks, evaluate them on the physical Raven against benchmarks set by other models. We also plan to build security attack simulations that target the cyber-physical components of the actual robot, allowing users to test their models’ security against threats that cannot be accounted for in simulation alone.
Associated Projects
The construction of this test bed is meant to facilitate other projects improving surgical robotics. These projects aim to improve remote accessibility, make the Raven testbed more useful to AI researchers, and enhance the surgeon experience. Some such projects are described below.
AI-Ready Accessibility
We are adding AI features to make our testbed more useful to external researchers. With an AI-ready testbed, guest researchers will be able to record human motions performing basic surgical tasks and quickly train an AI model on our local hardware using algorithms and architectures of their choice. These models will then be testable on a digital twin simulation of the Raven and, after passing safety checks, on the physical Raven, where their performance can be compared against benchmarks set by other AI models. We are also building security attack simulations that target cyber-physical components of the actual robot, allowing users to test their models’ security against threats that can’t be accounted for in simulation alone.
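As a sketch of what a guest researcher’s workflow might look like, the snippet below trains a simple behavior-cloning policy on recorded end-effector poses with PyTorch. The pose layout, network size, and training loop are assumptions for illustration, not the testbed’s actual training pipeline.

```python
# Hedged sketch: behavior cloning from the current end-effector state to the
# next recorded state. Data layout and hyperparameters are placeholders.
import torch
import torch.nn as nn

class PosePolicy(nn.Module):
    """Maps the current end-effector state to a predicted next state."""
    def __init__(self, state_dim: int = 7):  # e.g., position + quaternion (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, state_dim),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def train(states: torch.Tensor, next_states: torch.Tensor, epochs: int = 10) -> PosePolicy:
    model = PosePolicy(states.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(states), next_states)
        loss.backward()
        opt.step()
    return model

# Stand-in data shaped like a short 1 kHz pose recording (1000 samples, 7 values).
demo = torch.randn(1000, 7)
model = train(demo[:-1], demo[1:])
```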
Low-Cost Indirect Force Sensing
Current mainstream surgical robots require surgeons to rely only on visual feedback, forgoing the vital haptic feedback available in traditional surgery. Research has attempted to address this by adding force sensors that detect forces acting against the robot’s arms and rendering proportional forces on the haptic input stylus. The strain-gauge-based force sensors typically used for this are costly and present sanitation challenges when entering the human body. Our approach uses four low-cost piezoresistive filament strips arranged higher up on the tool shaft to detect planar forces acting on the wrist below, avoiding placing the sensor in the area that directly touches the surgical site. We have verified the validity of the sensor’s force output, and current work is aimed at minimizing the sensor’s footprint and validating this approach from the user’s perspective.

The custom force sensor composed of 4 FlexiForce piezoresistive strips.
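The sketch below illustrates one plausible way to combine four strip readings into a planar force estimate, assuming opposing strips are paired along each axis with per-axis calibration gains. It is an illustration only, not the sensor’s actual calibration model.

```python
# Hedged sketch: estimate a 2D planar force from four piezoresistive strips
# mounted at 90-degree intervals around the tool shaft. Pairing and gains
# are illustrative assumptions.
from typing import Tuple

def planar_force(readings: Tuple[float, float, float, float],
                 gain_x: float = 1.0, gain_y: float = 1.0) -> Tuple[float, float]:
    """Estimate (Fx, Fy) from calibrated strip outputs ordered +x, +y, -x, -y."""
    px, py, nx, ny = readings
    # A lateral force loads one strip of each opposing pair and unloads the
    # other, so the difference approximates the force along that axis.
    fx = gain_x * (px - nx)
    fy = gain_y * (py - ny)
    return fx, fy

# Example: a force toward +x loads the +x strip more than the -x strip.
print(planar_force((0.8, 0.1, 0.2, 0.1)))  # -> (0.6, 0.0)
```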

Displaying Patient Vitals to Surgeons
Monitoring patient vitals is essential for reacting to anomalies during surgery. Vitals are currently monitored primarily by an on-site nurse or anesthetist and communicated to the surgeon verbally or visually, requiring the attention of an additional human operator and potentially distracting the surgeon. More advanced systems may algorithmically detect anomalous events and attempt to alert the surgeon with a visual or audible popup. We are investigating the benefits of using vibrotactile feedback as an alert system, in comparison to or in harmony with an auditory system. A user study is underway in which a user operates a virtual surgical needle while a simulated patient experiences events such as blood pressure drops or cardiac arrest; haptic alerts then prompt the user to respond accordingly.
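A minimal sketch of the alerting logic under study is shown below: simulated vitals are checked against thresholds, and an out-of-range value triggers a vibrotactile alert. The vital names, threshold values, and the trigger_haptic_alert() hook are hypothetical placeholders.

```python
# Hedged sketch: threshold-based anomaly detection on simulated patient vitals,
# with a placeholder hook where a vibrotactile alert would be rendered.
from dataclasses import dataclass

@dataclass
class VitalThreshold:
    name: str
    low: float
    high: float

# Illustrative thresholds, not clinical guidance.
THRESHOLDS = [
    VitalThreshold("heart_rate_bpm", low=50, high=120),
    VitalThreshold("systolic_bp_mmhg", low=90, high=160),
]

def trigger_haptic_alert(vital_name: str) -> None:
    # Placeholder for the vibrotactile rendering call on the surgeon's controller.
    print(f"HAPTIC ALERT: {vital_name} out of range")

def check_vitals(sample: dict) -> None:
    """Compare one sample of simulated vitals against its thresholds."""
    for t in THRESHOLDS:
        value = sample.get(t.name)
        if value is not None and not (t.low <= value <= t.high):
            trigger_haptic_alert(t.name)

# Example: a simulated blood pressure drop triggers an alert.
check_vitals({"heart_rate_bpm": 72, "systolic_bp_mmhg": 78})
```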
Controller Comparison
Stylus-style 6- or 7-DOF controllers are currently the most popular choice for operating surgical robots because they naturally capture the human motions most associated with needles, scalpels, and other common surgical tools. These devices are typically expensive and may require custom ordering, meaning they may be difficult to obtain in all emergency operating situations and create a high barrier to entry for researchers looking to get started in surgical robotics. We assume these styluses yield the best performance, but it is not known by how much they are superior. Additionally, it is not known whether training on lower-cost controllers, e.g., game controllers or joysticks, transfers to the haptic stylus devices.
We are running a user study to address these gaps, determine the effect sizes of performance differences between devices, and analyze users’ skill acquisition patterns across devices. Our first study examines the differences among traditional 7-DOF devices (Omega.7), standard joysticks, and hand tracking. Adding hand tracking with the Ultraleap Haptics Developer Kit also allows us to study the effects of midair haptics as opposed to force feedback from the Omega.7s.

Additional media

A user attempting the pea-on-peg task

Zoomed out view of the same task