Johns Hopkins surgeon-scientist Francis Creighton is leading research to develop cooperative robots that can help surgeons avoid critical structures
Drilling through the temporal bone is a common element of many procedures that neurotologist and lateral skull base surgeon Francis Creighton performs. But this structure’s complex nature — three-dimensional osseous anatomy with narrow corridors full of arteries and other key structures, including the facial, vestibular and cochlear nerves — requires a delicate hand, he says.
“We’re taking a drill running at 80,000 revolutions per minute and putting it extremely close to critical structures,” Creighton says. “A wrong move as slight as a millimeter in this space could mean the difference between success and catastrophe.”
A better way to perform these procedures could be on the horizon — with the help of artificial intelligence (AI), Creighton says. For the past several years, he and his colleagues — including fellow physician-scientists Masaru Ishii and Deepa Galaiya in the Department of Otolaryngology—Head and Neck Surgery, and researchers at the Laboratory for Computational Sensing + Robotics at the Johns Hopkins Whiting School of Engineering — have worked on developing new technologies that could eventually lead to cooperative surgical robots that actively augment and improve surgeons’ performance and decrease their cognitive and physical load while they work in sensitive anatomy, such as the temporal bone.
All existing surgical robots are of a type called primary-secondary robots, Creighton explains — they simply follow surgeons’ commands without added influence. In contrast, a cooperative robot could enhance surgical motion, suppress tremors or stop a surgeon from treading into dangerous territory in the surgical field.
“It’s like bowling with bumpers,” Creighton says.
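Those “bumpers” correspond to what robotics researchers call virtual fixtures: software-defined boundaries that damp or block a tool’s motion as it nears a forbidden region. The Python sketch below is a minimal illustration of that idea only, not the team’s implementation; the function name, interface and zone widths are all hypothetical.

```python
import numpy as np

def constrain_velocity(commanded_vel, clearance_mm, away_normal,
                       slow_zone_mm=3.0, stop_zone_mm=1.0):
    """Damp or block motion toward a critical structure (a virtual fixture).

    commanded_vel -- velocity the surgeon imparts through the robot (mm/s)
    clearance_mm  -- distance from the drill tip to the nearest structure
    away_normal   -- unit vector pointing from the structure toward the tip
    Zone widths are made-up values for illustration.
    """
    toward = -np.dot(commanded_vel, away_normal)  # speed toward the structure
    if toward <= 0 or clearance_mm >= slow_zone_mm:
        return commanded_vel  # moving away, or far enough out: pass through
    if clearance_mm <= stop_zone_mm:
        # "Bumper" engaged: cancel all motion toward the structure.
        return commanded_vel + toward * away_normal
    # In between: attenuate the inward component as the clearance shrinks.
    keep = (clearance_mm - stop_zone_mm) / (slow_zone_mm - stop_zone_mm)
    return commanded_vel + (1.0 - keep) * toward * away_normal
```

In a real system, the clearance and direction would come from imaging registered to the robot, and a check like this would run at the control loop’s full rate.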
To develop this technology, he and his colleagues are relying on deep learning to create a “digital twin”: a virtual counterpart of the patient and procedure that can provide real-time feedback based on information gathered before and during the operation. The researchers are collecting a wealth of data from simulated procedures in the Whiting School’s Swirnow Mock Operating Room, a state-of-the-art re-creation of a real-life operating room. There, they drill through cadaveric temporal bones while recording the changing visual scene, the drill’s motion and the force applied, and, through periodic scans, the distance to critical structures, among other data.
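At bottom, a digital twin of this kind is a model kept synchronized with a stream of time-stamped measurements. The toy sketch below shows what one such telemetry record and twin update might look like; every class, field and name here is invented for illustration, not drawn from the team’s system.

```python
from dataclasses import dataclass

@dataclass
class DrillSample:
    """One time step of mock-OR telemetry (all fields hypothetical)."""
    t_s: float              # seconds since the procedure started
    tip_pos_mm: tuple       # drill-tip position in image coordinates
    force_n: float          # force applied through the drill
    nearest_structure: str  # e.g. "facial_nerve"
    clearance_mm: float     # distance to that structure from the latest scan

class DigitalTwin:
    """Toy stand-in for a virtual patient model updated during a procedure."""

    def __init__(self, preop_scan):
        self.scan = preop_scan  # preoperative imaging the twin starts from
        self.samples = []       # telemetry accumulated as drilling proceeds

    def update(self, sample: DrillSample) -> float:
        """Fold one measurement into the twin; return the current margin.

        A real system would re-estimate the bone surface and clearances
        here; this sketch just records the sample and reports the margin
        for the robot's safety logic to act on.
        """
        self.samples.append(sample)
        return sample.clearance_mm
```

Streaming records like these through the twin is what lets the guidance stay current as bone is removed and the geometry changes.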
These data, along with more collected in virtual reality simulations, are being incorporated into AI algorithms that will eventually update the digital twin continuously during procedures to guide the cooperative robot. The concept has proved viable, Creighton says: in tests, the preliminary cooperative robot that he and his colleagues developed can guide a surgical drill to within about 2 millimeters. To be a viable tool in the operating room, that figure needs to shrink to 1 millimeter or less.
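The reasoning behind that threshold can be put in one line: whatever anatomical clearance exists, the robot’s positional uncertainty must be subtracted from it before a surgeon can trust what remains. A back-of-the-envelope sketch in Python, with a deliberately illustrative clearance figure:

```python
def usable_margin(anatomic_clearance_mm: float, robot_accuracy_mm: float) -> float:
    """Clearance a surgeon can actually count on once the robot's own
    positional uncertainty is subtracted (a worst-case view)."""
    return anatomic_clearance_mm - robot_accuracy_mm

# Illustrative numbers only: with a 2.5 mm corridor, 2 mm accuracy leaves
# almost nothing to work with, while 1 mm accuracy preserves a real margin.
for accuracy_mm in (2.0, 1.0):
    print(f"{accuracy_mm} mm accuracy -> {usable_margin(2.5, accuracy_mm)} mm usable")
```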
Eventually, such cooperative robots could aid surgeons in a variety of fields beyond otolaryngology–head and neck surgery. Johns Hopkins is one of the few places in the country, or even the world, with the combination of factors necessary to build this technology, Creighton says: a vast breadth of skill sets, high-tech facilities and close collaboration among researchers with different expertise.
“We have a culture here of developing innovations that really push the cutting edge,” he says. “The vast promise of AI is finally coming closer to reality.”