The future of robotics in surgery will involve an increasingly powerful virtual environment, where surgeons are able to see through the body and potentially work side by side with autonomous robotic assistants.
Rows of gruesome medical artifacts inhabit the shelves of the Hunterian Museum in London. From human skulls to terrifying surgical devices, the horrors and triumphs of hundreds of years of medical advancement sit silently on shelf after shelf over two floors.
Upstairs, nestled beside the sharp blades and heavy knives once used in surgical amputations, sits a rusty and ferocious-looking automatic circular saw. It would have been used for churning through human bone in the days before the widespread use of anaesthetics.
A prototype device invented in 1850 by William Winchester, the saw is more than a gruesome medical curiosity. It is an early example of the ways in which machines have been used to augment the natural human ability of surgeons.
Though it transferred the exertion of sawing from the surgeon to the machine, the saw was perilously uncontrollable. Around 15cm in diameter, with a large wooden handle, it had no kill switch or brake; no way of safely stopping it once it had been wound up and set into motion.
As the museum caption dryly notes, this simple machine, this primordial robot, "unsurprisingly didn't catch on".
Today, our tools for augmenting a surgeon's natural ability are light years away from Winchester's dumb saw. Surgical robots in the 21st century allow humans to operate on one another with superhuman precision. The elimination of jitters and shaking gives every surgeon a rock-steady hand, while even the most minute incisions are made possible by the robot's careful movements.
"[Robot-assisted surgery] is more controlled, more precise, better able to dissect tissue, better able to control bleeding and preserve important structures," says Ben Challacombe, consultant urologist at Guy's and St Thomas' Hospital in London. He has been using surgical robots since 2006, and has carried out almost 500 robot-assisted operations.
But more than simply augmenting our physical capabilities, the future of robotics in surgery will involve an increasingly powerful virtual environment, where surgeons are able to see through the body, control robots directly with their mind and potentially work side by side with autonomous robotic assistants.
Virtual reality surgery
The human abdomen, or the abdominal cavity in medical speak, is a messy place. Your intestines compete for space with your stomach, liver and kidneys in a swamp of organs and tissue.
Whereas once the standard surgical procedure would be to cut open your body, opening the door to this messy and cluttered broom closet, keyhole surgery is now widespread. Surgeons operate through tiny incisions in the body, reducing scarring and healing times.
A generation of robots, like the da Vinci surgical robot, has given surgeons the dexterity and mobility to operate in the confined and claustrophobic interior of the body. But combined with advances in imaging, the technology allows surgeons to see through the body like never before. "Augmented reality gives us the opportunity to superimpose information such as, here are vessels or here are nerves that you don't want to disturb, over the view of the actual tissue," says Catherine Mohr, Director of Medical Research at Intuitive Surgical, the company that makes the da Vinci surgical robot.
Not only can MRI and CAT scans be overlaid onto the surgeon's video feed, providing an information-rich 3D virtual environment that allows the surgeon to see structures that would otherwise be invisible, but specific areas can be "lit up", allowing them to be targeted easily by the surgeon. "[Take] for instance a kidney with a tumour," says Challacombe. "You could have the tumour in green and you could portray where, beneath the surface, the tumour was in the kidney."
The technique he's referring to is fluorescence imaging. Before the procedure, the tumour is injected with a fluorescent dye, which then shows up under certain wavelengths of light, like near-infrared or UV, depending on the dye.
It's analogous to a soldier using infrared goggles at night to spot the enemy. It lights up tumours, making them stand out against the background morass of flesh and connective tissue.
But more than simply a beacon, this technique can be used to track the movement of cancer to the lymph nodes, allowing the surgeon to stem the spread of the cancer before it becomes more serious.
The net effect is to create an augmented reality environment in which the surgeon is able to see far more than is visible on the surface. Indeed, researchers at Technische Universität München in Germany are using a virtual reality surgical headset to overlay live anatomical data over the surgeon's field of view.
One example is the introduction of visual constraints, which sounds comically similar to the board game Operation. "We can put a 2mm safety margin around [the tumour]," says Challacombe, "so the screen is going to go red or green if you go into that area." "[It's almost like] an idiot-proof, surgeon-proof safety guide," he adds -- whether it's reassuring to know that an operation is "surgeon-proof" is another matter entirely.
But while robots are helping surgeons to see and do more than would be humanly possible, the question of what we allow the robots to do themselves is an unanswered one, both technically and morally.
Smart, autonomous knives
The same messy, non-linear nature of soft tissue that makes imaging so useful also makes robotic autonomy exceedingly difficult.
In some surgeries, a degree of automation is already possible.
Certain eye surgeries, for example, involve the use of a pre-programmed robot that performs precise incisions on the cornea.
Orthopaedic procedures that require the milling away of bone use automated robots. It's in these types of surgeries, where the body is straightforwardly defined, and the outcomes and processes can be clearly measured, that robotic automation is most easily implemented.
But some in the field object to the idea that autonomy is a positive end in itself. "I personally don't think that fully autonomous surgical robots are the right approach to take," says Guang-Zhong Yang, Director of the Hamlyn Centre for Robotic Surgery at Imperial College London. "You have a human, which is pretty good in terms of decision-making and learning. You have a robot, which is good at doing precise movements," says Yang, who argues that one or the other acting independently is an inferior solution. "Why not use a combination of both?"
One of the ways to combine humans and robots is an experimental technique called "perceptual docking", where the eye movements of the surgeon are tracked in order to teach the robot the cognitive processes and decision-making paths involved in surgery. Developed at Imperial College London, the idea is the first step on the road to more invasive and direct neural interfacing between the robot and the surgeon.
Surgical robots today are, in essence, very advanced tools. With no decision-making functions of their own, they merely channel the surgeon's actions. In Yang's words, they are "complicated scalpels".
But after the technical obstacles have been surmounted, and these complicated scalpels become autonomous knives, new moral questions will arise, similar to those posed by the use of robots in warfare. "There's a debate at the moment about autonomous robots in warfare," says Richard Ashcroft, Professor of Bioethics at Queen Mary University of London. "[But] put crudely, warfare is much less difficult than surgery."
The stakes are equally high, of course. Both involve a responsibility for the lives, and deaths, of human beings. But while we might be comfortable with the use of autonomous robots in far-off battlefields -- out of sight, out of mind -- convincing family members to let a machine operate on their loved ones is perhaps more challenging.
Part of the problem is responsibility. A human surgeon is responsible for his or her actions, but for a robot guided by a cloud database of medical information and complicated learning algorithms, it's less clear where responsibility lies. "Where the surgeon makes a mistake, you can point to the man or the woman and say that it was you," says Ashcroft. "Where a robot makes a mistake, one of the difficulties for patients is that the location of responsibility becomes much more diffuse."
We're already beginning to see that argument play out in the US, where Intuitive Surgical is being sued in a series of medical negligence cases regarding the training of surgeons in the use of the da Vinci robot. The first of at least 26 lawsuits concerning the company was recently dismissed, but the litigation is a taste of what might be expected when we begin to invest robots with decision-making capabilities.
For the wealthy alone?
Despite the technological advances, surgical robots remain the reserve of wealthy hospitals in wealthy countries. Of the 2,500 da Vinci robots sold to date, almost 2,000 are in the United States. The robot costs up to $2 million (£1.3 million), not to mention the yearly running costs, the cost of the associated instruments -- batteries are not included, so to speak -- and the cost of training surgeons to use the complicated equipment.
There are currently 31 da Vinci robots in the UK, all of them in England. Chelsea and Westminster Hospital recently purchased one for use in paediatrics. It's the first da Vinci robot in the UK intended for sole use on children.
The Children's Hospital Trust Fund, a charity based at the hospital, spent four years raising the funds to purchase the robot.
Munther Haddad, a consultant paediatric surgeon and chair of the Fund, says that resource constraints in the NHS inevitably make big equipment purchases difficult.
And despite making the purchase, the Fund still needs to raise another £500,000 to cover equipment and running costs. The great potential of surgical robotics comes at a steep price. "Surgical robotics today is very similar to digital computers in the late 70s and early 80s," says Guang-Zhong Yang. "Certainly, the machines have come a long way, but they are big, bulky and they're expensive."
But as with computers, they will inevitably become smaller and more powerful in terms of their mechanical, sensing, and imaging capabilities.
Crucially, says Yang, increased affordability will democratise the technology: rather than being reserved for an elite few, it will be used routinely in all manner of surgical procedures.
And we'll one day look at today's robots with the same morbid curiosity with which we view the tools in the Hunterian Museum, wondering how we ever let surgeons operate with their bare hands.
This article was originally published by WIRED UK