Robots proved far better than human doctors at predicting whether a high-risk lesion is benign or a malignant precursor to breast cancer. An artificial intelligence (AI) system developed by MIT, Massachusetts General Hospital and Harvard Medical School correctly predicted 97% of malignant cancers in a 2017 test. That’s nearly a third better than current diagnostic procedures, which typically begin with a mammogram and end with interpretation by human doctors.
It’s not that doctors miss the high-risk lesions; rather, current procedures produce too many false positives, resulting in biopsies and unnecessary surgeries about 90% of the time. The test is cited in a new report from the Brookings Institution on how robotic medicine may (or may not) improve health care in the future.
The report aptly begins with the data sets and programs that new computing technology makes possible. The development of new machine learning and deep learning technologies to analyze data, make predictions and recommend treatments depends on the quality of the massive data sets available to medical professionals.
Those data sets are both a blessing and a curse. The blessing is that they exist and describe in full detail a patient’s health records, lab results, prescriptions and demographics. That completeness is also their shortcoming, according to Brookings’ authors Bob Kocher and Zeke Emanuel: “A major concern about all our health care datasets is that they perfectly record a history of unjustified and unjust disparities in access, treatments, and outcomes across the United States.”
Using these data sets to train AI models and robots could perpetuate those disparities, if not worsen them. Baking those disparities into an AI system changes neither access to treatment nor outcomes. Non-white Americans will continue to have higher rates of infant mortality, heart disease and other ailments. As the authors note, “Biases based on socioeconomic status may be exacerbated by incorporating patient-generated data from expensive sensors, phones, and social media.”
Kocher and Emanuel also note that robots, at least at first, can’t be trained to apply an experienced doctor’s clinical judgment in treating two patients differently, even when both present with the same symptoms or malady.
What the authors don’t say is that, left to their own devices, computer technologists often assume the right hardware and software can solve any of the world’s problems. Building trust in robotic medicine begins with recognizing that existing data sets are biased and need to be fixed.
That may sound easy, but if it were, someone would already have done it. The old computer industry axiom still holds: Garbage in, garbage out. Removing the garbage before it gets into the AI model is critical. That may still be years away.