PHOENIX – Businesses and research groups know a lot more about you than you might think.
The recent Cambridge Analytica data breach at Facebook, where a political data firm connected to Donald Trump’s 2016 presidential campaign was able to access private information on millions of users, shows the immediacy and power of data-collecting and aggregating technology.
Data in other industries is also at risk of being stolen, including medical records and other health information. Experts discussed the benefits and drawbacks of medical technology at a recent conference in Phoenix. Here are some key takeaways on the good and the bad of using artificial intelligence and other technology in health care:
Personalized care, but diversity challenges
The good news: Health providers and researchers can increasingly use data to determine what works and what doesn’t for patients, allowing for more personalized care.
Brad Tritle, a conference speaker from the health advisory firm Alira Health, recounted an experience working with the chairman of a Brooklyn company, whose daughter was diagnosed with Crohn’s disease about 10 years ago.
“Her father said to the doctor, ‘So tell me, what are the likely outcomes of these various options given my daughter’s age, etcetera?’” Tritle said. “And the doctor said, we don’t know. No one’s ever followed people through these treatment options.”
The company began an effort to collect and combine records on people with Crohn’s disease, the treatments they received and the various outcomes of those treatments.
The bad news: Using data to determine how to address illness and disease isn’t foolproof, warned University of Wisconsin Law School professor Pilar Ossorio.
She said an elaborate, expensive-to-produce algorithm that used data to determine the proper starting dose of warfarin, a blood thinner, was less accurate for African-Americans.
“That is almost certainly because they trained on data sets from places in the Midwest where the population of all people of color is, like, 3 or 4 percent,” Ossorio said. “So it matters what kind of data the algorithm is trained on.”
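In rough terms, the failure mode Ossorio describes can be sketched in a few lines of Python. Everything below is invented for illustration and has nothing to do with the actual warfarin algorithm: a dosing model is fit to a data set that is about 96 percent one group, then tested on both groups.

```python
# Hypothetical sketch: a dosing model trained on a skewed data set can perform
# worse for an underrepresented group. All numbers and names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, group):
    """Simulate patients whose true dose-response differs slightly by group."""
    age = rng.uniform(30, 80, n)
    weight = rng.uniform(50, 110, n)
    # Assumed, made-up coefficients: the minority group responds differently.
    slope = 0.10 if group == "majority" else 0.14
    dose = 2.0 + slope * weight - 0.02 * age + rng.normal(0, 0.3, n)
    return np.column_stack([age, weight]), dose

# Training set mirrors the quoted imbalance: roughly 96% majority, 4% minority.
X_maj, y_maj = simulate(960, "majority")
X_min, y_min = simulate(40, "minority")
X_train = np.vstack([X_maj, X_min])
y_train = np.concatenate([y_maj, y_min])

# Ordinary least squares fit, with an intercept column.
A = np.column_stack([np.ones(len(X_train)), X_train])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def mean_abs_error(X, y):
    pred = np.column_stack([np.ones(len(X)), X]) @ coef
    return np.abs(pred - y).mean()

X_test_maj, y_test_maj = simulate(500, "majority")
X_test_min, y_test_min = simulate(500, "minority")
print("dose error, majority test set:", round(mean_abs_error(X_test_maj, y_test_maj), 3))
print("dose error, minority test set:", round(mean_abs_error(X_test_min, y_test_min), 3))
```

On data like this, the model tracks the majority group’s dose-response and largely misses the minority group’s, so its error is larger for the patients it saw least.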
Owning your health data to avoid privacy breaches
The good news: Researchers and companies are looking into ways to improve medical data sharing, which can help personalize care. There are plenty of logistical barriers to get around: For example, privacy laws could mean a patient’s primary care provider wouldn’t have access to X-ray results from her visit to the emergency room for a broken knee.
But that could soon change. Apple is one company looking into ways to put medical data in one place so that doctors can better understand patients’ needs. The tech giant recently announced a feature, currently in beta testing, that combines Health app data with medical records.
Conference speaker Melissa Buckley, director of the California Health Care Foundation’s Health Innovation Fund, works with companies to improve access to care for underserved Californians through Medicaid. She said data sharing between clinics, hospitals and housing organizations can make care more economically efficient and more convenient for patients.
“They need to understand: Who’s out there, who’s my whole panel of patients, who’s either sickest or most likely to become sick in the near future, and how do I work with them in order to improve their health or lower their costs?” Buckley said.
From there, she said, the population could be segmented into risk groups that could help doctors determine the most accurate diagnoses and most effective treatments for individual patients.
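A minimal, hypothetical sketch of that kind of panel segmentation, with made-up patients, risk scores and thresholds, might look like this:

```python
# Hypothetical sketch of panel segmentation: rank a clinic's patients by a
# risk score and split them into tiers. Patients and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    risk_score: float  # e.g. predicted probability of hospitalization

def segment(panel, high=0.7, medium=0.3):
    """Group a panel of patients into high/medium/low risk tiers."""
    tiers = {"high": [], "medium": [], "low": []}
    for p in panel:
        if p.risk_score >= high:
            tiers["high"].append(p)
        elif p.risk_score >= medium:
            tiers["medium"].append(p)
        else:
            tiers["low"].append(p)
    return tiers

panel = [Patient("A", 0.82), Patient("B", 0.41), Patient("C", 0.05)]
for tier, patients in segment(panel).items():
    print(tier, [p.name for p in patients])
```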
The bad news: Privacy advocates say keeping all patient data in one place poses unique risks, because hackers who gained access to a large network of information – from hospitals, specialists and government entities – could have greater potential for damage than if the data sets were kept separate.
“One day there’s going to be a major breach and people are going to say, ‘What? I didn’t even know my data was there,'” Tritle said.
Tritle, who has expertise in information privacy, said giving patients ownership over their own information – maybe even by requiring companies to pay for personal data – could reduce risks.
“Why should some other third party be making money off my data?” Tritle said. “If you are able to engage patients and consumers up front about their data, you can actually help create more opportunities for research, for health and wellness.”
Tritle said privacy policies for health, finance and social media applications should have a standardized summary at the top, similar to the FDA’s requirement for nutrition facts on packaged food items. That could help patients understand how their data would be used.
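No such label exists today; a hypothetical sketch of what a standardized, machine-readable summary might contain, with invented field names and values, could look like this:

```python
# Hypothetical "nutrition facts"-style privacy summary for a health app.
# Field names and values are invented; no such standard currently exists.
from dataclasses import dataclass

@dataclass
class PrivacySummary:
    data_collected: list[str]   # categories of personal data the app stores
    shared_with: list[str]      # third parties that receive the data
    sold: bool                  # whether the data is sold
    retention_days: int         # how long the data is kept
    patient_can_delete: bool    # whether the user can request deletion

summary = PrivacySummary(
    data_collected=["heart rate", "medications", "lab results"],
    shared_with=["insurer", "analytics vendor"],
    sold=False,
    retention_days=365,
    patient_can_delete=True,
)
print(summary)
```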
Machines hold human doctors accountable
The good news: Through an emerging technology called “machine learning,” researchers can feed data to a computer that then generates original, and sometimes unpredictable, solutions to problems.
In health care, doctors could potentially use this type of artificial intelligence to diagnose patients. Programmers can feed algorithms data on healthy and unhealthy scans, and the system can then use that data to come up with its own methods for detecting problems.
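A minimal sketch of that workflow, assuming scikit-learn is available and that each scan has already been reduced to a small feature vector (real systems apply deep networks to the images themselves), might look like this:

```python
# Hypothetical sketch: a classifier is fit on scans labeled healthy/unhealthy,
# then estimates the probability that a new scan shows a problem. Each "scan"
# here is a made-up feature vector; scikit-learn is assumed to be installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic "scans": 200 feature vectors, labeled 0 = healthy, 1 = nodule.
healthy = rng.normal(loc=0.0, scale=1.0, size=(100, 5))
nodule = rng.normal(loc=1.5, scale=1.0, size=(100, 5))
X = np.vstack([healthy, nodule])
y = np.array([0] * 100 + [1] * 100)

model = LogisticRegression().fit(X, y)

# A new, unlabeled scan: the model outputs a probability that it contains a nodule.
new_scan = rng.normal(loc=1.2, scale=1.0, size=(1, 5))
print("probability of nodule:", model.predict_proba(new_scan)[0, 1])
```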
Radiology companies are developing technology to determine whether a patient has lung cancer. Conference speaker Barani pointed to a study in which a system developed to detect cancerous nodules used data to learn complex concepts on its own – and proved more consistent than a human radiologist.
Data-driven technology is useful in direct care applications, Barani said, and provides a new way to hold doctors accountable.
“These algorithms allow us to measure a subjective aspect of the physician’s performance,” Barani said. “So the so-called ‘art of medicine’ – you could actually measure performance with more objective data.”
The bad news: Machine learning doesn’t always lead to the correct answers, though, Ossorio said. Sometimes, it takes a human health care professional to detect a machine’s mistakes.
She pointed out one instance where researchers “trained” an algorithm to predict fatality risk for patients with pneumonia. The algorithm used data that included asthma patients, and found that these patients were less likely to die from pneumonia than others with the same diagnosis.
“Every single doctor who looked at that, who treats people with pneumonia, thought that was ridiculous and clearly wrong,” Ossorio said.
Researchers eventually discovered the problem with the data: Patients who already had a lung problem were more likely to pay attention to their breathing and to notice symptoms sooner, which improved their survival rate.
“It wasn’t, probably, that the algorithm was making a mistake,” Ossorio said. “The algorithm was learning something real in the data.”
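The pattern is easy to reproduce with made-up numbers: if asthma patients in the training data die less often because they seek care sooner, any model fit to that data will treat asthma as protective. A hypothetical sketch:

```python
# Hypothetical reconstruction of the pattern Ossorio describes: in made-up
# training data, pneumonia patients who also have asthma die less often, so a
# model fit to that data learns that asthma lowers risk, even though the
# signal reflects behavior and care, not biology.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
has_asthma = rng.random(n) < 0.15

# Assumed mortality rates: asthma patients seek care sooner, so fewer die.
death_prob = np.where(has_asthma, 0.03, 0.08)
died = rng.random(n) < death_prob

print("mortality, asthma patients:   ", round(died[has_asthma].mean(), 3))
print("mortality, no-asthma patients:", round(died[~has_asthma].mean(), 3))
# Any model trained on this data will assign asthma a protective effect,
# which is real in the data but dangerously misleading as a triage rule.
```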
She advised questioning how, and with what data, researchers use artificial intelligence to detect problems.