AI may not care whether humans live or die, but tools like ChatGPT will still affect life-and-death decisions — once they become a standard tool in the hands of doctors. Some are already experimenting with ChatGPT to see if it can diagnose patients and choose treatments. Whether this is good or bad hinges on how doctors use it.
These systems are developing image processing capacity as well. At this point you still need a real doctor to palpate a lump or assess a torn ligament, but AI could read an MRI or CT scan and offer a medical judgment. Ideally AI wouldn’t replace hands-on medical work but enhance it — and yet we’re nowhere near understanding when and where it would be practical or ethical to follow its recommendations.
Andrew Beam, a professor of biomedical informatics at Harvard, has been amazed at GPT-4's feats, but told me he can get it to give him vastly different answers by subtly changing the way he phrases his prompts. For example, it won't necessarily ace medical exams unless you instruct it to — say, by telling it to act as if it's the smartest person in the world.
“The amazing thing, and the thing I think few people predicted, was that a lot of tasks that we think require general intelligence are autocomplete tasks in disguise,” he said. That includes some forms of medical reasoning.
Isaac Kohane, a physician and chairman of the biomedical informatics program at Harvard Medical School, had a chance to start experimenting with GPT-4 last fall. He was so impressed that he rushed to turn it into a book, The AI Revolution in Medicine: GPT-4 and Beyond, co-authored with Microsoft’s Peter Lee and former Bloomberg journalist Carey Goldberg. One of the most obvious benefits of AI, he told me, would be in helping reduce or eliminate hours of paperwork that are now keeping doctors from spending enough time with patients, something that often leads to burnout.
For Kohane, the value was in offering a second opinion — not replacing him — but its performance raises the question of whether getting just the AI opinion is still better than nothing for patients who don't have access to top human experts.
You can also get GPT-4 to give different answers by asking it to pretend it’s a doctor who considers surgery a last resort, versus a less-conservative doctor. But in some cases, it’s quite stubborn: Kohane tried to coax it to tell him which drugs would help him lose a few pounds, and it was adamant that no drugs were recommended for people who were not more seriously overweight.
Disclaimer: This is a Bloomberg Opinion piece, and these are the personal opinions of the writer. They do not reflect the views of www.business-standard.com or the Business Standard newspaper.