ChatGPT’s Latest Version Passes US Medical Licensing Exam Questions

A Harvard computer scientist who is also a physician says GPT-4 has better clinical judgment than many medical professionals.

He said the chatbot can diagnose rare conditions much as he would.

GPT-4 can still make mistakes, and it has not taken the Hippocratic oath.

Dr. Isaac Kohane, a computer scientist and physician at Harvard, teamed up with two colleagues to put GPT-4 through its paces. Their primary objective was to see how OpenAI’s newest artificial intelligence model performs in a medical setting.

Dr. Isaac Kohane delivers his verdict on GPT-4’s clinical judgment in the book “The AI Revolution in Medicine”:

“I’m stunned to say: better than many doctors I’ve observed.”

The book was co-written by Carey Goldberg, an independent journalist, and Peter Lee, Microsoft’s vice president of research. Although Microsoft has invested billions of dollars in OpenAI’s technologies, the authors say that neither Microsoft nor OpenAI had any editorial oversight of the book.

Kohane says that GPT-4, available to paying subscribers since March 2023, answers US medical licensing exam questions correctly more than 90% of the time. It is a far better test-taker than the earlier GPT-3 and GPT-3.5 models, and it even outperforms some licensed doctors.

GPT-4 is not just an excellent test-taker and source of facts; it is also a highly effective translator. The book shows it accurately translating discharge instructions for a Portuguese-speaking patient and distilling dense technical jargon into language sixth graders could easily understand.

In vivid examples, the authors show GPT-4 giving doctors useful advice on how to talk to their patients in a caring, straightforward manner. It can also read lengthy reports or studies and summarize them in seconds, and it can explain its reasoning through problems in a way that suggests something like human-style intelligence.

When the book’s authors asked GPT-4 whether it could reason causally, it answered that it can only detect patterns in data and claims no true understanding or intent. Despite these limitations, Kohane found in his testing that GPT-4 can mimic how physicians diagnose conditions with remarkable, though still imperfect, accuracy.

How GPT-4 AI Can Help Doctors Diagnose And Treat Patients

Dr. Isaac Kohane is a medical professional and computer scientist who has dedicated his career to combining medicine and artificial intelligence.

In the book, Kohane walks GPT-4 through a clinical thought experiment based on a real case he handled involving an infant patient some years earlier.

He gave the bot a few key details about the baby from a physical exam, along with ultrasound results and hormone levels, and it correctly diagnosed a rare condition known as congenital adrenal hyperplasia, just as Kohane himself would have, with all his years of study and experience.

The diagnosis left the doctor both impressed and alarmed.

Dr. Isaac Kohane went on to say:

“On the one hand, I was having a sophisticated medical conversation with a computational process,” he wrote, “on the other hand, just as mind blowing was the anxious realization that millions of families would soon have access to this impressive medical expertise, and I could not figure out how we could guarantee or certify that GPT-4’s advice would be safe or effective.”

GPT-4 Isn’t Always Right: Its Mistakes and Ethical Challenges

GPT-4 is known to make mistakes, and the book contains numerous examples of them. They range from minor clerical errors, like misstating a BMI the bot had correctly calculated moments earlier, to math mistakes, such as inaccurately “solving” a Sudoku puzzle or forgetting to square a term in an equation.

The errors are often subtle, and the system tends to insist that it is right even when challenged. A single mistaken digit or misjudged measurement could lead to serious problems in prescribing medication or making a diagnosis.

Like the GPT models before it, GPT-4 can also “hallucinate,” the technical term for when an AI invents answers or disobeys requests.

GPT-4 has said that while it does not knowingly intend to deceive or mislead anyone, it can make mistakes when working from incomplete or inaccurate data. It has also acknowledged that it does not have the clinical judgment or ethical responsibility of a human medical professional.

The book’s authors propose a cross-check technique: start a new session and ask GPT-4 to review its own work with a “fresh set of eyes.” This can sometimes uncover errors, though GPT-4 is somewhat reluctant to admit when it has been wrong. Another option is to instruct the bot to show its work so a human can verify it, as in the sketch below.
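As a purely illustrative example, here is a minimal sketch of what that fresh-eyes cross-check could look like using the OpenAI Python SDK; the model name, prompts, and the ask() helper are assumptions made for illustration, not the book authors’ actual implementation.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def ask(prompt: str) -> str:
    # Each call is a stand-alone session: no shared conversation history.
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# First pass: ask the question and request step-by-step work.
question = "A patient weighs 70 kg and is 1.75 m tall. Calculate the BMI, showing each step."
first_answer = ask(question)

# Cross-check: a fresh session reviews the answer with no memory of writing it.
review = ask(
    "Review the following answer for factual or arithmetic errors and list any you find.\n\n"
    f"Question: {question}\n\nAnswer: {first_answer}"
)
print(review)

Running the review in a brand-new session is the point: the second call has no investment in defending the first answer, which is about as close as a stateless model gets to a fresh set of eyes.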

According to the authors, GPT-4 could free clinicians to spend more time and attention on their patients instead of their computer screens. But they insist we must start imagining a world in which machines grow ever more intelligent, perhaps eventually surpassing human intelligence in many domains, and thinking hard about how that future should work.

GPT-4’s ability to pass US medical licensing exam questions and to diagnose a rare condition in seconds is a significant breakthrough that could transform healthcare delivery. The achievement underscores the importance of collaboration between the technology and medical sectors in harnessing AI to drive innovation in medicine.

Source: Insider
