Is ChatGPT better at responding to patients than human doctors?

Could it be time for doctors to worry? A new study suggests that ChatGPT may be better at responding to patients than human doctors.

ChatGPT can be more empathetic than human doctors, according to a new study comparing rated responses to patient questions. While many people assume that artificial intelligence (AI) will give blunt, purely fact-based advice when faced with health problems, it actually appears to outrank real doctors when it comes to tact.

The idea of using AI to make healthcare accessible to everyone has come up many times, as language models have shown impressive accuracy. But whether they have the empathy needed to speak directly to a patient has remained one of the most pressing questions: medicine requires people skills that take cultural and social context into account, and language models have proven poor at such tasks in the past.

But one study set out to measure how people actually respond to an AI acting as a health “expert.”

Researchers at the University of California San Diego took a random sample of 195 patient questions from Reddit, each of which had been answered by a verified physician. The team then posed the same questions to ChatGPT and collected its answers before randomly pairing them with the original human responses. These blinded pairs were given to licensed healthcare professionals, who rated which response was better, along with the quality of the information and the empathy shown toward the patient.
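
As an illustration only, here is a minimal Python sketch of that blinding step, assuming a simple in-memory list of question/answer records; the record fields, sample text, and make_blinded_pair helper are hypothetical, not the study’s actual data or code.

```python
# Hypothetical illustration of the study's blinding step: each record pairs a
# patient question with the physician's reply and ChatGPT's reply, and the two
# answers are shuffled so raters cannot tell which source produced which text.
import random

records = [
    {
        "question": "I've had a persistent cough for three weeks. Should I worry?",
        "physician_answer": "A cough lasting more than three weeks should be checked...",
        "chatgpt_answer": "I'm sorry to hear you've been dealing with this cough...",
    },
    # ...the actual study used 195 such question/answer records
]

def make_blinded_pair(record):
    """Return the two answers in random order plus a hidden source key."""
    answers = [
        ("physician", record["physician_answer"]),
        ("chatgpt", record["chatgpt_answer"]),
    ]
    random.shuffle(answers)
    blinded = [text for _, text in answers]   # shown to the raters
    key = [source for source, _ in answers]   # kept aside for later scoring
    return blinded, key

for record in records:
    blinded, key = make_blinded_pair(record)
    # blinded -> presented to licensed healthcare professionals for rating
    # key     -> used afterwards to attribute each rating to its source
```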

Raters preferred ChatGPT’s responses 78.6 percent of the time

Surprisingly, raters favored ChatGPT’s responses, which were judged to be of higher quality and were often much longer, over the physicians’ responses 78.6 percent of the time. The size of the gap was striking: about 80 percent of the chatbot’s responses were rated “good” or “very good,” compared with just 22 percent of the doctors’.

And when it comes to empathy, the chatbot continued to outpace doctors. While 45 percent of ChatGPT’s responses were rated as “empathetic” or “very empathetic,” only 4.6 percent of doctors’ responses were rated the same.

The results suggest ChatGPT could be a highly effective online health assistant, but the research has design issues worth noting. First, the physicians’ responses came from an online forum, where doctors answer in their spare time and are completely disconnected from the person asking the question. That setting makes brief, impersonal replies far more likely, which could explain some of the empathy gap.

Also, ChatGPT is very efficient at scanning and relaying information found online, but it cannot think or reason logically. Physicians may face novel cases that fall outside the existing body of case studies, and without that solid grounding, ChatGPT may give incorrect advice or fail to understand the problem at all.

ChatGPT may therefore be best suited not as the sole point of contact with healthcare, but as an excellent way to triage cases and prioritize workloads for already overwhelmed doctors. The researchers suggest the chatbot could draft responses that physicians then edit for the best results.
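
As a rough illustration of that draft-then-edit workflow, here is a minimal Python sketch using the OpenAI client library; the model name, prompts, and draft_reply helper are illustrative assumptions, not the researchers’ actual setup.

```python
# Hypothetical sketch of a draft-then-edit workflow: the model drafts a reply
# and a licensed physician reviews and edits it before anything reaches the
# patient. Requires the openai package (>=1.0) and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_reply(patient_question: str) -> str:
    """Ask the model for a draft answer intended for physician review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Draft an empathetic, medically cautious reply to the "
                    "patient's question. A licensed physician will review "
                    "and edit this draft before it is sent."
                ),
            },
            {"role": "user", "content": patient_question},
        ],
    )
    return response.choices[0].message.content

draft = draft_reply("I've had a mild headache every morning this week.")
print(draft)  # the physician edits this draft before replying to the patient
```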

The study was published in the journal JAMA Internal Medicine.
