However, its diagnostic accuracy requires thorough evaluation before it can be used effectively in the future.

Dr. Daisuke Horiuchi and Associate Professor Daiju Ueda from Osaka Metropolitan University School of Medicine led a research team to compare the diagnostic accuracy of ChatGPT with that of radiologists.

The study involved 106 musculoskeletal radiology cases, including patient medical histories, images, and imaging findings.

For the study, case information was entered into two versions of the AI model, GPT-4 and GPT-4 with vision (GPT-4V), to generate diagnoses. The same cases were presented to a radiology resident and a board-certified radiologist, who were asked to determine the diagnoses.

The results revealed that GPT-4 outperformed GPT-4V and matched the diagnostic accuracy of radiology residents. However, ChatGPT's diagnostic accuracy was found to be lower than that of board-certified radiologists.

Dr. Horiuchi commented on the findings, saying: “While the results of this study indicate that ChatGPT may be useful for diagnostic imaging, its accuracy cannot be compared to that of a board-certified radiologist. Furthermore, this study suggests that its performance as a diagnostic tool must be fully understood before it can be used.”

He also emphasized the rapid advances in generative AI, noting that it is expected to become an auxiliary tool in diagnostic imaging in the near future.

The study's findings were published in the journal European Radiology, highlighting both the potential and the limitations of generative AI in medical diagnosis and underscoring the need for more research before widespread clinical adoption in this fast-growing technological era.