ChatGPT's performance in solving a medical residency exam: an indicator of the evolution of artificial intelligence in medical education?
Date
Subject
ChatGPT
artificial intelligence
medical education
natural language
medical residency exam
Language:
Journal Title
Journal ISSN
Volume Title
Publisher
Instituto Tecnológico de Santo Domingo (INTEC)
Introduction:
ChatGPT (Generative Pre-trained Transformer) is a natural language processing tool developed by OpenAI that uses the GPT language model to generate responses resembling natural human language. This technology has demonstrated its ability to complete complex tasks and has attracted attention in education, especially in medicine. The aim of this study is to evaluate the performance of ChatGPT in answering questions from the 2023 medical residency specialty entrance exam (ENURM) in the Dominican Republic.
Methods:
The 100 multiple-choice questions from the 2023 ENURM exam were entered into ChatGPT 3.5 with the instruction to "select the correct answer to the following ENURM 2023 exam question." A descriptive cross-sectional study was conducted to evaluate the tool's performance.
Results:
ChatGPT achieved 77% accuracy in its responses, while 23% of the questions were answered incorrectly. Broken down by question type, ChatGPT showed an effectiveness of 74.6% on direct questions and 88.2% on clinical cases. The specialties in which incorrect answers were identified include hematology, gastroenterology, cardiology, anatomy, genetics, surgery, pediatrics, gynecology, and infectious diseases. Despite these limitations, it is worth noting that ChatGPT's accuracy exceeded the overall average of medical residency applicants.
Conclusions:
ChatGPT performed well in answering ENURM exam questions. Even with its limitations, this tool can be useful for natural language processing in medical education, but it cannot replace traditional teaching and clinical experience.
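The results report overall accuracy alongside a breakdown by question type (direct questions vs. clinical cases). A minimal sketch of how such a breakdown could be computed from a graded answer log is shown below; the `results` data here is hypothetical and purely illustrative, not the study's actual grading record.

```python
from collections import defaultdict

# Hypothetical grading log: (question_type, was_correct) pairs.
# Illustrative only — not the actual ENURM 2023 data.
results = [
    ("direct", True), ("direct", False), ("direct", True),
    ("clinical_case", True), ("clinical_case", True), ("clinical_case", False),
]

def accuracy_by_type(results):
    """Return (overall accuracy, per-question-type accuracy) as fractions."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for qtype, ok in results:
        totals[qtype] += 1
        if ok:
            correct[qtype] += 1
    overall = sum(correct.values()) / sum(totals.values())
    per_type = {t: correct[t] / totals[t] for t in totals}
    return overall, per_type

overall, per_type = accuracy_by_type(results)
```

With 100 questions graded this way, the overall fraction correct corresponds to the 77% figure, and the per-type fractions to the 74.6% (direct) and 88.2% (clinical case) figures reported above.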
Description
Type
info:eu-repo/semantics/article
info:eu-repo/semantics/publishedVersion
Source
Science and Health; Vol. 8 No. 2 (2024): Science and Health, april-june; 47-55
Ciencia y Salud; Vol. 8 Núm. 2 (2024): Ciencia y Salud, abril-junio; 47-55
2613-8824
2613-8816
10.22206/cysa.2024.v8i2