Cornell Current Club

The Future of AI in Healthcare

Anwei Mi ‘25

Potential for AI in the Healthcare Industry

The implementation of AI in healthcare has the potential to revolutionize medicine. Nursing occupations are projected to grow by 39% by 2030, even allowing for 10% of nursing activities to be freed up by automation. Offloading routine tasks would be a significant step toward improving working conditions, since the burnout rate for healthcare professionals is high. Furthermore, data show that 250,000 diagnostic errors occur each year, and these mistakes can have harmful consequences.

Applications in Healthcare

To test the precision of AI, researchers at Drexel University used the AI technology behind ChatGPT to analyze speech, and they found that the system correctly identified Alzheimer's patients 80% of the time. This level of accuracy in detecting Alzheimer's offers a promising outlook on AI's potential to be at the forefront of healthcare. While there is no definitive cure for Alzheimer's disease, early diagnosis is important because there are methods to delay the onset of some of its effects. To demonstrate how AI technology can be used in the diagnosis phase of care, the dean of the Stanford University School of Medicine described a three-step process. First, a physician notices a growth on a patient's skin during a visit and takes a picture of it with a smartphone. Next, AI analyzes the image and provides an in-depth list of possible diagnoses along with the likelihood that the growth represents a malignancy, a cancerous tumor. Finally, the physician can instantly make appropriate referrals and advise the patient to seek treatment if necessary.
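
For readers curious what the image-analysis step of that workflow might look like in code, the sketch below is a minimal, hypothetical illustration in Python. It assumes a binary skin-lesion classifier (benign vs. malignant) built on a standard PyTorch backbone; the model, weights, file name, and decision threshold are illustrative assumptions, not the actual system the Stanford dean described.

```python
# Hypothetical sketch: estimate malignancy probability from a smartphone photo.
# The ResNet backbone, labels, and 0.5 threshold are illustrative assumptions;
# a real clinical tool would use a validated, fine-tuned model.
import torch
import torchvision.transforms as T
from torchvision.models import resnet18
from PIL import Image

def load_placeholder_model() -> torch.nn.Module:
    model = resnet18(weights=None)                          # placeholder weights only
    model.fc = torch.nn.Linear(model.fc.in_features, 2)     # outputs: [benign, malignant]
    model.eval()
    return model

PREPROCESS = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def malignancy_probability(model: torch.nn.Module, image_path: str) -> float:
    """Return the model's estimated probability that the photographed lesion is malignant."""
    image = Image.open(image_path).convert("RGB")
    batch = PREPROCESS(image).unsqueeze(0)                  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
        return torch.softmax(logits, dim=1)[0, 1].item()

if __name__ == "__main__":
    model = load_placeholder_model()
    p = malignancy_probability(model, "lesion_photo.jpg")   # hypothetical smartphone photo
    print(f"Estimated malignancy probability: {p:.2f}")
    if p > 0.5:                                              # illustrative triage threshold
        print("Flag for specialist referral.")
```

In this sketch, the model only returns a probability; the referral decision remains with the physician, mirroring the workflow described above.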

Obstacles to Successful Implementation

Despite the magnitude of AI's potential benefits in healthcare, there are undoubtedly downsides to implementation. One major concern is AI's capacity for significant intrusions into privacy. Because healthcare providers store and transmit large quantities of personal patient data, they can be susceptible to data breaches and become targets for cybercriminals. This raises ethical concerns about how to keep data confidential and out of the hands of malicious actors. Another concern is the introduction of bias into AI algorithms. AI systems learn from the data on which they are trained, and they can absorb the biases present in those data. For instance, if the data available to AI are gathered in academic medical centers, the resulting systems will know less about patients from marginalized populations, who have less access to such care; consequently, AI may provide less effective treatment analysis for them. Similarly, if speech-recognition AI systems are used to transcribe clinical notes, they may perform worse when the provider is of a race or gender underrepresented in the training data.