JUST ASK CHAT...FOR A CANCER SCREENING?
Art by Max Weinstein
Artificial intelligence burst into public discourse in 2022 with the release of OpenAI’s ChatGPT. Today, bringing up large language models (LLMs) or any other generative platform is a pretty good way to spark a few arguments, or at least some suppressed groans in a crowded room.
Try at your own risk.
AI has actually been in use for decades, especially in technical fields, just under different names. Predictive modeling, machine learning, whatever the title: it all boils down to the same basic principles. These technologies have been optimizing patient outcomes in healthcare since at least the 1980s, and modern developers with similar aims have joined the frenzy of the last three years.
We seem to hear endless chatter about Sora, Gemini’s Nano Banana and Claude AI for students, yet virtually nothing about how AI can drive scientific progress.
Generative models tend to be backed by billion-dollar companies, which already have a wide reach and the media influence to oversaturate discourse with news of their developments. By comparison, smaller clinical models may rely on data from a single hospital, making fewer waves and attracting less attention.
Voices on either side of the AI debate tend to take extreme stances on the benefits or harms of generative models. Whether positive or negative, it is irresponsible to champion a hardline viewpoint on AI technology without a full, critical understanding of its breadth of applications. That means looking past the shiny new text, image and video generations, as well as the almost-too-good-to-be-true medical advancements that occasionally make headlines.
While there have been breakthroughs in medical care powered by predictive and classification AI models (which are trained on patient data, not the text that LLMs learn from), emerging technologies still need to be rigorously evaluated before being paraded as miracles. The way to do this is through randomized, pragmatic testing of AI tools: patients should be randomly split into groups that receive either usual care or usual care integrated with AI, then analyzed for outcomes, said Daniel Byrne, who teaches an AI in Healthcare course at Johns Hopkins University.
“In healthcare, with AI, it’s like people are driving in the dark with the lights off. And I’m saying, you need to turn on the headlights and know where you’re going,” Byrne said. “Patients are going to flock to hospitals that do it the right way, and then the others are going to have to follow them.”
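To make the design Byrne describes a little more concrete, here is a minimal sketch in Python. Every number in it is a made-up placeholder, and a real pragmatic trial would involve pre-registration, power calculations and proper statistical testing; the code only illustrates the idea of randomizing patients into arms and comparing outcomes.

```python
# Toy sketch of a randomized, pragmatic evaluation of an AI tool.
# All rates and counts are hypothetical placeholders, not real clinical data.
import random
import statistics

random.seed(0)

# Randomly assign 1,000 simulated patients to one of two arms.
patients = list(range(1000))
random.shuffle(patients)
usual_care = patients[:500]    # usual care only
ai_assisted = patients[500:]   # usual care integrated with an AI tool

def simulate_complication(rate: float) -> int:
    """Return 1 if a simulated complication occurs, else 0."""
    return 1 if random.random() < rate else 0

# Placeholder complication rates for each arm.
usual_outcomes = [simulate_complication(0.12) for _ in usual_care]
ai_outcomes = [simulate_complication(0.09) for _ in ai_assisted]

print("Usual care complication rate:", statistics.mean(usual_outcomes))
print("AI-assisted complication rate:", statistics.mean(ai_outcomes))
```

In a real study, the comparison at the end would be a formal statistical test on pre-specified outcomes rather than a simple printout of rates.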
Byrne has been involved in the development of several AI tools across the healthcare industry, from a 1983 model that improved surgical patients’ survival odds to a 2024 tool that cut autoimmune disease diagnosis times. He says AI-driven prediction of complications and diseases will allow for more effective treatment, saving lives by reducing the number of preventable medical errors.
Stepping away from predictive models into the realm of text generation for patient care can be problematic. LLMs trained on medical literature lack the critical thinking skills to evaluate flaws in research design or the potential for false conclusions, and they will often spit out inaccuracies.
This should not be taken as a blanket dismissal of all AI as unreliable—rather, it is a further reason to separate types of technologies when discussing their merits.
“There are people on the spectrum who say AI is evil and it’s biased, we need to regulate it, we need to stop it,” Byrne said. “On the other end of the spectrum, there are people who say AI is magical. We don’t even need to study it, we just need to use it. That leads to short-term success, but long-term success requires being in the middle of this scientific balanced approach.”
What much of this comes down to is establishing AI literacy, an increasingly important skill in any industry. This cannot be achieved by refusing to engage with any form of the technology, nor by embracing each new development without pausing to consider its implications.
The all-consuming conversation around generative AI has also impacted Syracuse University students, faculty and staff.
Nolan Singh, a sophomore at SU, is currently contesting an AI violation with the Academic Integrity Office. Singh used ChatGPT to create an outline of his thoughts for a writing assignment, which he disclosed in accordance with his professor’s stated AI policy. Afterward, the professor sent an email voicing her concern that he had used AI for the writing itself, met with him to discuss it, then filed a violation. The same thing happened with a second assignment.
“She didn’t say it was ever flagged for AI, she was really just saying, ‘I really just have a feeling that you used AI,’” Singh said. “I basically said, ‘I’m going to take your accusations as just a positive testament to my writing abilities,’ to which she kind of backed off.”
This is just one instance where people’s obsession with generative AI has bled into academic decision-making. Beyond classwork, Singh said generative platforms are also the form of AI most commonly discussed among his friends and peers. For those in creative industries, these tools represent a threat to the integrity of human art.
On the other hand, tools like AlphaFold (Google DeepMind’s 3D protein-structure modeling software) and Anara can be used in scientific settings, potentially building more positive relationships between students and the technology.
Joe Martino, a medicinal chemistry senior, uses Anara to help summarize and understand research papers. The platform pulls information only from what the user uploads and is meant to aid analysis of a topic rather than replace the user’s thought process.
“There are a lot of uses of AI that are just a waste, like when people just sit and talk to ChatGPT, that’s weird and a waste of resources,” Martino said. “If you’re driving scientific discovery—let’s say somebody uses it and can, hypothetically, cure cancer—that’s a trade that could very well be worth it, for the right reasons.”