One of the most frequent questions we've gotten at the library in recent months concerns AI. What is AI? Is AI the future? Are we all about to be replaced by robots? In this month's comic strip, we simplify AI in order to make sense of what's realistic, what's plausible, and what's still science fiction.
Comic written and drawn by: Rebecca Kyser
Citations:
1. Christian B. The Alignment Problem: Machine Learning and Human Values. W.W. Norton & Company; 2021.
For a moment, let’s entertain a hypothetical. Say you have an excellent paper on your hands about the impact of smoke on the lungs. Your team is about to submit it for publication: pretty exciting! When you get your paper back from the publisher, it’s mostly good news: they’re willing to publish it, with the caveat that you add a diagram of the lungs as a visual aid showing the systems impacted. The problem? You have no idea where you could acquire an image that suits the task without potentially violating copyright.
Faced with this conundrum, one of your coauthors suggests a solution: why not generate one? They have a subscription to Midjourney, the AI software that generates images from text. Why not give Midjourney a summary of the diagram you need, have it generate the image, and then use that for your paper? After checking the journal’s policies on AI (it’s allowed with disclosure), you do just that, glad to have quickly moved past that stumbling block.
Pretty great, right? It sure sounds like it, until you take a look at the images Midjourney generated. Because on closer inspection, there are some problems.
Below is an image I generated in Copilot for this blog post. I didn’t ask it for something as complicated as a diagram of how smoking impacts the lungs; instead, I asked for just a diagram of human lungs. Here is what I got, with my notes attached.
Alright, so this might not be our best image. Thankfully, we have others. Let’s take a look at another image from the same prompt and see if it does a better job.
So what happened here? To explain how this image went terribly wrong, it’s best to start with an explanation of how AI actually works.
When we think of AI, we generally think of movies like The Terminator or The Matrix, where robots can fully think and make decisions, just like a human can. As cool (or terrifying, depending on your point of view) as that is, such highly developed forms of artificial intelligence still exist solely in the realm of science fiction. What we call AI now is generally generative AI. To vastly simplify the process, generative AI works as follows: you feed a computer a large amount of information that resembles what you want it to generate. This is known as “training data.” The AI then attempts to produce new images based on the original training data. (Vox made a video explaining this process much better than I can.) So, for example, if I feed an AI pictures of cats, over time it identifies aspects of cats across the photos: fur, four legs, a tail, a nose, etc. After a period of time, it can then generate new images based on those qualities. And that’s how we get websites like “These Cats Do Not Exist.”
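To make that loop concrete, here is a deliberately tiny sketch in Python. This is not how Midjourney or Copilot actually work (they rely on large neural networks), and every feature and number below is invented; the point is only to show how a program can churn out plausible-looking new examples using nothing but statistics gathered from its training data.

```python
# A toy version of "learn patterns from training data, then generate."
# Real image generators use deep neural networks; this sketch uses plain
# NumPy and invented numbers to show the same basic loop.
import numpy as np

rng = np.random.default_rng(seed=42)

# Pretend "training data": each row is one cat, described by three
# made-up features (fur length, number of legs, tail length).
training_data = rng.normal(loc=[3.0, 4.0, 25.0],
                           scale=[1.0, 0.1, 3.0],
                           size=(200, 3))

# "Training": the model learns only summary statistics of the features.
# It has no concept of what a cat actually is.
feature_means = training_data.mean(axis=0)
feature_stds = training_data.std(axis=0)

# "Generation": sample new cats that match those statistics.
generated_cats = rng.normal(feature_means, feature_stds, size=(5, 3))
print(generated_cats)
```

Nothing in this “model” knows what a cat is; it only matches numbers to numbers. That is the toy version of why a generator can hand you a cat with, say, 3.8 legs, or the extra paws and garbled anatomy described below.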
If you take a look at “These Cats Do Not Exist” you might notice something interesting: the quality of fake cat photos varies widely. Some of the cats it generates look like perfectly normal cats. Others appear slightly off; they might have odd proportions or too many paws. And a whole other contingent appears as what can best be described as eldritch monstrosities.
The reason for the errors in both our images above and our fake cats is that the AI doesn’t understand what we are asking it to make. The bot has no concept of lungs as an organ, or of cats as a creature; it merely recognizes aspects and characteristics of those concepts. This is why AI art and AI images can look impressive on the surface but fall apart under any scrutiny: the robot can mimic broad patterns well enough, but the details are much harder to replicate, especially when they vary so much between images. For example, consider these diagrams of human cells I had AI generate for this blog post.
Our AI doesn’t do badly in some regards: it understands the importance of a nucleus, and that a human cell should be round. This is pretty consistent across the images I had it make. But when it comes to showcasing other parts of the cell, we run into trouble, given how differently those parts are presented across diagrams. The shape one artist uses for an aspect of a cell, another artist might draw entirely differently. The AI doesn’t understand the concept of a human cell; it is merely replicating the images it’s been fed.
These errors can have embarrassing consequences. In March, a paper went viral for all the wrong reasons: the AI images the writers used had many of the flaws listed above, along with a rather absurd picture of a mouse. While the writers disclosed their use of AI, the fact that these images, complete with nonsense text and other flaws, passed peer review turned into a massive scandal. The paper was later retracted.
Let’s go back to our hypothetical. If you need images for your paper or project, instead of using AI, why not use some of Himmelfarb’s resources? On this Image Resources LibGuide, you will find multiple places to locate reputable images with clear copyright permissions. There are plenty of options to work from.
As for our AI image generators? If you want to generate photos of cats, go ahead! But leave the scientific charts and images for humans.
Sources:
AI Art, explained. YouTube. June 1, 2022. Accessed April 19, 2024. https://www.youtube.com/watch?v=SVcsDDABEkM.
Wong C. AI-generated images and video are here: how could they shape research? Nature. Published online 2024.
Himmelfarb Library’s Scholarly Communications Committee produces short tutorial videos on scholarly publishing and communications topics for SMHS, GWSPH, and GW School of Nursing students, faculty, and staff. Five new videos are now available on our YouTube channel and Scholarly Publishing Research Guide!
2023 NIH Data Management and Sharing Policy Resources by Sara Hoover - Sara is our resident expert on data management policy and resources. She provides an overview of the NIH policy, the essential elements of a data management and sharing plan, and highlights GW and non-GW resources that can aid you in putting together a data management and sharing plan. The video is 10 minutes in length.
Animal Research Alternatives by Paul Levett - Paul demonstrates how to conduct 3Rs alternatives literature searches for animal research protocols. He defines the 3Rs and explains how to report the search in the GW Institutional Animal Care and Use Committee (IACUC) application form. Paul is currently a member of the GW IACUC. The video is 13 minutes long.
Artificial Intelligence Tools and Citations by Brittany Smith - As a Library Science graduate student, Brittany has an interest in how AI is impacting the student experience. She discusses how tools like ChatGPT can assist with your research, the GW policy on AI, and how to create citations for these resources. The video is 6.5 minutes in length.
UN Sustainable Development Goals: Finding Publications by Stacy Brody - Stacy addresses why the goals were developed, what they hope to achieve, and shows ways to find related publications in Scopus. The video is 5 minutes long.
Updating Your Biosketch via SciENcv by Tom Harrod - Tom talks about the differences between NIH’s SciENcv and Biosketch and demonstrates how to use SciENcv to populate a Biosketch profile. Tom advises GW SMHS, School of Nursing, and GWSPH researchers on creating and maintaining research profiles, and he and Sara provide research profile audit services. The video is 5 minutes long.
Artificial intelligence is on the cusp of radically transforming many aspects of our lives, including healthcare. AI tools can be used to aid diagnosis, recommend treatments, and monitor patients through wearables and sensors. A study published in May of this year found 47 FDA-approved AI remote patient monitoring devices. The majority monitor cardiovascular functions, but the study also found diabetes management and sleep monitors (Dubey and Tiwari, 2023). AI-enabled surgical robots are in various phases of testing and adoption. Partially autonomous systems like da Vinci and TSolution One® are already in clinical use, and the NIH reported on the successful use of an autonomous soft tissue surgery robot last year (Saeidi et al., 2022).
AI can also track health trends and make predictions about the health of populations. For example, the earliest warnings about the Covid pandemic came from two AI applications, HealthMap and BlueDot, in December of 2019 (Chakravorti, 2022). A recent editorial in Pathogens discusses how machine learning can be used to analyze large data sets to identify patterns and trends in infectious disease, identify potential drug targets, and build predictive models to prevent or mitigate outbreaks (Bothra et al., 2023).
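As a concrete illustration of that last point, here is a toy predictive model in Python. Everything in it is synthetic: the two surveillance “signals,” the outbreak labels, and the threshold are invented stand-ins, sketched with scikit-learn to show the general shape of the technique rather than anything from the Bothra et al. editorial.

```python
# A toy sketch of outbreak prediction: fit a model on historical
# surveillance features, then score new readings. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Invented history: [weekly case growth rate, clinic-visit anomaly score]
n_weeks = 500
X = rng.normal(size=(n_weeks, 2))
# Invented ground truth: outbreaks are likelier when both signals run high.
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n_weeks) > 1.5).astype(int)

model = LogisticRegression().fit(X, y)

# Score a hypothetical new week's surveillance readings.
new_week = np.array([[2.1, 1.8]])
print("Estimated outbreak probability:", model.predict_proba(new_week)[0, 1])
```

Real systems like HealthMap and BlueDot are, of course, vastly more sophisticated, drawing on large-scale sources such as news reports and travel data rather than two made-up columns.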
AI administrative tools can greatly reduce the burden of paperwork through digital note taking with speech recognition software and filing insurance claims with systems like Medicodio. They can also be used to optimize scheduling, staffing, and resource allocation. AI robots that can gather and deliver supplies and equipment, reducing the burden on nurses and other clinical staff, are being adopted in hospitals (Gaines, 2023).
A 2020 GAO report on AI in healthcare identified challenges to building effective and safe AI applications. Accessing quality data headed the list. Incomplete and inconsistent data sets hampered AI decision tools during the Covid pandemic response (Chakravorti, 2022). Bias in data, lack of transparency, risks to patient privacy, and potential liability were also identified as barriers.
Another important factor is a lack of trust in, or acceptance of, AI applications in healthcare among health consumers. A recent Pew survey found that 60% of Americans are uncomfortable with AI being used in their healthcare, and fewer than half believed that AI would improve health outcomes. The findings were not all negative: a majority thought that AI would reduce the number of mistakes made by healthcare providers and that it could help eliminate bias and unfair treatment in healthcare. Respondents were comfortable with AI tools for skin cancer detection, but decidedly less comfortable with AI surgical robots and the use of chatbots for mental health screenings. They were also concerned that these technologies will be adopted too quickly, before the risks to patients are understood and minimized.
References
Dubey, A., & Tiwari, A. (2023). Artificial intelligence and remote patient monitoring in US healthcare market: A literature review. Journal of Market Access & Health Policy, 11(1), 2205618. https://doi.org/10.1080/20016689.2023.2205618
Saeidi, H., Opfermann, J. D., Kam, M., et al. (2022). Autonomous robotic laparoscopic surgery for intestinal anastomosis. Science Robotics, 7(62). https://doi.org/10.1126/scirobotics.abj2908
Bothra, A., Cao, Y., Černý, J., & Arora, G. (2023). The epidemiology of infectious diseases meets AI: A match made in heaven. Pathogens, 12(2), 317. https://doi.org/10.3390/pathogens12020317
OpenAI, an artificial intelligence research and development company, released the latest version of its generative text chatbot, ChatGPT, near the end of 2022. The program provides responses based on prompts from users. Since its release, universities, research institutions, publishers, and other educators have worried that ChatGPT and similar products will radically change the current education system. Some institutions have taken action to limit or ban the use of AI-generated text. Others argue that ChatGPT and similar products may be the perfect opportunity to reimagine education and scholarly publishing. There is a lot to learn about AI and its impact on research and publishing. This article aims to serve as an introduction to this rapidly evolving technology.
In a Nature article, Chris Stokel-Walker described ChatGPT as “a large language model (LLM), which generates convincing sentences by mimicking the statistical patterns of language in a huge database of text collated from the Internet” (Stokel-Walker, 2023, para. 3). OpenAI’s website says, “The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests” (OpenAI, n.d., para. 1). ChatGPT can be used to answer both simple and complex questions and can provide long-form responses based on the prompt. In recent months, students and researchers have used the chatbot to perform simple research tasks or to develop and draft manuscripts. By automating certain tasks, ChatGPT and other AI technologies may free people to focus on other aspects of the research or learning process.
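To give a feel for what “mimicking the statistical patterns of language” means, here is a drastically simplified next-word generator in Python. ChatGPT is built on a large neural network trained on enormous amounts of text, not on a lookup table like this one, and the training sentence below is invented; the sketch only illustrates the core idea of predicting the next word from patterns seen during training.

```python
# A toy "language model": learn which word follows which in the training
# text, then generate by repeatedly picking an observed next word.
import random
from collections import defaultdict

training_text = (
    "the librarian answered the question and the librarian "
    "checked the source and the student checked the citation"
).split()

# "Training": record every observed word-to-next-word transition.
next_words = defaultdict(list)
for current, following in zip(training_text, training_text[1:]):
    next_words[current].append(following)

# "Generation": start somewhere and follow the statistics.
word = "the"
output = [word]
for _ in range(8):
    candidates = next_words.get(word)
    if not candidates:  # dead end: this word never preceded anything
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))
```

The output looks grammatical because the transitions it copied were grammatical, not because the program understands librarians or citations; that gap between fluent form and actual understanding is the same one behind ChatGPT’s confident-but-wrong answers.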
There are benefits and limitations to AI technology and many people agree that guidelines must be in place before ChatGPT and similar models are fully integrated into the classroom or laboratory.
Van Dis et al. note that “Conversational AI is likely to revolutionize research practices and publishing, creating both opportunities and concerns. It might accelerate the innovation process, shorten time-to-publication, and by helping people to write fluently, make science more equitable and increase the diversity of scientific perspectives” (van Dis et al., 2023, para. 4). Researchers with limited or no English language proficiency could benefit from using ChatGPT to develop their manuscripts for publication. The current version of ChatGPT is free to use, making it accessible to anyone with a computer and internet access. This may make scholarly publishing more equitable, though a version of the program is only available with a monthly subscription fee. If future AI technologies require fees, this will create additional access and equity issues.
While ChatGPT can produce long-form, seemingly thoughtful responses, there are concerns about its ability to accurately cite information. OpenAI states that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers” (OpenAI, n.d., para. 7). There is potential for AI-generated text to spread misleading information. Scholars who have tested ChatGPT also note that the AI will create references that do not exist. Researchers must fact-check the sources pulled by the AI to ensure that their work adheres to current integrity standards. There are also concerns about whether ChatGPT properly credits original sources: “And because this technology typically reproduces text without reliably citing the original sources or authors, researchers using it are at risk of not giving credit to earlier work, unwittingly plagiarizing a multitude of unknown texts and perhaps even giving away their own ideas” (van Dis et al., 2023, para. 10).
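One lightweight way to start fact-checking AI-supplied references is to confirm that a citation’s DOI actually exists. Below is a minimal sketch in Python using the public Crossref REST API; it only verifies that a DOI is registered and shows what title it resolves to, so a reference could still pass this check while misrepresenting its source. The second DOI below is deliberately made up.

```python
# Sanity-check a DOI against the public Crossref API before trusting an
# AI-supplied citation. Existence is necessary, not sufficient: a real DOI
# can still be cited for claims it does not support.
import requests

def check_doi(doi: str) -> None:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 200:
        title = (resp.json()["message"].get("title") or ["(no title)"])[0]
        print(f"{doi} is registered: {title}")
    else:
        print(f"{doi}: no Crossref record (HTTP {resp.status_code}) -- verify by hand")

check_doi("10.1038/d41586-023-00288-7")    # van Dis et al. (2023), from this post
check_doi("10.1234/not.a.real.reference")  # invented DOI for contrast
```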
Students and researchers interested in using AI-generated text should be aware of current policies and restrictions. Many academic journals, universities, and colleges have updated their policies to either limit or completely ban the use of AI in research. Other institutions are actively discussing their plans for this new technology and may implement new policies in the future. At the time of writing, GWU has not shared policies addressing AI usage in the classroom. If you’re interested in using AI-generated text in your research papers or projects, be sure to closely read submission guidelines and university policies.
ChatGPT and other AI text generators are having profound impacts, and as the technology continues to improve, it will become increasingly difficult to distinguish work written without the aid of AI from work co-authored with an AI. The long-term impacts of AI in the classroom have yet to be fully understood, and many institutions are moving to address this new technology. As we continue to learn about ChatGPT’s benefits and limitations, it is important to remain aware of your institution’s policies on using AI in research. To learn more about ChatGPT, please read any of the sources listed below! Himmelfarb Library will continue to discuss AI technology and its impact on research as more information becomes available.
References
Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: Many scientists disapprove. Nature, 613(7945), 620–621. https://doi.org/10.1038/d41586-023-00107-z
Stokel-Walker, C., & Van Noorden, R. (2023). What ChatGPT and generative AI mean for science. Nature, 614(7947), 214–216. https://doi.org/10.1038/d41586-023-00340-6
Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879
van Dis, E. A. M., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT: Five priorities for research. Nature, 614(7947), 224–226. https://doi.org/10.1038/d41586-023-00288-7