
For a moment, let’s entertain a hypothetical. Say you have an excellent paper on your hands about the impact of smoke on the lungs. Your team is about to submit it for publication: pretty exciting! When you get your paper back from the publisher, it’s mostly good news: they’re willing to publish it, with the caveat that you add a diagram of the lungs as a visual aid showing the systems affected. The problem? You have no idea where to acquire a suitable image that wouldn’t potentially violate copyright.

Faced with this conundrum, one of your coauthors suggests a solution: why not generate one? They have a subscription to Midjourney, an AI tool that can generate images from text. Why not give Midjourney a summary of the diagram you need, have it generate the image, and then use that for your paper? After checking the journal’s policies on AI (it’s allowed with disclosure), you do just that, glad to have quickly moved past that stumbling block.

Pretty great, right? It sure sounds like it, until you take a look at the images Midjourney generated, because on closer inspection, there are some problems.

Below is an image I generated in Copilot for this blog post. I didn’t ask it to do something as complicated as making a diagram of how smoking impacts the lungs; instead, I asked for just a diagram of human lungs. Here is what I got, with my notes attached.

An AI-generated diagram of the lungs in a human woman is featured with red text boxes pointing to errors. In the upper left, a box says "nonsense or gibberish text" and a red line points to oddly generated letters that mean nothing. Below it, another box reads "I don't know what this is supposed to be, but I don't think it's in the armpit," with a line pointing to what looks to be an organ with a flower pattern in it. Below that, another box reads "this heart is way too small for an adult," and the red line points to the heart on the diagram. On the left, the top red box reads "no, the stomach does not reside in one's hair or arteries," with red lines pointing to a picture of the stomach that is falsely labeled as being in the hair and neck. Below that, a new box reads "what are the gold lines supposed to be in this diagram?" and it points to yellow veins that run through the figure like the red and blue ones that usually denote the circulatory system. The last box on the right says "I have no idea what this is supposed to be" and points to what looks to be bone wrapped around a tube leading out of the bottom of the lungs.

Alright, so this might not be our best image. Thankfully, we have others. Let’s take a look at another image from the same prompt and see if it does a better job. 

Another AI-generated diagram of the lungs is featured with red text boxes pointing to errors. In the upper left, a box says "more nonsense text" and a red line points to oddly generated letters that mean nothing. On the right side, a box says "bubbles should not be in the lungs!" with a red line pointing to what looks to be odd bubbles inside the lungs. Below it, a red box reads "what are these small clumps/objects?" and it points to what look to be large red bacteria and clumps on the lungs.

So what happened here? To explain how this image went terribly wrong, it’s best to start with an explanation of how AI actually works.

When we think of AI, we generally think of movies like The Terminator or The Matrix, where robots can fully think and make decisions, just like a human can. As cool (or terrifying, depending on your point of view) as that is, such highly developed forms of artificial intelligence still exist solely in the realm of science fiction. What we call AI now is something known as generative AI. To vastly simplify the process, generative AI works as follows: you take a computer and feed it a large amount of information that resembles what you want it to generate. This is known as “training data.” The AI then attempts to produce new images based on the patterns in the original training data. (Vox made a video explaining this process much better than I can.) So, for example, if I feed an AI pictures of cats, over time it identifies aspects of cats across photos: fur, four legs, a tail, a nose, etc. After a period of time, it then generates images based on those qualities. And that’s how we get websites like “These Cats Do Not Exist.”
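To make that “learn the patterns, then generate from them” idea concrete, here is a deliberately tiny sketch in Python. This is not how Midjourney or Copilot actually work (they use deep neural networks trained on enormous image collections); the features and numbers below are invented purely for illustration.

```python
# A vastly simplified sketch of the "learn patterns, then generate" idea.
# This toy model just learns summary statistics from made-up "cat
# measurements" and samples new combinations from them.
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical training data: each row is one cat described by three
# invented features (ear height, tail length, whisker count).
training_cats = np.array([
    [6.1, 28.0, 24],
    [5.8, 30.5, 22],
    [6.4, 26.5, 26],
    [5.5, 29.0, 23],
])

# "Training": the model captures only statistical regularities
# (averages and how features vary together); it has no concept of a cat.
mean = training_cats.mean(axis=0)
cov = np.cov(training_cats, rowvar=False)

# "Generation": sample new feature combinations from those statistics.
# Most samples look plausible; some fall outside anything cat-like,
# which is the toy analogue of extra paws and garbled anatomy.
new_cats = rng.multivariate_normal(mean, cov, size=5)
print(new_cats)
```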

If you take a look at “These Cats Do Not Exist” you might notice something interesting: the quality of fake cat photos varies widely. Some of the cats it generates look like perfectly normal cats. Others appear slightly off; they might have odd proportions or too many paws. And a whole other contingent appears as what can best be described as eldritch monstrosities.  

The errors in both our images above and our fake cats arise because the AI doesn’t understand what we are asking it to make. The bot has no concept of lungs as an organ, or cats as a creature; it merely recognizes aspects and characteristics of those concepts. This is why AI art and AI images can look impressive on the surface but fall apart under any scrutiny: the robot can mimic patterns well enough, but the details are much harder to replicate, especially when details vary so much between images. For example, consider these diagrams of human cells I had AI generate for this blog post.

A picture of an AI-generated human cell. There are red boxes with text pointing out errors and issues in the image. The top box has the text "nonsense words. Some of these labels don't even point to anything," with two red lines pointing to a series of oddly generated letters that mean nothing. Below that, a red box has the text "I have no idea what this is supposed to be," with a red line pointing to a round red ball. On the right side, a text box reads "is this supposed to be a mitochondria? Or is it loose pasta?" with a red line pointing to what looks to be a green penne noodle in the cell. Below that, a red text box reads "I don't think you can find a miniature man inside the human cell," and a red line points to the upper torso and head of a man coming out of the cell.

Our AI doesn’t do badly in some respects: it understands the importance of a nucleus, and that a human cell should be round. This is pretty consistent across the images I had it make. But when it comes to showcasing other parts of the cell, we run into trouble, given how differently those parts are presented across diagrams. The shape that one artist might use for an aspect of a cell, another artist might draw entirely differently. The AI doesn’t understand the concept of a human cell; it is merely replicating the images it’s been fed.

These errors can lead to embarrassing consequences. In March, a paper went viral for all the wrong reasons: the AI-generated images the writers used had many of the flaws listed above, along with a picture of a mouse that was anatomically absurd. While the writers disclosed their use of AI, the fact that these images passed peer review with nonsense text and other flaws turned into a massive scandal. The paper was later retracted.

Let’s go back to our hypothetical. If you need images for your paper or project, instead of using AI, why not use some of Himmelfarb’s resources? On our Image Resources LibGuide, you will find multiple sources of reputable images with clear copyright permissions. There are plenty of options to work from.

As for our AI image generators? If you want to generate photos of cats, go ahead! But leave the scientific charts and images for humans. 

Sources:

  1. AI art, explained. YouTube. June 1, 2022. Accessed April 19, 2024. https://www.youtube.com/watch?v=SVcsDDABEkM.
  2. Wong C. AI-generated images and video are here: how could they shape research? Nature. Published online 2024.


Himmelfarb Library’s Scholarly Communications Committee produces short tutorial videos on scholarly publishing and communications topics for SMHS, GWSPH, and GW School of Nursing students, faculty, and staff. Five new videos are now available on our YouTube channel and Scholarly Publishing Research Guide!

2023 NIH Data Management and Sharing Policy Resources by Sara Hoover - Sara is our resident expert on data management policy and resources. She provides an overview of the NIH policy, the essential elements of a data management and sharing plan, and highlights GW and non-GW resources that can aid you in putting together a data management and sharing plan. The video is 10 minutes in length. 

Animal Research Alternatives by Paul Levett - Paul demonstrates how to conduct 3Rs alternatives literature searches for animal research protocols. He defines the 3Rs and explains how to report the search in the GW Institutional Animal Care and Use Committee (IACUC) application form. Paul is currently a member of the GW IACUC. The video is 13 minutes long.

Artificial Intelligence Tools and Citations by Brittany Smith - As a Library Science graduate student, Brittany has an interest in how AI is impacting the student experience. She discusses how tools like ChatGPT can assist with your research, the GW policy on AI, and how to create citations for these resources. The video is 6.5 minutes in length.

UN Sustainable Development Goals: Finding Publications by Stacy Brody - Stacy addresses why the goals were developed, what they hope to achieve, and shows ways to find related publications in Scopus. The video is 5 minutes long.

Updating Your Biosketch via SciENcv by Tom Harrod - Tom talks about the differences between NIH’s SciENcv and Biosketch and demonstrates how to use SciENcv to populate a Biosketch profile. Tom advises GW SMHS, School of Nursing, and GWSPH researchers on creating and maintaining research profiles, and he and Sara provide research profile audit services. The video is 5 minutes long.

You can find the rest of the videos in the Scholarly Communications series in this YouTube playlist or on the Scholarly Publishing Research Guide.

Artificial intelligence is on the cusp of radically transforming many aspects of our lives, including healthcare. AI tools can be used to aid diagnosis, recommend treatments, and monitor patients through wearables and sensors. A study published in May of this year found 47 FDA-approved AI remote patient monitoring devices. The majority monitor cardiovascular functions, but the study also found diabetes management and sleep monitors (Dubey and Tiwari, 2023). AI-enabled surgical robots are in various phases of testing and adoption. Partially autonomous systems such as the da Vinci system for minimally invasive surgery and TSolution One® for hard tissue procedures are in use, and the NIH reported on the successful use of a soft tissue robot last year (Saeidi, et al., 2022).

AI can also track health trends and make predictions in populations. For example, the earliest warnings about the COVID-19 pandemic came from two AI applications, HealthMap and BlueDot, in December of 2019 (Chakravorti, 2022). A recent editorial in Pathogens discusses how AI machine learning can be used to analyze large data sets to identify patterns and trends in infectious disease, identify potential drug targets, and build predictive models to prevent or mitigate outbreaks (Bothra, et al., 2023).

AI administrative tools can greatly reduce the burden of paperwork through digital note taking with speech recognition software and filing insurance claims with systems like Medicodio. They can also be used to optimize scheduling, staffing, and resource allocation. AI robots that can gather and deliver supplies and equipment, reducing the burden on nurses and other clinical staff, are being adopted in hospitals (Gaines, 2022).

A 2020 GAO report on AI in healthcare identified challenges to building effective and safe AI applications. Accessing quality data headed the list: incomplete and inconsistent data sets hampered AI decision tools during the COVID-19 pandemic response (Chakravorti, 2022). Bias in data, lack of transparency, risks to patient privacy, and potential liability were also identified as barriers.

Another important factor is lack of trust in, or acceptance of, AI applications in healthcare by health consumers. A recent Pew survey found that 60% of Americans are uncomfortable with AI being used in their healthcare, and fewer than half believed that AI would improve health outcomes. The findings were not all negative: a majority thought that AI would reduce the number of mistakes made by healthcare providers and that it could also help eliminate bias and unfair treatment in healthcare. Respondents were comfortable with AI tools for skin cancer detection, but decidedly less comfortable with AI surgical robots and the use of chatbots for mental health screenings. They were also concerned that these technologies would be adopted too fast, before risks to patients are understood and minimized.

References

  1. Dubey, A., & Tiwari, A. (2023). Artificial intelligence and remote patient monitoring in US healthcare market: a literature review. Journal of Market Access & Health Policy, 11(1), 2205618. https://doi.org/10.1080/20016689.2023.2205618
  2. Saeidi, H., Opfermann, J.D., Kam, M., et al. (2022). Autonomous robotic laparoscopic surgery for intestinal anastomosis. Science Robotics, 7(62). https://doi.org/10.1126/scirobotics.abj2908
  3. Bothra, A., Cao, Y., Černý, J., & Arora, G. (2023). The epidemiology of infectious diseases meets AI: a match made in heaven. Pathogens, 12(2), 317. https://doi.org/10.3390/pathogens12020317
  4. Gaines, K. (2022). Delivery care robots are being used to alleviate nursing staff. Nurse.org. https://nurse.org/articles/delivery-care-robots-launched-in-texas/
  5. Chakravorti, B. (2022). Why AI failed to live up to its potential during the pandemic. Harvard Business Review. https://hbr.org/2022/03/why-ai-failed-to-live-up-to-its-potential-during-the-pandemic


OpenAI, an artificial intelligence research and development company, released its generative text chatbot, ChatGPT, near the end of 2022. The program provides responses based on prompts from users. Since its release, universities, research institutions, publishers, and other educators have worried that ChatGPT and similar products will radically change the current education system. Some institutions have taken action to limit or ban the use of AI-generated text. Others argue that ChatGPT and similar products may be the perfect opportunity to reimagine education and scholarly publishing. There is a lot to learn about AI and its impact on research and publishing. This article aims to serve as an introduction to this rapidly evolving technology.

In a Nature article, Chris Stokel-Walker described ChatGPT as “a large language model (LLM), which generates convincing sentences by mimicking the statistical patterns of language in a huge database of text collated from the Internet.” (Stokel-Walker, 2023, para. 3) OpenAI’s website says “The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” (OpenAI, n.d., para. 1) ChatGPT may be used to answer simple and complex questions and may provide long-form responses based on the prompt. In recent months, students and researchers have used the chatbot to perform simple research tasks or develop and draft manuscripts. By automating certain tasks, ChatGPT and other AI technologies may provide people with the opportunity to focus on other aspects of the research or learning process.
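To get a feel for what “mimicking the statistical patterns of language” means, here is a deliberately tiny sketch in Python. Real large language models use neural networks trained on vast corpora; this bigram model, built on a made-up sample sentence, only counts which word follows which, yet it already produces locally plausible strings.

```python
# A toy illustration of "mimicking statistical patterns of language."
# This bigram model counts which word follows which in a tiny sample
# text, then generates new text from those counts.
import random
from collections import defaultdict

sample_text = (
    "the model reads text and the model learns which word "
    "tends to follow which word and then the model writes text"
)

# "Training": count word-to-next-word transitions.
transitions = defaultdict(list)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

# "Generation": repeatedly pick a plausible next word. The output is
# locally sentence-like even though no understanding sits behind it.
random.seed(0)
word = "the"
output = [word]
for _ in range(12):
    word = random.choice(transitions[word])
    output.append(word)
print(" ".join(output))
```

That lack of understanding behind locally plausible output, scaled up enormously, is the same property that makes ChatGPT’s prose convincing while offering no guarantee of accuracy.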

There are benefits and limitations to AI technology and many people agree that guidelines must be in place before ChatGPT and similar models are fully integrated into the classroom or laboratory.

Van Dis et al. note that “Conversational AI is likely to revolutionize research practices and publishing, creating both opportunities and concerns. It might accelerate the innovation process, shorten time-to-publication, and by helping people to write fluently, make science more equitable and increase the diversity of scientific perspectives.” (van Dis et al., 2023, para. 4) Researchers with limited or no English language proficiency would benefit from using ChatGPT to develop their manuscripts for publication. The current version of ChatGPT is free to use, making it accessible to anyone with internet access and a computer. This may make scholarly publishing more equitable, though there is a version of the program that is only available with a monthly subscription fee. If future AI technologies require fees, this will create additional access and equity issues.

While ChatGPT can produce long-form, seemingly thoughtful responses, there are concerns about its ability to accurately cite information. OpenAI states that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” (OpenAI, n.d., para. 7) There is a potential for AI-generated text to spread misleading information. Scholars who have tested ChatGPT also note that the AI will create references that do not exist. Researchers must fact-check the sources pulled by the AI to ensure that their work adheres to current integrity standards. There are also concerns about whether ChatGPT properly credits original sources: “And because this technology typically reproduces text without reliably citing the original sources or authors, researchers using it are at risk of not giving credit to earlier work, unwittingly plagiarizing a multitude of unknown texts and perhaps even giving away their own ideas.” (van Dis et al., 2023, para. 10)
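One practical safeguard against invented references is to check whether a citation’s DOI actually resolves to a real record. The sketch below assumes Python with the requests library and uses the public Crossref REST API; the DOI shown is a placeholder, not a reference from this article.

```python
# A minimal sketch of spot-checking a citation's DOI against the public
# Crossref REST API. A 200 response means Crossref has a record for the
# DOI; a 404 means it does not. Note that this only confirms the record
# exists - verifying the AI summarized the source accurately still
# requires reading it.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return response.status_code == 200

# Placeholder DOI for illustration - substitute the one you want to verify.
print(doi_exists("10.1000/placeholder-doi"))
```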

Students and researchers interested in using AI-generated text should be aware of current policies and restrictions. Many academic journals, universities, and colleges have updated their policies to either limit the use of AI in research or ban it outright. Other institutions are actively discussing their plans for this new technology and may implement new policies in the future. At the time of writing, GWU has not shared policies addressing AI usage in the classroom. If you’re interested in using AI-generated text in your research papers or projects, be sure to closely read submission guidelines and university policies.

ChatGPT and other AI text generators are having profound impacts, and as the technology continues to improve, it will become increasingly difficult to distinguish work written without the aid of an AI from work co-authored with an AI. The long-term impacts of AI in the classroom have yet to be fully understood, and many institutions are moving to address this new technology. As we continue to learn about ChatGPT’s benefits and limitations, it is important to remain aware of your institution’s policies on using AI in research. To learn more about ChatGPT, please read any of the sources listed below! Himmelfarb Library will continue to discuss AI technology and its impact on research as more information is made available.

Additional Reading:

Work Cited: