
One of the most frequent questions we've gotten at the library in recent months is about AI. What is AI? Is AI the future? Are we all about to be replaced by robots? In this month's comic strip, we simplify AI to make sense of what's realistic, what's plausible, and what's still science fiction.

Speech Bubble 1: Ever since AI burst onto the scene, I've seen a lot of folks misunderstand how it works.
Image: Rebecca, a librarian with light skin and dark curly brown hair in a ponytail, speaks in front of a collection of tech items.
Panel 4: 
Narration: In reality, while AI can write or talk, it's not "thinking" like humans do.
Image: The robot, displaying a blank expression, is next to a thought bubble showing binary code.
Narration: To understand how AI "thinks," we need to understand what this kind of AI is and how it works.
Image: There is a monitor, and on it a pixelated version of Rebecca is shown next to the text "Understand A.I." Under that is the text "A: Y B: N."
Panel 6: 
Narration: First, the kind of AI seen in movies is not the same kind found in ChatGPT. That is, self-aware AI currently doesn't exist outside of fiction.
Image: Two books are shown. One of the books has a picture of a robot on it stating, "Foolish: it is statistically unlikely to be lupus." The title of the book is "Watt.Son M.D."
Panel 7: 
Speech Bubble: The AI we see discussed today is known as generative AI. It can produce things like text, images and audio by being trained on large amounts of data (1).
Image: A flow chart is shown. A row of file cabinets comes first, then an audio icon next to the word "or," then a monitor next to the word "or," and finally a smiley face drawing.
Panel 7:
Narration: I'm going to vastly simplify. Say we want an AI to make images of sheep. First, we'd grab a bunch of images of sheep as our training data.
Image: A table is covered with a variety of photos of sheep. The sheep are all different sizes and colors.
Panel 8:
Narration: Over time, as we feed the model more pictures of sheep, it starts to identify characteristics the images share.
Image: There is a little white sheep with a black face. Next to it, text reads: aspect (fluffy) + feature (ear) + feature (eye) + feature (tail) = sheep
Panel 9:
Narration: Now, when this works as intended, after tons of images our AI can start to produce images of sheep itself based on the training data. This is why it's called "generative" AI; it is generating new content.
Image: The robot from earlier has an excited expression on its monitor. It points to a fridge where a picture of a sheep is displayed.
Panel 10:
Narration: The AI is able to produce these images not because it now "knows" what a sheep is, but essentially through large-scale probability. I'm still vastly simplifying, but the AI makes the sheep fluffy not because it knows what wool is, but because 100% of its training data includes wool.
Image: Rebecca stands in front of a television screen. On the screen, the robot looks confused at a black sheep in a field. 
Panel 11: 
Narration: So if we apply this to words, AI is not so much writing as it is calculating which word is most likely to follow the word it just typed, sort of like autocomplete.
Image: The background is a thunderstorm. Text reads: It was a dark and stormy _____? A. Night 90% B. Evening 7% C. Afternoon 2% D. Day 1%
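(An aside from outside the comic: to make the panel's percentages concrete, here is a deliberately toy sketch in Python. It is my own illustration, not how any real chatbot is built; an actual language model scores many thousands of candidate tokens with a neural network. The core move is the same, though: pick the next word by probability, not by meaning.)

    import random

    # The comic's made-up odds for the word that follows "it was a dark and stormy..."
    next_word_probs = {"night": 0.90, "evening": 0.07, "afternoon": 0.02, "day": 0.01}

    def pick_next_word(probs):
        # Sample one candidate word, weighted by its probability -- no meaning involved.
        words = list(probs)
        weights = [probs[w] for w in words]
        return random.choices(words, weights=weights, k=1)[0]

    print("It was a dark and stormy", pick_next_word(next_word_probs))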
Panel 12: 
Narration: Okay, so why bother making this distinction? Why does it matter?
Image: The robot is shown with its monitor displaying a buffering message. Above it, a chibi Rebecca says, "Let me explain."

Panel 13:
Narration: AI relies on its training data. Let's consider the sheep example from earlier. None of the photos I drew show a sheep's legs.
Image: Rebecca sits in front of her tablet with a drawing pen. She gestures to the viewer, exasperated. 
Rebecca's Speech Bubble: "Look, I only have so much time to draw these things."
Panel 14: 
Narration: If all the images I feed our hypothetical AI are of sheep from the middle up, we might get something like this.
Image: Three pictures of sheep are displayed. None of the sheep have legs and instead are puffballs of wool. One sheep is square shaped.
Narration (cont.): Our AI can only generate based on its data. So if we feed it no pictures of sheep with legs, we get no pictures of sheep with legs (frankly, it also shouldn't make images where a sheep's entire body is in the frame). The backgrounds will be a mashup too, since the AI treats them as part of the image, which leads to interesting results when the training photos have a wide range of backgrounds.
Panel 15:
Narration: This is one of the reasons AI images struggle with details like fingers: how many fingers you can see in an image of a person varies widely depending on their pose and the angle of the photograph (2).
Image: Four hands with different skin tones are shown, each with a different gesture. In a little bubble to the left, Rebecca is shown looking tired.
Rebecca Speech Bubble: Drawing hands is hard…
Panel 16:
Narration: The same thing goes for writing: when AI writes out “it was a dark and stormy night” it has no comprehension of any of those words. It’s all based on probability. And this is the misconception that leads to so many problems.
Image: The robot is seated at a chair, typing at a computer. From the computer, text reads “it was a dark and stormy night” and from the robot speech bubble we get more binary.
Panel 17: Narration: For example, let's take AI hallucinations. AI hallucinations refer to instances when AI makes things up, essentially lying to the user. Now that we understand how AI works, we can understand how this happens.
Image: The robot is shown, its monitor full of a kaleidoscope of colors and two big white eyes. The background behind it is also a mix of colors.
Panel 18: Narration: AI has no comprehension of lies or the truth. It is regurgitating its training data, which means that if the right answer isn't in the training data, or it's been fed the wrong answer, what you're going to get is the wrong answer.
Panel 19: Narration: For example, Google's AI made headlines when it recommended people use glue to make the cheese stick to their pizza (3).
Image: A man with dark skin, glasses and a beard stands in front of a pizza and a bottle of glue. He is wearing an apron. 
Man's speech bubble: "At least it said to use non-toxic glue."
Panel 20: Narration: Now, where did it get this cooking tip? A joke post from Reddit. Google made a deal with Reddit in February 2024 to train its AI on the site's data.
Image: The Reddit avatar yells after the robot, which is running off with an image of a glue bottle on its monitor.
Reddit avatar’s speech bubble: It was a joke!
Panel 21: Narration: That example was pretty harmless, but it can be much worse. AI has told people to eat poisonous mushrooms (4), provided dieting advice on a hotline for eating disorders (5), and displayed racial bias (6).
Image: The grim reaper is shown, wearing a little chef's scarf and holding his scythe. Next to him is a display of mushrooms. Underneath, text reads: Guest chef Death showcases favorite deadly mushrooms.
Panel 22: Narration: Generative AI systems also come up with fake citations to books and papers that don't exist (7). They often mash up real authors and journals with fake DOI numbers.
Image: Three journals are shown, their covers stitched together from fragments of other journals.
Panel 23: Narration: And don’t get me started on the ways images can go wrong (8).
Image: Rebecca stands next to a table with school supplies and a rat. The rat is looking up at her with a question mark over its head.
Rebecca’s speech bubble: Just look up AI rat scandal and you’ll understand why I didn’t draw an example.
Panel 24: Image: The rat from the last panel is shown. 
Rat speech bubble: So AI is worthless? 
Narration: Absolutely not!
Panel 25: 
Narration: AI absolutely has uses. While it's still in its early stages, AI has shown promise in helping doctors identify potentially cancerous moles (9).
Image: The robot and a doctor look at a monitor.
Doctor: Should I do a biopsy of both?
Robot: 71%
Doctor: Both it is!

Panel 25: 
Narration: But it’s not a magical solution to every problem. And when we forget that, our “artificial intelligence” is more artificial than anything intelligent.
Image: The robot’s monitor is shown with the citations for this comic displayed.

Comic written and drawn by: Rebecca Kyser

Citations: 

1. Christian B. The Alignment Problem: Machine Learning and Human Values. W.W. Norton & Company; 2021.

2. Lanz JA. AI Kryptonite: Why Artificial Intelligence Can't Handle Hands. Decrypt. Published April 10, 2023. Accessed August 5, 2024. https://decrypt.co/125865/generative-ai-art-images-hands-fingers-teeth

3. Robison K. Google promised a better search experience — now it’s telling us to put glue on our pizza. The Verge. Published May 23, 2024. Accessed August 5, 2024. https://www.theverge.com/2024/5/23/24162896/google-ai-overview-hallucinations-glue-in-pizza

4. AI-powered mushroom ID apps are frequently wrong. The Washington Post. Accessed August 5, 2024. https://www.washingtonpost.com/technology/2024/03/18/ai-mushroom-id-accuracy/

5. Wells K. An eating disorders chatbot offered dieting advice, raising fears about AI in health. NPR. Published June 9, 2023. Accessed August 5, 2024. https://www.npr.org/sections/health-shots/2023/06/08/1180838096/an-eating-disorders-chatbot-offered-dieting-advice-raising-fears-about-ai-in-hea

6. Noble SU. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press; 2018. doi:10.18574/9781479833641

7. Welborn A. ChatGPT and Fake Citations. Duke University Libraries Blogs. Published March 9, 2023. Accessed August 5, 2024. https://blogs.library.duke.edu/blog/2023/03/09/chatgpt-and-fake-citations/

8. Pearson J. Study Featuring AI-Generated Giant Rat Penis Retracted, Journal Apologizes. Vice. Published February 16, 2024. Accessed August 5, 2024. https://www.vice.com/en/article/4a389b/ai-midjourney-rat-penis-study-retracted-frontiers

9. Lewis. An artificial intelligence tool that can help detect melanoma. MIT News | Massachusetts Institute of Technology. Published April 2, 2021. Accessed August 5, 2024. https://news.mit.edu/2021/artificial-intelligence-tool-can-help-detect-melanoma-0402

For a moment, let's entertain a hypothetical. Say you have an excellent paper on your hands about the impact of smoke on the lungs. Your team is about to submit it for publication: pretty exciting! When you get your paper back from the publisher, it's mostly good news: they're willing to publish your paper with the caveat that you add a diagram of the lungs as a visual aid showing the systems impacted. The problem? You have no idea where you could acquire an image that suits the task without potentially violating copyright.

Faced with this conundrum, one of your coauthors suggests a solution: why not generate one? They have a subscription to Midjourney, the AI software that can generate images from text. Why not give Midjourney a summary of the diagram you need, have it generate the image, and then use that for your paper? After checking the journal's policies on AI (it's allowed with disclosure), you do just that, glad to have quickly moved past that stumbling block.

Pretty great, right? It sure sounds like it, until you take a look at the images Midjourney generated. On closer inspection, there are some problems.

Below is an image I generated in Copilot for this blog post. I didn't ask it to do something as complicated as making a diagram of how smoking impacts the lungs; instead, I asked for just a diagram of human lungs. Here is what I got, with my notes attached.

An AI-generated diagram of the lungs in a human woman is featured, with red text boxes pointing to errors. In the upper left, a box says "nonsense or gibberish text," and a red line points to oddly generated letters that mean nothing. Below it, another box reads "I don't know what this is supposed to be, but I don't think it's in the armpit," with a line pointing to what looks to be an organ with a flower pattern in it. Below that, another box reads "this heart is way too small for an adult," and the red line points to the heart on the diagram. On the left, the top red box reads "now the stomach does not reside in one's hair or arteries," with red lines pointing to a picture of the stomach that is falsely labeled as being in the hair or neck. Below that, a new box reads "what are the gold lines supposed to be in this diagram," and it points to yellow veins that run through the figure like the red and blue ones that usually denote the circulatory system. The last box on the right says "I have no idea what this is supposed to be" and points to what looks to be bone wrapped around a tube leading out of the bottom of the lungs.

Alright, so this might not be our best image. Thankfully, we have others. Let’s take a look at another image from the same prompt and see if it does a better job. 

An AI-generated diagram of the lungs is featured, with red text boxes pointing to errors. In the upper left, a box says "more nonsense text," and a red line points to oddly generated letters that mean nothing. On the right side, a box says "bubbles should not be in the lungs!" with a red line pointing to what look to be odd bubbles inside the lungs. Below it, a red box reads "what are these small clumps/objects?" and points to what look to be large red bacteria-like clumps on the lungs.

So what happened here? To explain how this image went terribly wrong, it’s best to start with an explanation of how AI actually works.

When we think of AI, we generally think of movies like The Terminator or The Matrix, where robots can fully think and make decisions just like a human can. As cool (or terrifying, depending on your point of view) as that is, such highly developed forms of artificial intelligence still exist solely in the realm of science fiction. What we call AI now is something known as generative AI. To vastly simplify the process, generative AI works as follows: you take a computer and feed it a large amount of information resembling what you want it to generate. This is known as "training data." The AI then attempts to produce new content based on that original training data. (Vox made a video explaining this process much better than I can.) So, for example, if I feed an AI pictures of cats, over time it identifies aspects of cats across photos: fur, four legs, a tail, a nose, etc. After a period of time, it then generates images based on those qualities. And that's how we get websites like "These Cats Do Not Exist."
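If it helps to see that process in miniature, here is a deliberately toy sketch in Python. This is my own illustration, not how Midjourney, Copilot, or any real image model actually works: the "training data" is just a handful of made-up feature lists, and "generation" is nothing more than re-including each feature with the frequency it appeared in training.

    import random

    # Toy "training data": the features visible in each (imaginary) cat photo.
    training_photos = [
        {"fur", "four legs", "tail", "whiskers"},
        {"fur", "four legs", "tail"},
        {"fur", "tail", "whiskers"},
        {"fur", "four legs", "whiskers"},
    ]

    # "Training": count how often each feature shows up across the photos.
    feature_counts = {}
    for photo in training_photos:
        for feature in photo:
            feature_counts[feature] = feature_counts.get(feature, 0) + 1

    total_photos = len(training_photos)

    def generate_cat():
        # "Generate" a new cat by including each feature with its observed frequency.
        # The program has no idea what fur or a tail is -- only how often each appeared.
        return {f for f, n in feature_counts.items() if random.random() < n / total_photos}

    print(generate_cat())  # e.g. {'fur', 'tail', 'four legs'}

A feature that appeared in every training photo (like fur) shows up in every generated cat, and a feature that never appeared can never show up at all; which, at a vastly larger scale, is why the fake cat photos discussed below range from convincing to eldritch.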

If you take a look at "These Cats Do Not Exist," you might notice something interesting: the quality of the fake cat photos varies widely. Some of the cats it generates look like perfectly normal cats. Others appear slightly off; they might have odd proportions or too many paws. And a whole other contingent appears as what can best be described as eldritch monstrosities.

The errors in both the images above and our fake cats come down to the fact that the AI doesn't understand what we are asking it to make. The bot has no concept of lungs as an organ, or of cats as a creature; it merely recognizes aspects and characteristics of those concepts. This is why AI art and AI images can look impressive on the surface but fall apart under any scrutiny: the robot can mimic patterns well enough, but the details are much harder to replicate, especially when they vary so much between images. For example, consider these diagrams of human cells I had AI generate for this blog post.

A picture of an AI-generated human cell, with red text boxes pointing out errors and issues in the image. The top box has the text "nonsense words. Some of these labels don't even point to anything," with two red lines pointing to a series of odd generated letters that mean nothing. Below that, a red box has the text "I have no idea what this is supposed to be," with a red line pointing to a round red ball. On the right side, a text box reads "is this supposed to be a mitochondrion? Or is it loose pasta?" with a red line pointing to what looks to be a green penne noodle in the cell. Below that, a red text box reads "I don't think you can find a miniature man inside the human cell," and a red line points to the upper torso and head of a man coming out of the cell.

Our AI doesn't do badly in some respects: it understands the importance of a nucleus, and that a human cell should be round. This is pretty consistent across the images I had it make. But when it comes to showcasing other parts of the cell, we run into trouble, given how differently those parts are presented across diagrams. The shape one artist decides to use for an aspect of a cell, another artist might draw entirely differently. The AI doesn't understand the concept of a human cell; it is merely replicating the images it's been fed.

These errors can lead to embarrassing consequences. In March, a paper went viral for all the wrong reasons; the AI images the writers used had many of the flaws listed above, along with a picture of a rat that was rather absurd. While the writers disclosed the use of AI, the fact that these images passed peer review with nonsense text and other flaws turned into a massive scandal. The paper was later retracted.

Let's go back to our hypothetical. If you need images for your paper or project, instead of using AI, why not use some of Himmelfarb's resources? On our Image Resources LibGuide, you will find multiple places to locate reputable images with clear copyright permissions. There are plenty of options to work from.

As for our AI image generators? If you want to generate photos of cats, go ahead! But leave the scientific charts and images for humans. 

Sources:

  1. AI Art, Explained. YouTube. June 1, 2022. Accessed April 19, 2024. https://www.youtube.com/watch?v=SVcsDDABEkM.
  2. Wong C. AI-generated images and video are here: how could they shape research? Nature (London). Published online 2024.


Himmelfarb Library’s Scholarly Communications Committee produces short tutorial videos on scholarly publishing and communications topics for SMHS, GWSPH, and GW School of Nursing students, faculty, and staff. Five new videos are now available on our YouTube channel and Scholarly Publishing Research Guide!

2023 NIH Data Management and Sharing Policy Resources by Sara Hoover - Sara is our resident expert on data management policy and resources. She provides an overview of the NIH policy and the essential elements of a data management and sharing plan, and highlights GW and non-GW resources that can help you put together a data management and sharing plan. The video is 10 minutes in length.

Animal Research Alternatives by Paul Levett - Paul demonstrates how to conduct 3Rs alternatives literature searches for animal research protocols. He defines the 3Rs and explains how to report the search in the GW Institutional Animal Care and Use Committee (IACUC) application form. Paul is currently a member of the GW IACUC. The video is 13 minutes long.

Artificial Intelligence Tools and Citations by Brittany Smith - As a Library Science graduate student, Brittany has an interest in how AI is impacting the student experience. She discusses how tools like ChatGPT can assist with your research, the GW policy on AI, and how to create citations for these resources. The video is 6.5 minutes in length.

UN Sustainable Development Goals: Finding Publications by Stacy Brody - Stacy addresses why the goals were developed, what they hope to achieve, and shows ways to find related publications in Scopus. The video is 5 minutes long.

Updating Your Biosketch via SciENcv by Tom Harrod - Tom talks about the differences between NIH's SciENcv and Biosketch and demonstrates how to use SciENcv to populate a Biosketch profile. Tom advises GW SMHS, School of Nursing, and GWSPH researchers on creating and maintaining research profiles, and he and Sara provide research profile audit services. The video is 5 minutes long.

You can find the rest of the videos in the Scholarly Communications series in this YouTube playlist or on the Scholarly Publishing Research Guide.

Artificial intelligence is on the cusp of radically transforming many aspects of our lives, including healthcare. AI tools can be used to aid diagnosis, recommend treatments, and monitor patients through wearables and sensors. A study published in May of this year found 47 FDA-approved AI remote patient monitoring devices. The majority monitor cardiovascular functions, but the study also found diabetes management and sleep monitors (Dubey and Tiwari, 2023). AI-enabled surgical robots are in various phases of testing and adoption. Partially autonomous systems like da Vinci and TSolution One® are in use for hard tissue procedures, and the NIH reported on the successful use of a soft tissue robot last year (Saeidi et al., 2022).

AI can track trends in population health or make predictions about it. For example, the earliest warnings about the Covid pandemic came from two AI applications, HealthMap and BlueDot, in December 2019 (Chakravorti, 2022). A recent editorial in Pathogens discusses how AI machine learning can be used to analyze large data sets to identify patterns and trends in infectious disease, identify potential drug targets, and build predictive models to prevent or mitigate outbreaks (Bothra et al., 2023).

AI administrative tools can greatly reduce the burden of paperwork through digital note taking with speech recognition software and filing insurance claims with systems like Medicodio. They can also be used to optimize scheduling, staffing, and resource allocation. AI robots that can gather and deliver supplies and equipment, reducing the burden on nurses and other clinical staff, are being adopted in hospitals (Gaines, 2023).

A 2020 GAO report on AI in healthcare identified challenges to building effective and safe AI applications. Accessing quality data headed the list. Incomplete and inconsistent data sets hampered AI decision tools during the Covid pandemic response (Chakravorti, 2022). Bias in data, lack of transparency, risks to patient privacy, and potential liability were also identified as barriers.

Another important factor is a lack of trust in, or acceptance of, AI applications in healthcare among health consumers. A recent Pew survey found that 60% of Americans are uncomfortable with AI being used in their healthcare, and fewer than half believed that AI would improve health outcomes. The findings were not all negative. A majority thought that AI would reduce the number of mistakes made by healthcare providers and that it could also help eliminate bias and unfair treatment in healthcare. Respondents were comfortable with AI tools for skin cancer detection, but decidedly less comfortable with AI surgical robots and the use of chatbots for mental health screenings. They were also concerned that these technologies will be adopted too quickly, before risks to patients are understood and minimized.

References

  1. Dubey, A. and Tiwari, A. (2023). Artificial intelligence and remote patient monitoring in US healthcare market: a literature review. Journal of Market Access & Health Policy, 11(1), 2205618. https://doi.org/10.1080/20016689.2023.2205618
  2. Saeidi, H., Opfermann, J.D., Kam, M., et al. (2022). Autonomous robotic laparoscopic surgery for intestinal anastomosis. Science Robotics, 7(62). https://doi.org/10.1126/scirobotics.abj2908
  3. Bothra, A., Cao, Y., Černý, J., & Arora, G. (2023). The epidemiology of infectious diseases meets AI: a match made in heaven. Pathogens, 12(2), 317. https://doi.org/10.3390/pathogens12020317
  4. Gaines, K. (2022). Delivery care robots are being used to alleviate nursing staff. Nurse.org. https://nurse.org/articles/delivery-care-robots-launched-in-texas/
  5. Chakravorti, B. (2022). Why AI failed to live up to its potential during the pandemic. Harvard Business Review. https://hbr.org/2022/03/why-ai-failed-to-live-up-to-its-potential-during-the-pandemic


OpenAI, an artificial intelligence research and development company, released the latest version of their generative text chatbot program, ChatGPT, near the end of 2022. The program provides responses based on prompts from users. Since its release, universities, research institutions, publishers, and other educators have worried that ChatGPT and similar products will radically change the current education system. Some institutions have taken action to limit or ban the use of AI-generated text. Others argue that ChatGPT and similar products may be the perfect opportunity to reimagine education and scholarly publishing. There is a lot to learn about AI and its impact on research and publishing. This article aims to serve as an introduction to this rapidly evolving technology.

In a Nature article, Chris Stokel-Walker described ChatGPT as “a large language model (LLM), which generates convincing sentences by mimicking the statistical patterns of language in a huge database of text collated from the Internet.” (Stokel-Walker, 2023, para. 3) OpenAI’s website says “The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” (OpenAI, n.d., para. 1) ChatGPT may be used to answer simple and complex questions and may provide long-form responses based on the prompt. In recent months, students and researchers have used the chatbot to perform simple research tasks or develop and draft manuscripts. By automating certain tasks, ChatGPT and other AI technologies may provide people with the opportunity to focus on other aspects of the research or learning process.

There are benefits and limitations to AI technology and many people agree that guidelines must be in place before ChatGPT and similar models are fully integrated into the classroom or laboratory.

Van Dis et al. note that "Conversational AI is likely to revolutionize research practices and publishing, creating both opportunities and concerns. It might accelerate the innovation process, shorten time-to-publication, and by helping people to write fluently, make science more equitable and increase the diversity of scientific perspectives." (van Dis et al., 2023, para. 4) Researchers who have limited or no English language proficiency would benefit from using ChatGPT to develop their manuscripts for publication. The current version of ChatGPT is free to use, making it accessible to anyone with internet access and a computer. This may make scholarly publishing more equitable, though a version of the program is only available with a monthly subscription fee. If future AI technologies require fees, this will create additional access and equity issues.

While ChatGPT can produce long-form, seemingly thoughtful responses, there are concerns about its ability to accurately cite information. OpenAI states that "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers." (OpenAI, n.d., para. 7) There is a potential for AI-generated text to spread misleading information. Scholars who have tested ChatGPT also note that the AI will create references that do not exist. Researchers must fact-check the sources pulled by the AI to ensure that their work adheres to current integrity standards. There are also concerns about whether ChatGPT properly credits original sources: "And because this technology typically reproduces text without reliably citing the original sources or authors, researchers using it are at risk of not giving credit to earlier work, unwittingly plagiarizing a multitude of unknown texts and perhaps even giving away their own ideas." (van Dis et al., 2023, para. 10)

Students and researchers interested in using AI-generated text should be aware of current policies and restrictions. Many academic journals, universities, and colleges have updated their policies to either limit or completely ban the use of AI in research. Other institutions are actively discussing their plans for this new technology and may implement new policies in the future. At the time of writing, GWU has not shared policies addressing AI usage in the classroom. If you're interested in using AI-generated text in your research papers or projects, be sure to closely read submission guidelines and university policies.

ChatGPT and other AI text generators are having profound impacts, and as the technology continues to improve, it will become increasingly difficult to distinguish work written without the aid of an AI from work co-authored with an AI. The long-term impacts of AI in the classroom have yet to be fully understood. Many institutions are moving to address this new technology. As we continue to learn about ChatGPT's benefits and limitations, it is important to remain aware of your institution's policies on using AI in research. To learn more about ChatGPT, please read any of the sources listed below! Himmelfarb Library will continue to discuss AI technology and its impact on research as more information is made available.

Additional Reading:

Work Cited: