Himmelfarb has more than books and articles! This article will highlight some of the exciting options available to you as SMHS, GWSPH, or GW Nursing students.

3D Printing: 

If you’ve stopped by the circulation desk, you may have noticed a slight scenery change: Himmelfarb has a new Bambu Lab 3D printer! The Bambu Lab X1 Carbon prints significantly faster than our older printers, greatly reducing turnaround time and allowing us to process more jobs. Plus, it can print in multiple colors, leading to festive and interesting options.

You can print as many curricular prints as the queue allows and one non-curricular print a month (full policy here). 

If you’re wondering where to find 3D printer models, check out this article!

The applications for med students are vast: from stethoscope holders to molecular diagrams to model organs. 

Or fun friends, like this poseable turtle. 

A 3D printed turtle stands angled towards the camera.

VR:

Himmelfarb has two Oculus Quest VR headsets for checkout. 

A VR headset is displayed behind a glass case.

[Oculus headset on display at the Himmelfarb library - available for 4hr checkouts]

These are great for taking a study break with guided meditations or nature walks (just make sure you have the appropriate space). Or, if you want to get serious about studying, you can take advantage of the preloaded Medicalholodeck medical VR platform (which includes Anatomy Master XR, Medical Imaging XR, and Dissection Master XR). Somewhere between a textbook and a cadaver lab, Medicalholodeck lets you inspect high-resolution dissections layer by layer alongside your research.

Check out the video below for a brief demonstration:

BodyViz

Like Medicalholodeck, BodyViz is an interactive anatomy visualization tool that lets users view, study, and manipulate 3D anatomical structures. There's a bit of a learning curve, but once you get a handle on it, the BodyViz slicing software lets you digitally dissect models with great precision, allowing for intensive inspection.

Unlike the VR headsets - which can be used anywhere you find the space - BodyViz is best used in the Levine lounge (Himmelfarb 305A), adjacent to the Bloedorn Technology Center. All of these materials are available at our circulation desk. To learn more, explore our BodyViz Guide.

We hope these help take your studies to the next level.

LibKey Nomad Logo

LibKey Nomad is a quick and easy way to get the full-text PDFs of journal articles and book chapters that Himmelfarb Library provides in our collection. This free browser extension lets you view content on the publisher's site and download full-text PDFs quickly and easily! LibKey Nomad is available for web browsers (Chrome, Firefox, Safari, Edge, Brave, and Vivaldi). 

To use LibKey Nomad, install it as a browser plug-in and choose “George Washington University - Himmelfarb Library” as your home library when prompted.

LibKey Nomad is active on publisher sites, PubMed, Amazon, and Google Books! With LibKey Nomad, you’ll be alerted when full-text articles and e-books are available from Himmelfarb, and in most cases you can get the PDF with a single click. When searching databases such as PubMed, look for the “Article Link” or “Download PDF” buttons shown below.

Here’s a screenshot of how these links appear in PubMed:

Screenshot of PubMed results page with LibKey Nomad Article Link and Download PDF buttons.

Many sites will provide a pop-up in the bottom left corner of the screen with the LibKey Nomad logo and “Provided by George Washington University - Himmelfarb Library” (see below). Simply clicking on these buttons will take you to the full-text content.

Screenshot of a LibKey Nomad "Provided By George Washington University - Himmelfarb Library" button.

When you search, LibKey Nomad automatically integrates full-text access from Himmelfarb directly from where you find the content! For example, if you are searching for a book on Amazon, and Himmelfarb Library owns an e-book of the item you’re looking for, LibKey Nomad will give you the option to access the full text through Himmelfarb - which can end up saving you money! 

Screenshot of a LibKey button in Amazon.

LibKey Nomad even tells you when an article has been retracted! When a search in Himmelfarb’s search box retrieves a retracted article, the PDF button that would normally appear in the results is replaced with a Retracted Article button, as shown in the screenshot below.

Screenshot of a LibKey Nomad Article Retracted button in PubMed.

Clicking on the Article Retracted button will open a window that details why the article was retracted (data retrieved from Retraction Watch) and links if you still want to read or download the article.

LibKey Nomad makes getting to the full-text and PDFs of articles and book chapters fast and easy, often in as little as a single click! Install the LibKey Nomad browser extension in your favorite browser today! Contact us if you have questions about installing or using LibKey Nomad! You can reach us at himmelfarb@gwu.edu or chat with us during business hours.

Himmelfarb has more than articles and eBooks! Make the most of your Himmelfarb access and check out our collection of tools and AV equipment that will help you along your healthcare journey: from chargers to VR to blood pressure kits. Items check out for 4 hours. Ask one of our staff at the circulation desk for more details.

Suture Practice Kit

A suture practice kit is displayed on a countertop.

Vscan Portable Ultrasound Machine

A Vscan Ultrasound machine is displayed on a countertop.

Oculus VR Headset

An Oculus VR headset is displayed on a countertop.

BodyViz

 [Accessories to access the 3D Anatomy software in the 3rd Floor Levine Lounge]

The BodyViz accessory kit, which includes a keyboard, remote control, wireless mouse, and Xbox controller, is displayed on a countertop.

AliveCor ECG Monitor

An ECG monitor is displayed on a countertop.

Wireless Blood Pressure Monitor

A Wireless Blood Pressure Monitoring kit is displayed on a countertop.

iPhone Otoscope [for use with CellScope companion app]

An iPhone otoscope is displayed on a countertop.

LectroFan Evo White Noise Machine [for use in study rooms]

A white noise machine is displayed on a countertop.

Withings Advanced Health and Fitness Tracker

A fitness tracker is displayed on a countertop.

20W iPhone Chargers: Lightning and USB-C

Two 20W iPhone chargers - one USB-C and one Lightning - are displayed on a countertop.

67W MacBook Charger

A 67W MacBook charger is displayed on a countertop.

USB-C to HDMI Out Adapter 

A USB-C to HDMI adapter is displayed on a countertop.

USB-C to USB-A IN Adapter 

A USB-C to USB-A IN adapter is displayed on a countertop.

HDMI Cable

An HDMI cable is displayed on a countertop.

9mm Wired Headphones

A pair of headphones are displayed on a countertop.

One of the most frequent questions we’ve gotten at the library in recent months is about A.I. What is A.I.? Is A.I. the future? Are we all about to be replaced by robots? In this month's comic strip, we simplify A.I. in order to make sense of what's realistic, what's plausible, and what's still science fiction.

Speech Bubble 1: Ever since AI burst onto the scene, I’ve seen a lot of folks misunderstand how it works.
Image: Rebecca, a librarian with light skin and dark curly brown hair in a ponytail speaks in front of a bunch of tech items.
Panel 4: 
Narration: In reality, while AI can write or talk, it’s not “thinking” like humans do.
Image: The robot displaying a blank expression is next to a thought bubble showing binary code.
Narration: To understand how AI “thinks” we need to understand what this kind of AI is and how it works.
Image: There is a monitor, and on it, a pixelated version of Rebecca is shown next to the text “Understand A.I.” Under that is the text “A: Y B: N.”
Panel 6: 
Narration: First, the kind of AI seen in movies is not the same kind in ChatGPT. That is, self-aware AI currently doesn’t exist outside of fiction.
Image: Two books are shown. One of the books has a picture of a robot on it stating “Foolish: it is statistically unlikely to be lupus.” The title of the book is “Watt.Son M.D.”
Panel 7: 
Speech Bubble: The AI we see discussed today is known as generative AI. It can produce things like text, images and audio by being trained on large amounts of data (1).
Image: A flow chart is shown. A bunch of file cabinets comes first, then an audio icon next to the word “or,” then a picture of a monitor next to the word “or,” and then a smiley face drawing.
Panel 7:
Narration: I’m going to vastly simplify. Say we want an AI to make images of sheep. First, we’d grab a bunch of images of sheep as our training data.
Image: A table is covered with a variety of photos of sheep. The sheep are all different sizes and colors.
Panel 8:
Narration: Over time, as we feed the model more pictures of the sheep, the model starts to identify common shared characteristics between the images. 
Image: There is a little white sheep with a black face. Next to it, text states: “Aspect: fluffy, Feature 1 (ear), Feature 2 (eye), Feature: tail = sheep.”
Panel 9:
Narration: Now, when this works as intended, after tons of images, our AI can start to produce images of sheep itself based off the training data. This is why it’s called “generative” AI; it is generating new content.
Image: The robot from earlier has an excited expression on its monitor. It points to a fridge where a picture of a sheep is displayed.
Panel 10:
Narration: The AI is able to produce these images not because it now “knows” what a sheep is, but essentially through large-scale probability. I’m still vastly simplifying, but the AI makes the sheep fluffy not because it knows what wool is, but because 100% of its training data includes wool.
Image: Rebecca stands in front of a television screen. On the screen, the robot looks confused at a black sheep in a field. 
Panel 11: 
Narration: So if we apply this to words, AI is not so much writing as it is calculating the probability of what word is most likely to follow the word it just typed. Sort of like autocorrect. 
Image: The background is a thunderstorm. There is text that reads: “It was a dark and stormy _____? A. Night 90% B. Evening 7% C. Afternoon 2% D. Day 1%”
Panel 12: 
Narration: Okay, so why bother making this distinction? Why does it matter?
Image: The robot is shown with its monitor displaying a buffering message. Above it, a chibi Rebecca says “Let me explain.”

Panel 13:
Narration: AI relies on its training data. Let’s consider the sheep example from earlier. In the photos I drew, none of them show a sheep’s legs. 
Image: Rebecca sits in front of her tablet with a drawing pen. She gestures to the viewer, exasperated. 
Rebecca’s Speech Bubble: “Look, I only have so much time to draw these things.”
Panel 14: 
Narration: If all the images I feed our hypothetical AI are of sheep from the middle up, we might get something like this.
Image: Three pictures of sheep are displayed. None of the sheep have legs and instead are puffballs of wool. One sheep is square shaped.
Narration (cont.): Our AI can only generate based on its data. So if we feed it no pictures of sheep with legs, we get no pictures of sheep with legs (frankly, it also shouldn’t make images of a sheep whose entire body is in the frame). The backgrounds will be a mashup too, as the AI considers them part of the image. This leads to interesting results with a wide range of background types.
Panel 15:
Narration: This is one of the reasons AI images struggle with details like fingers: how many fingers you can see in an image of a person varies widely depending on their pose and the angle of the photograph (2).
Image: Four hands with different skin tones are shown, each with a different gesture. In a little bubble to the left, Rebecca is shown looking tired.
Rebecca Speech Bubble: Drawing hands is hard…
Panel 16:
Narration: The same thing goes for writing: when AI writes out “it was a dark and stormy night” it has no comprehension of any of those words. It’s all based on probability. And this is the misconception that leads to so many problems.
Image: The robot is seated at a chair, typing at a computer. From the computer, text reads “it was a dark and stormy night,” and from the robot’s speech bubble we get more binary.
Panel 17: Narration: For example, let’s take AI hallucinations. AI hallucinations refer to when AI makes things up, essentially lying to the user. Now that we understand how AI works, we can understand how this happens.
Image: The robot is shown with its monitor full of a kaleidoscope of colors and two big white eyes. The background behind it is also a mix of colors.
Panel 18: Narration: AI has no comprehension of lies or the truth. It is regurgitating its training data. Which means that if it doesn't have the answer in the training data, or is fed the wrong answer, what you’re going to get is the wrong answer.
Panel 19: Narration: For example, Google AI made headlines when it recommended people use glue to make the cheese stick on their pizza (3).
Image: A man with dark skin, glasses and a beard stands in front of a pizza and a bottle of glue. He is wearing an apron. 
Man’s speech bubble: “At least it said to use non-toxic glue.”
Panel 20: Narration: Now, where did it get this cooking tip? A joke post from Reddit. Google made a deal with Reddit to train its AI on the site’s data in February 2024.
Image: The avatar for Reddit yells after the robot, who is running off with the image of a glue bottle on its monitor.
Reddit avatar’s speech bubble: It was a joke!
Panel 21: Narration: That example was pretty harmless, but it can be much worse. AI has told people to eat poisonous mushrooms (4), provided dieting advice on a hotline for eating disorders (5), and displayed racial bias (6).
Image: The grim reaper is shown, wearing a little chef scarf with his scythe. Next to him is a display of mushrooms. Underneath, text reads: “Guest chef Death showcases favorite deadly mushrooms.”
Panel 22: Narration: Generative AI systems also come up with fake citations to books and papers that don’t exist. Often they mash up real authors and journals with fake DOI numbers (7).
Image: Three journals are shown, their covers composed of fragments of other journals stitched together.
Panel 23: Narration: And don’t get me started on the ways images can go wrong (8).
Image: Rebecca stands next to a table with school supplies and a rat. The rat is looking up at her with a question mark over its head.
Rebecca’s speech bubble: Just look up AI rat scandal and you’ll understand why I didn’t draw an example.
Panel 24: Image: The rat from the last panel is shown. 
Rat speech bubble: So AI is worthless? 
Narration: Absolutely not!
Panel 25: 
Narration: AI absolutely has uses. While it’s still in its early stages, AI has shown promise in helping doctors identify potentially cancerous moles (9).
Image: The robot and a doctor look at a monitor
Doctor: Should I make a biopsy of both?
Robot: 71%
Doctor: Both it is!

Panel 25: 
Narration: But it’s not a magical solution to every problem. And when we forget that, our “artificial intelligence” is more artificial than anything intelligent.
Image: The robot’s monitor is shown with the citations for this comic displayed.

Comic written and drawn by: Rebecca Kyser

Citations: 

1. Christian B. The Alignment Problem: Machine Learning and Human Values. W.W. Norton & Company; 2021.

2. Lanz JA. AI Kryptonite: Why Artificial Intelligence Can’t Handle Hands. Decrypt. Published April 10, 2023. Accessed August 5, 2024. https://decrypt.co/125865/generative-ai-art-images-hands-fingers-teeth

3. Robison K. Google promised a better search experience — now it’s telling us to put glue on our pizza. The Verge. Published May 23, 2024. Accessed August 5, 2024. https://www.theverge.com/2024/5/23/24162896/google-ai-overview-hallucinations-glue-in-pizza

4. AI-powered mushroom ID apps are frequently wrong. The Washington Post. Accessed August 5, 2024. https://www.washingtonpost.com/technology/2024/03/18/ai-mushroom-id-accuracy/

5. Wells K. An eating disorders chatbot offered dieting advice, raising fears about AI in health. NPR. https://www.npr.org/sections/health-shots/2023/06/08/1180838096/an-eating-disorders-chatbot-offered-dieting-advice-raising-fears-about-ai-in-hea. Published June 9, 2023. Accessed August 5, 2024.

6. Noble SU. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press; 2018. doi:10.18574/9781479833641

7. Welborn A. ChatGPT and Fake Citations. Duke University Libraries Blogs. Published March 9, 2023. Accessed August 5, 2024. https://blogs.library.duke.edu/blog/2023/03/09/chatgpt-and-fake-citations/

8. Pearson J. Study Featuring AI-Generated Giant Rat Penis Retracted, Journal Apologizes. Vice. Published February 16, 2024. Accessed August 5, 2024. https://www.vice.com/en/article/4a389b/ai-midjourney-rat-penis-study-retracted-frontiers

9. Lewis. An artificial intelligence tool that can help detect melanoma. MIT News | Massachusetts Institute of Technology. Published April 2, 2021. Accessed August 5, 2024. https://news.mit.edu/2021/artificial-intelligence-tool-can-help-detect-melanoma-0402
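
If you’d like to poke at the “sort of like autocorrect” idea from Panel 11 yourself, here is a minimal sketch in Python. To be clear, this is not how ChatGPT or any real language model is built; it is just the counting-and-probability intuition from the comic, and the tiny training corpus is made up for illustration.

```python
# A toy "next word" predictor: count which word follows which in a tiny,
# made-up training corpus, then turn those counts into probabilities.
# This is a deliberate oversimplification, matching the comic's autocorrect analogy.
from collections import Counter, defaultdict

corpus = (
    "it was a dark and stormy night . "
    "it was a dark and stormy evening . "
    "it was a dark and stormy night . "
    "it was a dark and quiet night . "
).split()

# "Training": count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_probabilities(word):
    """Return {candidate: probability} for words seen after `word` in training."""
    counts = following[word]
    total = sum(counts.values())
    return {candidate: count / total for candidate, count in counts.items()}

print(next_word_probabilities("stormy"))  # roughly {'night': 0.67, 'evening': 0.33}
print(next_word_probabilities("dark"))    # {'and': 1.0}
```

The “model” never learns what a storm is; it only learns that, in its training data, “night” followed “stormy” most of the time. That is the whole point of Panels 10 and 11, and it is also why a gap or a joke in the training data comes straight back out as a confident wrong answer.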

Network Maintenance: 7/13 No WiFi!

GW IT has scheduled network maintenance on the Foggy Bottom Campus on Saturday, July 13th from 8:00 am until 4:00 pm. Impacts to Himmelfarb Library, Ross Hall, and the Milken Institute School of Public Health include disruption of WiFi and network connections; anything relying on network services will be unavailable. More information can be found at GW IT's Network Maintenance Schedule & Impacts.

Despite the loss of WiFi and network connections at Himmelfarb Library, the Library's website (https://himmelfarb.gwu.edu/) should remain accessible from other locations.

A picture of black framed glasses

When people ask if I use TikTok, I respond with an unconventional reply: only for research purposes. While the wording I use is intended to be humorous, I’m actually telling the truth. I downloaded TikTok in grad school to study misinformation on social media during the height of the Covid-19 pandemic, and to this day, I use it to see what misinformation is spreading online.

Before I get into specifics, let me be clear that TikTok is not the only source of misinformation on the internet or even the worst. It’s hard to gauge how much misinformation spreads from platform to platform, and it’s my personal belief that every social media site has its own misinformation problem that expresses itself in different ways. The problems I see on TikTok can be seen everywhere from X to Meta. But given TikTok’s popularity, it seems best to focus on that platform. 

Back to TikTok. To give readers a taste of what I’m talking about, here is a small sampling of the type of videos I got on TikTok just last week:

  • Claims that eating a type of seed leads to living over 100 years old and never getting cancer
  • Claims that using a type of veggie powder in your drink can improve your heart health, a powder the video creator also sells
  • The claim that teeth can heal themselves and that dentists are lying to you to gain money 
  • Promotion of a crash diet that claims to cure all diseases 

Needless to say, these health claims are false. If there were a type of seed we could eat that would lead to perfect health, we’d all know about it. But I bring up this type of content not to debunk it, but to ask an important question: why do people believe in these kinds of claims? Most of this content is easily disproved, and some of it has been debunked for decades – take the claim that one does not need glasses, for example (Klee, 2023).

The answer to that question is multifaceted. Since the study of misinformation became more popular, journalists and scientists alike have theorized why some health claims, no matter how ridiculous, still manage to go viral. History has also offered some insights; given the long history of quackery, there are plenty of examples to pull from. 

Here are some reasons why these ideas continue to be popular:

  • Lack of Health Literacy: Health literacy is something that one has to learn. When you lack health literacy, you can be more susceptible to believing outlandish health claims. 
  • Cost of Health Care: Health care is expensive, especially if you lack insurance. When you can’t afford recommended treatments, fringe medical ideas can be more appealing if they don’t break the bank. 
  • They Confirm Our Biases: Claims that back up biases we already have can be particularly enticing. If I love fresh strawberries and see a post claiming fresh strawberries help treat an illness, I’m more likely to buy into the idea. 
  • Negative Experiences with Healthcare: When you have negative encounters with healthcare, alternatives become more appealing, especially when they validate feelings that were previously dismissed (Boyle, 2022).
  • Feeling Anxious/Lacking Control: It’s scary to feel out of control of your own health. Many scams promise that their products offer control over your body, which can be very tempting (Nan, 2022).

One thing I want to draw attention to is what isn’t included on this list: lack of intelligence. While health literacy does help one spot misinformation, it does not provide immunity: some of the biggest spreaders of Covid-19 misinformation are medical professionals (Bond, 2021). No amount of factual knowledge makes us immune to fear, bias, and other emotions that drive the uptake of health misinformation. 

So how do we avoid misinformation? Remaining skeptical is always a good place to start, as well as being aware of one's own biases. As for combating misinformation on a wider scale, teaching health literacy, assisting individuals with high healthcare costs, and improving patient experiences are all ways to start. 

In the meantime, let us all take the health information we hear online with a grain of salt we can only see if we keep wearing our glasses.

Bond, Shannon. “Just 12 People Are Behind Most Vaccine Hoaxes On Social Media, Research Shows.” NPR, 14 May 2021. NPR, https://www.npr.org/2021/05/13/996570855/disinformation-dozen-test-facebooks-twitters-ability-to-curb-vaccine-hoaxes.

Boyle, Patrick. “Why Do People Believe Medical Misinformation?” AAMC, 3 Nov. 2022, https://www.aamc.org/news/why-do-people-believe-medical-misinformation.

Klee, Miles. “‘You Do Not Need Glasses’: A Wellness Coach’s Bogus Claim -- And Its 100-Year History.” Rolling Stone, 18 Sept. 2023, https://www.rollingstone.com/culture/culture-features/wellness-coaches-wearing-glasses-false-claims-1234822624/.

Nan, Xiaoli, et al. “Why Do People Believe Health Misinformation and Who Is at Risk? A Systematic Review of Individual Differences in Susceptibility to Health Misinformation.” Social Science & Medicine, vol. 314, Dec. 2022, p. 115398. ScienceDirect, https://doi.org/10.1016/j.socscimed.2022.115398.

Robson, David. “Why Smart People Are More Likely to Believe Fake News.” The Guardian, 1 Apr. 2019. The Guardian, https://www.theguardian.com/books/2019/apr/01/why-smart-people-are-more-likely-to-believe-fake-news.

Robson, David. “Why Smart People Believe Coronavirus Myths.” BBC, 6 Apr. 2020, https://www.bbc.com/future/article/20200406-why-smart-people-believe-coronavirus-myths.

Image of a peach background with white scrabble tiles spelling out the word "Login" in the center of the image.
Photo from Pexels by Miguel Á. Padriñán 

On Monday, May 20, 2024, Himmelfarb Library, in partnership with GW Libraries and Academic Innovation (GW LAI), will change the underlying system that provides access to our online collections including our ebooks, databases, and journals. This change will be seamless for most users as the new system (OpenAthens) uses the same GW Single Sign-On login method used by our current system (EZproxy).

Changes you can expect to see when accessing Himmelfarb’s e-resources include:

  • When accessing a subscription resource through the library, you may be prompted to log in with your GW UserID and password, even if you are on campus or connected to the VPN.
  • If you’ve recently logged in to another resource via GW’s Single Sign On, you may be able to access resources without logging in again.
  • Many publishers offer direct OpenAthens login from their resources by providing an interface that allows users to choose their institution and select a Single Sign On option for access (for example: JAMA’s OpenAthens access)

For GW faculty, instructional designers, and staff who maintain links to course materials in Blackboard and other course management systems, links that use the older EZproxy system will need to be updated. However, additional time is available to make this transition. 

Here are some key points to know:

  • EZproxy links continue to work and users will be forwarded directly to the linked resource.
  • Link forwarding will remain in place for one year to allow for time to update links to OpenAthens. 
  • Most Blackboard links will be updated automatically via a GW LAI project.
  • Links embedded in PDF documents will need to be updated manually (see the sketch after this list for the general shape of that change).
  • Linking to Electronic Resources provides support for creating durable links:
    • A QuickTool that allows you to generate and test OpenAthens links.
    • Other methods for creating durable links to journal articles, books, streaming videos, etc.
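
For anyone who maintains a long list of durable links (in PDFs, syllabi, and so on), the update is essentially a prefix swap: an EZproxy link wraps the target URL in a login?url= parameter, while an OpenAthens link typically wraps the same target in the OpenAthens redirector. The sketch below shows that idea in Python; the proxy hostname and redirector domain are placeholders rather than Himmelfarb’s actual values, so confirm the real prefixes with the QuickTool or the Linking to Electronic Resources guide before relying on anything like this.

```python
# Sketch: rewrite an EZproxy-style durable link into an OpenAthens redirector link.
# NOTE: both prefixes below are illustrative placeholders, not Himmelfarb's real
# values; get the correct ones from the Linking to Electronic Resources guide.
from urllib.parse import urlsplit, parse_qs, quote

EZPROXY_HOST = "proxy.example.edu"  # placeholder EZproxy hostname
OPENATHENS_REDIRECTOR = "https://go.openathens.net/redirector/example.edu?url="  # placeholder domain

def ezproxy_to_openathens(link: str) -> str:
    """Convert https://<proxy>/login?url=<target> into a redirector-style link."""
    parts = urlsplit(link)
    if parts.hostname != EZPROXY_HOST:
        return link  # not an EZproxy link; leave it unchanged
    target = parse_qs(parts.query).get("url", [""])[0]
    return OPENATHENS_REDIRECTOR + quote(target, safe="")

print(ezproxy_to_openathens(
    "https://proxy.example.edu/login?url=https://jamanetwork.com/journals/jama"
))
```

Most people won’t need code at all, since the QuickTool generates and tests links one at a time; the sketch is only meant to show that the target URL itself doesn’t change, just the wrapper around it.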

For additional support related to access and durable linking, see the following resources:

For a moment, let’s entertain a hypothetical. Let’s say you have an excellent paper on your hands about the impact of smoke on the lungs. Your team is about to submit it for publication: pretty exciting! When you get your paper back from the publisher, it’s mostly good news: they’re willing to publish your paper with the caveat that you add a diagram of the lungs to your paper as a visual aid of the systems impacted. The problem? You have no idea where you could acquire an image that would suit this task that wouldn’t potentially violate copyright. 

Faced with this conundrum, one of your coauthors suggests a solution: why not generate one? They have a subscription to Midjourney, the AI software that can generate images from text. Why not give Midjourney a summary of the diagram you need, have it generate it, and then use that for your paper? After checking the journal’s policies on AI (it’s allowed with disclosure), you do just that, glad to have quickly moved past that stumbling block.

Pretty great, right? It sure sounds like it, until you take a look at the images Midjourney generated. Because on closer inspection, there are some problems. 

Below is an image I generated in Copilot for this blog post. I didn’t ask it to do something as complicated as making a diagram of how smoking impacts the lungs; instead, I asked for just a diagram of human lungs. Here is what I got, with my notes attached.

An image of an AI generated diagram of the lungs in a human woman is featured with red text boxes pointing to errors. In the upper left, a box says "nonsense or gibberish text" and a red line points to oddly generated letters that mean nothing. Below it, another box reads "I don't know what this is supposed to be, but I don't think it's in the armpit" with a line pointing to what looks to be an organ with a flower pattern in it. Below that, another box reads "this heart is way too small for an adult" and the red line points to the heart on the diagram. On the left, the top red box reads "now the stomach does not reside in one's hair or arteries" with red lines pointing to a picture of the stomach that is falsely labeled as being in the hair or neck. Below that, a new box reads "what are the gold lines supposed to be in this diagram" and it points to yellow veins that run through the figure like the red and blue ones that usually denote the circulatory system. The last box on the right says "I have no idea what this is supposed to be" and points to what looks to be bone wrapped around a tube leading out of the bottom of the lungs.

Alright, so this might not be our best image. Thankfully, we have others. Let’s take a look at another image from the same prompt and see if it does a better job. 

An image of an AI generated diagram of the lungs is featured with red text boxes pointing to errors. In the upper left, a box says "more nonsense text" and a red line points to oddly generated letters that mean nothing. On the right side, a box says "bubbles should not be in the lungs!" with a red line pointing to  what looks to be odd bubbles inside the lungs. Below it, a red box reads "what are these small clumps/objects?" and it points to what looks to be red large bacteria and clumps on the lungs.

So what happened here? To explain how this image went terribly wrong, it’s best to start with an explanation of how AI actually works.

When we think of AI, we generally think of movies like The Terminator or The Matrix, where robots can fully think and make decisions, just like a human can. As cool (or terrifying, depending on your point of view) as that is, such highly developed forms of artificial intelligence still exist solely in the realm of science fiction. What we call AI now is something known as generative AI. To vastly simplify the process, generative AI works as follows: you take a computer and feed it a large amount of information that resembles what you want it to generate. This is known as “training data.” The AI then attempts to replicate images based on the original training data. (Vox made a video explaining this process much better than I can.) So, for example, if I feed an AI pictures of cats, over time it identifies aspects of cats across photos: fur, four legs, a tail, a nose, etc. After a period of time, it then generates images based on those qualities. And that’s how we get websites like “These Cats Do Not Exist.”
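
To make that “it only recombines what it has seen” point concrete, here is a deliberately tiny sketch in Python. It is nothing like how a real image generator works under the hood; it just treats each training image as a handful of labeled features and “generates” by sampling from the values it has seen, so a feature that never appears in the training data can never show up in the output. All of the feature names and values are made up for illustration.

```python
# Toy illustration of "generating from training data": each training image is a
# dict of features, and the "model" can only recombine feature values it has seen.
# This is a conceptual sketch, not how real image generators actually work.
import random
from collections import defaultdict

training_images = [
    {"fur": "tabby",  "eyes": "green",  "pose": "sitting"},
    {"fur": "black",  "eyes": "yellow", "pose": "sitting"},
    {"fur": "calico", "eyes": "green",  "pose": "loafing"},
]

# "Training": record every value observed for each feature.
observed = defaultdict(set)
for image in training_images:
    for feature, value in image.items():
        observed[feature].add(value)

def generate_cat():
    """Sample one value per feature from what was observed in training."""
    return {feature: random.choice(sorted(values)) for feature, values in observed.items()}

print(generate_cat())  # e.g. {'fur': 'black', 'eyes': 'green', 'pose': 'loafing'}
# No training image shows paws, whiskers, or a standing pose, so no generated
# "cat" will ever include them; the model cannot invent features it never saw.
```

Where the training photos agree, the output looks right; where they vary or leave something out, the output falls apart, which is exactly what happens with the fake cats below.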

If you take a look at “These Cats Do Not Exist,” you might notice something interesting: the quality of fake cat photos varies widely. Some of the cats it generates look like perfectly normal cats. Others appear slightly off; they might have odd proportions or too many paws. And a whole other contingent appears as what can best be described as eldritch monstrosities.

The reason for the errors in both our images above and our fake cats is that the AI doesn’t understand what we are asking it to make. The bot has no concept of lungs as an organ, or cats as a creature; it merely recognizes aspects and characteristics of those concepts. This is why AI art and AI images can look impressive on the surface but fall apart under any scrutiny: the robot can mimic patterns well enough, but the details are much harder to replicate, especially when details vary so much between images. For example, consider these diagrams of human cells I had AI generate for this blog post.

A picture of an AI generated human cell. There are red boxes with text pointing out errors and issues in the image. The top box has the text "nonsense words. Some of these labels don't even point to anything" with two red lines pointing to a series of odd generated letters that mean nothing. Below that, a red box has the text "I have no idea what this is supposed to be" with a red line pointing to a round red ball. On the right side, a text box reads "is this supposed to be a mitochondria? Or is it loose pasta?" with a red line pointing to what looks to be a green penne noodle in the cell. Below that, a red text box reads "I don't think you can find a miniature man inside the human cell" and a red line points to the upper torso and head of a man coming out of the cell.

Our AI doesn’t do badly in some regards: it understands the importance of a nucleus, and that a human cell should be round. This is pretty consistent across the images I had it make. But when it comes to showcasing other parts of the cell, we run into trouble, given how differently such things are presented in other diagrams. The shape that one artist might decide to use for an aspect of a cell, another artist might draw entirely differently. The AI doesn’t understand the concept of a human cell; it is merely replicating images it’s been fed.

These errors can lead to embarrassing consequences. In March, a paper went viral for all the wrong reasons: the AI images the writers used had many of the flaws listed above, along with a picture of a rat that was rather absurd. While the writers disclosed the use of AI, the fact that these images passed peer review with nonsense text and other flaws turned into a massive scandal. The paper was later retracted.

Let’s go back to our hypothetical. If you need images for your paper or project, instead of using AI, why not use some of Himmelfarb’s resources? On this Image Resources LibGuide, you will find multiple places to find reputable images, with clear copyright permissions. There are plenty of options from which to work. 

As for our AI image generators? If you want to generate photos of cats, go ahead! But leave the scientific charts and images for humans. 

Sources:

  1. AI Art, explained. YouTube. June 1, 2022. Accessed April 19, 2024. https://www.youtube.com/watch?v=SVcsDDABEkM.
  2. Wong C. AI-generated images and video are here: how could they shape research? Nature (London). Published online 2024.

Picture of a person meditating in lotus pose on a yoga mat with a Virtual Reality headset nearby.
Photo by Eren Li

April is Stress Awareness Month. Himmelfarb Library’s Oculus Virtual Reality (VR) headsets now include healthy living apps that can help you manage your stress! These new apps help users meditate, alleviate anxiety, and generally relax. Take a few minutes to unwind and get recentered in virtual reality so you’ll feel refreshed and rejuvenated in actual reality!

Our Oculus headsets can be checked out from the circulation desk on Himmelfarb’s first floor for four hours at a time. You’ll need some unobstructed space to use the headset since the apps allow you to move around within a virtual space. Our VR Headset Overview page includes recommended spaces within Himmelfarb to use the headsets that can accommodate the space needed to use these apps comfortably. 

Guided Meditation VR

The Guided Meditation VR app helps users detach and relax with guided or unguided meditation sessions with calming music and ambient noises from more than 40 digitally-generated environments. This app has over 30 hours of meditations geared toward alleviating anxiety, finding resilience, improving sleep, and even maternity meditations. If you’re unsure about VR but want to experience some of the sessions, you can try them out for free online! This app is available on both of Himmelfarb’s Oculus headsets. 

Nature Treks VR

The Nature Treks VR app lets users choose between nine different natural environments and lets them explore and play. You can choose to explore forests, beaches, or even outer space! You even get to choose your preferred weather and time of day and can summon animals. These individually designed spaces can be used as places to meditate or perform breathing exercises. This app is available on Himmelfarb’s “Walter” headset so that you can ask for it by name at the Circulation Desk. 

National Geographic Explore VR

The National Geographic Explore VR app lets users choose between two different ecosystems to explore: Machu Picchu and Antarctica! In Antarctica, you’ll get to navigate around icebergs in a kayak, climb a massive ice shelf, and survive a raging snowstorm while searching for a lost emperor penguin colony. Or you can visit Machu Picchu, Peru, and explore digital reconstructions of the ancient Inca citadel, raise a cup of sacred chicha, and encounter alpacas while you match Hiram Bingham’s photographs from when he rediscovered the Inca citadel. Not only can you experience the landscape, but you’ll get to take photographs as well. This app is a bit more physically strenuous and may need some additional room to navigate. This app is available on Himmelfarb’s “Paul” headset.

While the noises generated by all three of these apps are gentle and soothing, they are audible outside of the Oculus headset, so it’s best to use these apps in a quiet space away from others who may be studying or trying to concentrate. Himmelfarb study rooms are a great option for using these apps and can be reserved in advance!

Other Stress Relief Resources at Himmelfarb

If Virtual Reality isn’t of interest to you, Himmelfarb’s healthy living collection has other stress relief resources that may suit your style. Take a look at our Healthy Living @ Himmelfarb Guide for a full list of resources. Check out the Wellness Apps page of this guide to find useful meditation and stress relief apps. Our healthy living collection also includes books on stress reduction including Stress, Cognition, and Health by Tony Cassidy, The Psychology of Meditation by Peter Sedlmeier, and Managing Stress by Brian Luke Seaward. As always, feel free to stop by the healthy living collection on Himmelfarb’s first floor to make use of our exercise equipment if you’d prefer to manage your stress with some physical activity and use our exercise balls, hand weights, hula hoops, or yoga mats. We also have plenty of games including chess, Sorry, Scrabble, Blokus, and Pandemic. As always, a jigsaw puzzle is in progress on our puzzle table, and we are waiting for your contributions! 

Picture of a jigsaw puzzle on a wooden table.

Want more resources to help you manage your stress? Check out the GW Resiliency and Well-Being Center’s Stress Management page for resources related to mindfulness practice, well-being, physical activity, healthy lifestyle tips, and student resources related to stress management.