
Logo with the words IndigenousSIPIN and a blue wave graphic
Logo for IndigenousSIPIN intervention, shared with author permission from Haozous, E., Yeary, K., Maybee, W., Porter, C., Zoellner, J., John, B., Henry, W. A. E., & Haring, R. C. (2024). Indigenous knowledge and sugar sweetened beverages: Qualitative adaptations towards chronic disease prevention and intervention. Explore (New York, N.Y.), 20(6), 103066. Advance online publication. https://doi.org/10.1016/j.explore.2024.103066

November is Native American Heritage Month. To observe it, The Rotation spoke with Dr. Emily A. Haozous, PhD, RN, FAAN (Chiricahua Fort Sill Apache). 

Dr. Haozous is a nurse and research scientist with the Pacific Institute for Research and Evaluation - Southwest Center, based in Albuquerque, New Mexico. Dr. Haozous conducts community-based and community-guided research and evaluation in collaboration with Native American partners, including urban tribal centers, reservation-based tribal organizations, and tribal governments. Her work is focused on issues of access to care, health equity, cancer and non-cancer pain management, cultural tailoring, and national trends in premature mortality. Dr. Haozous has a clinical background in oncology, hospice, and palliative care nursing. She is a breast cancer survivor and has co-facilitated a women’s cancer support group continuously since 2007. Dr. Haozous received her undergraduate degree in music from the University of California, Santa Cruz, and her MSN and PhD in nursing from Yale University. Most recently, she participated in the authorship of the National Academy of Medicine’s special publication, Systems’ Impact on Historically and Currently Marginalized Populations (expected publication date 2025).

Dr. Haozous is first author of the recent publication, “Indigenous knowledge and sugar sweetened beverages: Qualitative adaptations towards chronic disease prevention and intervention.” The focus of this publication is the adaptation of an evidence-based curriculum (SIPsmartER) for reducing consumption of sugar-sweetened beverages (SSB) to fit one Indigenous population. The study reports on the successful adaptation of the curriculum, resulting in a curriculum called IndigenousSIPIN, and also provides valuable insights into the practice of cultural tailoring.

The Rotation: Thanks for taking time to talk with me today. I wanted to speak about your most recent publication, and discuss some concepts related to research and Native populations that it introduced me to. I noticed quite a few co-authors on this paper.

EH: It was a really big team. We had people from all over the country and different disciplines… The one thing we didn’t have were any MDs. That wasn’t a conscious decision, just how it worked out. People think about medical research, or health research, and they think about doctors. Here we have nutritionists, we have nurses, we have a social worker… We have people from all across the healthcare spectrum.

The Rotation: I was intrigued by the use of the concept of the Good Mind, a concept familiar to Indigenous people from the community addressed by this adaptation, in the modification of the SSB curriculum, which also introduced the metaphor of the Clean and Dirty River as a framework for the curriculum. These worked for the specific Indigenous group the intervention was tested with, who were male athletes in the Northeastern U.S. Would these metaphors be understood by all Indigenous people?

EH: What you’re asking me about is Native Science. That’s kind of the core of cultural tailoring. The old mainstream perspective on cultural tailoring of health literature is “Well, let’s just change the color scheme and maybe add some photos. If they speak a different language, we’ll change the language. Or maybe we’ll make the font size bigger.”

The Rotation: Sort of like when someone doesn’t understand English, so the other person just speaks louder?

EH: That’s a good way to think about it. Think about how that feels, if you’ve ever been to another country and people have done that to you… But when we’re talking about actual cultural tailoring, you really want to think about, Who are the people you’re trying to communicate with, How do they think? What’s important to them? What is their culture? And so, when you do that… it stops being about changing the color scheme – well, actually, maybe the color scheme is important. A lot of Native tribes – I’m not going to say all, because that would be disrespectful, [since] there are 574 recognized tribes in this country right now, and that number’s changing all the time, and we’re all different – color is important to us. When I go and spend time with my tribe, I can tell who’s Apache because of the colors they’re wearing. And I can tell who’s Comanche because of the colors they’re wearing. So color’s important.

The Rotation: That’s why I was wondering, when you are culturally tailoring an intervention or instrument, are there terms that transcend differences between the tribes? I’m asking whether the ways that you modified the tool are fundamentally, across the board, things that would be understood, regardless of tribe.

EH: I don’t think I could say that. I’d have to talk to each person and say, “Does this make sense to you?” Until I had talked to someone from every single tribe or community – and even within tribes there’s differences – I’d have to really do a scan to be able to confidently say yes or no. 

So getting back to [cultural tailoring] – we have to get to what is meaningful for people. So it’s not just about color and not just about pictures, but what is meaningful for those people.

The Rotation: I look at many studies that aren’t designed like this. Is this research practice of culturally tailoring instruments or interventions something fairly new?

EH: Yes. The practice of really digging deep into a community and finding out what is meaningful to you. And it is not just using an algorithm, but going in and saying, “Is this color aesthetically pleasing to you? Are there colors that we shouldn’t be using? Are there pictures that we shouldn’t be using?” You know, in some communities you don’t include pictures of people who have passed on. Which is challenging, because – people die. And so you have to be very careful with that. And in other communities they really want that, to celebrate people who have been important to them.

In science, they want algorithms. In dissemination and implementation research, it’s all about, “What works here should be able to work everywhere else.” And that’s just not the case in Native communities. So, I can’t take the Clean and Dirty River model and use it in the Southwest. Because we just don’t have the same accessibility to water. So I can use the same practice of finding a meaningful metaphor and trying to transform it, but I can’t use Clean and Dirty River.

Chart showing five stages of tailoring the existing SIPsmartER intervention to the Indigenous-focused IndigenousSIPIN.
Figure 1 from Haozous, E., Yeary, K., Maybee, W., Porter, C., Zoellner, J., John, B., Henry, W. A. E., & Haring, R. C. (2024). Indigenous knowledge and sugar sweetened beverages: Qualitative adaptations towards chronic disease prevention and intervention. Explore (New York, N.Y.), 20(6), 103066. Advance online publication. https://doi.org/10.1016/j.explore.2024.103066. Shared with author permission.

The Rotation: I was curious about the graphic in the article, which depicts the stages of cultural tailoring of evidence-based interventions. 

EH: That was just me trying to make something that was usable. Part of it is, we have this whole curriculum for the program that we didn’t want to publish, because we didn’t want it to become mainstream.

The Rotation: You don’t want it to be used like a blunt instrument.

EH: Exactly.

The Rotation: Publication of these findings is intended to demonstrate cultural tailoring in practice, but it is not intended as a product to be posted online or whatever.

EH: If people want to contact my colleagues and see the materials they created, it’s up to them.

The Rotation: Were all the team members Indigenous?

EH: Some people were not. We spent some time with the non-Indigenous team members getting them to understand… Some people were saying, “You’ll never get them to drink water.” Because there’s no precedent in the literature where you could convince people who were basically addicted to drinking SSB to stop drinking sweet things. And so we had to do a lot of teaching within the team to say, Look, a lot of traditional beverages are sweet, they’re just not sugar-sweetened. They’re sweetened with berries, there are teas that you can sweeten. And natural stevia grows in the area where we did this research. And they were like, “They’ll never choose water.” And we were able to prove them wrong.

The Rotation: Part of the work being done here is to dismantle the assumptions people are making.

EH: For one of them, this person had been working in the field for a very long time, and her biggest success was getting people to drink diet sodas. And we were like, maybe we can aim for a different purpose.

The Rotation: How much of your published work has been related to Indigenous people?

EH: I always get called in as the expert on Indigenous research. I’m happy to do that. That’s my mission. I’ve published in a lot of different places, domains, whether it’s large data analysis or qualitative research looking at access to care in different places, whether that’s in Indian Health Service or pain management or telehealth.

The Rotation: Do you have recommendations for those who are new to reading research conducted in Indigenous populations?

EH: The first thing I would suggest is that when people are reading an article, they find articles that are written by Indigenous authors. Usually there’s a disclosure statement if a person [on the team is] Native. You want a team that has Native people on the team. I’m starting to see articles coming from other countries where they’re just slurping up data from American sources, and they don’t have Native authors, and they’re terrible. The American Journal of Public Health is usually very careful about this. You want to make sure that [researchers have] followed data ownership guidelines from the tribes. That is usually included in the disclosure with the article. Usually the top tier journals will follow that, and the peer reviewers will keep track of that. It’s a very small circle, you start to see the same people publishing.

The Rotation: What was your experience working on this project?

EH: It was a great project. I like doing that kind of work, because it really makes me work my Indigenous mind, and I get to work with Native teams, which I really like to do.

The Rotation: How long did the project last, start to finish?

EH: It was a couple years, and it all happened during COVID. We had to do a lot of the interviews online, which was hard. But one of the best parts was talking to these men who really knew a lot about their culture, and a lot about how to encourage young men to drink water, and what was important to them.

The Rotation: I was struck by the quote in the article from a participant in the intervention, who suggested that the high cost of dental care and the prospect of dental problems would make others in their community pay attention to reducing SSB consumption, these being more persuasive than health issues that would appear farther down the road.

EH: There’s a lot going on there, like the fact that they don’t have access to good dental care. There’s so much more in there that we couldn’t add.

The Rotation: When we think about barriers to access to care, people are primarily thinking about, say, African American communities or urban versus rural communities. I think it is rare for people to perceive there are Native communities all around us confronting the same or similar issues. Thank you so much for taking the time to speak with me today.

For those interested in learning more about Native Science, Dr. Haozous recommends Gregory Cajete’s Native Science: Natural Laws of Interdependence (Clear Light Publishers, 2000). This book is available to borrow from Georgetown University through Himmelfarb’s WRLC consortial borrowing program.

References

Haozous, E., Yeary, K., Maybee, W., Porter, C., Zoellner, J., John, B., Henry, W. A. E., & Haring, R. C. (2024). Indigenous knowledge and sugar sweetened beverages: Qualitative adaptations towards chronic disease prevention and intervention. Explore (New York, N.Y.), 20(6), 103066. Advance online publication. https://doi.org/10.1016/j.explore.2024.103066

 

This week is Open Access Week! Open access is an international movement that seeks to remove barriers to scientific research and data. The goal is for everyone to be able to access academic scholarship equally, without running into legal, financial, or technical obstacles (1).

This year’s theme for Open Access Week is “Community Over Commercialization.” The goal is to consider how we can share scholarship in ways that benefit everyone.

If you want to get involved and learn more, check out these on-campus events run by the George Washington Open Source Project:

Oct 22nd, 7pm-9:30pm Movie Night with Q&A for Open Access Week

University Student Center Amphitheater

Join the GW OSPO for a showing of "The Internet's Own Boy: The Aaron Swartz Story", an award-winning movie about a computer programmer, writer, political organizer, and internet activist and his battle with the U.S. government and the publishing industry as he risks everything in the pursuit of sharing knowledge. The screening will be followed by a Q&A panel to talk about research, publishing, access to information, and other important topics raised throughout this film.

Popcorn will be provided. The first 25 attendees will get a homemade chocolate chip peanut butter cookie!

Oct 24, 11:30am-12:30pm GW Coders' Lunch and Learn: Care Work and Accessibility in p5.js and Open Source Software*

Join us in SEH, B2600 or online in Zoom: https://go.gwu.edu/gwcoderszoom

We are very excited to host the lead maintainer of the open source project p5.js. p5.js is a friendly tool for learning to code and make art. It is a free and open-source JavaScript library built by an inclusive, nurturing community. p5.js welcomes artists, designers, beginners, educators, and anyone else! Qianqian Ye, the lead maintainer, will discuss care work and accessibility, demonstrate the tool, and answer questions.

Oct 25, 12pm-1pm GW OSPO Webinar Panel Discussion: Can Diamond Open Access disrupt the broken paywall publishing model and save science with the help of open source software?

Join us online: GW OSPO Zoom Webinar

Our distinguished panel of Diamond Open Access experts from across the globe will explore possible paths forward for open access publishing.  Please come and bring your hard questions for this group to try to answer.

If you want to explore and learn about Open Access on your own time, here are some materials and resources to explore Open Access:

Paywall Documentary: Not familiar with the world of scholarly publishing, or the Open Access movement? Take some time to watch the documentary “Paywall” (2). Paywall is an excellent introduction to the world of Open Access for complete beginners, and it’s a great watch.

PHD Comics: Don’t have the time for a full documentary? Try this video comic from PHD Comics about Open Access, which provides a dynamic illustrated introduction to the topic (3).

Open Access and Your Research: Curious what Open Access means for you and your own work? Check out this instructional video from the Scholarly Communications Committee about what to expect (4).

OA LibGuide: Need to find open access material to learn about medicine? Try our Open Access LibGuide which contains links to textbooks, journals, and other resources people can use. 

  1. What is open access? International Open Access Week. Accessed October 17, 2024. https://www.openaccess.nl/en/what-is-open-access
  2. Paywall: The Business of Scholarship. The Movie.; 2018. Accessed October 18, 2024. https://www.youtube.com/watch?v=zAzTR8eq20k
  3. Open Access Explained!; 2012. Accessed October 18, 2024. https://www.youtube.com/watch?v=L5rVH1KGBCY
  4. Open Access and Your Research.; 2022. Accessed October 18, 2024. https://www.youtube.com/watch?v=6SpLN7BbzGg

Sometimes researching can be more complicated than it appears. Below, we take a look at predatory publishing, what it is, and how to avoid it.

A title card that says predatory publishing
Narration: A label in the bottom left corner denotes the speaker as “Rebecca, Librarian, Amateur Cartoonist.”
Rebecca, in the boat, looks down concerned at the sea, where multiple shark fins can be seen poking through the waves. The speech bubble states “But these metaphorical waters can prove treacherous. And unlike real sharks, these threats to scientific knowledge provide little benefit to the scholarly ecosystem.”
Narration: “Introducing Predatory Publishing” is at the top of the page. At the bottom, there is a label for the shark, which states “Ponzi, the Shark”
Image: A shark wearing a top hat and bow tie waves a fin, looking smug.
Panel 4 Narration: “But what are predatory publishers?”
Image: Rebecca, looking stern, faces forward with a parrot on her shoulder. “Predatory publishers are journals that only exist to make money.”

panel 5 
Image: A white man with blonde hair and old-fashioned clothes, holding onto ship wreckage like Jack in Titanic, looks at a mermaid with brown skin, black hair, and a purple tail. In the background there is other evidence of a shipwreck. The man says “What do you mean, ‘make money’?” to which the mermaid replies “You didn’t know?”
Image: Now under the sea, the mermaid from earlier gestures to a treasure chest full of gold. Other sea life floats in the background. She says “Scientific publishing is a huge business. One publisher had a profit margin of almost 40% in 2023 (1). In contrast, Apple’s was 44% (2).”
Panel 1 Narration: To best understand how publishers make so much money, one must learn how the publishing process works.
Image: The parrot from earlier says “Polly want an explanation!”

Panel 2 
Narration: Traditional publishing looks something like this: scientists submit their work to journals, which publish it to the world (ideally). And money flows like this: scientists submit to journals for free (or for a small fee), publishers pay to publish the work to the world, and the world pays for access in return. Publishers get work for free that is edited for free, and then charge individuals, libraries, etc., for access.
Image: A flow chart of a beaker, a journal and the Earth is shown demonstrating the relationship described in the narration.
Narration: This can lead to science being behind a paywall, especially for scientists, schools and others who can’t afford to pay.
Image: Rebecca and Polly the parrot stand on opposite sides of a poster with a picture of a journal on it. The poster says “$$$$ science.” Rebecca, talking to Polly, says “I can’t afford this.” Polly, who is resting on a bird perch, says “Polly can’t even afford a cracker…”
Panel 1
Narration: So a new model was born: open access. It looks like the traditional model, but money flows like this: authors pay journals to publish their work, and journals pay to publish it to the world. The idea is that the author pays a fee to ensure wider access.
Image: A flow chart of a beaker, a journal and the Earth is shown demonstrating the relationship described in the narration. 
Panel 2:
Narration: Ideally, the rest works the same. Scientists submit their best work; it’s peer reviewed, and if it passes muster, it’s published just as in traditional publishing.
Image: We see the Earth in space with an Astronaut floating in the foreground. The astronaut says “I even get access out here!” There is also a UFO floating over the Earth as a gag.
Narration: Except… what if, instead of being discerning about what you publish, you just accept everything? After all, the more articles you accept, the more money you make in fees.
Narration: This is the business model of predatory publishers: accept anything and make a profit from the fees. Some tactics of predatory publishers include:
Image: A wanted poster of Ponzi the shark is affixed to a brick wall. On the poster, Ponzi looks alarmed. The text of the poster says “wanted: fraud.”
Narration: Pretending to be respected journals by spoofing the name of a more reputable publication.
Image: A bald Black scientist wearing glasses looks concerned at Ponzi, who looks the same except for a drawn on fake mustache. Both of them stand next to posters. The scientist’s poster says “submit to Nature.” Ponzi’s poster states “submit to Natures.”
Panel 1 

Narration: Or they’ll use the name of a defunct journal that had a better reputation.
Image: Ponzi the shark is seen floating underwater over a human skeleton. There are two labels affixed to each. The skeleton is labeled as “human sciences.” Ponzi is labeled as “human sciences 2.0”
Panel 2:
Narration: They might offer services like peer review with no intention of providing them, or claim rapid turnaround times.
Image: A white and yellow tropical fish stares at a piece of paper on a fish hook. The paper says “pls review in 24 hours.” Question marks are shown over the fish’s head.
Narration: The reason this is a huge issue is partially one of quality. Predatory publishers flood the scientific literature with B.S. that can be dangerous.
Narration: The other issue impacts scientists: those tricked into publishing in these journals can see a hit to their reputations. 
Image: A line-up of three figures is shown with a text box underneath. The first two figures are literal clowns in full makeup, while the third is a scientist with brown skin and brown hair looking horrified in their direction. The text underneath the three states: “This issue: Balloon animals found to boost happiness, page 8. The speed of trick flowers, page 32. New cells found in clinical trials, page 41.”
Panel 1: 
Image: The shipwrecked sailor holding onto wood planks from page 1 floats in the ocean. He asks “So what do we do about this? Give up on open access?”
Panel 2: 
Image: Rebecca is seen balancing on the mast of the ship with the sail behind her. She says “Of course not! Open science is important. We just need to be careful.”
Narration: Look for red flags. Things like:
Image: A red flag is seen in the sky. On the red flag there is a yellow circle which showcases Ponzi the shark.
Panel 1 
Narration: Editors who have fake credentials, lack expertise that matches the journal, or don’t exist at all.
Image: A volleyball with a face painted on it in red (much like Wilson in Castaway) rests on a beach. Below it, a text box states: “Editor in Chief: Wilson V. Ball”
Panel 2:
Narration: A weird street address for the business, or no “About” page at all.
Image: An underwater cave is shown, There is a wooden sign in front of the cave that says “home of Natures.”
Panel 1 

Narration: You’re asked to submit work entirely unsolicited.
Image: An anglerfish with a letter in the place of its light antenna, floats in the deep sea. With sharp teeth it says “You got mail.” 
Panel 2:
Narration: Promising rapid publication.
Image: A stopwatch on a chain is shown with the intervals of 15, 30 and 45 on it. The top interval says “publish.”
Panel 1 Narration: Unsure about a publisher? You can ask a librarian or try using Cabells, a directory of publishing opportunities that identifies predatory publishers. We offer access to Cabells through Himmelfarb.
Image: The interface of Cabells is shown where journals are marked as predatory. 

Panel 2

Narration: Good luck!
Image: Rebecca is seen in the crow’s nest, giving a salute to the audience. The ocean and sun can be seen behind her.

Sources:

  1. Yup K. How Scientific Publishers’ Extreme Fees Put Profit Over Progress. Published online May 31, 2023. Accessed May 6, 2024. https://www.thenation.com/article/society/neuroimage-elsevier-editorial-board-journal-profit/
  2. Miglani J. Apple Sales And Profits Analysis For FY 2023 — Top 10 Insights. Forrester. Published November 21, 2023. Accessed May 6, 2024. https://www.forrester.com/blogs/apple-sales-and-profits-analysis-for-fy-2023-top-10-insights/
  3. Bueter R. Research Guides: Predatory Publishing: Home. Himmelfarb Health Science Library. Accessed May 29, 2024. https://guides.himmelfarb.gwu.edu/PredatoryPublishing/Home

Fingers with arthritis resting on a blue cushion
Fingertip arthritis - DIP joint, by handarmdoc on Flickr, licensed under Creative Commons

May was designated as Arthritis Awareness Month by Congress and the President in 1972. An estimated 53.2 million US adults (21.2%) reported having been diagnosed with some form of arthritis (including rheumatoid arthritis, gout, lupus, or fibromyalgia) in response to the CDC’s National Health Interview Survey (Fallon et al., 2023). There are numerous types of arthritis, and their financial toll is substantial: for the year 2017, the CDC estimated that osteoarthritis was the second most costly condition treated at US hospitals. Let’s look at a couple of recently published articles examining the impacts of arthritis on population health.

When we consider a condition as common within the population as arthritis, and as costly to treat, health disparities are a concern. In a brief report in the July 2023 issue of Arthritis Care & Research, researchers examined healthcare utilization by patients diagnosed with rheumatoid arthritis (RA) or osteoarthritis (OA), focusing on whether these patients live in rural/isolated, largely rural, or urban locations (Desilet et al., 2023). The study was based on questionnaires filled out by over 37,000 RA patients and over 8,200 OA patients. A majority of the RA patients responding (74.5%) lived in a rural area, and this proportion was similar for OA patients. By analyzing questionnaire responses indicating healthcare utilization over six months, the research team found that among RA patients, urban residents were more likely than their rural counterparts to utilize healthcare provided by some type of professional. The same was true for OA patients. Patients with both types of arthritis fare better under the care of a rheumatologist, and in rural areas access to this expertise is more limited. The findings of this study suggest the importance of extending access to rheumatology care to rural communities that are not currently well served.

A forthcoming article in the journal Rheumatology (d’Elia et al., 2024) reports on a study of symptoms in a primary care database, which tracked prodromal (early) symptoms during the 24 months prior to diagnosis in over 70,000 RA patients over a period of 18 years. When the data were analyzed demographically and socioeconomically, the findings showed that symptoms were reported differently in new-onset RA across ethnic groups. While some of this may be accounted for by the way symptoms are reported by patients, delayed diagnosis and treatment is another potential factor.

Another interesting finding of this study was the discrepancy between the most common symptoms of RA (e.g., painful small joints of the hands, present in over half of RA patients) and the percentage of patients in the database for whom this symptom was recorded (10.2%). This may point to under-coding of symptoms, which would have an impact on treatment. Future studies may build on these findings by delving more deeply into the differences in RA symptoms among different ethnic groups, including their underlying causes and their clinical implications.

Arthritis affects a large proportion of the population in the US and worldwide, and the burden falls more heavily on those who struggle to access care, as well as those who are not served equitably within healthcare settings. This Arthritis Awareness Month, consider how you might contribute to our understanding of these disparities and help to remedy them.

References

Fallon, E. A., Boring, M. A., Foster, A. L., Stowe, E. W., Lites, T. D., Odom, E. L., & Seth, P. (2023). Prevalence of Diagnosed Arthritis - United States, 2019-2021. MMWR. Morbidity and Mortality Weekly Report, 72(41), 1101–1107. https://doi.org/10.15585/mmwr.mm7241a1

Desilet, L. W., Pedro, S., Katz, P., & Michaud, K. (2023). Urban and Rural Patterns of Health Care Utilization Among People With Rheumatoid Arthritis and Osteoarthritis in a Large US Patient Registry. Arthritis Care & Research. Advance online publication. https://doi.org/10.1002/acr.25192

d'Elia, A., Baranskaya, A., Haroon, S., Hammond, B., Adderley, N. J., Nirantharakumar, K., Chandan, J. S., Falahee, M., & Raza, K. (2024). Prodromal symptoms of rheumatoid arthritis in a primary care database: variation by ethnicity and socioeconomic status. Rheumatology (Oxford, England). Advance online publication.

For a moment, let’s entertain a hypothetical. Let’s say you have an excellent paper on your hands about the impact of smoke on the lungs. Your team is about to submit it for publication: pretty exciting! When you get your paper back from the publisher, it’s mostly good news: they’re willing to publish your paper with the caveat that you add a diagram of the lungs as a visual aid showing the systems impacted. The problem? You have no idea where you could acquire an image suited to this task that wouldn’t potentially violate copyright.

Faced with this conundrum, one of your coauthors suggests a solution: why not generate one? They have a subscription to Midjourney, the AI software that can generate images from text. Why not give Midjourney a summary of the diagram you need, have it generate the image, and then use that for your paper? After checking the journal’s policies on AI (it’s allowed with disclosure), you do just that, glad to have quickly moved past that stumbling block.

Pretty great, right? It sure sounds like it, until you take a look at the images Midjourney generated. Because on closer inspection, there are some problems. 

Below is an image I generated in Copilot for this blog post. I didn’t ask it to do something as complicated as making a diagram of how smoking impacts the lungs; instead, I asked for just a diagram of human lungs. Here is what I got, with my notes attached.

An AI-generated diagram of the lungs in a human woman is featured, with red text boxes pointing to errors. In the upper left, a box says “nonsense or gibberish text,” and a red line points to oddly generated letters that mean nothing. Below it, another box reads “I don’t know what this is supposed to be, but I don’t think it’s in the armpit,” with a line pointing to what looks to be an organ with a flower pattern in it. Below that, another box reads “this heart is way too small for an adult,” and the red line points to the heart on the diagram. On the left, the top red box reads “now the stomach does not reside in one’s hair or arteries,” with red lines pointing to a picture of the stomach that is falsely labeled as being in the hair or neck. Below that, a new box reads “what are the gold lines supposed to be in this diagram?” and it points to yellow veins that run through the figure like the red and blue ones that usually denote the circulatory system. The last box on the right says “I have no idea what this is supposed to be” and points to what looks to be bone wrapped around a tube leading out of the bottom of the lungs.

Alright, so this might not be our best image. Thankfully, we have others. Let’s take a look at another image from the same prompt and see if it does a better job. 

An image of an AI-generated diagram of the lungs is featured with red text boxes pointing to errors. In the upper left, a box says "more nonsense text" and a red line points to oddly generated letters that mean nothing. On the right side, a box says "bubbles should not be in the lungs!" with a red line pointing to what looks to be odd bubbles inside the lungs. Below it, a red box reads "what are these small clumps/objects?" and it points to what look to be large red bacteria-like clumps on the lungs.

So what happened here? To explain how this image went terribly wrong, it’s best to start with an explanation of how AI actually works.

When we think of AI, we generally think of movies like The Terminator or The Matrix, where robots can fully think and make decisions, just like a human can. As cool (or terrifying, depending on your point of view) as that is, such highly developed forms of artificial intelligence still exist solely in the realm of science fiction. What we call AI now is something known as generative AI. To vastly simplify the process, generative AI works as follows: you take a computer and feed it a large amount of information that resembles what you want it to generate. This is known as “training data.” The AI then attempts to produce new images based on patterns in that training data. (Vox made a video explaining this process much better than I can.) So, for example, if I feed an AI pictures of cats, over time it identifies aspects of cats across photos: fur, four legs, a tail, a nose, etc. After a period of time, it then generates images based on those qualities. And that’s how we get websites like “These Cats Do Not Exist.”
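To make the idea of “mimicking patterns without understanding” concrete, here is a toy sketch (purely illustrative; real image generators are vastly more complex): a tiny character-level model that learns which letter tends to follow which in a handful of anatomy terms, then strings plausible letters together. The output looks vaguely word-like but is mostly gibberish, for the same underlying reason AI-generated diagrams contain nonsense labels.

```python
import random

def train(words):
    # For each character, record which characters followed it in the
    # training words. "^" marks the start of a word; "$" marks the end.
    model = {}
    for word in words:
        for prev, nxt in zip("^" + word, word + "$"):
            model.setdefault(prev, []).append(nxt)
    return model

def generate(model, rng, max_len=12):
    # Walk the model one character at a time, always picking a
    # statistically plausible next character -- with no idea what,
    # if anything, the resulting "word" means.
    out, ch = [], "^"
    while len(out) < max_len:
        ch = rng.choice(model[ch])
        if ch == "$":
            break
        out.append(ch)
    return "".join(out)

rng = random.Random(0)
model = train(["bronchus", "alveolus", "trachea", "pleura"])
fakes = [generate(model, rng) for _ in range(5)]
print(fakes)  # word-like strings, most of them anatomical nonsense
```

Every letter the model emits is individually plausible, yet the whole is usually meaningless; scale that same pattern-matching up to pixels and you get gibberish diagram labels.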

If you take a look at “These Cats Do Not Exist” you might notice something interesting: the quality of fake cat photos varies widely. Some of the cats it generates look like perfectly normal cats. Others appear slightly off; they might have odd proportions or too many paws. And a whole other contingent appears as what can best be described as eldritch monstrosities.  

The errors in both our images above and our fake cats stem from the fact that the AI doesn’t understand what we are asking it to make. The bot has no concept of lungs as an organ, or of cats as creatures; it merely recognizes aspects and characteristics of those concepts. This is why AI art and AI images can look impressive on the surface but fall apart under any scrutiny: the robot can mimic patterns well enough, but the details are much harder to replicate, especially when details vary so much between images. For example, consider these diagrams of human cells I had AI generate for this blog post.

A picture of an AI-generated human cell. There are red boxes with text pointing out errors and issues in the image. The top box has the text "nonsense words. Some of these labels don't even point to anything" with two red lines pointing to a series of odd generated letters that mean nothing. Below that, a red box has the text "I have no idea what this is supposed to be" with a red line pointing to a round red ball. On the right side, a text box reads "is this supposed to be a mitochondria? Or is it loose pasta?" with a red line pointing to what looks to be a green penne noodle in the cell. Below that, a red text box reads "I don't think you can find a miniature man inside the human cell" and a red line points to the upper torso and head of a man coming out of the cell.

Our AI doesn’t do badly in some respects: it understands the importance of a nucleus, and that a human cell should be round. This is pretty consistent across the images I had it make. But when it comes to showcasing other parts of the cell, we run into trouble, given how differently those parts are presented across diagrams. The shape one artist might use for an aspect of a cell, another artist might draw entirely differently. The AI doesn’t understand the concept of a human cell; it is merely replicating images it’s been fed.

These errors can lead to embarrassing consequences. In March, a paper went viral for all the wrong reasons: the AI images the writers used had many of the flaws listed above, along with a frankly absurd picture of a mouse. While the writers disclosed the use of AI, the fact that these images passed peer review with nonsense text and other flaws turned into a massive scandal. The paper was later retracted.

Let’s go back to our hypothetical. If you need images for your paper or project, instead of using AI, why not use some of Himmelfarb’s resources? On the Image Resources LibGuide, you will find multiple places to locate reputable images with clear copyright permissions. There are plenty of options to work from.

As for our AI image generators? If you want to generate photos of cats, go ahead! But leave the scientific charts and images for humans. 

Sources:

  1. AI art, explained. YouTube. June 1, 2022. Accessed April 19, 2024. https://www.youtube.com/watch?v=SVcsDDABEkM.
  2. Wong C. AI-generated images and video are here: how could they shape research? Nature. Published online 2024.

The image features a group of people sitting at a table using laptops.

Do you have questions about how to determine if a journal is a predatory publisher? Would you like a short tutorial on importing your citations into RefWorks? Do you need help understanding copyright laws and how these laws apply to you as a researcher in the scholarly publishing landscape? The Scholarly Communications Committee’s recent tutorials address these topics and more. To learn more about scholarly publishing and communications, watch one of the videos listed below or visit the full video tutorial library.

In this video, librarian Ruth Bueter shows how Cabells Predatory Reports can be used to evaluate publishing options. The tutorial describes characteristics of predatory journals, explains what Cabells Predatory Reports are and what their limitations may be, and ends with a demonstration of how to access and use the reports. This tutorial is useful for researchers who want to avoid publishing in a predatory journal, or for those interested in learning more about a resource that can help them evaluate journals.

During this tutorial, Metadata Specialist Brittany Smith goes into detail about copyright in the United States, including rights automatically granted to authors of a work and how publishing agreements may impact authors’ rights. Additionally, the tutorial provides resources that can assist authors in understanding their rights and feeling confident when negotiating their agreements with publishers.

RefWorks is a citation manager that assists with tracking citations and building a bibliography to properly attribute works referenced in research. Senior Circulation Assistant Randy Plym demonstrates how to access RefWorks from Himmelfarb Library’s homepage and walks you through the process of importing citations from databases such as PubMed or CINAHL.

Many of the tutorials are five minutes or less, and new videos are routinely added to the collection. Topics range from the research life cycle to understanding what editors look for in a manuscript to setting up a Google Scholar or ORCiD profile. If you would like to watch one of these tutorials, visit the Scholarly Communications Guide or Himmelfarb Library’s YouTube profile.

Last month, European researchers launched a program to identify errors within scientific literature. With an initial fund of 250,000 Swiss francs - roughly 285,000 USD - team leaders Malte Elson and Ruben C. Arslan are seeking experts to investigate and discover errors in scientific literature, beginning with psychological papers. 

Here is the program described in their own words:

ERROR is a comprehensive program to systematically detect and report errors in scientific publications, modeled after bug bounty programs in the technology industry. Investigators are paid for discovering errors in the scientific literature: The more severe the error, the larger the payout. In ERROR, we leverage, survey, document, and increase accessibility to error detection tools. Our goal is to foster a culture that is open to the possibility of error in science to embrace a new discourse norm of constructive criticism.

(Elson, 2024)

Their program follows a growing awareness of what researchers in the early 2010s called “the replication crisis”: the inability to reproduce a large share of published scientific findings. For example, C. Glenn Begley, the former head of cancer research at the biotechnology company Amgen, investigated 53 of his company’s most promising publications (pieces that could lead to groundbreaking discoveries). Of those 53, his team could reproduce only 6 (Hawkes, 2012). While 53 is not a large sample size, Nature surveyed 1,576 researchers, and more than 70% reported trying and failing to reproduce published experiments (Baker, 2016).

ERROR founders Malte Elson and Ruben C. Arslan point to a poor incentive structure: “error detection as a scientific activity is relatively unappealing as there is little to gain and much to lose for both the researchers whose work is being scrutinized (making cooperation unlikely)” (Elson, 2024). 

Nature concurs. Journals, they report, are less likely to publish verifications of older work or papers simply reporting negative findings (Baker, 2016). Reproduction gets deferred because it requires more time and money (ibid.).

Not to mention that even in science, biases can crop up: the siren call of new discoveries can lead people to publish novel results rather than confirm existing ones. In a noteworthy example, Begley - the aforementioned Amgen researcher - approached a scientist and explained that he had tried - and failed - 50 times to reproduce the results of the scientist’s experiments. The scientist answered that “they had done it six times and got this result once but put it in the paper because it made the best story” (Hawkes, 2012, emphasis added).

Bearing these issues in mind, the ERROR program hopes to incentivize error detection and change the publication culture, reframing negative results as useful data (Elson, 2024). To foster a positive environment, authors must agree to be reviewed, and ideally these authors can even benefit from the verification (Lee, 2024).

Since at least 2005, researchers have called for attempts to address the replication crisis (Pashler, 2012; Ioannidis, 2005). While time will tell whether the ERROR program makes a difference, it provides an interesting answer to that call.

REFERENCES

Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature 533, 452–454. https://www.nature.com/articles/533452a.

Elson, M. (2024). ERROR: A Bug Bounty Program for Science. https://error.reviews/

Hawkes, N. (2012). Most laboratory cancer studies cannot be replicated, study shows. BMJ 344. https://doi.org/10.1136/bmj.e2555 (Published 04 April 2012)

Lee, S. (2024). Wanted: Scientific Errors. Cash Reward. The Chronicle of Higher Education. https://www.chronicle.com/article/wanted-scientific-errors-cash-reward

Ioannidis, J. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124

Pashler, H., & Harris, C. (2012). Is the Replicability Crisis Overblown? Three Arguments Examined. Perspectives on Psychological Science, 7(6). https://journals.sagepub.com/doi/10.1177/1745691612463401

Photo by Markus Winkler on Unsplash

In its January 19th issue, Science reported on the increasingly aggressive and corrupt methods that paper mills are employing to get bogus research published in respected journals. You can listen to the Science podcast for an interview with the article’s author, Frederik Joelving of Retraction Watch.

Last year, Nicholas Wise, a fluid dynamics researcher at Cambridge with an interest in scientific fraud, found Facebook postings by Olive Academic (a Chinese paper mill) offering substantial payments to journal editors to accept papers for publication. Further digging revealed payments of up to $20,000 and a list of more than 50 journal editors who had signed on. Wise and other experts in scientific fraud joined with Science and Retraction Watch to investigate whether this was an isolated incident or something more widespread. They found similar activity by several other paper mills and more than 30 editors of reputable journals who were complicit. Publishers like Elsevier and Taylor and Francis say they are under siege, admitting that their journal editors are regularly approached with bribes from paper mills.

Special issues of journals were found to be most vulnerable to these scams because they are often edited by individuals or teams separate from the regular editorial boards. The investigation found that paper mills will at times engineer entire special issues themselves: “The latest generation papermill, they’re like the entire production line” (Joelving, 2024). Open access special issues can generate large profits for publishers through the fees collected from authors, sometimes via paper mills. Wiley, Elsevier, and other well-known publishers have had regular journal editors involved in these special issue scams.

As a result of the investigation, Hindawi and its parent company Wiley pulled thousands of papers in special issues due to compromised peer review, and Wiley announced in December that the Hindawi brand would be suspended. The retracted Hindawi papers had ties to Tamjeed Publishing, which acted as a broker between paper mills and multiple editors.

The need to publish to advance in certain professions becomes especially problematic in places where students or young professionals cannot easily obtain the training or resources to do publishable research. This creates the market for paper mills. More than half of the Chinese medical residents surveyed in a preprint referred to in the Science story said they had engaged in research misconduct such as buying papers or fabricating results. The Financial Times reported last year on how widespread the problem is in China and how it “threatens to overwhelm the editorial processes of a significant number of journals” (Olcott and Smith, 2023).

It’s not just a problem in China. India, Russia, a number of ex-Soviet countries and Saudi Arabia are also common sources of paper mills engaging in these practices. There is concern that papers coming from these countries will start to draw extra scrutiny, creating potential inequities for researchers from them.

Though there is now increased awareness and a desire by reputable publishers to crack down on fraud, doing so is difficult and time consuming. The exponential growth of peer review fraud and sham papers makes it all but impossible to ferret out every publication that should be retracted. An analysis by Nature late last year concluded that more than 10,000 articles were retracted in 2023, with retractions rising at a rate that far exceeds the growth of scientific papers. And they speculate it’s just the tip of the iceberg.

Retraction Watch alerts of retracted articles are available for Himmelfarb Library users when searching Health Information @ Himmelfarb, the library catalog, and when using the LibKey Nomad browser extension or BrowZine to connect to full-text. Read more about the service.

Sources

Joelving, F. (2024). Paper trail. Science, 383(6680), 252–255. https://doi.org/10.1126/science.ado0309

Olcott, E., & Smith, A. (2023). China’s fake science industry: how ‘paper mills’ threaten progress. FT.Com. https://wrlc-gwahlth.primo.exlibrisgroup.com/permalink/01WRLC_GWAHLTH/1c5oj26/cdi_proquest_reports_2791535957

Van Noorden, R. (2023). More than 10,000 research papers were retracted in 2023 - a new record. Nature, 624, 479-481. https://www.nature.com/articles/d41586-023-03974-8

Screenshot of the Scholarly Communications Videos playlist from YouTube.

Are you interested in scholarly publishing, but aren’t sure where to start? Himmelfarb Library has a library of short video tutorials focused on a variety of scholarly publishing topics! We add new videos to this library each semester, so the library is always growing. Videos range from 3 to 10 minutes in length, so you can learn in small chunks of time that fit your schedule. Here are some of our newest videos!

Journal Impact Factors: What You Need to Know

In this video, Tom Harrod, Himmelfarb’s Associate Director of Reference, Instruction, and Access, discusses journal impact factors. You’ve probably heard that journals with higher Impact Factors are more reputable and more desirable when the time comes to publish your research. But what is a journal Impact Factor exactly? And how is an Impact Factor calculated? This six-minute video answers both of these questions and also explores how to interpret Impact Factors in context and why some journals don’t have an Impact Factor.

Artificial Intelligence Tools & Citations

In this 6-minute video, Himmelfarb’s Metadata Specialist, Brittany Smith, explores generative artificial intelligence tools. The video starts by discussing the emergence of AI and the importance of checking current guidelines and rules regarding its use, as this is a new and constantly evolving field. It then covers how AI can help with your research, GW’s AI policy, and how to create citations for AI tools in your work.

Updating Your Biosketch via SciENcv

Tom Harrod discusses the differences between NIH’s SciENcv and Biosketch and demonstrates how to use SciENcv to populate a Biosketch profile in this 5-minute video.

UN Sustainable Development Goals: Finding Publications

In this 5-minute video, Stacy Brody explores why the United Nations' sustainable development goals were developed, and the intended achievements of these goals. This video discusses how to find publications related to these goals using Scopus.

Dimensions Analytics: An Introduction

Sara Hoover, Himmelfarb’s Metadata and Scholarly Publishing Librarian, provides a brief overview of the Dimensions database and discusses how to access Dimensions from Himmelfarb. This 7-minute video also provides several examples of use cases for this great resource!

In addition to these great videos, you can find the full 37-video library on the Scholarly Communications YouTube Playlist and on the Scholarly Publishing Research Guide. Additional videos cover a wide range of topics including:

  • Project planning and development videos:
    • Research life cycle
    • Advanced literature searches using PubMed MeSH search builder
    • CRediT taxonomy
    • Human participants' research support
  • Publishing-related videos:
    • Clarivate Manuscript Matcher
    • Including Article Processing Charges (APCs) in funding proposals
    • Changing from AMA to APA citation style
    • How to cite legal resources using APA style 
  • Project promotion and preservation videos:
    • Tracking citations with Scopus
    • Creating a Google Scholar profile
    • Archiving scholarship in an institutional repository
    • How to promote your research

Image of a sheep's body with a wolf's head.
Image by Sarah Richter from Pixabay

We’ve been getting a lot of questions recently about Open Access (OA) journals and predatory journals, and how to tell the difference between them. Navigating the publishing landscape is tricky enough without having to worry about whether the journal you choose for your manuscript might be predatory. The concept of predatory journals may be completely new to some researchers and authors. Others who are aware of the dangers of predatory journals might mistake legitimate scholarly OA journals for predatory ones because of the Article Processing Charges (APCs) that OA journals charge. In today’s post, we’ll explore the differences between OA journals and predatory journals, and how to tell them apart.

Open Access Journals

The open access publishing movement stemmed from a need to make research more openly accessible to readers, and it aims to remove the paywalls that most research was trapped behind under the traditional publishing model. In a traditional, non-OA journal, readers must pay to access the full text of an article. This payment may come through a personal subscription, a library subscription to the journal, or a one-time payment for access to a single article.

This video provides a great overview of why and how OA journals came about:

OA journals shift the burden of cost from the reader to the author by operating under an “author pays” model. In this model, authors pay a fee (often called an “Article Processing Charge,” or APC) to make their articles available as open access. Readers can then access the full text of the article free of charge and without a subscription. The author fees associated with OA journals can range from a few hundred dollars to a few thousand dollars. Charging APCs is completely normal for open access journals, and paying to publish in one is not itself a sign that the title is predatory: these fees help publishers cover the cost of publication.

Open access journals offer all of the same author services that traditional journals offer, including quality peer review and article archiving and indexing services. Legitimate OA journals have clear retraction policies and manuscript submission portals. There are different types of OA journals, including journals that publish only OA articles and hybrid journals that publish OA articles alongside articles behind a paywall. To learn more about the types of OA publishing, check out our recent blog post on Green, Gold, and Diamond OA models.

Predatory Journals 

Predatory publishing arose in response to the open access movement, as unethical businesses saw OA journals as a way to make money off of researchers’ need to publish. Predatory journals use the OA model for their own profit and employ deceptive business practices to convince authors to publish in their journals.

One key difference between reputable, scholarly OA journals and predatory journals is that predatory journals charge APCs without providing any legitimate peer review services. This means there are no safeguards to prevent a quality research article from being published alongside junk science. Predatory journals typically promise quick peer review when, in reality, no peer review actually takes place.

When you publish with a legitimate OA journal, the journal provides peer review, archiving, and discovery services that help others find your work easily. Predatory journals do not provide these essential services. Publishing in a predatory journal could mean that your work disappears from the journal’s website at any time, making it difficult to prove that your paper was ever published there. Additionally, because predatory journals are not indexed in popular databases such as Scopus, PubMed, CINAHL, or Web of Science (despite false claims to the contrary), other researchers may never find, read, or cite your research.

Some general red flags to look for include:

  • Emailed invitations to submit an article
  • The journal name is suspiciously similar to a prominent journal in the field
  • Misleading geographic information in the title
  • Outdated or unprofessional website
  • Broad aim and scope
  • Insufficient contact information (a web contact form is not enough)
  • Lack of editors or editorial board
  • Unclear fee structure
  • Bogus impact factors or invented metrics
  • False indexing claims
  • No peer review information

To learn more about predatory journals, check out our Predatory Publishing Guide.

OA vs. Predatory: How to Tell the Difference

Luckily, you can distinguish scholarly open access journals from predatory journals if you know what to look for, including the red flags listed above. OA journals published by reputable publishers (such as Elsevier, Wiley, Taylor and Francis, Sage, Springer Nature, etc.) can be trusted. If a journal is published by a well-known, established publisher, it’s a safe bet that the journal is not predatory. These large publishers have policies in place that predatory journals lack, including indexing and archiving policies, peer review policies, retraction policies, and publication ethics policies.

Learn more by watching our How to Spot a Predatory Journal tutorial:

Check out the assessment tools available in our Predatory Publishing Guide for more tools that can help you evaluate journals, emails from publishers, and journal websites. There are even some great case studies available on this page to put your newly learned skills into practice! 

For questions about predatory journals, or to take advantage of Himmelfarb’s Journal PreCheck Service, contact Ruth Bueter (rbueter@gwu.edu) or complete our Journal PreCheck Request Form.