
For a moment, let’s entertain a hypothetical. Let’s say you have an excellent paper on your hands about the impact of smoke on the lungs. Your team is about to submit it for publication: pretty exciting! When you get your paper back from the publisher, it’s mostly good news: they’re willing to publish it, with the caveat that you add a diagram of the lungs as a visual aid showing the systems impacted. The problem? You have no idea where to find an image that would suit this task without potentially violating copyright.

Faced with this conundrum, one of your coauthors suggests a solution: why not generate one? They have a subscription to Midjourney, the AI software that generates images from text prompts. Why not give Midjourney a summary of the diagram you need, have it generate the image, and then use that for your paper? After checking the journal’s policies on AI (it’s allowed with disclosure), you do just that, glad to have quickly moved past that stumbling block.

Pretty great, right? It sure sounds like it, until you take a look at the images Midjourney generated. Because on closer inspection, there are some problems. 

Below is an image I generated in Copilot for this blog post. I didn’t ask it to do something as complicated as making a diagram of how smoking impacts the lungs; instead, I asked for just a diagram of human lungs. Here is what I got, with my notes attached.

An image of an AI generated diagram of the lungs in a human woman is featured with red text boxes pointing to errors. In the upper left, a box says "nonsense or gibberish text" and a red line points to oddly generated letters that mean nothing. Below it, another box reads "I don't know what this is supposed to be, but I don't think it's in the armpit" with a line pointing to what looks to be an organ with a flower pattern in it. Below that, another box reads "this heart is way too small for an adult" and the red line points to the heart on the diagram. On the left, the top red box reads "now the stomach does not reside in one's hair or arteries" with red lines pointing to a picture of the stomach that is falsely labeled as being in the hair or neck. Below that, a new box reads "what are the gold lines supposed to be in this diagram" and it points to yellow veins that run through the figure like the red and blue ones that usually denote the circulatory system. The last box on the right says "I have no idea what this is supposed to be" and points to what looks to be bone wrapped around a tube leading out of the bottom of the lungs.

Alright, so this might not be our best image. Thankfully, we have others. Let’s take a look at another image from the same prompt and see if it does a better job. 

An image of an AI generated diagram of the lungs is featured with red text boxes pointing to errors. In the upper left, a box says "more nonsense text" and a red line points to oddly generated letters that mean nothing. On the right side, a box says "bubbles should not be in the lungs!" with a red line pointing to what looks to be odd bubbles inside the lungs. Below it, a red box reads "what are these small clumps/objects?" and it points to what looks to be red large bacteria and clumps on the lungs.

So what happened here? To explain how this image went terribly wrong, it’s best to start with an explanation of how AI actually works.

When we think of AI, we generally think of movies like The Terminator or The Matrix, where robots can fully think and make decisions, just like a human can. As cool (or terrifying, depending on your point of view) as that is, such highly developed forms of artificial intelligence still exist solely in the realm of science fiction. What we call AI now is something known as generative AI. To vastly simplify the process, generative AI works as follows: you take a computer and feed it a large amount of information that resembles what you want it to generate. This is known as “training data.” The AI then attempts to produce new images based on patterns in the original training data. (Vox made a video explaining this process much better than I can.) So, for example, if I feed an AI pictures of cats, over time it identifies aspects of cats across photos: fur, four legs, a tail, a nose, etc. After a period of time, it then generates images based on those qualities. And that’s how we get websites like “These Cats Do Not Exist.”
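To make that idea concrete, here is a deliberately tiny, hypothetical sketch in Python. It is not how Midjourney or Copilot actually work; it only illustrates the core pattern of generative modeling: learn statistics from training data, then sample new examples that match those statistics without any understanding of what the data represents. The numbers and the two-measurement "cat" representation are invented purely for illustration.

    import numpy as np

    # Toy "training data": two measurements per example cat
    # (say, body length and tail length in cm). A real system
    # learns from millions of images, not two numbers per example.
    training_data = np.array([
        [46.0, 30.0],
        [50.0, 28.0],
        [44.0, 25.0],
        [48.0, 31.0],
        [52.0, 27.0],
    ])

    # "Training": the model only captures summary statistics of the data.
    mean = training_data.mean(axis=0)
    cov = np.cov(training_data, rowvar=False)

    # "Generation": sample new examples that share those statistics.
    rng = np.random.default_rng(seed=0)
    new_cats = rng.multivariate_normal(mean, cov, size=3)
    print(new_cats)  # plausible-looking numbers, but nothing here "understands" cats

The same limitation scales up: a model that has only learned patterns can reproduce them convincingly without knowing what a cat, a lung, or a cell actually is.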

If you take a look at “These Cats Do Not Exist,” you might notice something interesting: the quality of the fake cat photos varies widely. Some of the cats it generates look like perfectly normal cats. Others appear slightly off; they might have odd proportions or too many paws. And a whole other contingent appears as what can best be described as eldritch monstrosities.

The reason for the errors in both our images above and our fake cats is that the AI doesn’t understand what we are asking it to make. The bot has no concept of lungs as an organ, or cats as a creature; it merely recognizes aspects and characteristics of those concepts. This is why AI art and AI images can look impressive on the surface but fall apart under any scrutiny: the robot can mimic patterns well enough, but the details are much harder to replicate, especially when details vary so much between images. For example, consider these diagrams of human cells I had AI generate for this blog post.

A picture of an AI generated human cell. There are red boxes with text pointing out errors and issues in the image. The top box has the text "nonsense words. Some of these labels don't even point to anything" with two red lines pointing to a series of odd generated letters that mean nothing. Below that, a red box has the text "I have no idea what this is supposed to be" with a red line pointing to a round red ball. On the right side, a text box reads "is this supposed to be a mitochondria? Or is it loose pasta?" with a red line pointing to what looks to be a green penne noodle in the cell. Below that, a red text box reads "I don't think you can find a miniature man inside the human cell" and a red line points to the upper torso and head of a man coming out of the cell.

Our AI doesn’t do badly in some regards: it understands the importance of a nucleus, and that a human cell should be round. This is pretty consistent across the images I had it make. But when it comes to showcasing other parts of the cell, we run into trouble, given how differently such things are presented in other diagrams. The shape that one artist might decide to use for an aspect of a cell, another artist might draw entirely differently. The AI doesn’t understand the concept of a human cell; it is merely replicating images it’s been fed.

These errors can lead to embarrassing consequences. In March, a paper went viral for all the wrong reasons; the AI images the writers used had many of the flaws listed above, along with a picture of a mouse that was rather absurd. While the writers disclosed the use of AI, the fact that these images passed peer review with nonsense text and other flaws turned into a massive scandal. The paper was later retracted.

Let’s go back to our hypothetical. If you need images for your paper or project, instead of using AI, why not use some of Himmelfarb’s resources? On this Image Resources LibGuide, you will find multiple places to find reputable images, with clear copyright permissions. There are plenty of options from which to work. 

As for our AI image generators? If you want to generate photos of cats, go ahead! But leave the scientific charts and images for humans. 

Sources:

  1. AI Art, explained. YouTube. June 1, 2022. Accessed April 19, 2024. https://www.youtube.com/watch?v=SVcsDDABEkM.
  2. Wong C. AI-generated images and video are here: how could they shape research? Nature. Published online 2024.

The image features a group of people sitting at a table using laptops.

Do you have questions about how to determine if a journal is a predatory publisher? Would you like a short tutorial on importing your citations into RefWorks? Do you need help understanding copyright laws and how these laws apply to you as a researcher in the scholarly publishing landscape? The Scholarly Communications Committee’s recent tutorials address these topics and more. To learn more about scholarly publishing and communications, watch one of the videos listed below or visit the full video tutorial library.

In this video, librarian Ruth Bueter shows how Cabells Predatory Reports can be used to evaluate publishing options. The tutorial describes characteristics of predatory journals, explains what Cabells Predatory Reports are and what their limitations are, and ends with a demonstration of how to access and use the reports. This tutorial is useful for researchers who want to avoid publishing in a predatory journal or who are interested in learning more about a resource that can help evaluate journals.

During this tutorial, Metadata Specialist Brittany Smith goes into detail about copyright in the United States, including rights automatically granted to authors of a work and how publishing agreements may impact authors’ rights. Additionally, the tutorial provides resources that can assist authors in understanding their rights and feeling confident when negotiating their agreements with publishers.

RefWorks is a citation manager that assists with tracking citations and building a bibliography to properly attribute works referenced in research. Senior Circulation Assistant Randy Plym demonstrates how to access RefWorks from Himmelfarb Library’s homepage and walks you through the process of importing citations from databases such as PubMed or CINAHL.

Many of the tutorials are five minutes or less, and new videos are routinely added to the collection. Topics range from the research life cycle to understanding what editors look for in a manuscript to setting up a Google Scholar or ORCiD profile. If you would like to watch one of these tutorials, visit the Scholarly Communications Guide or Himmelfarb Library’s YouTube profile.

Last month, European researchers launched a program to identify errors in the scientific literature. With an initial fund of 250,000 Swiss francs - roughly 285,000 USD - team leaders Malte Elson and Ruben C. Arslan are seeking experts to investigate and discover errors in the scientific literature, beginning with psychology papers.

Here is the program in their own words:

ERROR is a comprehensive program to systematically detect and report errors in scientific publications, modeled after bug bounty programs in the technology industry. Investigators are paid for discovering errors in the scientific literature: The more severe the error, the larger the payout. In ERROR, we leverage, survey, document, and increase accessibility to error detection tools. Our goal is to foster a culture that is open to the possibility of error in science to embrace a new discourse norm of constructive criticism.

(Elson, 2024)

Their program follows a growing awareness of what researchers in the early 2010s called “the replication crisis”: the inability to reproduce a large share of scientific findings. For example, C. Glenn Begley, the former head of cancer research at the biotechnology company Amgen, investigated 53 of his company’s most promising publications (pieces that would lead to groundbreaking discoveries). Of those 53, his team could only reproduce 6 (Hawkes, 2012). While 53 is not a large sample size, Nature surveyed 1,576 researchers, and more than 70% reported trying and failing to reproduce published experiments (Baker, 2016).

ERROR founders Malte Elson and Ruben C. Arslan point to a poor incentive structure: “error detection as a scientific activity is relatively unappealing as there is little to gain and much to lose for both the researchers whose work is being scrutinized (making cooperation unlikely)” (Elson, 2024). 

Nature concurs. Journals, they report, are less likely to publish verifications of older work or work simply reporting negative findings (Baker, 2016). Reproduction gets deferred because it requires more time and money (ibid.).

Not to mention that even in science, biases can crop up - the siren call of new discoveries can lead people to publish rather than confirm results. In a noteworthy example, Begley - the aforementioned Amgen researcher - approached a scientist and explained that he had tried - and failed - 50 times to reproduce the results of that scientist's experiments. The scientist answered that “they had done it six times and got this result once but put it in the paper because it made the best story” (Hawkes, 2012).

Bearing these issues in mind, the ERROR program hopes to incentivize error detection and change the publication culture, reframing negative results as useful data (Elson, 2024). To foster a positive environment, authors must agree to have their work reviewed, and ideally they can even benefit from the verification (Lee, 2024).

Since at least 2005, researchers have called for attempts to address the replication crisis (Pashler, 2012; Ioannidis, 2005). While time will tell whether the ERROR program makes a difference, it provides an interesting answer to that call.

REFERENCES

Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature 533, 452–454. https://www.nature.com/articles/533452a.

Elson, M. (2024). ERROR: A Bug Bounty Program for Science. https://error.reviews/

Hawkes, N. (2012). Most laboratory cancer studies cannot be replicated, study shows. BMJ 344. https://doi.org/10.1136/bmj.e2555 (Published 04 April 2012)

Lee, S. (2024). Wanted: Scientific Errors. Cash Reward. The Chronicle of Higher Education. https://www.chronicle.com/article/wanted-scientific-errors-cash-reward

Ioannidis, J. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124

Pashler, H., Harris, C. (2012). Is the Replicability Crisis Overblown? Three Arguments Examined. Perspectives on Psychological Science, 7(6). https://journals.sagepub.com/doi/10.1177/1745691612463401

Photo by Markus Winkler on Unsplash

In its January 19th issue, Science reported on the increasingly aggressive and corrupt methods that paper mills are employing to get bogus research published in respected journals. You can listen to the Science podcast for an interview with the author of the article, Frederik Joelving from Retraction Watch.

Last year Nicholas Wise, a fluid dynamics researcher at Cambridge with an interest in scientific fraud, found Facebook postings by Olive Academic (a Chinese paper mill) offering substantial payments to journal editors to accept papers for publication. Further digging revealed payments of up to $20,000 and a list of more than 50 journal editors who had signed on. Wise and other experts in scientific fraud joined up with Science and Retraction Watch to investigate whether this was an isolated incident or part of a more widespread problem. They found similar activity by several other paper mills and more than 30 editors of reputable journals who were complicit. Publishers like Elsevier and Taylor and Francis say they are under siege, admitting that their journal editors are regularly approached with bribes from paper mills.

Special issues of journals were found to be most vulnerable to these scams because they are often edited by individuals or teams separate from the regular editorial boards. The investigation found that paper mills will at times engineer entire special issues themselves. “The latest generation papermill, they’re like the entire production line” (Joelving, 2024). Open access special issues can generate large profits for publishers based on the fees collected from authors, sometimes via paper mills. Wiley, Elsevier, and other well-known publishers have had regular journal editors involved in these special issue scams.

As a result of the investigation, Hindawi and its parent company Wiley pulled thousands of papers in special issues due to compromised peer review, and Wiley announced in December that the Hindawi brand would be suspended. The retracted Hindawi papers had ties to Tamjeed Publishing, which acted as a broker between paper mills and multiple editors.

The need to publish in order to advance in certain professions becomes especially problematic in places where students or young professionals cannot easily obtain the training or resources to do publishable research. This creates the market for paper mills. More than half of Chinese medical residents surveyed in a preprint referred to in the Science story said they had engaged in research misconduct such as buying papers or fabricating results. The Financial Times reported last year on how widespread the problem is in China and how it “threatens to overwhelm the editorial processes of a significant number of journals” (Olcott and Smith, 2023).

It’s not just a problem in China. India, Russia, a number of ex-Soviet countries, and Saudi Arabia are also common sources of paper mills engaging in these practices. There is concern that papers coming from these countries will start to draw extra scrutiny, creating potential inequities for researchers based there.

Though there is now increased awareness and a desire by reputable publishers to crack down on fraud, doing so is difficult and time-consuming. The exponential growth of peer review fraud and sham papers makes it all but impossible to ferret out all the publications that should be retracted. An analysis by Nature late last year concluded that over 10,000 articles were retracted in 2023, with retractions rising at a rate that far exceeds the growth of scientific papers. And they speculate it’s just the tip of the iceberg.

Retraction Watch alerts of retracted articles are available for Himmelfarb Library users when searching Health Information @ Himmelfarb, the library catalog, and when using the LibKey Nomad browser extension or BrowZine to connect to full-text. Read more about the service.

Sources

Joelving, F. (2024). Paper trail. Science, 383(6680), 252–255. https://doi.org/10.1126/science.ado0309

Olcott, E., & Smith, A. (2023). China’s fake science industry: how ‘paper mills’ threaten progress. FT.Com. https://wrlc-gwahlth.primo.exlibrisgroup.com/permalink/01WRLC_GWAHLTH/1c5oj26/cdi_proquest_reports_2791535957

Van Noorden, R. (2023). More than 10,000 research papers were retracted in 2023 - a new record. Nature, 624, 479-481. https://www.nature.com/articles/d41586-023-03974-8

Screenshot of the Scholarly Communications Videos playlist from YouTube.

Are you interested in scholarly publishing but not sure where to start? Himmelfarb Library has a collection of short video tutorials focused on a variety of scholarly publishing topics! We add new videos each semester, so the collection is always growing. Videos range from 3 to 10 minutes in length, so you can learn in small chunks of time that fit your schedule. Here are some of our newest videos!

Journal Impact Factors: What You Need to Know

In this video, Tom Harrod, Himmelfarb’s Associate Director of Reference, Instruction, and Access, discusses journal Impact Factors. You’ve probably heard that journals with higher Impact Factors are more reputable and more desirable when the time comes to publish your research. But what is a journal Impact Factor exactly? And how is an Impact Factor calculated? This six-minute video answers both of these questions and also explores how to interpret Impact Factors in context and why some journals don’t have an Impact Factor.
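For a quick sense of the math before you watch: the standard two-year Journal Impact Factor for a given year divides the citations the journal received that year (to items from the previous two years) by the number of citable items it published in those two years. Using 2023 as the example year:

    \text{JIF}_{2023} = \frac{\text{citations received in 2023 to items published in 2021 and 2022}}{\text{number of citable items published in 2021 and 2022}}

So a journal that published 200 citable items across 2021 and 2022, and received 500 citations to those items in 2023, would have a 2023 Impact Factor of 500 / 200 = 2.5.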

Artificial Intelligence Tools & Citations

In this 6-minute video, Himmelfarb’s Metadata Specialist, Brittany Smith, explores generative artificial intelligence tools. The video starts off by discussing the emergence of AI and the importance of checking current guidelines and rules regarding AI, as this is a new and constantly evolving field. It then discusses how AI can help with your research, covers GW’s AI policy, and explains how to cite AI tools in your work.

Updating Your Biosketch via SciENcv

Tom Harrod discusses the differences between NIH’s SciENcv and Biosketch and demonstrates how to use SciENcv to populate a Biosketch profile in this 5-minute video.

UN Sustainable Development Goals: Finding Publications

In this 5-minute video, Stacy Brody explores why the United Nations' Sustainable Development Goals were developed and what these goals aim to achieve. The video also discusses how to find publications related to these goals using Scopus.

Dimensions Analytics: An Introduction

Sara Hoover, Himmelfarb’s Metadata and Scholarly Publishing Librarian, provides a brief overview of the Dimensions database and discusses how to access Dimensions from Himmelfarb. This 7-minute video also provides several examples of use cases for this great resource!

In addition to these great videos, you can find the full 37-video library on the Scholarly Communications YouTube Playlist and on the Scholarly Publishing Research Guide. Additional videos cover a wide range of topics including:

  • Project planning and development videos:
    • Research life cycle
    • Advanced literature searches using PubMed MeSH search builder
    • CRediT taxonomy
    • Human participants' research support
  • Publishing-related videos:
    • Clarivate Manuscript Matcher
    • Including Article Processing Charges (APCs) in funding proposals
    • Changing from AMA to APA citation style
    • How to cite legal resources using APA style 
  • Project promotion and preservation videos:
    • Tracking citations with Scopus
    • Creating a Google Scholar profile
    • Archiving scholarship in an institutional repository
    • How to promote your research.

Image of a sheep's body with a wolf's head.
Image by Sarah Richter from Pixabay

We’ve been getting a lot of questions recently about Open Access (OA) journals, predatory journals, and how to tell the difference between them. Navigating the publishing landscape is tricky enough without having to worry about whether or not the journal you choose for your manuscript might be predatory. The concept of predatory journals may be completely new to some researchers and authors. Others who are aware of the dangers of predatory journals might mistake legitimate scholarly OA journals for predatory ones because of the Article Processing Charges (APCs) that OA journals charge. In today’s post, we’ll explore the differences between OA journals and predatory journals and how to tell them apart.

Open Access Journals

The open access publishing movement stemmed from a need to make research more openly accessible to readers and aims to remove the paywalls that most research was trapped behind under the traditional publishing model. In a traditional, non-OA journal, readers must pay to access the full text of an article published in the journal. This payment may be through a personal subscription, a library-based subscription to the journal, or a single payment for access to a single article.

This video provides a great overview of why and how OA journals came about:

OA journals shift the burden of cost from the reader to the author by operating under an “author pays” model. In this model, authors pay a fee (often called an “Article Processing Charge” or APC) to make their articles available as open access. Readers are then able to access the full text of that article free of charge, without a subscription and without hitting a paywall. The author fees associated with OA journals can range from a few hundred dollars to a few thousand dollars. Charging APCs is standard practice for OA journals, and paying to publish in an open access journal is not in itself a sign that the title is predatory - these fees help publishers cover the cost of publication.

Open access journals offer all of the same author services that traditional journals offer, including quality peer review and article archiving and indexing services. Legitimate OA journals have clear retraction policies and manuscript submission portals. There are different types of OA journals, including journals that publish only OA articles and hybrid journals that publish OA articles alongside articles that remain behind a paywall. To learn more about the types of OA publishing, check out our recent blog post on Green, Gold, and Diamond OA models.

Predatory Journals 

Predatory publishing came about in response to the open access movement, as unethical businesses saw OA journals as a way to make money off of researchers' need to publish. Predatory journals use the OA model for their own profit and use deceptive business practices to convince authors to publish in their journals.

One key difference between reputable, scholarly OA journals and predatory journals is that predatory journals charge APCs without providing any legitimate peer review services. This means that there are no safeguards to prevent a quality research article from being published alongside junk science. Predatory journals typically promise quick peer review when, in reality, no peer review actually takes place.

When you publish with a legitimate OA journal, the journal provides peer review, archiving, and discovery services that help others find your work easily. Predatory journals do not provide these essential services. Publishing in a predatory journal could mean that your work disappears from the journal's website at any time, making it difficult to prove that your paper was ever published there. Additionally, because predatory journals are not indexed in popular databases such as Scopus, PubMed, CINAHL, or Web of Science - despite false claims to the contrary - other researchers may never find, read, and cite your research.

Some general red flags to look for include:

  • Emailed invitations to submit an article
  • The journal name is suspiciously similar to a prominent journal in the field
  • Misleading geographic information in the title
  • Outdated or unprofessional website
  • Broad aim and scope
  • Insufficient contact information (a web contact form is not enough)
  • Lack of editors or editorial board
  • Unclear fee structure
  • Bogus impact factors or invented metrics
  • False indexing claims
  • No peer review information

To learn more about predatory journals, check out our Predatory Publishing Guide.

OA vs. Predatory: How to Tell the Difference

Luckily, telling scholarly open access journals and predatory journals apart can be done if you know what to look for, including the red flags listed above. OA journals that are published by reputable publishers (such as Elsevier, Wiley, Taylor and Francis, Sage, Springer Nature, etc.) can be trusted. If a journal is published by a well-known, established publisher, it’s a safe bet that the journal is not predatory in nature. These well-known, large publishers have policies in place that predatory journals lack, including indexing and archiving policies, peer review policies, retraction policies, and publication ethics policies.

Learn more by watching our How to Spot a Predatory Journal tutorial:

Check out the assessment tools available in our Predatory Publishing Guide for more tools that can help you evaluate journals, emails from publishers, and journal websites. There are even some great case studies available on this page to put your newly learned skills into practice! 

For questions about predatory journals, or to take advantage of Himmelfarb’s Journal PreCheck Service, contact Ruth Bueter (rbueter@gwu.edu) or complete our Journal PreCheck Request Form.  

Retraction Watch and Crossref logos.
Image from Retraction Watch.

On September 12, 2023, Crossref, a not-for-profit membership organization aiming to make research easy to find, cite, link, assess, and reuse, formally acquired the Retraction Watch database, a comprehensive database of retractions. Retraction Watch began in 2010 as a journalism blog that aspired to “examine whether scientific correction mechanisms were robust” (Oransky, 2023). In 2018, with financial support from the MacArthur Foundation, the Arnold Foundation (now Arnold Ventures), and the Helmsley Trust, the Retraction Watch Database in its current form was officially launched. 

The database was licensed to organizations to help researchers stay informed about current retractions. With Crossref’s purchase of the Retraction Watch Database, the database will now be completely open and freely available. According to a Crossref blog post, this agreement “will allow Retraction Watch to keep the data populated on an ongoing basis and always open, alongside publishers registering their retraction notices directly with Crossref” (Hendricks, et al., 2023). This agreement only pertains to the Retraction Watch Database - the Retraction Watch blog continues to be separate from Crossref, and will continue to independently investigate retractions and related topics. Crossref will remain a “neutral facilitator in efforts to assess the quality of scientific works” (Hendricks, et al., 2023). 

So why does all of this matter? The volume of journal articles being published continues to grow. With so many articles being published, it’s difficult to keep track of articles that are later retracted. Researchers who want to avoid citing a retracted article in their papers have to put a lot of time and effort into checking each reference on publisher sites for retractions, and it’s incredibly difficult to catch all retractions (Oransky & Lammey, 2023). It’s even more difficult for readers to know whether a work they are reading cites retracted articles. As Hendricks et al. put it, “combining efforts to create the largest single open-source database of retractions reduces duplication, making it more efficient, transparent, and accessible for all” (Hendricks et al., 2023).

Interested in learning more? Watch a discussion about this new collaboration: 

References:

Hendricks, G., Lammey, R., Ofiesh, L., Bilder, G., Pentz, E. (2023, September 12). News: Crossref and Retraction Watch. Crossref blog. https://www.crossref.org/blog/news-crossref-and-retraction-watch/

Oransky, I. (2023, September 12). The Retraction Watch Database becomes completely open - and RW becomes far more sustainable. Retraction Watch blog. https://retractionwatch.com/2023/09/12/the-retraction-watch-database-becomes-completely-open-and-rw-becomes-far-more-sustainable/

Oransky, I., Lammey, R. (2023, September 27). Making retraction data freely accessible - Why Crossref’s acquisition of the Retraction Watch database is a big step forward. The London School of Economics and Political Science blog. https://blogs.lse.ac.uk/impactofsocialsciences/2023/09/27/making-retraction-data-freely-accessible-why-crossrefs-acquisition-of-the-retraction-watch-database-is-a-big-step-forward/

STM Publishing News. (2023, September 13). Crossref acquires Retraction Watch data and opens it for the scientific community. STM Publishing News. https://www.stm-publishing.com/crossref-acquires-retraction-watch-data-and-opens-it-for-the-scientific-community/

Health Sciences Research Commons

Did you recently present at a conference or during a workshop? Would you like to share your conference poster with other scholars? Are you interested in archiving your research in a central location? The Health Sciences Research Commons (HSRC) is Himmelfarb Library’s online institutional repository and allows researchers to store their work in a reliable location so it may be accessed by other researchers.

Here are a few benefits to storing your research in the HSRC:

  1. Your conference poster will be placed in a permanent collection with a consistent link. This link may be embedded in your resume/CV or on your personal website. It may also be shared with your peers to connect them with your conference poster.
  2. Your work is archived according to your departmental affiliation, so your work is situated among the collective output of your colleagues. 
  3. Your research is discoverable via search engines such as Google Scholar, thus allowing your work to reach a broader audience. 
  4. Lastly, you can measure the impact and reach of your research through PlumX metrics and Altmetrics data. 

Archiving your poster in the HSRC is a reliable alternative to conference websites, which may not be maintained once the conference ends. The HSRC is able to accept most file formats, and you may upload a full image of your poster. Library staff members maintain the repository and will archive your research for you. Send an email to hsrc@gwu.edu and a Himmelfarb Library staff member will respond to collect more information.

Are you interested in a preview of how your poster will appear in the institutional repository? Visit the 2023 Research Days Posters collection or any of the other collections in the repository.

Picture of a monthly planner with a red and blue pen lying on top.
Photo by 2H Media on Unsplash

Fall means more than pumpkin spice. Fall grant application season is also here, with October submission deadlines for both NIH and NSF. Both organizations have modified the grant application process, and here’s what you need to know:

  • NIH: NIH has rescinded the single budget line item requirement for data management and sharing costs.
    • Applications with a due date of October 5, 2023, or later will not be required to include a single line item for Data Management and Sharing Plan activities in the budget. These costs should be placed in other appropriate categories, such as personnel, equipment, supplies, and other expenses. Read the full announcement on the NIH website.
  • NSF: NSF now requires the use of SciENcv, the Science Experts Network Curriculum Vitae, for biographical information.
    • The mandate to use SciENcv as the only approved format for the biographical sketch and current and pending (other) support will go into effect for new proposals submitted or due on or after October 23, 2023. Read more on the NSF website.

Need additional resources to help you with the grant application process? 

For additional information reach out to Sara Hoover, Metadata and Scholarly Communications Librarian at shoover@gwu.edu or Himmelfarb at himmelfarb@gwu.edu.

Open access is the emerging standard for how scientific literature is published and shared. An open access publication is digital, has no fees required for access, and has no copyright or licensing restrictions. The idea is to make scientific findings accessible to all who would benefit. This is a noble goal, but the practicalities of its application can be confusing. There are a number of ways that authors and publishers can make published studies available open access. Some put the burden of payment on the author or institution that produced the research, some on the publisher, and an emerging model puts it on libraries, which enter agreements with publishers for subscriptions with open access publishing benefits for researchers at their institutions.

The three most common models are green, gold, and diamond/platinum open access. Here’s a quick breakdown of each:

Green OA - A publisher allows the author(s) to self-archive an open access copy of the article being published in one of its journals. This is generally allowed for a preprint or accepted manuscript version of the article. The author can opt to self-archive in a subject-based archive like PubMed Central, or in an institutional repository, like Himmelfarb’s Health Sciences Research Commons. To find out whether a journal allows Green OA and what the specific terms are, Sherpa/Romeo is a free tool for checking publisher open access policies. Learn more about how to deposit your research in an institutional repository in our video tutorial, Archiving Scholarship in an Institutional Repository.

Gold OA - The authors (or their affiliated institution) pay the publisher to allow open access to the content with an Article Processing Charge (APC). In this model, the author frequently retains copyright. The downside is the typically high expense to publish gold OA in reputable journals. Note that vanity presses and some predatory publications will fall into the gold category. Learn more about how to identify a predatory journal in our video tutorial, How to Spot a Predatory Journal.

Diamond or Platinum OA - Also known as cooperative or non-commercial open access, in this model neither the author nor the reader pays. Typically this model is used by not-for-profit publishing venues like university presses or scholarly society publications. A 2021 study estimated that there are 29,000 diamond OA journals, but only 10,000 of them are included in the Directory of Open Access Journals (DOAJ), and many are not indexed in databases that would make their contents findable. Only about half of diamond OA journal articles have a DOI, which jeopardizes future access.

The Venn diagram below, developed by Jamie Farquharson, illustrates what each of the three models means for both authors and readers.

Venn diagram with copyright retention, cost for authors and readers, and peer review for open access models.
Diagram by Jamie-farquharson - https://doi.org/10.6084/m9.figshare.21598179, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=125787281

As Gold OA becomes more common, some institutions are creating funds that their researchers can use to pay for APCs. Researchers are also including these expenses in grant applications, especially for those like NIH grants that require depositing research findings and associated data in freely accessible archives. Learn more about how to include article processing charges into grants in our video tutorial, How to Include Article Processing Charges (APCs) in Funding Proposals.

As mentioned earlier in this article, libraries are starting to take on some of the burden of APCs. In what’s known as a transformative agreement, the fees paid to a publisher transition from paying for subscription access for library users to paying for open access publishing by the institution’s researchers and authors. The library pays both for users to read for free and for the institution's authors to publish open access in the publisher’s journals. There may be limits on how many articles can be published or other price caps built in. Usually, these agreements are cost neutral, meaning that the library is not saving on subscription fees. Currently, GW has transformative agreements in place with Cambridge Journals and The Company of Biologists (Development, Journal of Cell Science, and the Journal of Experimental Biology). GW has explored transitioning to transformative agreements with other publishers.

Sources

Arianna Becerril, Lars Bjørnshauge, Jeroen Bosman, et al. The OA Diamond Journals Study. March 2021. https://doi.org/10.5281/zenodo.4562790

Lisa Janicke Hinchliffe. Transformative Agreements in Libraries: A Primer. The Scholarly Kitchen blog, April 23, 2019. https://scholarlykitchen.sspnet.org/2019/04/23/transformative-agreements/