
In recent years, clinical calculators have faced criticism for their treatment of race and ethnicity. The datasets on which these calculators were based, drawn from cohort studies and other longitudinal trials, frequently came from homogeneous populations or populations of limited diversity. Reports of research methods have often been opaque. The data categories used for race and ethnicity, based on those created by the Office of Management and Budget, are limited and do not reflect the diversity of participants’ identities (see AMA Manual, 11th edition, Chapter 11.12.3). Furthermore, the data behind the calculators reflect existing disparities, which are perpetuated by their continued use (Vyas et al., 2020).

As the medical community confronts how the variable race serves as a proxy for systemic racism in these calculations (Davidson et al., 2021), conversations have reached mainstream media (e.g., “Should Black People Get Race Adjustments In Kidney Medicine?” and “Racial bias in widely used hospital algorithm, study finds”). As we reckon with the racism and discrimination that have been part of medicine, we advocate for, and work toward, change.

Time and research are needed to identify predictive variables, develop algorithms, and validate calculators (Hamad et al., 2022 video abstract; Rodriguez et al., 2019; Cardiovascular Risk Assessment [DynaMed]). For ASCVD risk estimation, for instance, we often use the Pooled Cohort Equations (PCE). The 2019 ACC/AHA Guideline on the Primary Prevention of Cardiovascular Disease notes, “The PCE are best validated among non-Hispanic whites and non-Hispanic blacks living in the United States. In other racial/ethnic groups or in some non-US populations, the PCE may overestimate or underestimate risk. Therefore, clinicians may consider the use of another risk prediction tool as an alternative to the PCE if the tool was validated in a population with characteristics similar to those of the evaluated patient” (Arnett et al., 2019, emphasis added). Fortunately, there are alternatives, which are listed in practice guidelines and in DynaMed, and work to develop and validate tools continues (e.g., Weale et al., 2021).

Over the years, we see new calculators developed and guidelines begin to include them. More immediately, we see MDCalc’s statement on Race in Medical Calculators and Risk Estimates, which describes its efforts to provide additional context and signposting. Returning to ASCVD risk estimation, the ASCVD Risk Estimator+ now notes that “estimates may underestimate the 10-year and lifetime risk for persons from some race/ethnic groups, especially American Indians, some Asian Americans (e.g., of south Asian ancestry), and some Hispanics (e.g., Puerto Ricans), and may overestimate the risk for others, including some Asian Americans (e.g., of east Asian ancestry) and some Hispanics (e.g., Mexican Americans)”.

We can be advocates for change. Medical students in a February 2022 informatics session questioned why one particular tool offered only Black and White as options for race. (Other calculators offer Black | White | Other; the AMA Manual of Style notes that the nonspecific group label “other” “is uninformative and may be considered pejorative” (AMA Manual of Style, Chapter 11.12.3).) Their librarian instructor contacted the tool developer, who responded by adding the option ‘Neither of these’. While these display changes are imperfect, they highlight the importance of continuing the discussion of how race is used in clinical calculators, and of identifying where we need to adopt additional tools, develop new tools, reconsider what we are trying to measure, and invite more people to plan and participate in our studies to ensure that we have data that reflect our population.

Research to improve these calculators continues. We have seen a reevaluation of the use of race as a variable at all in calculations like eGFR. Calculators are being developed and validated using data and variables that reflect more diverse populations. Researchers are being asked to consider how race as a social construct impacts their research questions, whether to use race as a variable, and, if so, what categories are appropriate. In the informatics session, we discuss how practitioners need to consider the data from which the calculators were derived, how that data does (not) reflect their patients, and what alternative tools they might use. Himmelfarb’s point-of-care tools highlight practice guidelines and recommended calculators. Our librarians are here to help you access and navigate these resources.  

For more on this topic, see:

Awareness in Writing and Publishing, for information on collecting and reporting race and ethnicity in research [Additional Resources - Cultural Competency]

Critical Data Literacy: Addressing Race as a Variable in a Preclinical Medical Education Session [poster; Research Guide]

Race Correction in the UTI Guidelines - The Curbsiders [podcast]

References

Vyas, D. A., Eisenstein, L. G., & Jones, D. S. (2020). Hidden in Plain Sight — Reconsidering the Use of Race Correction in Clinical Algorithms. The New England Journal of Medicine, 383(9), 874–882. https://doi.org/10.1056/NEJMms2004740

Davidson, K. W., Krist, A. H., Tseng, C.-W., Simon, M., Doubeni, C. A., Kemper, A. R., Kubik, M., Ngo-Metzger, Q., Mills, J., & Borsky, A. (2021). Incorporation of Social Risk in US Preventive Services Task Force Recommendations and Identification of Key Challenges for Primary Care. JAMA. https://doi.org/10.1001/jama.2021.12833

Hamad, R., Glymour, M. M., Calmasini, C., Nguyen, T. T., Walter, S., & Rehkopf, D. H. (2022). Explaining the variance in cardiovascular disease risk factors: A comparison of demographic, socioeconomic, and genetic predictors. Epidemiology (Cambridge, Mass.), 33(1), 25–33. https://doi.org/10.1097/EDE.0000000000001425

Rodriguez, F., Chung, S., Blum, M. R., Coulet, A., Basu, S., & Palaniappan, L. P. (2019). Atherosclerotic cardiovascular disease risk prediction in disaggregated Asian and Hispanic subgroups using electronic health records. Journal of the American Heart Association, 8(14), e011874. https://doi.org/10.1161/JAHA.118.011874

Arnett, D. K., Blumenthal, R. S., Albert, M. A., et al. (2019). 2019 ACC/AHA Guideline on the Primary Prevention of Cardiovascular Disease: A report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. Circulation, 140(11), e596–e646. https://doi.org/10.1161/CIR.0000000000000678 (Published corrections appear in Circulation, 140(11), e649–e650; 141(4), e60; and 141(16), e774.)

Weale, M. E., Riveros-Mckay, F., Selzam, S., Seth, P., Moore, R., Tarran, W. A., Gradovich, E., Giner-Delgado, C., Palmer, D., Wells, D., Saffari, A., Sivley, R. M., Lachapelle, A. S., Wand, H., Clarke, S. L., Knowles, J. W., O’Sullivan, J. W., Ashley, E. A., McVean, G., … Donnelly, P. (2021). Validation of an Integrated Risk Tool, Including Polygenic Risk Score, for Atherosclerotic Cardiovascular Disease in Multiple Ethnicities and Ancestries. The American Journal of Cardiology. https://doi.org/10.1016/j.amjcard.2021.02.032

AMA Manual of Style Committee. (2020). Correct and Preferred Usage. In AMA Manual of Style: A Guide for Authors and Editors (11th ed.). Oxford University Press. https://doi.org/10.1093/jama/9780190246556.001.0001

Evidence-Based Healthcare Day 2021: October 20

In honor of World Evidence-Based Healthcare Day, held annually on October 20, we wanted to share some tools to help you keep up with the evidence!

Point-of-care resources like DynaMed synthesize evidence on conditions, management, patient evaluation, and other clinically relevant topics. PubMed searches retrieve original research articles, from case studies to meta-analyses. McMaster Health Knowledge Refinery (HKR) provides tools to search pre-appraised research articles and be alerted to the most relevant and impactful studies. 

McMaster Health Knowledge Refinery (HKR) helps practitioners stay up-to-date on the latest high-quality, clinically relevant, and practice-changing evidence in fields including medicine, rehabilitation, and knowledge translation. HKR research associates select articles reporting on treatment, diagnostic, and prognostic studies, as well as systematic reviews, from more than 100 journals, including core titles like JAMA, NEJM, and The Lancet.

During critical appraisal, research associates tag articles by type, purpose, population, and clinical specialty, to improve findability. For quality assurance, clinicians check these data, as well as index terms added by indexers. 

While metadata helps users find and filter relevant articles, the added value comes from the McMaster Online Rating of Evidence (MORE™). Practicing physicians, nurses, and therapists evaluate articles for 1) relevance and 2) newsworthiness. Once at least three practitioners have evaluated an article, their ratings are averaged, and articles scoring greater than four out of seven in both categories are added to the Premium LiteratUre Service (PLUS) database.
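For the programmatically inclined, here is a minimal sketch of that inclusion logic. The scales and thresholds (7-point ratings, at least three raters, average greater than four in both categories) come from the description above; the function and variable names are illustrative, not McMaster’s actual code.

```python
# Minimal sketch of the PLUS inclusion logic described above.
# Assumes simple numeric ratings on the 1-7 scales; illustrative only.

from statistics import mean

def qualifies_for_plus(relevance_ratings, newsworthiness_ratings,
                       min_raters=3, threshold=4.0):
    """Return True if an article meets the PLUS inclusion criteria."""
    if min(len(relevance_ratings), len(newsworthiness_ratings)) < min_raters:
        return False  # not enough practitioner ratings yet
    return (mean(relevance_ratings) > threshold
            and mean(newsworthiness_ratings) > threshold)

# Example: three raters each scoring on the 7-point scales
print(qualifies_for_plus([6, 5, 7], [5, 4, 6]))  # True
print(qualifies_for_plus([6, 5, 7], [3, 4, 4]))  # False: newsworthiness average <= 4
```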

Users can create alerts to learn of new articles meeting selected notification criteria, e.g., thresholds for relevance and newsworthiness, specific disciplines and populations, etc.

To set alerts, users need a free account. Then they can set alert preferences, create custom dashboards, and identify GW Himmelfarb as their PubMed outside tool (https://www.evidencealerts.com/Account/MyProfile#OutsideTool), which links to Himmelfarb content via the Get It @ Himmelfarb button in PubMed.

EvidenceAlerts is one of many tools to help you stay up-to-date with the latest relevant and newsworthy research in your field. 

Find more tools for keeping up with the literature. 

Consult with a librarian for help setting up tables of contents or search alerts.

And don’t forget to critically appraise the research!


The rapid evolution of evidence and constant press coverage during the COVID-19 pandemic shone a spotlight on an issue that has continued to dog librarians, evidence synthesists, and database creators: how to track and display retractions, expressions of concern, and other related notices while maintaining the completeness of the scientific record.

Science is, as is often said, a self-correcting process. We have measures in place to ensure the soundness and quality of published research. We use peer review. We have reporting standards. Journals and publishers are adding more and more transparency guidelines, for instance around funding disclosures and data and software/code sharing.

Still, retractions happen. So do corrections and expressions of concern. Other scientists, editors, and general readers (even students; see Reardon, 2021) flag issues in published research. “Part of the iterative process of scientific research is calling out and remembering the mistakes so as not to repeat them” (Berenbaum, 2021, p.3).

Once that research is published, how do we manage these concerns? “Removing a discredited paper from the literature entirely isn’t possible [and] isn’t necessarily desirable; doing so removes part of the record of the self-correcting iterative process by which science advances” (Berenbaum, 2021, p.2). How do we at once preserve the scientific record, keeping the original article for historical and/or meta-research purposes, and ensure that readers are alert to larger concerns about the article?

This has been highlighted during the COVID-19 pandemic and the accelerated research, writing, review, and publication cycles. The retractions from premier journals, and their subsequent reuse and citation, had potential for very real consequences in decision-making and "challenge authors, peer reviewers, journal editors, and academic institutions to do a better job of addressing the broader issues of ongoing citations of retracted scientific studies" (Lee, et al., 2021).

In conversations with other librarians conducting COVID-19 literature searches, we all encountered instances of retractions, expressions of concern, withdrawals, and even disappearances of articles we were responsible for identifying and sharing with decision-makers and clinicians.

In one email thread, librarians shared strategies to specifically identify retractions in literature searches. The tools at our disposal are necessarily limited by the publishers’ practices and the metadata in our databases. For instance, a withdrawn preprint remains difficult to capture. 

That said, we can devise, from the documentation provided by PubMed, a strategy to identify retractions and concerns when conducting systematic reviews, developing guidelines, and participating in other projects requiring comprehensive searches. When conducting such projects, the time between the original search and export of results, writing, submission, and actual publication can be months. Within that time, articles can be corrected or retracted for a variety of reasons, ranging from updating an author’s affiliation to the uncovering of fabricated data. 

In the email thread of librarians discussing retraction searching in the context of COVID-19, one suggested searching “Expression of concern for: [article title].” Not all articles are formally retracted. Others may be published as errata or expressions of concern. The reasons for each can vary. To fully cover the breadth of potential concerns, I used this suggestion as a starting point to identify potentially problematic articles within a set of search results. 

In Ovid MEDLINE, AND the following lines to your search strategy:

"Expression of concern for".m_titl.

"Erratum in".mp.

"Retraction in".mp.

retracted publication.pt.

1 or 2 or 3 or 4

*Please note, this search approach has not been formally tested.*

Line 1 aims to capture expressions of concern, which are written by journal editors and often use the phrase “Expression of concern” in their titles. 

Line 2 aims to capture errata. Errata are published to correct or add information in a published article and to address errors arising either in the publication process or from missteps in methodology. Note, errata cover a range of corrections and additions, from correcting an author’s job title (BMJ, 2008) to fixing the accidental duplication of a figure (Silva-Pinheiro et al., 2021).

Lines 3 and 4 aim to capture retractions. According to the Committee on Publication Ethics, retraction should be considered when there is reason to believe a publication presents unreliable findings or unethical research, plagiarizes, uses material without proper authorization, or fails to note major competing interests (Barbour et al., 2009).

Lines could be added to specifically capture comments, corrected articles, and updated articles. 
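For those who search PubMed programmatically, here is a hedged, untested translation of the same idea using Biopython’s Entrez utilities. The publication types used are, to my knowledge, valid PubMed publication types, and the title phrase mirrors line 1 of the Ovid strategy; the email address is a placeholder NCBI requires you to replace, and the topic string is a stand-in for your actual strategy.

```python
# A rough PubMed analogue of the Ovid strategy above, via Biopython.
# "Retracted Publication", "Retraction of Publication", and "Published
# Erratum" are real PubMed publication types; like the Ovid strategy,
# this approach has not been formally tested.

from Bio import Entrez

Entrez.email = "you@example.edu"  # NCBI requires an email; placeholder

notices = (
    '"expression of concern for"[Title]'
    ' OR "retracted publication"[Publication Type]'
    ' OR "retraction of publication"[Publication Type]'
    ' OR "published erratum"[Publication Type]'
)
topic = "your existing topic strategy here"  # placeholder

handle = Entrez.esearch(db="pubmed", term=f"({topic}) AND ({notices})", retmax=100)
record = Entrez.read(handle)
handle.close()

print(record["Count"])   # number of flagged records
print(record["IdList"])  # PMIDs to review by hand
```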

Additional resources are available to help identify and monitor retractions in the literature. Retraction Watch maintains a searchable database. If you use Zotero, you are automatically alerted to retracted papers saved in your library. 
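Another programmatic option, sketched below, is to ask the public Crossref REST API for notices (retractions, errata, expressions of concern) that update a given DOI via its Crossmark metadata. This is an assumption-laden sketch: coverage depends on what each publisher deposited, so an empty result is not proof that an article is sound.

```python
# Sketch: find Crossref records that update (retract, correct) a given DOI,
# using the Crossref REST API "updates" filter. Treat empty results with
# caution; not all publishers deposit Crossmark update metadata.

import requests

def update_notices(doi):
    """Return (update type, notice DOI, notice title) for records updating this DOI."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}"},
        timeout=30,
    )
    resp.raise_for_status()
    notices = []
    for item in resp.json()["message"]["items"]:
        for update in item.get("update-to", []):
            notices.append((update.get("type"), item["DOI"],
                            (item.get("title") or [""])[0]))
    return notices

# Example: the DOI of the retracted Wakefield et al. (1998) article
print(update_notices("10.1016/S0140-6736(97)11096-0"))
```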

When in doubt, reach out to your Himmelfarb librarians for assistance searching!

References

Reardon, S. (2021). Flawed ivermectin preprint highlights challenges of COVID drug studies. Nature, 596(7871), 173–174. https://doi.org/10.1038/d41586-021-02081-w

Berenbaum, M. R. (2021). On zombies, struldbrugs, and other horrors of the scientific literature. Proceedings of the National Academy of Sciences, 118(32), e2111924118. https://doi.org/10.1073/pnas.2111924118

Lee, T. C., Senecal, J., Hsu, J. M., & McDonald, E. G. (2021). Ongoing citations of a retracted study involving cardiovascular disease, drug therapy, and mortality in COVID-19. JAMA Internal Medicine. https://doi.org/10.1001/jamainternmed.2021.4112

BMJ. (2008). 3360-b. https://doi.org/10.1136/bmj.a402

Silva-Pinheiro, P., Pardo-Hernández, C., Reyes, A., Tilokani, L., Mishra, A., Cerutti, R., Li, S., Rozsivalova, D. H., Valenzuela, S., Dogan, S. A., Peter, B., Fernández-Silva, P., Trifunovic, A., Prudent, J., Minczuk, M., Bindoff, L., Macao, B., Zeviani, M., Falkenberg, M., & Viscomi, C. (2021). Correction to 'DNA polymerase gamma mutations that impair holoenzyme stability cause catalytic subunit depletion'. Nucleic Acids Research. https://doi.org/10.1093/nar/gkab837

Barbour, V., Kleinert, S., Wager, E., & Yentis, S. (2009). Guidelines for retracting articles. Committee on Publication Ethics. https://doi.org/10.24318/cope.2019.1.4

Have you wondered what an article’s citation count means? Who cited the article? How?

Take the now-retracted Wakefield et al. article “Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children” (1998). Though most citations are negative, this is not reflected in the overall citation count (Suelzer et al., 2019). A unidimensional metric like citation count does not capture the diversity of how and where citations appear.

Scite aims to contextualize the citation count

Scite, founded by Josh Nicholson and Yuri Lazebnik and previously funded by the NSF and NIDA, is a database of over 800 million citation statements (Herther, 2021), categorized as:

  • Supporting: providing supporting evidence for the cited work
  • Mentioning: mentioning the cited work
  • Disputing: providing disputing evidence for the cited work

The statements are also tagged by where they appear in citing articles (intro, results, methods, discussion, or other).  

Users can search the website and install plug-ins for browsers and reference management tools.  

Scite uses text mining and artificial intelligence

Scite uses machine learning to enhance the database of citation statements. “The corpus on which the model was trained included 43,665 citation statements classified by trained annotators with experience in a variety of scientific fields” (Nicholson et al., 2020). Scite continues to build partnerships with publishers to gain access to articles for text mining. While the tool is imperfect and evolving, it begins to demonstrate how and where works are being used.

Publishers and researchers can use scite

In addition to citation counts and altmetrics, scite smart citations are beginning to appear in databases and on journal websites. They are already available in Europe PMC records (Herther, 2021; Araujo & Europe PMC, 2020).

Researchers also use scite to see how others use their own publications and how their results fit into the larger landscape. With a free account, researchers can create a limited number of reports and visualizations and set up author alerts. Researchers can use the digital badge when presenting their works. A paid account provides access to the reference check feature, which alerts authors to potentially disputed or retracted references in an uploaded manuscript.

Finally, the founders of scite hope that smart citations will encourage researchers to “report unsuccessful attempts to [test] reported claims, as so-called negative results often go unpublished because they are considered inconsequential” (Grabitz et al., 2017).

Use scite with a grain of salt

The technology is evolving, and the corpus to which scite has access is incomplete. As with all citation indexes, methods and data sources vary, which is why different counts appear across platforms like Scopus, Web of Science, PubMed, and Google Scholar. Scite’s founders caution users to interpret results carefully, “given the limitations of the model precision [and] the current limited coverage of articles analyzed by scite” (Nicholson et al., 2020).

Take the Wakefield et al. (1998) article. As of March 8, 2021, the article has been cited by 349 articles in PubMed, 1,443 in Web of Science, 1,590 in Scopus, and over 3,000 in Google Scholar. The scite browser plug-in shows over 1,000 citation statements (note that one citing article may include multiple citation statements). Most are classified as mentions, with 2 supporting and 7 disputing (see: A retracted article has no disputing cites, does that mean scite is not working? | scite help desk). In contrast, a recent analysis found 838 negative citations in a collection of 1,153 citing works (Suelzer et al., 2019).

Methods and tools to evaluate research and improve reproducibility continue to evolve, and researchers can contribute to improving the model by flagging misclassified citations. While many AI-based tools are still in development, they offer hope for a multidimensional approach to publication metrics.

Still curious?

Read more about how scite classifies citations. Search the website. Install the plug-ins. Visit the Himmelfarb guide on How To Measure Impact?

References

Araujo, D., & Europe PMC. (2020, January 20). Europe PMC integrates smart citations from scite.ai. Retrieved February 26, 2021, from http://blog.europepmc.org/2020/01/europe-pmc-integrates-smart-citations.html

Grabitz, P., Lazebnik, Y., Nicholson, J., & Rife, S. (2017). Science with no fiction: Measuring the veracity of scientific reports by citation analysis [Preprint]. Scientific Communication and Education. https://doi.org/10.1101/172940

Herther, N. K. (2021, February 15). Scite.ai update, part 1: Creating new opportunities (an ATG original). Charleston Hub. https://www.charleston-hub.com/2021/02/scite-ai-update-part-1-creating-new-opportunities-an-atg-original/

Khamsi, R. (2020). Coronavirus in context: Scite.ai tracks positive and negative citations for COVID-19 literature. Nature. https://doi.org/10.1038/d41586-020-01324-6 

Nicholson, J. M., Uppala, A., Sieber, M., Grabitz, P., Mordaunt, M., & Rife, S. C. (2020). Measuring the quality of scientific references in Wikipedia: An analysis of more than 115M citations to over 800 000 scientific articles. The FEBS Journal, febs.15608. https://doi.org/10.1111/febs.15608

Suelzer, E. M., Deal, J., Hanus, K. L., Ruggeri, B., Sieracki, R., & Witkowski, E. (2019). Assessment of citations of the retracted article by Wakefield et al with fraudulent claims of an association between vaccination and autism. JAMA Network Open, 2(11), e1915552. https://doi.org/10.1001/jamanetworkopen.2019.15552

Wakefield, A., Murch, S., Anthony, A., Linnell, J., Casson, D., Malik, M., Berelowitz, M., Dhillon, A., Thomson, M., Harvey, P., Valentine, A., Davies, S., & Walker-Smith, J. (1998). RETRACTED: Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. The Lancet, 351(9103), 637–641. https://doi.org/10.1016/S0140-6736(97)11096-0 

Around this time every year, we start seeing blog posts for the top ten books of the past year and the top ten resolutions for the year ahead. These listicles help us to reflect and look forward. 

This post is inspired by this seasonal spirit of list-making.

Around this time each year, the National Library of Medicine (NLM) adds new concepts to the controlled vocabulary Medical Subject Headings (MeSH). As a reminder, NLM is one of our National Institutes of Health and is the institution behind PubMed, among other tools. The annual MeSH release may seem like something only librarians - particularly indexers - would get excited about, but it is a reflection of research trends and priorities, as represented by the scientific literature.

Before I dive into the new concepts for 2021, a quick aside: what are medical subject headings? Subject headings are tags assigned to articles to increase findability. Imagine you are searching for articles on “soda”. Some authors write about soda, while others write about pop. It is best practice to use all synonyms of a term for the most comprehensive search. That’s where subject headings come in. A search using the subject heading for soda would return all articles, whether the authors use soda or pop or another term. (What term does your hometown use? Let me know in the comments below!)

Note, the “soda” subject heading search may also return articles on specific types of soda through automatic explosion. Interested in learning more about this? Also let me know in the comments!
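As a small, hedged illustration, you can compare the counts returned by an exploded MeSH search against a keyword-only search using Biopython’s Entrez utilities. Carbonated Beverages is the actual MeSH heading in this neighborhood; the email address is a placeholder NCBI requires you to replace.

```python
# Sketch: compare an exploded MeSH search with a keyword-only search.
# Requires Biopython; the email is a placeholder required by NCBI.

from Bio import Entrez

Entrez.email = "you@example.edu"  # placeholder

def result_count(term):
    handle = Entrez.esearch(db="pubmed", term=term, retmax=0)
    count = int(Entrez.read(handle)["Count"])
    handle.close()
    return count

# The MeSH search catches articles however the authors phrased the concept
print(result_count('"carbonated beverages"[MeSH Terms]'))
print(result_count("soda[Title/Abstract]"))  # keyword-only; misses the "pop" papers
```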

Now, “soda” hasn’t always been around, and researchers have not always been writing about soda. The number of articles published on “soft drinks” has increased since the 1970s, as depicted by the graph below.

But, soon enough, more and more papers are published on the topic, and then we need additional headings to capture new concepts, like ice cream soda.

Trends in research are reflected in the papers researchers publish, and trends in biomedical research can be seen in the articles indexed in PubMed. Medical Subject Headings have to keep up to help users locate papers on new technologies like “automated facial recognition” and “surgical navigation systems” or specific fungi and bacteria of increasing interest and clinical importance.

New MeSH terms for 2021 by tree location:

  • Ascomycota: 12
  • Actinobacteria: 8
  • Basidiomycota: 5
  • Cyanobacteria: 5
  • Actinomycetales: 2
  • Clostridiales: 2
  • Gram-negative: 2

In the new MeSH terms for 2021, perhaps unsurprisingly, we see a number of headings related to COVID-19, from the physiological (e.g., Angiotensin-Converting Enzyme 2) to the socioeconomic (e.g., teleworking). Remember when the first tweets, anecdotes, and news stories came out on loss of smell as a clinical sign? A 2020 search for “loss of smell” in PubMed would be mapped to “olfaction disorders”. In other words, the algorithm found the closest subject heading and searched for that as well as similar keywords to expand the search. In 2021, however, your “loss of smell” search might be mapped to the more specific heading “anosmia”, which will be applied to newly indexed articles.

Certainly, COVID dominated publishing in 2020. What else is trending? The word cloud below depicts the most frequently used words in the subject headings and scope notes of the new MeSH headings for 2021. Ascomycota, bacteria, family, fungi, genus, phylum, and species might all be indicative of the new headings for specific bacteria and fungi. Viral, virus, and covid also appear. There are other terms, too, that reflect a broad view of health and medicine. Behavior, food, social: these and other terms appear relatively frequently.
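For the curious, here is a minimal sketch of how such a word cloud could be rebuilt with the open-source wordcloud package, assuming the new headings and scope notes have been saved to a local text file (new_mesh_2021.txt is a placeholder name, not an NLM file).

```python
# Sketch: build a word cloud from the new headings and scope notes.
# Assumes the text has been saved locally; file name is a placeholder.

import matplotlib.pyplot as plt
from wordcloud import WordCloud

with open("new_mesh_2021.txt", encoding="utf-8") as f:
    text = f.read()

cloud = WordCloud(width=800, height=400, background_color="white").generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```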

We see, for instance, subject headings related to structure, society, and psychology, some of which were added to integrate terminology from the NIH Office of Behavioral and Social Sciences Research into MeSH (for instance, Correctional Facilities; Food Deserts; Models, Biopsychosocial; Water Insecurity), an expansion of the biomedical vocabulary to address the impact of various factors on individual health. There appears to be interest in whole-person health (likely also a reflection of work published during COVID), for instance “financial stress” and “food security”. We also see terms for different hobbies and activities, such as “marathon running” and “internet use”.

New MeSH terms for 2021 by tree location:

  • Food supply: 4
  • Psychotherapy: 3
  • Social behavior: 3
  • Behavior, addictive: 2
  • Human rights: 2
  • Interpersonal relations: 2
  • Social problems: 2
  • Socioeconomic factors: 2

Changes in language often reflect dramatic social shifts, and 2020 is certainly no exception. Want to learn more about the 2021 MeSH terms? Find out What’s New in MeSH for 2021 and, as always, Ask a Librarian for help using MeSH in your search strategy.

What are preprints?  

  • Preprints are research manuscripts posted prior to peer review. Depending on the preprint server, manuscripts may be screened for privacy concerns or potential harm. For instance, “All manuscripts uploaded to medRxiv undergo a basic screening process for offensive and/or non-scientific content and for material that might pose a health risk”.
  • Preprints are not peer reviewed, and they are not final research products. The NIH offers guidance about “Preprints and Other Interim Research Products”, such as datasets and code. 
  • Preprints can enable timely discussion of research, especially in rapidly evolving fields. The publication of research via journal articles is often delayed by the formal peer review process. 

Where can I find preprints?

How do I evaluate preprints?

  • Because preprints have not undergone peer review, they require more critical analysis for potential issues that might be caught during peer review.  
  • Members of the scientific community engage with preprints in a variety of ways. Some platforms, such as F1000Research, post reviewer comments alongside the article. However, preprint reviews and critiques are rarely so easily located. Often, further searching is necessary to find links to social media discussions about a preprint.
  • Researchers can find reviews and critiques of COVID-19 related preprints on review sites and overlay journals, including NCRC from Johns Hopkins, and Rapid Reviews: COVID-19 from MIT. Generally, while review sites link to preprints, preprint servers do not link to reviews.
  • Outbreak Science Rapid PREreview offers a browser plug-in that alerts readers to reviews of preprints: for instance, this September preprint has a review. 
  • SciScore uses AI to detect indicators of rigor and reproducibility in a manuscript. Follow @SciScoreReports for updates on COVID-19 related preprints evaluated with this tool. 

I found a preprint and reviewed the discussion around it. Now, how do I cite it?

  • With the rapid dissemination of COVID-19 research via preprints, the need for clarity in citing preprints in reference lists has grown more urgent. Where possible, citations should include the version and preprint status; this is specified in Vancouver Style. Some preprint servers provide no version control, though others, for instance those using OSF Preprints or Jupyter Notebooks, do include version control.
  • The DOI and the server name should also be included, to assist readers in identifying and locating the cited preprint. Note, medRxiv provides versioning and DOIs. If you cannot find the preprint’s DOI, search the preprint title in crossref.org or on preprint servers.
  • For more information on citing preprints and potential problems, see this Scholarly Kitchen blog post.

What should I be aware of when reading and citing preprints?

  • Across the research landscape, there is an increasing focus on transparency and reproducibility, encouraging authors to share interim research products, including preprints, datasets, and computer code. (Note: Data sharing is mandated for federal grant recipients.) Locating and analyzing research data and code may be especially important when appraising preprints, which have not undergone peer review. Preprint servers have varying abilities and infrastructure to link to research data. Read the full text of the manuscript to find mention of, or, better yet, links to, datasets and code in repositories such as figshare, Dryad, GitHub, and Zenodo. You can look for subject data repositories at re3data.org. Often, datasets link to preprints, but preprints do not reliably link out to the data.
  • Before citing a preprint, check whether the research has been published in a peer-reviewed article since the preprint’s posting. While some servers link to published articles, there may be a time lag or other technical reasons that prevent automatic linking. Authors are encouraged to ensure links are posted from preprints to articles, but this is not 100% reliable. Try searching the preprint title (though titles may change between posting and publishing) in PubMed, Google Scholar, or CrossRef to check for a published, peer-reviewed article; a sketch of that check follows this list.
  • And remember, any changes to the standard of care need to be based on authoritative evidence. Do not change guidelines based on evidence found only in a preprint (see Revisiting medRxiv in the Age of COVID-19: https://blogs.gwu.edu/himmelfarb/2020/06/17/revisiting-medrxiv-in-the-age-of-covid-19/).
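As promised above, here is a minimal sketch of that title check against the public Crossref REST API. The query does fuzzy bibliographic matching, so the results are candidates to review by hand, not definitive answers; the example title is a placeholder.

```python
# Sketch: look for possible published versions of a preprint by title.
# Matching is fuzzy (titles change between posting and publication),
# so always review candidates manually.

import requests

def possible_published_versions(title, rows=5):
    """Return (title, DOI, record type) for the closest Crossref matches."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    return [
        ((item.get("title") or [""])[0], item["DOI"], item.get("type"))
        for item in resp.json()["message"]["items"]
    ]

# "journal-article" records are candidates; "posted-content" is likely the preprint itself
for match in possible_published_versions("Your preprint title here"):
    print(match)
```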


For more information about preprints in general, and to learn how and where to upload a preprint of your own, see the Himmelfarb preprint guide.


Scholarly Publishing in Early Career

Linda Werling, Ph.D., who currently teaches in GW’s MD and PA curricula, has a distinguished record of writing scientific articles and advising students who are writing up their own work for publication.

Dr. Werling is the author of 61 peer-reviewed publications in scientific journals. She has also authored 10 invited chapters and reviews, as well as 75 abstracts for presentations at national and international meetings. She has served as a reviewer for 13 journals and was on the editorial board of Synapse for 13 years.

Dr. Werling taught scientific writing to graduate students at GWU for 10 years. She encouraged her own PhD students to publish their work, resulting in solid predoctoral publication records for all of them. She has served on many PhD dissertation committees and enjoyed helping students prepare clear and concise accounts of their research projects.

Given her impressive background as both an author and as a mentor, we asked her what advice she would give to young researchers as they think about publishing their own work.

Here’s what she had to say:

1.  Choose the right journal for submission

    • Make sure your work fits with the type of article the journal publishes. What kinds of journals do you and your labmates read? You want your work to have the best exposure to the right audience.
    • Choose a high quality journal, and have backup journals in mind in case your paper is not accepted by your first choice.

 

2.  READ THE INSTRUCTIONS

  • Make sure you organize and format your submission in strict compliance with journal specifications.  Journals receive a lot of submissions.  There is no reason to have your work rejected because you did not carefully follow guidelines.
  • Provide figures as specified by the guide to authors.
  • Be sure to organize your reference list according to journal specifications. There are many programs that can store all your references and tailor their format to various journals’ requirements.
  • Construct a cover letter that tells why you believe your work is suitable for that particular journal, and (very briefly) what your major findings are.

 

3.  Tell a story

  • Give sufficient background for the reader to understand why you did the work. This usually goes into the section called Introduction.
  • Make the figures and illustrations you plan to include, and lay them out in order.
  • Use the figures as a roadmap to describe what you found.  In this way, the results section of the paper can almost write itself.
  • Use the Discussion to place your findings in the broader context of the field.  Do not use this section to simply reiterate your results; explain what they mean in advancing knowledge in the area of research.
  • Cite original sources for literature referenced.  Do not assume that the authors of a paper you have read have cited the source work correctly.

 

4.  Proofread for content, spelling, grammar and syntax

  • Also ask your colleagues to read the paper. It is advisable to choose readers directly involved in your field as well as scientists in different fields. What seems very clear to you or your advisor may not be as clear to another researcher. Considering the critiques of others will ensure your work can be understood by a more general scientific audience.
  • Have a thick skin.  If you ask for critiques, understand that your colleagues are doing you a favor.  (You can return the favor by reading their drafts.)

 

5.  When you receive an editorial decision, revise accordingly

  • Again, have a thick skin. Your response should not be argumentative; that will rarely be received favorably by the reviewers or the editor. Thank the reviewers for their helpful comments, even when you may not feel they were all that helpful. You may need to rewrite for clarity, or you may need to do additional experiments. If you disagree with the reviewers’ advice, you may certainly rebut, but go gently.
  • If you cannot meet the reviewers’ expectations, or your submission is rejected outright, revise for submission to another journal.  Don’t give up.  Writing and publishing is a learning experience.

 

During the first two weeks of August, librarian instructors gathered data to learn what fuels the class of 2024. In a thoroughly unscientific and unsystematic poll, we asked our first-year medical students about their favorite non-alcoholic beverages. This was not intended to be a formal data-gathering exercise; rather, we hoped to use it as an icebreaker, a way to get to know each other as library orientation moved online.

 

While the most popular responses appear to be coffee, iced coffee, and espresso drinks, aligning perhaps with the stereotype of medical students, I ask you to pause and consider how the question was framed: perhaps we librarian instructors influenced responses by how we asked the question or by offering our own preferences.

Any time there is data, there is room to question. How was the data collected and who collected it? How is it presented and how might we visualize it?

Librarian instructors collected the data via email or Google Forms. Each librarian sent individual emails to students in their small groups and might have presented the question differently, according to their individual personalities. Some librarians offered set response options whereas others allowed free text. Data points were grouped into categories for ease of analysis and presentation. 

In total, 126 students answered the question, providing a range of responses. As noted, responses were grouped into larger categories for analysis. The librarian responsible for data analysis (this post’s author) acknowledges that others may have grouped the responses differently. Personally, I enjoy seltzer and sparkling water and maintain them as a separate category; another analyst might group these with soda. Consider: when life gives you lemonade, do you group it with fruit juices?
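As a toy illustration of that grouping step, here is a sketch of the kind of category mapping described above. The specific assignments and names are mine and, as just noted, another analyst might group responses differently.

```python
# Toy sketch of mapping free-text poll responses to analysis categories.
# The category assignments are illustrative judgment calls.

from collections import Counter

CATEGORY = {
    "coffee": "coffee & espresso drinks",
    "iced coffee": "coffee & espresso drinks",
    "espresso": "coffee & espresso drinks",
    "seltzer": "seltzer & sparkling water",
    "sparkling water": "seltzer & sparkling water",
    "lemonade": "fruit juices",  # judgment call: or group with soda?
}

def categorize(response):
    """Normalize a free-text response and map it to a category."""
    return CATEGORY.get(response.strip().lower(), "other")

responses = ["Iced coffee", "seltzer", "Lemonade", "bubble tea"]
print(Counter(categorize(r) for r in responses))
```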

We are excited to work with the class of 2024 and wish them the best throughout their careers here at GW and beyond. 

Cheers! 

Yes, Chef | Comfort Me with Apples | The Perfect Pie

I loved wandering the shelves of my library growing up, looking for titles that caught my eye. All lined up, the books had their own poetry: the occasional pair, trio, or quartet of titles that seemed perfect next to one another (and not just because of the order enforced by the Dewey Decimal System).

We may not be able to wander the library shelves right now, but we do have the opportunity to make poetry.

Stack some books from your collection, snap a photo, and share the image on Instagram. Be sure to tag @himmelfarbgw and #gwspinepoetry for your chance to win a $25 gift card to Politics and Prose. Images must be posted between June 1 and June 30, 2020, to be considered eligible. Only GWU SMHS, SON, and SPH affiliates are eligible to win. Entries will be evaluated for originality and creativity. The winner will be announced July 7, 2020.

 


Librarian Reserve Corps

Stacy Brody is a Master of Information. Literally. That’s what it says on her degree from Rutgers University School of Communication and Information, at least. And she is using her skills to contribute to the fight against the COVID-19 infodemic.

 

Stacy was warmly welcomed to the Himmelfarb Library team three months ago. Like many of you, she isn’t sure whether those are short or long months - her perception of time seems to have been affected by the pandemic.

As a member of the Himmelfarb team, she supports the work of clinicians and researchers by conducting literature searches, compiling resources for the weekly Intelligence Reports, and maintaining the COVID-19 Research Guide.

Keeping up-to-date with, and searching for, COVID-19 literature requires some creativity. The research is coming out in torrents. New publications are posted on preprint servers and publisher websites, then picked up on Twitter and by the news media before the research community has had the opportunity to evaluate them. The quality of research described in scholarly articles is variable. Original research is showing up in Commentaries and Editorials for rapid dissemination. The delay between publication and appearance on PubMed and other databases is becoming more apparent and more critical.  The norms of scholarly communication and publishing are being challenged in a big way.

Finding and evaluating the evidence to support evidence-based medicine is more difficult when it comes to COVID.

Which is why, when Stacy saw the call on the Medical Library Association’s listserv to support the global response by indexing COVID-19 research publications, she signed up. She hoped that, by applying topic tags to articles, she, in her small way, could make the evidence more findable and usable to the global audience of responders, clinicians, and researchers.

As she hit the Reply button, she didn’t know that she would become co-lead of the Librarian Reserve Corps. She hadn’t yet met her co-lead, Sara Loree, a medical librarian at St. Luke’s Health System in Idaho, or the visionary Librarian Reserve Corps founder, Elaine Hicks, Research, Education and Public Health Librarian at Tulane University.

Hicks, reflecting on her own professional experience in public health and emergency preparedness, recognized that the need of Dr. Lina Moses, Tulane University epidemiologist and GOARN (Global Outbreak Alert and Response Network) research lead, was one no librarian could meet alone. Members of GOARN, a WHO network of 250-plus agencies, institutes, and universities organized to respond to outbreaks, need the literature to support evidence-based public health response efforts. As described above, that evidence is hard to find in an infodemic. For evidence-finding at a global scale, you need an international army of librarians. Hicks, seeing this and dreaming of just such an army, put out a call to the Medical Library Association listserv. The newly formed Librarian Reserve Corps, modeled after the Medical Reserve Corps, supports evidence-based response efforts by providing resources for evidence-based public health.

The initial efforts to tag articles quickly grew - not only because the number of COVID-19 research articles has grown but also because we have learned more about the skills and expertise of our volunteers and the needs of GOARN-Research! Librarian Reserve Corps volunteers continue to index articles on a daily basis and have since expanded their services. Volunteers now conduct literature searches and monitor the media. They work to connect groups working on systematic reviews and meta-analyses.

Librarians have the skills needed to fight the infodemic and help our public health and medical professionals fight the pandemic.

Stacy recognizes that many of the Librarian Reserve Corps’s volunteers contribute to the infodemic-pandemic response at this global level and at the local level. They provide search support for clinicians and researchers. They help students and faculty access the library resources they need to continue their work virtually. They help professors transition to online instruction. As librarians, Stacy acknowledges, we are often in behind-the-scenes roles. She is honored to be part of this amazing, talented, dedicated team of volunteers making librarians famous.