
Primary care providers who have attended Children's National Hospital's Pediatric Health Network updates know that I have consistently advised against ordering SARS-CoV-2 antibody tests for individual patient use; the state of knowledge about the accuracy and interpretation of these tests is not sufficient to give any advice to an individual patient based on the result. At this point, antibody testing is purely a research and epidemiologic tool. Now we have a new meta-analysis that examines the current state of knowledge at a very detailed level.
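To see why a single antibody result is so hard to act on, it helps to run the numbers with Bayes' rule. The sketch below is purely illustrative - the sensitivity, specificity, and prevalence figures are hypothetical, not drawn from the Cochrane review - but it shows how even a seemingly accurate test yields mostly false positives when prevalence is low:

```python
# Illustrative only: Bayes' rule for interpreting a single antibody result.
# The sensitivity, specificity, and prevalence values below are hypothetical,
# NOT taken from the Cochrane review or any specific assay.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive test reflects true prior infection."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A seemingly accurate test (90% sensitive, 95% specific) applied in a
# population where only 2% have actually been infected:
ppv = positive_predictive_value(0.90, 0.95, 0.02)
print(f"PPV at 2% prevalence: {ppv:.0%}")  # roughly 27%
```

Under these assumed numbers, roughly three out of four positive results would be false positives, which is exactly why a result in an individual patient is so hard to interpret outside of research and epidemiologic settings.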

The study is published in the Cochrane Database of Systematic Reviews, hands-down the highest-quality source for systematic reviews. If you refer back to my recent post on the "evidence pyramid," you'll see that systematic reviews sit at the top. That's not to say that any systematic review, including a Cochrane review, is the final word on a subject; in fact, sometimes the reviews are so strict that useful publications are excluded because they don't meet the pre-specified standards for trial design and quality. Not infrequently, Cochrane reviews reach no conclusion for lack of high-quality studies, but as clinicians we know we still must make management decisions in many settings where high-quality studies are not available to guide us.

This particular review is more than 300 pages, but rest assured I'm just going to give you the highlights here. Here are some key points to consider when digesting the findings:

  • The review only covers publications through April 27, 2020. However, the authors do plan frequent updates, an advantage of an online journal like the Cochrane Database.
  • They include "preprints" among the studies analyzed. These are the non-peer-reviewed submissions to online sites like medRxiv that I cautioned against in my prior posting. In this case, however, vetting by the Cochrane reviewers is somewhat like the typical journal peer review process, so I'll give them a pass on including these preprints. It is important to note that preprints comprised about half of all the studies included in the meta-analysis.
  • Overall, they identified 1430 studies to screen, using a detailed search strategy. Applying their predetermined content and quality criteria, they first distilled that down to 266 studies to examine in detail. Of those, only 57 reports of 54 studies met the final quality and content criteria for inclusion in the meta-analysis. This degree of whittling down is not unusual in the very broad search needed for a meta-analysis. Studies are excluded for a variety of reasons, including not only problems with study design but also failure to provide enough detail to assess the study's conclusions.

I include the diagram below from the report to illustrate how quality criteria are summarized from the 57 included reports. Here, "green is good." So, you can see that even among the studies that passed the criteria for inclusion, most of them had significant problems.

The authors had several key conclusions:

  • Most of the studies included only hospitalized patients, which could lead to some bias by studying patients at the more severe end of the disease spectrum. We don't know if the results could apply to those with asymptomatic or mild disease.
  • Antibody testing in general seemed less reliable early after onset of symptoms, which is true for most infections; it takes time for antibody production to develop after infection. However, most studies did not follow patients for more than about a month after symptom onset, leaving it unknown how long antibodies persist after infection.
  • Overall, the studies involved a multitude of different assays, each with potentially different sensitivity and specificity, making any broad conclusions more difficult.

Like all Cochrane reviews, the authors included a "Plain Language Summary" of the results, and I think it's helpful to see their bottom line for implications: "...antibody tests could have a useful role in detecting if someone has COVID-19, but the timing of when the tests are used is important.... The tests are better at detecting COVID-19 in people two or more weeks after their symptoms started, but we do not know how well they work more than five weeks after symptoms started... Further research is needed into the use of antibody tests in people recovering from COVID-19 infection, and in people who have experienced mild symptoms or who never experienced symptoms."

I would add to this that the overwhelming majority of patients studied were adults; we don't know much about the pediatric population.

Caveat emptor.


It's always been hard to keep up with the medical literature, especially to figure out which original articles are of high enough quality to warrant a change in your clinical practice. It's not enough just to read the abstract, or to be reassured because the authors are from a reputable institution or the article appears in a reputable journal. I've been teaching Evidence-Based Medicine (EBM) in various formats for over 20 years, including a full graduate school course for a while. I've learned a lot, both from reading and from my students and colleagues, about how to sort through the jungle of words and diagrams in medical articles to pick out those rare pearls of good information.

EBM officially came into being in the early 1990s and, like most things, it has evolved. What hasn't changed much, however, are the forces that result in low quality evidence being published and advertised:

  • Pressure on researchers to "publish or perish." This not only involves job security and academic promotion but also a natural desire to make a name for oneself.
  • Pressure from academic institutions on their researchers to "hype" their studies, in hopes of improving organizational rankings in national publications and increasing charitable donations.
  • Complicity from the lay press, eager to describe a new study in breathtaking fashion even if it has no direct relevance to clinical practice or to improving the lives of their viewers and readers.
  • Efforts from commercial organizations, such as pharmaceutical companies, test developers, and device manufacturers, to sell their products.
  • Predatory journals that will publish anything for a price. (One "gotcha" study showed how one of these journals published a report taken straight from the pages of a "Seinfeld" script - clearly bogus, and obviously published without any editorial review.)
  • Failure of the medical community as a whole to convey the inherent uncertainty in medical science - very few things are absolute "facts."

All of this has only gotten worse in the pandemic era. Individual clinicians, researchers, and organizations seem bent on being the first to report the newest COVID finding, and publishers and the lay press are anxious to help them. Unfortunately, things have moved too fast. Just recently, three major journals (New England Journal of Medicine, Lancet, and Annals of Internal Medicine) retracted publications due to, in my opinion, sloppy editing - plain rookie mistakes likely due to being in too much of a rush. (As I'm writing this, I heard about a potential new retraction in Proceedings of the National Academy of Sciences regarding the mode of transmission of SARS-CoV-2.) It is even harder now for those of us at the point of care to sift through the onslaught of poor science to find the truly helpful articles. However, there is still hope, and here are some quick guides to survival in the Pandemic Era of Medical Practice (PEMP, an acronym I just made up).

The image above is one I've used many times, most recently at a talk I gave at the AAP NCE meeting last fall. It is my version of the "evidence pyramid," a hierarchy of studies much misunderstood by the general medical public. Simply explained, results utilizing the study design types at the lower end of the pyramid are more likely to be shown to be wrong when subsequent studies, usually from a higher design type in the pyramid, are performed. Also, note that pure bench studies and animal studies aren't even part of the pyramid; those studies would not immediately impact clinical (human) medical practice. Also be aware that a poorly-designed randomized controlled trial (RCT) wouldn't be near the top of the pyramid; bad science can occur at all levels and trumps the pyramid ranking.

The vast majority of design types we are seeing related to COVID-19 are case series, i.e., reports of what was tried and what happened, usually retrospective in nature. It's not that these studies are bad, but compared to a randomized, placebo-controlled, double-blind trial of a new therapy, they just don't stand up. The gap between the lower and upper ends of the pyramid is magnified when we are dealing with a completely new disease like COVID-19.

(BTW, if you are wondering about GOBSAT, I wish I had invented the acronym but I didn't. It stands for Good Ol' Boys Sittin' Around a Table, another word for expert opinion. Again, if that is all we have to go on, I'm certainly interested in what experts think, but it's astonishing over the years how often GOBSAT opinions are reversed when better studies are performed.)

So, here's my quick-and-dirty approach to keeping up with the flood of medical studies. First, I look at the abstract. If it sounds like something worth reading further, I go immediately to the Methods section of the article. Yes, I know that section is the most painful of all, but that's where I figure out the study design and whether the study has critical flaws that would affect its results. Also, in spite of modern-day editing, even the best journals still allow conclusions to appear in the abstract that aren't supported by the study itself; usually these are just the authors' conjectures, but they aren't labelled as such. If the Methods section doesn't pass muster, I don't read the rest of the article. If, however, the Methods look reasonably sound (remember, this is biology; we can't expect perfection in any study), I look through the results and discussion to see whether the findings would apply to my patients.

One more point that has surfaced only during PEMP. I'm seeing increased alerts about manuscripts submitted to pre-publication web sites. Prior to the pandemic, these were sites where authors posted data to be examined by other scientists; the work was not necessarily even submitted to a journal. They were simply a way to increase transparency, and actually a good thing. One key fact: these documents have not undergone any peer review at all. Unfortunately, many authors are now posting results of case series and the like to these sites, and the lay press and even otherwise sound academicians are referring to these as "publications" when in fact they are nothing of the sort. As a reviewer for many medical journals and author of a few scientific articles, I can tell you that most articles submitted for publication undergo many, many significant changes before publication. I wouldn't advise clinicians even to look at these postings; they are useful only if someone is trying to design a research study on a similar topic. Some of the web sites include medrxiv.org and biorxiv.org. Again, there is nothing wrong with these sites other than how they are currently being misused by a few individuals.

So, I would advise you all not to be too discouraged by the confusion and flood of information. Listen to the lay press so you know what your patients and families are hearing, read the key articles, and be prepared to answer questions in your practice.