Last night we had our first (virtual) meeting of the new season for the Montgomery County Pediatric Society; it was great to "see" everyone, even in the online sense. As I mentioned at the time, I was planning to post a brief comment about a systematic review of PCR and antibody testing for SARS-CoV-2, not necessarily because it is so earth-shattering but because it is a nice summary of the current state of the art and a reminder of the difficulties of test interpretation.

The article is in a journal probably none of you have ever come across, BMJ Evidence-Based Medicine. However, EBM has been dear to my heart since before the term was invented in the early 1990s. The authors performed a detailed review of publications to try to synthesize evidence on the diagnostic accuracy of all known tests for SARS-CoV-2 and arrive at some conclusions about the clinical effectiveness of this testing. An important caveat, though: the search is current only up to May 4, 2020, so it does not include the newer antigen tests or any data published subsequently. Their methodology is sound, though complex, and the basic conclusions are still accurate today.

For detection of the virus, which in practice means PCR tests, the authors found an overall sensitivity of 87.8% (95% confidence interval 81.5% to 92.2%). In the strictly PCR studies, all the patients were diagnosed with COVID-19, so there was no good way to estimate specificity. The sensitivity might sound good, but its implications depend on the clinical situation you are dealing with; note also that the studies for the most part did not include asymptomatic or pre-symptomatic patients.
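To see why even a "good" sensitivity depends on the clinical situation, here is a minimal Bayes' theorem sketch of the negative predictive value of a test like PCR. The sensitivity is the pooled 87.8% from the review; the 98% specificity is purely an illustrative assumption (the review could not estimate specificity), and the pretest probabilities are hypothetical scenarios, not data from the review.

```python
def neg_predictive_value(sens, spec, pretest):
    """Probability a patient with a negative result is truly uninfected (Bayes' theorem)."""
    false_neg = (1 - sens) * pretest      # infected patients who test negative
    true_neg = spec * (1 - pretest)       # uninfected patients who test negative
    return true_neg / (true_neg + false_neg)

# Pooled PCR sensitivity from the review; 98% specificity is an assumption.
for pretest in (0.05, 0.30, 0.70):
    npv = neg_predictive_value(0.878, 0.98, pretest)
    print(f"pretest probability {pretest:.0%}: P(no infection | negative test) = {npv:.1%}")
```

The point of the sketch is that a negative result is reassuring in a low-prevalence screening setting but much less so in a patient with a high pretest probability of disease, which is exactly why stage of illness and clinical context matter when interpreting an individual's result.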

A subset of viral detection methods used isothermal amplification assays with PCR as a reference standard, so here it was possible to estimate accuracy. The sensitivities and specificities ranged from 74.7% to 100% and 87.7% to 100%, respectively, but because of inconsistencies among the various studies the authors did not feel it was valid to pool the results into a single estimate.

The results for the antibody studies were more problematic. Of the 10 studies the authors felt had sufficient information to calculate sensitivity and specificity, sensitivity ranged from 18.4% to 96.1% and specificity from 88.9% to 100%. Needless to say, the sensitivity results in particular aren't very encouraging. Antibody testing continues to be solely a research and epidemiologic aid; beware any use for single patient decision-making.
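To put those sensitivity extremes in concrete terms, this short sketch shows how many infected patients an antibody test would miss at each end of the reported range. The cohort of 100 infected patients is hypothetical; only the two sensitivity figures come from the review.

```python
def missed_cases(sensitivity, n_infected):
    """Expected number of false negatives among truly infected patients."""
    return round((1 - sensitivity) * n_infected)

# Sensitivity extremes reported across the 10 antibody studies in the review
for sens in (0.184, 0.961):
    missed = missed_cases(sens, 100)
    print(f"sensitivity {sens:.1%}: ~{missed} of 100 infected patients would test antibody-negative")
```

At the low end, roughly 4 out of 5 truly infected patients would be labeled antibody-negative, which is why a result from an unvalidated assay cannot support decisions for an individual patient.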

The take-home point of all of this is that we have a long way to go before we are able to make optimal use of SARS-CoV-2 testing for clinical decision making. False negatives in general are uncommon for PCR and related tests, but remember that stage of illness and technique of specimen collection are important determinants of results and need to be considered in interpreting tests for individuals.

I'd be surprised if any of this blog's readers would catch the reference to TW3, an abbreviation for a short-lived television series, That Was The Week That Was, which first appeared in England on the BBC and then came to the US. It was a bit of a parody of current events, and I'm not even sure how I still remember it, except that its writing crew included one of my favorite children's book authors, Roald Dahl, as well as a couple of future members of the Monty Python comedy group. That obscure reference aside, this past Thursday I was planning to post a list of 3 new items of interest regarding our current pandemic; then, our President developed COVID-19 disease.

At this point we should all be pleased that he appears to be stable, but I wish we had more details of how contact tracing and quarantining are going for all the possible contacts. By now (Sunday, October 4) all contacts should have been notified and plans for testing, quarantine, and/or isolation should be complete. Regardless, those original 3 items are still worthy of mention.

The Children's National Hospital-National Institute of Allergy and Infectious Diseases 3rd Annual Symposium. This event was held on September 29 and was initially established to build on collaboration in research and education between CNH and NIAID. This year the symposium focused on COVID-19, with quite a lineup of speakers including Dr. Anthony Fauci from NIAID, Dr. Peter Hotez from Baylor College of Medicine (and formerly from GWU), and Dr. Ezekiel Emanuel from U. Pennsylvania. The presentations were a mix of general and clinical talks along with basic science updates, particularly in immunology. Anyone can access the entire day's session at a CNH link. I highly recommend the above 3 speakers and also the question-and-answer periods between the different sessions.

National Academies of Sciences, Engineering, and Medicine Framework for Equitable Allocation of COVID-19 Vaccine. On October 2 NASEM held a webinar discussing their 237-page report, long awaited by many of us who have been following the progress of this very important advisory committee. Anyone can download a digital copy of the report at no cost. The committee was co-chaired by Dr. William Foege, a former director of the CDC, and Dr. Helene Gayle, who has held many key international healthcare positions at the Bill and Melinda Gates Foundation and then as head of CARE. (I might add she is a graduate of CNH's pediatric residency, though before my time there.)

I was very impressed by the thoughtfulness, breadth, and depth that went into the report. This group actually completed the entire plan in about 2 months, an amazing feat. Of course the report is a lot to get through, but if you want to browse, look at page 86 of the PDF document about the allocation framework, the Table on page 90, the discussion beginning on page 92, and the graphic below from page 94. The exact plans for implementation will depend primarily on the Advisory Committee on Immunization Practices, NIH, and other agencies, and on when/if various vaccines are approved. Clearly it will be a very complex undertaking, but I am very much in agreement with the group's foundational principles and plan. I hope to have time to describe the plan in slightly more detail at the next Montgomery County Pediatric Society meeting on October 12.

"The Carnage of Substandard Research During the COVID-19 Pandemic." This is a direct quote from the title of an article published in BMJ last week, quite an eye-catcher! Despite the sensational title, it's an excellent discussion of the difficulties of interpreting medical research and reports in the pandemic era, although all of these problems existed previously. The author is a bioethicist, and she highlights some key issues, including the number of retractions or withdrawals of articles, the large number of studies published only on pre-print websites that do not undergo peer review, and overall substandard research methods, perhaps fueled by the urgency of the pandemic but resulting in hasty conclusions. It's a short article; take time to read it and decide where you stand.

Of course the Internet is full of information, sadly most of it unreliable. The pandemic has amplified the problem, with mounting concerns that previously reliable sources might be tainted by political and social issues that obscure scientific data. Below I list 3 sites that I think are generally useful and reliable resources for clinicians. There are many more sites that I check regularly, but it can get overwhelming to keep up with multiple data sources.

  1. New York Times graphs - These interactive graphs demonstrate important changes across the US. Be prepared to put up with pleas to subscribe to the NYT and other unwanted advertisements. See https://www.nytimes.com/interactive/2020/04/23/upshot/five-ways-to-monitor-coronavirus-outbreak-us.html.
  2. American Academy of Pediatrics/Children's Hospital Association COVID-19 Report - I call this a one-stop shopping site for pediatric healthcare providers. You can access individual state data regarding pediatric infections, updated every 2 weeks, plus link to state and regional health departments, among many other sites. Try it out at https://services.aap.org/en/pages/2019-novel-coronavirus-covid-19-infections/children-and-covid-19-state-level-data-report/.
  3. Centers for Disease Control and Prevention - Yes, the CDC has taken a hit recently, under fire for undue political influence, internal plotting, and who knows what else. Still, they have a lot of great information both for clinicians and for the public. If you feel some of the advice doesn't make sense and you are worried about accepting it, send me a comment through the Blog and I'll look into it. Access the CDC COVID-19 site at https://www.cdc.gov/coronavirus/2019-ncov/index.html.

This question was submitted by a primary care pediatrician during my last live webinar update to Children's National Hospital's Pediatric Health Network (PHN) on September 1, 2020. (You can view the entire presentation if you want, about an hour.)

That was a pretty daunting question to answer on the fly, and the answer becomes more difficult as time goes on. Some of you may have seen the story by Lena Sun in the Washington Post on September 13 (online on September 12) detailing some political squabbling about publications in the CDC's Morbidity and Mortality Weekly Report (MMWR). Some in the Department of Health and Human Services apparently have felt for some time that MMWR reports are biased against the current administration and that such reports are sometimes politically motivated rather than purely scientifically driven.

I have subscribed to MMWR for as long as I can remember; I receive the weekly reports electronically as well as any advance reports ahead of publication. The Post commentary focused on a report from July of extensive SARS-CoV-2 spread within a summer camp in Georgia. The commentary didn't mention another MMWR description, published in August, of a good camp outcome in Rhode Island, where the camp administrators adhered more stringently to infection control measures. I happened to mention both reports in my September 1 PHN presentation because I thought they very effectively showed what preventive health measures worked and what didn't work in camp settings. These experiences could offer some important lessons to apply in other settings such as schools.

In the midst of all this squabbling it may help to understand how MMWR features differ from traditional original articles in medical journals. The MMWR discussions are relatively brief and generally do not go into great detail explaining methodology, yet methodology is key to assessing the validity of any study's results. Sometimes the MMWR features end up being published in more traditional journals with more details provided. Having said this, in general I have found the MMWR to have very useful information, with only rare instances where findings subsequently were found to be in error. (One instance of error I remember very well was a published association of a household mold, Stachybotrys chartarum, with cases of pulmonary hemosiderosis in infants in Cleveland. The methodology was flawed, and the CDC retracted the findings a few years later.)

Of course we also are challenged by potential political influences at the US Food and Drug Administration (FDA), already involving hydroxychloroquine, convalescent plasma, and now COVID-19 vaccines. I do need to mention a potential conflict of interest with the FDA: we have had a joint pediatric infectious diseases fellowship with the FDA since the late 1980s, highly successful, and I know many of the scientists and physicians at the FDA now working on SARS-CoV-2 antiviral, biologic, and vaccine evaluations. Of course I know nothing about the details of their deliberations, but I do feel comfortable knowing the rank-and-file personnel there will make their recommendations based on science. It remains to be seen whether those recommendations change based on political considerations, but as long as the process and the findings are transparent, we can all reach our own conclusions and provide recommendations to our patients and families.

So, to answer the original question posed to me, I do trust both the CDC and FDA to give us fair and accurate information about SARS-CoV-2, just as they have done in the past through many other outbreaks.

Primary care providers who have attended Children's National Hospital's Pediatric Health Network updates know that I have consistently advised against ordering SARS-CoV-2 antibody tests for individual patient use; the state of knowledge about the accuracy and interpretation of such tests is not sufficient to guide decisions for individual patients based on the result. It is purely a research and epidemiologic tool at this point. Now, we have a new meta-analysis that examines the current state of knowledge at a very detailed level.

The study is published in the Cochrane Database of Systematic Reviews, hands-down the highest quality source for systematic reviews. If you refer back to my recent post on the "evidence pyramid," you'll see that Systematic Reviews are at the top. That's not to say that any systematic review, including a Cochrane review, is the final word on the subject; in fact, sometimes the reviews are so strict that some useful publications are excluded because they don't meet the pre-determined trial design level of quality. Not infrequently, Cochrane reviews come to no conclusion due to lack of high-quality studies, but we know as clinicians we still must make management decisions in many settings where we lack high-quality studies to guide us.

This particular review is more than 300 pages, but rest assured I'm just going to give you the highlights here. Here are some key points to consider when digesting the findings:

  • The review only covers publications through April 27, 2020. However, the authors do plan frequent updates, an advantage of an online journal like the Cochrane Database.
  • They include "preprints" in their analysis. These are the non-peer-reviewed submissions to online sites like medRxiv that I cautioned against in my prior posting. However, in this case inclusion in the Cochrane review subjects them to scrutiny much like the typical journal peer review process, so I'll give them a pass on including these preprints. It is important to note, however, that preprints comprised about half of all the studies included in the meta-analysis.
  • Overall, they identified 1430 studies to screen, using a detailed search strategy. On further analysis using their predetermined content and quality criteria, they distilled that down first to 266 studies to examine in detail. Of those, only 57 reports of 54 studies met the final quality and content criteria to be included in the meta-analysis. This degree of whittling down is not unusual in the very broad search needed for a meta-analysis. Studies are excluded for a variety of reasons, including not only problems with study design but also failure to provide enough detail to assess the study's conclusions.

I include the diagram below from the report to illustrate how the quality criteria are summarized across the 57 included reports. Here, "green is good." So, you can see that even among the studies that passed the criteria for inclusion, most had significant problems.

The authors had several key conclusions:

  • Most of the studies included only hospitalized patients, which could lead to some bias by studying patients at the more severe end of the disease spectrum. We don't know if the results could apply to those with asymptomatic or mild disease.
  • Antibody testing in general seemed to have lower reliability early after onset of symptoms, which is true for most infections; it takes time for antibody production to develop after infection. However, most studies did not follow patients for more than about a month after onset of symptoms, leaving it unknown how long antibodies persist after infection.
  • Overall the studies involved a multitude of different assays, each possibly different in terms of sensitivity and specificity, making any broad conclusions more difficult.

Like all Cochrane reviews, the authors included a "Plain Language Summary" of the results, and I think it's helpful to see their bottom line for implications: "...antibody tests could have a useful role in detecting if someone has COVID-19, but the timing of when the tests are used is important.... The tests are better at detecting COVID-19 in people two or more weeks after their symptoms started, but we do not know how well they work more than five weeks after symptoms started... Further research is needed into the use of antibody tests in people recovering from COVID-19 infection, and in people who have experienced mild symptoms or who never experienced symptoms."

I would add to this that the overwhelming majority of patients studied were adults; we don't know much about the pediatric population.

Caveat emptor.