Boosting COVID-19 related behavioral science by feeding and consulting an eclectic knowledge base

So much stuff

If there were ever any doubt that there could be too much of a good thing in science, we are currently witnessing a very clear demonstration of it, both in science in general and in the behavioral sciences in particular. Stating the obvious: there is just so much COVID-19 related information emerging every single day.

And I’m not only referring to peer-reviewed articles and the many, many preprints. I’m also referring to

  • already published data sets,
  • studies in progress,
  • calls for collaborating in emerging studies,
  • reports from governments, think tanks, and NGOs,
  • newspaper articles and opinion pieces,
  • static and interactive visualizations of results and models,
  • blog posts by researchers, policymakers, and others,
  • videos,
  • webinars organized by professional scientific associations and other institutions,
  • discussions in publicly accessible mailing lists of professional societies,
  • Twitter threads on new findings and their critical discussion, and
  • more things I wasn’t even considering. (Please add your ideas on further information sources in the comments below. Thank you very much!)

Boosting COVID-19 related behavioral science by feeding and consulting an eclectic knowledge base

Imagine you had access to an up-to-date knowledge base that would cover all these eclectic information sources as they emerge. Wouldn’t that be really helpful? Instead, you are faced with the current jungle of information.

To deal with the complex matter that is COVID-19, we argue that researchers, policymakers, and other stakeholders need a curated, even if not yet fully vetted, overview of this emerging knowledge. The reason we think this knowledge base should be eclectic, that is, should include more than just peer-reviewed articles and preprints, is this: to avoid many of the pitfalls of behavioral-science (and other) research on COVID-19 (see Ulrike Hahn’s previous post for an overview), researchers, policymakers, and other stakeholders need to know “what’s out there”: the Good, the Bad, and the Ugly.

Here are a few use cases to ponder:

  1. Researchers, policymakers, and journalists need help in critically assessing emerging research (as published in preprints or other non-peer-reviewed formats). Most of the publicly accessible critical discussion of emerging research seems to happen on Twitter, in blog posts, and in other ad hoc outlets. Even preprints of replies to already published research (or to other preprints) appear too slowly, or not often enough, to serve this purpose.
  2. To avoid duplication of effort, researchers need to know which studies have already been run, are currently underway, are being planned, or are looking for more collaborators.
  3. When planning new studies, researchers should profit from the good, innovative ideas floating around and avoid mistakes that have already been made by learning how to circumvent them in the first place. Again, most such ideas and discussions currently emerge on Twitter and other social media platforms, so considering only preprints, let alone published papers, is not enough to stay up to date.
  4. Meta-researchers need to know which other initiatives are already running (e.g., tracking the fate of preprints), so they can avoid duplication or join forces.

Read the full article on The Psychonomic Society