Welcome to the walkthrough of the COVID-19: Antivirals demo Nest (open in your original tab). In this walkthrough, we'll explain the core functionalities of Nested Knowledge through this Nest. We encourage you to work through the Nest as you follow the walkthrough. The Nest available to you is a copy of the original and may be freely modified, so roll up your sleeves and get your hands dirty!
This Nest is a copy of a previously-completed review presenting the evidence on the safety and efficacy of antivirals with randomized controlled trial (RCT) evidence in the treatment of COVID-19, as of January 2022.
You've landed on your demo Nest in AutoLit, and you're looking at the Nest Home page. This page includes a menu on the left of the page, the protocol in the center, and discussion about the Nest on the right. The menu includes links to all modules & configurations available to you in AutoLit. We'll now walk through these modules one by one. (Click a title in the menu to navigate to the corresponding module.)
The Literature Search page allows import of studies into a nest and shows where studies were sourced. This review includes two searches: an API-based (automatic integration) search of PubMed and a file-based import from Embase. Hover over and click the “History and Details” column to see greater detail about the searches, including when they were run and their query structure. The PubMed search is API-based and may be run on demand. Hover over the PubMed row and click the “Run” button to update this search; you may import some new records!
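Under the hood, an API-based PubMed search amounts to a query against NCBI's public E-utilities endpoint. The sketch below is not Nested Knowledge's actual implementation, just a minimal illustration of how such a search URL can be constructed (the query string is a hypothetical, simplified version of what a COVID-19 antivirals Nest might use):

```python
from urllib.parse import urlencode

# NCBI E-utilities endpoint for PubMed searches (public, documented API).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_esearch_url(query: str, retmax: int = 100) -> str:
    """Build an esearch URL that returns matching PMIDs as JSON."""
    params = {"db": "pubmed", "term": query, "retmode": "json", "retmax": retmax}
    return f"{ESEARCH}?{urlencode(params)}"

# Hypothetical query for illustration only.
url = build_esearch_url("(COVID-19) AND (antiviral) AND (randomized controlled trial[pt])")
print(url)
```

Fetching that URL returns a JSON list of PMIDs, which a platform like Nested Knowledge can then resolve into full bibliographic records.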
Records may be imported through other means. Click the “Other Sources” menu item under “Literature Search” to view records that were individually added as expert recommendations. 19 such studies were imported into this Nest. Try importing the DOI or PMID of your favorite study using the “Add by Identifier” form on the right of the page.
Once studies are imported into a nest, they are “Screened” for relevance to the review in the Screening Module. Click the Screening menu header to visit this module.
This screening module displays studies that have yet to be screened, allowing you to decide whether to include or exclude them from the rest of your review and analysis. So far in our review, 91 studies have been screened and 16 included. Try including a reference by clicking the include button. Exclude a reference by selecting an exclusion reason from the drop-down menu and then clicking the exclude button. You may also skip studies you aren't yet sure about, or jump to a prior study, using the buttons under the Navigation menu.
Why are study abstracts so colorful? We perform ML-based PICO annotation of abstracts using a model derived from RobotReviewer. To turn off PICO highlighting, toggle off the slide button in the legend just beneath the abstract text.
Abstract text may also be underlined with User Keywords, which are configured under the Settings menu item.
The Tagging module allows included studies to be categorized according to their characteristics, such as design, population, outcomes, etc. Nested Knowledge uses hierarchical tags to describe characteristics.
Click the “Configure Study Tags” menu item to get started. Tag hierarchies consist of tags (visualized as points) and relationships between them (visualized as connecting lines). The tag hierarchy in this review includes 7 “root” tags - the highest level categories we're considering in the review. Hierarchies should be created and read as a series of “is a” relationships. For example, “Adverse Event” is an “Outcome”, and “Septic Shock” is an “Adverse Event”. Hover around the hierarchy to explore tags and read off the “is a” relationships as you go.
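One way to picture a tag hierarchy is as a map from each tag to its parent, where every entry encodes one “is a” relationship. This is a simplified, hypothetical model for illustration, not the platform's internal representation:

```python
# Each tag maps to its parent tag; None marks a root tag.
PARENT = {
    "Outcome": None,                  # root tag
    "Adverse Event": "Outcome",       # "Adverse Event" is an "Outcome"
    "Septic Shock": "Adverse Event",  # "Septic Shock" is an "Adverse Event"
}

def lineage(tag):
    """Walk the 'is a' chain from a tag up to its root."""
    chain = [tag]
    while PARENT[chain[-1]] is not None:
        chain.append(PARENT[chain[-1]])
    return chain

print(lineage("Septic Shock"))  # ['Septic Shock', 'Adverse Event', 'Outcome']
```

Reading the chain left to right recovers exactly the “is a” relationships you trace when hovering through the hierarchy.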
Inside the Tagging module, tags may be applied to studies, indicating that a concept is relevant to a study.
In the Tagging form, select any tag from the dropdown menu, then click Apply Tag; it should now appear in the Tagging Table.
Click a row in the Tagging table that has a non-empty excerpt column to view past applied tags and their “excerpts”: user-entered pieces of text, typically extracted from the manuscript, that support the tag.
Study Inspector is the tool in AutoLit for reviewing and searching your past extracted data. Each row in Study Inspector is a study, and columns may be user-selected in the upper left dropdown menu. The table may be searched by creating Filters. Filters may be created using the Add Filter dropdown menu, but the typeahead search bar is often fastest. In the below example, we are filtering to studies with a full text uploaded and using the typeahead menu to find all studies tagged with Mortality. Try out the title/abstract (TIAB) filter by typing “Lopinavir” into the search bar.
Please see our Extraction Documentation page to review how Extraction was configured for this Nest. Click the Extraction menu item to view and perform Extraction for this review.
The Study Design form specifies intervention arms in the study (Standard of care and 2 different Remdesivir dosages, in this case) as well as outcome measurement timepoints in the study (0 and 11 days).
The Extracted Data form contains means, medians, dichotomous rates, and categorical counts corresponding to baseline characteristics and outcomes for the study. Modify some of the data points, which will be auto-saved. If you enter incomplete or invalid data (e.g. a negative value for N), the leading Status column of the table will show a red X. Hover to view the error message.
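The red-X behavior is a form of per-cell validation. The sketch below is a hypothetical approximation of checks like these for a dichotomous outcome (event count out of N), not the form's actual validation logic:

```python
def validate_dichotomous(events, total):
    """Return a list of error messages (empty means the row is valid)."""
    errors = []
    if total is None or total < 0:
        errors.append("N must be a non-negative count")
    if events is not None and events < 0:
        errors.append("event count must be non-negative")
    if events is not None and total is not None and events > total:
        errors.append("events cannot exceed N")
    return errors

print(validate_dichotomous(-3, 20))   # a negative count fails validation
print(validate_dichotomous(5, 20))    # a valid entry returns no errors
```

In the real form, any such error surfaces as the red X in the Status column, with the message shown on hover.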
At this point, we've reviewed all the evidence gathered in AutoLit for the COVID-19 Antivirals Nest. Now let's navigate to Synthesis Home to draw some conclusions from our evidence, by clicking the Synthesis menu heading.
Click the PRISMA button in the bottom left of the page to view a PRISMA 2020 flow diagram. The diagram is auto-populated based on searches imported and studies screened in AutoLit.
We can see that the 2 searches and 17 expert recommendations (19 minus 2 duplicate records already imported by the searches) are displayed in the diagram. The diagram may be right clicked and saved as an arbitrary-resolution SVG or exported in a variety of formats.
Navigate back to Synthesis Home and click the Qualitative Synthesis box. Qualitative Synthesis (QLS) displays data gathered in the Tagging Module. Each slice in the sunburst diagram is a tag. Its width corresponds to how frequently it was applied. Its distance from the center corresponds to its depth in the hierarchy (how many “is a” relationships are between it and its root tag). Click a slice to filter studies displayed to those where the tag was applied. Clicking multiple slices filters to studies with all the selected tags applied. The rightmost bar shows relevant studies (bottom) and some data about the tag (top), like its frequency, excerpts, and tags that were commonly applied with the selected tag.
In this tag selection, we see that Mortality and Length of Stay were reported as outcomes in 7 of 16 included studies. Click the rows of the study table to take a deep dive into the extracted data.
Navigate back to Synthesis Home and click the Quantitative Synthesis box. Quantitative Synthesis (QNS) displays data gathered in the Extraction Module. QNS contains 3 different analyses automatically computed from extracted data.
The Summary tab contains pooled estimates of outcomes, broken out by interventions. Interventions may be expanded to different levels of precision, while outcomes analyzed may be selected from the dropdown menus. In the below example, we find a 7.3% mortality rate among all antivirals, against an 11.6% mortality rate for control/standard of care; Arbidol suggests a lower rate but is only supported by a single study.
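To build intuition for what a pooled estimate means, here is a deliberately naive sketch: pooling dichotomous outcomes by summing events and totals across arms. The per-study numbers are made up for illustration, and the actual QNS computation may weight studies differently (e.g. with a random-effects model):

```python
def pooled_rate(studies):
    """Naively pool dichotomous outcomes: total events / total N."""
    events = sum(e for e, n in studies)
    total = sum(n for e, n in studies)
    return events / total

# Hypothetical per-arm (deaths, N) data for illustration only.
antiviral_arms = [(3, 50), (5, 80), (2, 44)]
print(f"pooled mortality: {pooled_rate(antiviral_arms):.1%}")
```

Comparing the pooled rate for an intervention against the pooled rate for control/standard of care is the simplest version of the comparison the Summary tab presents.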
The NMA tab computes a Network Meta-Analysis, which estimates effect sizes between pairwise comparisons of interventions on an outcome. The NMA comes with a network diagram (showing how commonly interventions were compared with one another), an effect size matrix, and forest plots (accessed by clicking on a cell in the effects matrix). Use the intervention expansion menu on the right of the page to refine interventions analyzed.
You've now seen how a review may be completed & shared with the Nested Knowledge platform. We encourage you to head back to AutoLit and explore the variety of configuration options and the ever-growing feature set we didn't get to cover here. If you're feeling ambitious, start your own Nest from scratch!
Use this documentation to guide you through more complex topics, and as always, please reach out to our support team via email and make requests on Nolt.