
Artificial Intelligence (AI) Compliance with Legislation

Nested Knowledge offers a web-based software-as-a-service (SaaS) application for use in secondary medical research. Artificial intelligence features are integrated into the application. Nested Knowledge is committed to monitoring and complying with AI legislation in applicable countries.

Note that this page contains only an analysis of compliance with laws specific to artificial intelligence tools in biomedical evidence synthesis applications, interpreted through the reasonable efforts of the Nested Knowledge team and assuming user activity that otherwise complies with all relevant laws not explicitly related to artificial intelligence tools. This overview does not constitute legal advice, nor a full disclosure of artificial intelligence methods.

European Union AI Act

Nested Knowledge completed the Compliance Check provided by the EU to determine which aspects of the AI Act apply to Nested Knowledge. Because we provide a non-generalized AI system that does not impact any of the critical industries listed in the AI Act, but our system does involve interaction with natural persons, our requirements are limited to “Transparency Obligations”:


“Transparency Obligations: Natural persons. You need to follow these transparency obligations under Article 50: The AI system, the provider or the user must inform any person exposed to the system in a timely, clear manner when interacting with an AI system, unless obvious from context. Where appropriate and relevant, include information on which functions are AI-enabled, if there is human oversight, who is responsible for decision-making, and what the rights to object and seek redress are.”

  • Nested Knowledge carries out these Transparency Obligations in several ways. First, our full Disclosure of AI Systems is available to all users. Second, whenever an AI system provides assistance, that assistance is audited and trackable within each project; users also opt into each AI system individually. Third, and this is particularly relevant for medical writers, we recommend that users who employ AI in any step of a review disclose the use of AI systems in their written outputs, meaning that the tools within Nested Knowledge that were used should be reported (e.g., if users employ Robot Screener in a dual screening process, this should be reported in the Screening methods of any report).
  • Transparency is central to Nested Knowledge's methods, as is appropriate in a scientific process. In addition to the AI Disclosure above, Nested Knowledge released a full overview of our AI practices and philosophy in the context of new guidance from NICE (the UK HTA body) regarding AI in the systematic review process. That overview covers the Transparency pillar in Nested Knowledge processes; in short, wherever an AI is employed, the tool provides (1) transparent AI methods (in effect, transparency of how the AI works); (2) transparent audit records of AI actions (e.g., the Robot Screener decisions); and (3) where relevant, transparency of data provenance (e.g., Smart Tagging Recommendations trace their extractions back to the exact quote from the underlying full text).
  • The most important protection of clinical data is the fact that the tool is not used for extracting patient health information (PHI) or personally identifiable information (PII). Because only cohort-level data are handled, the potential for either AI systems or malicious actors to obtain clinical data that is not already available in the published literature is strictly limited. Note also that, while high-risk AI tools face additional constraints and rules regarding quality systems and enforcement, these are not necessary in the context of published data.

    Q.“What if a user uploads a document with personal health information (PHI) into Nested Knowledge?”

    A. That would be a violation of our Terms of Service, and we would reserve the right to terminate that user's account and remove the PHI. In addition, all full texts are restricted to the users and Organizations the nest owner shares the project with, so such documents would not be made generally available even in Synthesis' shareable outputs. However, this is a wider issue than just AI: ensuring that the data uploaded into Nested Knowledge is both (1) appropriate to our systems, i.e., does not contain PHI, and (2) shared only with the appropriate audience is one of our key security topics.

Risk analysis under Article 6 and Annex III:

  • Based on review of Article 6 and Annex III, Nested Knowledge is not a High-risk application:
    - Nested Knowledge is not used in a safety system,
    - Users perform a narrow procedural task (in the Act's classification, as opposed to general-purpose AI systems),
    - Neither Nested Knowledge nor its users perform any of the Annex III activities: the tool does not collect biometric data, is not related to infrastructure, and contains no AI systems providing educational, vocational, employment, workers management, access to private or public (financial) services, law enforcement, migration, justice, or voting/democratic services.

Bias and Discrimination:

  • The Bias and Discrimination topic is vital, though many of its provisions are not relevant to Nested Knowledge systems. Note that, in the EU AI Act, controls center on tools that, unlike Nested Knowledge, either involve biometrics or are used in systems that could lead to discrimination (e.g., screening resumes for employment).
  • However, Nested Knowledge is committed to helping minimize bias beyond these categories. The primary prevention system for AI bias in evidence synthesis is human curation (see the overview of AI practices above; human curation is one of its key pillars). This is because AI systems, especially those trained on a limited set of data, can overfit or otherwise reach confident but incorrect conclusions. The best quality control is to have each decision assessed by a curator: in Robot Screener, inclusion and exclusion decisions are surfaced to an adjudicator; in Smart Tagging Recommendations, the AI-extracted contents are shown to a human extractor for confirmation; in Search Exploration, AI-recommended terms are only adopted by the user, never added automatically by the AI.
  • In addition, while meta-analytical statistics are not an AI system, bias can certainly impact or limit the veracity of findings. Nested Knowledge therefore offers Critical Appraisal systems for rating the risk of bias of all candidate studies, and provides I-squared values and Funnel plot features to help identify outliers or potentially biasing studies and to assess evidence quality and heterogeneity (see the illustrative sketch below). None of these perfectly prevents bias: any evidence synthesis process involves combining inherently disparate sources, which introduces bias with respect to study methods, included populations, and even outcome reporting practices. Nevertheless, bias mitigation is an integral part of both the systematic review process generally and Nested Knowledge's tools in particular.
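
For context, the I-squared value referenced above is standard meta-analysis arithmetic rather than an AI feature. The minimal Python sketch below is illustrative only: the function name, example effect sizes, and variances are assumptions for demonstration, not Nested Knowledge's code or data. It shows how an I-squared value can be derived from per-study effect sizes and variances via Cochran's Q.

    # Illustrative only: generic Cochran's Q and I-squared computation from
    # per-study effect sizes and variances (inverse-variance weighting).
    # Hypothetical numbers; not Nested Knowledge's implementation.
    def i_squared(effects, variances):
        weights = [1.0 / v for v in variances]  # inverse-variance weights
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
        df = len(effects) - 1
        i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # heterogeneity, in percent
        return q, i2

    # Example with three hypothetical studies (log odds ratios and their variances)
    q, i2 = i_squared([0.30, 0.45, 1.10], [0.04, 0.06, 0.05])
    print(f"Q = {q:.2f}, I-squared = {i2:.1f}%")  # higher values suggest greater heterogeneity

A high I-squared flags heterogeneity for reviewers to investigate, for example alongside the Funnel plot and Critical Appraisal features described above.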

Communication and Compliance

This policy will be reviewed and updated at least annually, and leadership will oversee it to ensure the content remains current with global artificial intelligence regulations. The policy will also be updated whenever any party brings a relevant piece of legislation to leadership's attention.

Revision History

Author        Date of Revision/Review    Comments
K. Cowie      10/04/2024                 Created
K. Kallmes    10/04/2024                 Approved

Return to Policies
