Nested Knowledge offers a web-based software-as-a-service (SaaS) application for use in secondary medical research. Artificial intelligence features are integrated into the application, and Nested Knowledge is committed to monitoring and complying with AI legislation in applicable countries.
Note that this document contains only an analysis of compliance with laws specific to artificial intelligence tools in biomedical evidence synthesis applications, interpreted through the reasonable efforts of the Nested Knowledge team, and it assumes user activity that otherwise complies with all relevant laws not explicitly related to artificial intelligence tools. This overview constitutes neither legal advice nor a full disclosure of artificial intelligence methods.
Nested Knowledge completed the Compliance Check provided by the EU regarding which aspects of the AI Act apply to Nested Knowledge. Because Nested Knowledge is a non-generalized AI system that does not impact any of the critical industries listed in the AI Act, but does involve interaction with natural persons, our requirements fall under “Transparency Obligations”:
“Transparency Obligations: Natural persons. You need to follow these transparency obligations under Article 50: The AI system, the provider or the user must inform any person exposed to the system in a timely, clear manner when interacting with an AI system, unless obvious from context. Where appropriate and relevant, include information on which functions are AI enabled, if there is human oversight, who is responsible for decision-making, and what the rights to object and seek redress are.”
Q. “What if a user uploads a document with personal health information (PHI) into Nested Knowledge?”
A. That would be a violation of our Terms of Service, and we reserve the right to terminate that user's account and remove the PHI. In addition, all full texts are restricted to the users and Organizations with whom the nest owner shares the project, so such documents would not be made generally available even in Synthesis' shareable outputs. However, this is a wider issue than AI alone: ensuring that data uploaded into Nested Knowledge is both (1) appropriate to our systems, i.e., does not contain PHI, and (2) shared only with the appropriate audience is one of our key security topics.
Risk analysis under Article 6 and Annex III:
Bias and Discrimination:
This policy will be reviewed and updated annually, and leadership will regularly oversee it to ensure the content remains up to date with global artificial intelligence regulations. The policy will also be updated whenever any party identifies a relevant piece of legislation to leadership.
| Author | Date of Revision/Review | Comments |
|---|---|---|
| K. Cowie | 10/04/2024 | Created |
| K. Kallmes | 10/04/2024 | Approved |