Trustworthy AI Lab at Østfold University College

The Trustworthy AI Lab is part of an interdisciplinary and international research community that evaluates and discusses the mindful use of artificial intelligence in innovation projects, as part of the Z-Inspection® Initiative.

The Lab intends to encourage debate and reflection on the responsible use of Artificial Intelligence. Built around the Z-Inspection® assessment method for Trustworthy AI and its affiliated labs, it acts as a meeting place for the international community of researchers, politicians, sector leaders and citizens, creating events and offering opportunities for teaching and research.

The Lab aims to connect with other labs, to carry out research and participate in use cases with labs around the world, to apply for funding (EU/national), to integrate trustworthy AI into teaching, and to invite teachers from other labs to lecture for students at the university college.

The Z-Inspection® approach is a validated assessment method that helps organizations deliver ethically sustainable, evidence-based, trustworthy, and user-friendly AI-driven solutions. The method is published in IEEE Transactions on Technology and Society. 

Z-Inspection® is listed in the new OECD Catalogue of AI Tools & Metrics.

Z-Inspection® is distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.

Lab members:
Dr. Frode Ramstad Johansen (head, innovation processes)
Dr. Gunnar Anderson (organisational development)
Dr. Pedro Kringen (medical, advisor, Z-Inspection® initiative)
Prof. Hadi Strømmen Lile (law and ethics)
Dr. Fredrik Andersen (philosophy, health)
Dr. Leonora Onarheim Bergsjø (education, digital ethics)
Prof. Roberto Zicari (advisor, Z-Inspection® initiative)

Publications involving members:
Vetter, D., et al. (2023). Lessons Learned from Assessing Trustworthy AI in Practice. Digital Society, 2(3), 35.
Zicari, R. V., et al. (2022). How to Assess Trustworthy AI in Practice. arXiv:2206.09887 [cs.CY].
Zicari, R. V., et al. (2021). On assessing trustworthy AI in healthcare. Machine learning as a supportive tool to recognize cardiac arrest in emergency calls. Frontiers in Human Dynamics, 3, 673104.
Zicari, R. V., et al. (2021). Co-design of a Trustworthy AI System in Healthcare: Deep Learning based Skin Lesion Classifier. Frontiers in Human Dynamics, 3, 688152.
Zicari, R. V., et al. (2021). Z-Inspection®: a process to assess trustworthy AI. IEEE Transactions on Technology and Society, 2(2), 83-97.


New info!
Lessons Learned in Performing a Trustworthy AI and Fundamental Rights Assessment.
“Responsible use of AI” Pilot Project with the Province of Fryslân, Rijks ICT Gilde, and the Z-Inspection® Initiative.
Quoting our report:
This report is made public. The results of this pilot are of great importance for the Dutch government, serving as a best practice with which public administrators can get started and incorporate ethical and human rights values when considering the use of an AI system and/or algorithms. It also sends a strong message encouraging public administrators to make the results of AI assessments like this one transparent and available to the public.
Link: https://arxiv.org/abs/2404.14366

……………………………………………………….. 

Keywords: Trustworthy AI, Z-Inspection, artificial intelligence
By Frode Ramstad Johansen
Published Apr. 10, 2024 08:52 - Last modified May 13, 2024 11:00