leiwand.ai
We use AI to test AI
leiwand.ai is a deep-tech start-up that develops AI-based technologies to detect and reduce algorithmic bias and discrimination in AI systems.
Making AI Trustworthy for All
An interdisciplinary team of data scientists and social scientists provides 360° expertise for trustworthy AI with the highest ethical standards.
The company is an independent subsidiary of winnovation, founded together with data scientist Rania Wazir, and develops tools that make artificial intelligence trustworthy, drawing on our expertise in cross innovation and algorithmic fairness.
-
We believe AI development should be shaped by knowledge from a range of different fields.
At leiwand.ai, we are a team of mathematicians, data scientists, NLP experts, social scientists, innovators, philologists and project managers who work together to guide the entire AI development process towards positive impact, fairness and sustainability.
If you want to use or develop artificial intelligence, we can provide the AI expertise to prepare your systems to conform to quality standards, such as those required by regulations like the EU AI Act.
-
From AI system inception to retirement, we bring societal, human and planetary needs into the equation. leiwand.ai devises strategies to maximize positive impacts and minimize risks throughout the AI system’s life cycle.
We offer AI development support, strategies and guidance to assess the conformity and impact of your AI system. We can also improve your AI system's functionality for diverse user groups.
Our aim is not only to help our customers understand, develop and deploy fair and transparent AI systems; through continuous research and testing, we are also creating technology that can test your AI system's quality.
In other words: We use AI to test AI.
-
We are currently creating our own in-house technology, the Risk Radar, for pre-assessing bias risks in artificial intelligence systems.
With this technology, we will be able to identify potential adverse effects of AI systems early in the development, procurement, and certification process.
The Risk Radar will run on a carefully curated expert database filled with thousands of AI incidents.
The technology will facilitate targeted risk assessments and fundamental rights impact assessments, as required by the new EU AI Act for high-risk applications.
These applications are encountered in fields like human resources, health, finance, education and public administration.
The goal of leiwand.ai is to help organizations and companies develop and deploy trustworthy AI: AI systems that deliver what they promise, fairly.