Model Card Authoring Toolkit & Wikipedia

Where did this use case occur? Global, Wikipedia Contributor Circles, Online
When did this use case occur? 2022
Who were some of the key collaborators? Researchers as facilitators (academia)
How many people participated? 15 people from the English and Dutch Wikipedia communities joined the pilot.
What are some keywords? AI, Social Platform
What was the problem?
The threshold of technical literacy required to contribute to AI tools is normally very high, which leads to bias and exclusion in AI models. The researchers set out to define participatory decision-making methods that help communities develop AI tools better aligned with their collective values, without leaving behind members who lack that technical literacy.
How does the community approach the problem?
The Model Card Authoring Toolkit is intended “to help community members understand, navigate and negotiate a spectrum of models via deliberation and try to pick the ones that best align with their collective values” [1]. The researchers tested the toolkit in workshops with Wikipedia contributors, helping them discuss how their community’s values align with the different AI models used in their collaborative content-editing software.
What were the results?
The scholars’ results suggest that the Model Card Authoring Toolkit helped participants better understand the potential uses of AI-based systems. The toolkit also helped engage community stakeholders in discussing and negotiating trade-offs, and it facilitated collective, informed decision-making in their own community contexts.
How participatory was it?
Collaborate
The toolkit was developed to help communities better account for how less tech-savvy members would perceive the potential use of AI-based technologies.
What makes this Use Case unique?
“AI-based tools are ubiquitous and are often introduced without the consent of the community. They carry numerous biases and raise red flags. This use case offers a distinctive example of how these biases can be brought to the attention of the broader community prior to the launch of such tools.” - Sem