
Model Card Authoring Toolkit & Wikipedia

Governing artificial intelligence-based technologies is becoming more important by the day, with tremendous effects on how communities value, organize, and function. This use case illustrates one possible way to increase diverse participation in the evaluation of AI-based technology usage.
  • Where did this use case occur?
    • Global, Wikipedia Contributor Circles, Online
  • When did this use case occur?
    • 2022
  • Who were some of the key collaborators?
    • Researchers as facilitators (academia)
  • How many people participated?
    • 15 people from English and Dutch Wikipedia communities joined the pilot.
  • What are some keywords?
    • AI, Social Platform

What was the problem?

The threshold of tech literacy required to contribute to AI tools is normally very high, leading to bias and exclusion in AI models. Researchers set out to define participatory decision-making methods that help communities develop AI tools better aligned with their collective values, without leaving behind members who lack that technical literacy.

How does the community approach the problem?

The Model Card Authoring Toolkit intends “to help community members understand, navigate and negotiate a spectrum of models via deliberation and try to pick the ones that best align with their collective values” [1]. Researchers tested the toolkit in workshops with Wikipedia contributors, helping them discuss how their community’s values align with the different AI models used in their collaborative content-editing software.

Technique
At first, community members individually tested the different AI models and documented how well each of them works against their shared values. Specifically, they focused on how each AI model handles usage trade-offs, i.e., between minimizing false negatives (catching all the potentially damaging edits) and minimizing false positives (not falsely labeling good edits as damaging), and how this affects the community’s objective of publishing the highest-quality content possible. In addition, the community assessed and discussed the models’ fairness in treating different editor groups equally. Later, they held a discussion to agree on which model to use.
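The trade-off the community weighed can be made concrete with a small sketch. The scores, labels, and threshold values below are hypothetical, not from the actual Wikipedia models discussed in this use case; the sketch only illustrates how moving a decision threshold trades false positives against false negatives.

```python
def confusion_counts(scores, labels, threshold):
    """Count false positives and false negatives at a given threshold.

    scores: model-estimated probability that an edit is damaging
    labels: True if the edit really is damaging
    """
    fp = fn = 0
    for score, is_damaging in zip(scores, labels):
        flagged = score >= threshold
        if flagged and not is_damaging:
            fp += 1  # good edit wrongly flagged as damaging
        elif not flagged and is_damaging:
            fn += 1  # damaging edit missed
    return fp, fn

# Hypothetical scores for six edits, with their (assumed) true labels.
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
labels = [True, True, False, True, False, False]

# A low threshold catches every damaging edit but flags good ones;
# a high threshold spares good edits but misses real damage.
for threshold in (0.2, 0.5, 0.9):
    fp, fn = confusion_counts(scores, labels, threshold)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```

No single threshold eliminates both error types, which is why the choice between them is a value judgment suited to community deliberation rather than a purely technical one.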

What were the results?

The researchers’ results suggest that the Model Card Authoring Toolkit helped improve understanding of the potential uses of AI-based systems. Further, the toolkit acted as an enabler, engaging community stakeholders to discuss and negotiate the trade-offs and facilitating collective, informed decision-making in their own community contexts.

In Our Opinions

How participatory was it?

Collaborate

The toolkit was developed to help communities better account for how non-tech-savvy members would perceive the potential use of AI-based technologies.

What makes this Use Case unique?

“AI-based tools are ubiquitous and are often introduced without the consent of the community. They carry numerous biases and raise red flags. This use case offers a distinctive example of how these biases can be brought to the attention of the broader community prior to the launch of such tools.” - Sem