Participatory AI – How to make better AI?
Artificial intelligence (AI) permeates everyday life: it exists in music platforms, media streaming, product purchasing, food recommendation systems, and even in the phones we use. AI operates behind the scenes, like a black box, but it is ubiquitous nonetheless. While we know that biases can exist in these systems, without a basic understanding of how AI works, even people working on AI cannot begin to recognize and remove the biases that appear throughout their daily engagement with these systems. Participatory AI seeks to change professionals’ relationship to AI, building a broader understanding that raises awareness not only of the role of AI in our lives, but also of its potential biases.
So what is AI?
AI has multiple meanings. For example, a university researcher in the field would define it differently than a Spotify developer describing how the app recommends songs. A workable definition, accessible at all levels of expertise, comes from the industry analyst Susan Etlinger: “the ability for computers to learn, to reason, and to interact”.
Within the last decade, AI has become integral to the software platforms, apps, and devices used extensively in daily life. These systems are primarily developed by for-profit corporations whose main concern is selling products. Historically, AI developers were a fairly homogenous group, with similar racial and socio-economic backgrounds. This lack of diversity meant that the biases and assumptions of developers shaped how the products they built, such as Google Home or Amazon Alexa, behaved. For example, the first versions of many “smart assistants” recognized male voices more easily than female voices, assuming the executive who needed the product was going to be a man.
Within the last two years, many companies have started to address concerns about bias. Although some companies, such as IBM, have committed to developing ethical ways to design AI by incorporating new voices into their digital product development cycle, the underlying lack of diversity at tech companies gives little hope that truly inclusive commercial AI systems will be created in the near future.
Feminist. Posthuman. Queer. AI (also referred to as Feminist.AI) is an international collective of hundreds of people who meet both virtually and in person, including a 700-person meetup group in Los Angeles. We want to change how AI is created, and we do so by designing with communities using culturally informed design methods.
How do we do this?
We use the Cultural AI Design methodology to create Participatory AI. The methodology was developed for culturally focused and exploratory AI design by Christine Meinders, working over the past three years with a wide array of collaborators, largely female-identified and gender non-conforming people. It draws on the work of academics in culture and technology, including Rebecca Fiebrink, Alison Adam, Sara Ahmed, Rosi Braidotti, N. Katherine Hayles, Joy Buolamwini, Karen Barad, Anne Burdick, Genevieve Bell, Shaowen Bardzell, Lucy Suchman, Kate Crawford, Danah Boyd, Meredith Whittaker and Brenda Laurel, along with the work of the community.
The Cultural AI Design methodology focuses on problem framing and exploration rather than only on solutions-based tools and products. As an experimental design method, it is explicitly about possibility and can be remixed or altered in order to think about things differently. What makes the tool particularly exciting is its ethical and cultural focus, as well as the speculative approach it offers to the design of the artificial.
Cultural AI Design is now open for co-creation with partners including communities, non-profits, academics, and corporations. The methodology supports participatory AI and fluid knowledge creation by people around the world, helping all users understand and define AI. Rather than focusing solely on screen-based approaches to designing AI, we are co-designing the tool to incorporate modular, multi-sensory, and embodied insights into the tools we have previously used. The tool can be used broadly to explore the artificial, from Artificial Intelligence to Artificial Life. You can also highlight additional elements such as philosophy, whether using the Feminist.AI approach or incorporating philosophies, ethics, and frameworks from corporations, academia, and creative coding communities.
Cultural AI Design Tool
We use the methodology to learn more about how people think about AI, and to learn how to think about AI ourselves. Currently this research exists as a paper toolkit, an in-browser toolkit, and a hardware tool. We are using the paper version of the toolkit with communities: people are encouraged to donate ideas about AI, suggest new ways to design AI, and pair ethics and philosophies in the design of the tool. For example, the paper tool incorporates filters such as bias, level of connectivity, and level of autonomy, and individuals and communities are encouraged to suggest further filters.
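As a purely illustrative sketch (our own, not the toolkit’s actual schema), a digital version of the tool might represent a donated idea and its filters as a small data structure, so that community-suggested filters can be added without changing the core fields:

```python
from dataclasses import dataclass, field

@dataclass
class AIProposal:
    """A donated AI idea tagged with the toolkit's filters.
    All field names here are hypothetical illustrations."""
    title: str
    description: str
    bias_notes: str = ""           # known or suspected biases
    connectivity: str = "offline"  # e.g. "offline", "local", "cloud"
    autonomy: str = "assistive"    # e.g. "assistive", "semi", "full"
    extra_filters: dict = field(default_factory=dict)  # community-suggested

# A made-up example proposal.
proposal = AIProposal(
    title="Neighborhood plant identifier",
    description="Identify local plants from photos taken by residents.",
    bias_notes="Training photos skew toward well-lit gardens.",
    connectivity="local",
    autonomy="assistive",
    extra_filters={"data ownership": "community-held"},
)
print(proposal)
```

The open `extra_filters` dictionary mirrors the idea above: the three starting filters are fixed, but communities can attach new ones of their own.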
Because this is a data collection process, the tool does not favor any single constituency (corporate, creative coding, academic, or ethics-focused), yet it works with individuals and entities in all of these spaces.
How it works:
What follows is a walkthrough of our design process.
1 – Identify who is proposing the AI project.
Anyone creating a project proposal can share ideas with a larger group. We also ask our community members – currently those who are part of the Feminist.AI group – to “opt in” to identifying demographic information.
2 – What is the goal?
In this part of the design process, we look at why the project is necessary, at ways to design with AI, and at how those approaches can get you to the desired endpoint. There are many reasons people design AI projects. Some simply want to learn about AI; others are trying to understand a problem, or to think about it in new ways; and some people want solutions.
We suggest a few questions that can help inspire this step. The following categories provide entry points for those new to AI in thinking about the ways AI can be used:
- Explore: Is the end goal simply to learn more about AI and what it can do?
- Create: Is the end goal to generate new ideas based on the program developed?
- Find Patterns: Is the end goal to analyze data for patterns, so that people can learn more about themselves or others? For example, an individual’s own health data, or data already collected by a city, may offer new insights when analyzed from a new perspective.
- Predict: Is the end goal to develop a tool that helps to predict regularly shifting information, such as housing prices or music trends?
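To make the “Predict” category concrete, here is a minimal sketch in plain Python (our own toy example with made-up numbers, not a tool from the methodology): it fits a straight line to a short series of past housing prices and extrapolates one step ahead.

```python
def fit_line(ys):
    """Least-squares slope and intercept for y over x = 0, 1, 2, ..."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    return slope, mean_y - slope * mean_x

# Hypothetical yearly prices (in thousands).
prices = [300, 310, 320, 330]
slope, intercept = fit_line(prices)

# Predict the next year's price by extending the fitted line.
next_price = slope * len(prices) + intercept
print(next_price)  # → 340.0
```

Real prediction systems are far more complex, but the shape is the same: learn a pattern from past data, then extend it forward, which is why the quality and origin of that past data matter so much.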
3 – Perspective &amp; Culture
What is the perspective and the bias of the creator, and what is the culture and thinking of the intended user/s? The word culture is intentionally broad: culture can mean many things – such as customs, instruments, artifacts, etc. – any outputs from specific social groups. The culture section also helps to highlight any disconnects between the creators of an AI project and their intended users.
4 – What input/information are we putting into the system? And who is creating that information?
We examine who creates and decides what data (input) goes into the system. Using facial recognition as an example, we would identify “faces” as our primary input. But what type of faces and whose faces are we using? If we are making our own face training set, we want to be able to generally talk about the demographic making the faces.
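To show what “generally talking about the demographic” could look like in practice, here is a minimal sketch (our own illustration, not part of the toolkit) that counts how groups are represented in metadata for a hypothetical face training set and flags any group falling below a minimum share:

```python
from collections import Counter

def audit_demographics(records, field_name, threshold=0.10):
    """Count how often each group appears in the dataset and flag
    groups whose share of the total falls below `threshold`."""
    counts = Counter(r[field_name] for r in records)
    total = sum(counts.values())
    flagged = [g for g, n in counts.items() if n / total < threshold]
    return counts, flagged

# Hypothetical metadata for a small face training set.
faces = (
    [{"id": 1, "gender": "female"}]
    + [{"id": i, "gender": "male"} for i in range(2, 10)]
    + [{"id": 10, "gender": "nonbinary"}]
)

counts, flagged = audit_demographics(faces, "gender", threshold=0.15)
print(counts)   # who is in the data
print(flagged)  # groups below 15% of the set
```

Even a crude count like this makes the step’s question answerable: whose faces are in the set, and who is missing or underrepresented before the system is ever trained.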
5 – What are the rules, and who created them?
Rules are often proposed in very general terms. During one project, an individual proposed a “caring for” algorithm (or rule) as central to the project. This could mean many things, like providing physical care or emotional care, or it could indicate the use of emotion analysis in an app. By examining closely what we mean, we can define the rules for our system more carefully and with greater control.
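As a hypothetical sketch of this point (all names and logic are our own invention, not the project’s code), the same vague “caring for” rule can resolve into very different implementations once it is examined closely:

```python
# Three concrete readings of a vague "caring for" rule.

def care_physical(user):
    """Reading 1: physical care, e.g. a medication reminder."""
    return f"Reminder for {user}: time for your scheduled medication."

def care_emotional(user):
    """Reading 2: emotional care, e.g. a check-in message."""
    return f"Hi {user}, how are you feeling today?"

def care_via_emotion_analysis(user, message):
    """Reading 3: infer mood from text with a trivial keyword
    heuristic (a stand-in for a real emotion-analysis model)."""
    sad_words = {"sad", "tired", "lonely"}
    words = message.lower().split()
    mood = "low" if any(w in words for w in sad_words) else "ok"
    return f"{user} seems {mood}."

print(care_physical("Ana"))
print(care_emotional("Ana"))
print(care_via_emotion_analysis("Ana", "I feel tired today"))
```

Each reading carries different assumptions and different risks, which is exactly why the methodology asks who created the rule and what it is meant to mean.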
6 – What is the actual or imagined output?
What does the AI project do? Does it recommend something? Or does it find patterns? When we start to co-design AI, we can see the differences between what we wanted our AI to do, and what it actually does.
7 – Structural Landscape
What is the larger approach to knowledge creation? Is the project reinforcing an existing perspective, system, or theory, or is it proposing alternatives to the big picture? Here we are referring to the ground on which you are trying to create. For instance, in a mental health project, are we creating within a capitalist system or within a community-based mutual aid framework? There is no wrong answer, but variables like this are often taken for granted in AI development.
8 – Material & Form
Having some type of physical material that is not screen-based is an important entry point into AI. It means we can think about using trees (rather than Siri), the movements of people, or completely new materials (possibly 3D printed) when we design AI. The form may lend itself to different types of algorithmic design.
9 – Time
Is your project happening right now? Are you designing for something you think is three years away? Is that the right amount of time to gather enough data to be effective?
Contextual Normalcy: Designed by community, for community
Here is an example of the Cultural AI Design Methodology applied to our project, Contextual Normalcy. Contextual Normalcy is a participatory AI research project in which we use AI and crowd-sourced data to question “normalcy,” creating contextual experiences of feelings for mental health. Using the Cultural AI Design method, we began by questioning the landscape (capitalist, Western mental health). We then created data: together with users around the world, we co-created questions about how we think about feelings, and crowd-sourced responses through an app, a website, and other data research tools in VR, opening up new ways to think about mental health with contributions from around the world.
We are excited to be opening up the Cultural AI Design Methodology and tool to communities around the world! Please email us at firstname.lastname@example.org to receive a physical version of the tool. You can also contribute to the co-creation of the tool by donating your ideas about AI here, where the community is suggesting and contributing design approaches. We are working with creative coding communities, corporations, and hardware companies to co-create this knowledge design tool. Additionally, if you would like to contribute or co-create as part of Contextual Normalcy, please find the app here and email us for more information.
Thank you Emily Zilber for additional text / content edits.
Special thanks to Shusha Niederberger for help with editing, the form of the text, and her kindness!