How can we build participatory AI technology?

What does meaningful AI governance look like when it's designed by the people most affected by the technology? Two recent workshops in Malawi offer a powerful answer—and a model that challenges how we think about participatory technology development.

Last week, researchers from NYU's Peace Research and Education Program worked with partners from Ushahidi, UNDP Malawi, and the Malawi Peace and Unity Commission to convene stakeholders at two workshops hosted by Malawian universities. But these weren't typical academic gatherings. Alongside faculty and students from the Malawi University of Science and Technology (MUST), the Malawi University of Business and Applied Sciences (MUBAS), and Mzuzu University (MZUNI), the rooms filled with village chiefs, women's group representatives, youth leaders, and community members—the people who will actually use and be affected by the technology being designed.

The Challenge: Emergency Reporting Without Barriers

The Malawi Voice Data Commons aims to create a voice-based crisis reporting system for populations that current technologies systematically exclude: rural communities with limited literacy, no reliable internet access, and languages that major tech platforms ignore.

To make this concrete, facilitators designed a "Reporting Race" simulation in which participants physically experienced how existing emergency systems create compounding disadvantages. Those who could read English, owned smartphones, and lived near district offices moved forward. Those who couldn't moved backward. The same barriers, applied step after step, added up to cumulative exclusion.

Then they redesigned it together. When requirements around literacy, cost, and language were removed, everyone moved forward.

Governance Beyond Templates

The technical challenge was only the starting point. The harder questions emerged when workshop participants worked through real governance scenarios:

  • What happens when a university researcher requests access to voice data?

  • When the WHO wants recordings for public health messaging?

  • When photos could help emergency responders but might compromise privacy?

  • How should consent work in multilingual, low-literacy contexts where written forms are meaningless?

  • Who actually has the authority to grant that consent—and who benefits when the data becomes "AI-ready"?

Rather than imposing external frameworks, communities debated these tensions using their own decision-making structures. Traditional authorities, women's groups, youth leaders, and elders worked through scenarios until consensus emerged—not predetermined answers, but context-specific governance that communities could trust, influence, and revise over time.

This raised questions that can't be answered from outside: What constitutes a legitimate governance body in a crisis-affected context? How can women and youth hold actual decision-making power—not symbolic participation, but real voice and vote? What obligations do researchers carry when community data is repurposed years later?

Why Participatory Governance Matters

These workshops surfaced a principle that should be obvious but remains radical in practice: data for peace cannot be governed from a distance.

The Malawi Voice Data Commons isn't extractive research where communities provide raw material for external analysis. It's an attempt to build public-interest data infrastructure where communities govern their own digital futures from the beginning—before the first line of code is written, before the first voice sample is collected.

This matters because AI governance debates often happen in conference rooms far removed from the populations most affected by algorithmic decisions. Policy frameworks get designed in Geneva, Brussels, or Silicon Valley, then exported as universal solutions. But governance that works requires understanding how consent operates in specific cultural contexts, how existing power structures shape who gets heard, and what communities actually need from technology designed to serve them.

What Community-Rooted Governance Looks Like

The Malawian model demonstrates several key principles:

Communities experience technology barriers holistically. The reporting simulation showed how literacy requirements, device costs, language exclusion, and geographic distance don't exist as separate problems—they compound for the same people, creating systematic exclusion.

Governance questions are practical, not abstract. Rather than debating principles in the abstract, participants worked through concrete scenarios that surfaced real tensions between research benefits, privacy risks, emergency response needs, and community autonomy.

Decision-making authority must match local legitimacy. External templates for "community consent" often miss how authority actually operates. Traditional leaders, women's groups, and youth organizations have different roles and legitimacy depending on context—governance frameworks need to reflect that complexity.

Participation means power, not consultation. True participatory governance isn't asking communities to validate pre-made decisions. It's giving them authority to shape the rules, deny requests, and change frameworks as circumstances evolve.

Beyond Malawi

These workshops offer lessons for anyone building AI systems meant to serve marginalized populations. Technology designed for crisis response, public health, or emergency services can't simply be dropped into new contexts. The governance frameworks that determine who controls data, who benefits from its use, and who can say no are as important as the technical infrastructure itself.

When communities co-design AI governance from the start—not as consulted stakeholders but as decision-makers—the questions change. The systems change. And the power dynamics that usually characterize technology deployment begin to shift.
