Build Something Real: Introducing "AI and the Global Majority: Peace, Power, and Participation"

A new course at NYU CGA, co-taught by Marine Collins Ragnet and Katerina Siira, Fall 2026

Somewhere in Nairobi right now, a young worker is reviewing images of graphic violence so that the chatbot you will use this week refuses to show them to you. Somewhere in northern Chile, fresh water is being pumped from an aquifer to cool a data center running a model trained partly on languages the internet has almost forgotten. Somewhere in Brussels, lobbyists are rewriting the legislation that was supposed to govern the systems their employers just shipped.

This is the AI industry. It is not weather. It is not neutral. It is not inevitable. And the most important question about it is almost never asked in the classrooms where it is taught: who decides, and on whose terms?

That is the question at the heart of AI and the Global Majority: Peace, Power, and Participation (GLOB1-GC2630), a new interdisciplinary course we are launching at NYU's Center for Global Affairs this fall. It is not a survey. It is not an AI ethics lecture series that ends with a reading list and a slightly worse conscience. It is a cross-concentration, in-person, experiential course in which you will actually build things.

What you will actually do

Comparative policy analysis. You will learn to read the EU AI Act, a Kalinga provincial ordinance, and an emerging African Union framework side by side and understand what each one does, misses, and protects.

Participatory research methods. You will learn the techniques that let you design a study with a community rather than about one. This is a concrete methodological skill that almost no other AI course in the country teaches, and it is one of the most marketable things you can take into policy, development, or research work after graduation.

Real policy products. You will draft briefing materials, toolkits, and monitoring frameworks that leave the classroom. Our partners, including the United Nations University, will review what you build. Some of it will be used.

Stakeholder communication. You will present your work to the people it was designed for. That means learning to write and speak in the registers of both academic analysis and operational policy, and to adjust on the fly.

You will also hear from people who do not usually make it onto your syllabus. Data workers. Indigenous governance leaders. Journalists embedded in AI supply chains. Regulators from jurisdictions rewriting the rules in real time. You will not just read about them. You will talk to them.

Three project tracks

Working in small teams, you will develop one of three policy products over the semester:

Impact and comparative analysis. Map how AI systems affect similar populations across different national contexts. Draft briefing materials for UN working groups on Children's Rights.

Education, capacity, and local agency. Design adaptable toolkits that help communities not just understand AI but shape it, across high- and low-resource settings.

Community monitoring and participation. Build mechanisms that treat affected communities as active stakeholders with knowledge and agency, not passive subjects of governance.

At the end of the semester, you will present your work to the stakeholders it was built for.

Why this is not another AI ethics class

Ethics in AI is often taught as philosophy with a technology skin. This course treats ethics as an operational question. How do you build a consent framework that actually works in a rural setting without reliable internet? How do you measure regulatory capture in a specific jurisdiction? How do you design oversight for a system your partner community did not ask for?

You will leave with answers, or at least with the analytical tools to keep working on the questions. The ethical frame sits inside everything we do, but it is not an abstract appendix. Every policy product you draft will have to survive a stakeholder review. Every design decision you propose will have to account for the fact that someone, somewhere, will have to live with it.

What you will leave with

  • Fluency in comparative policy analysis across different regulatory and cultural contexts

  • Practical experience with participatory research methods

  • A portfolio of applied policy work tied to real partner organizations

  • Direct exposure to the networks operating at the intersection of AI, rights, peace, and international development

  • A working vocabulary for the questions that will shape the next decade of technology governance

Who it is for

Students across concentrations: security studies, development, human rights, environmental affairs, technology policy, media, anthropology, economics. No prior technical background in AI is required. What the course does ask for is rigor, curiosity, and a willingness to sit with complexity.

Why this course exists

The two of us built this course out of the fieldwork we are doing right now with PREP's Peace AI program across Kenya, Malawi, the Philippines, and Brooklyn. The conversations in those field sites kept pointing at the same gap: there was almost nowhere for graduate students to learn this work in a way that was grounded, applied, and serious about its ethics. So we built it.

The work ahead is some of the most consequential this generation will do. We built this class so you have a place to start.

The logistics

AI and the Global Majority: Peace, Power, and Participation | GLOB1-GC2630 | Fall 2026 | In person

Co-taught by Marine Ragnet and Katerina Siira, NYU Center for Global Affairs

Registration details through the CGA course catalog. Questions welcome at mar1121@nyu.edu

See you in the fall.
