---
title: "democraicy: algorithms with civic conscience"
slug: democraicy
type: project
kicker: "concept document · 2020 · unfunded"
accent: indigo
pills:
  - 2020
  - concept
  - unfunded
role: "lead"
order: 7
summary: "An early concept document on algorithms with civic conscience. Never funded; still readable."
---

A project concept written in November 2020. It was never funded as such, but it became the seed of a line of thinking we have continued in every technology workshop we have run since.

## what it was

In the autumn of 2020, when mass-market AI was still a distant promise, we wrote a proposal built on a premise that was unusual at the time: that algorithms are not neutral; that the decisions programmers make (about who is heard, who is silenced, who is visible) are political decisions; and that someone ought to create spaces where engineers, philosophers, anthropologists, artists, and political scientists could articulate together what it would mean to design an algorithm with a civic conscience.

The proposal called these spaces *mini-agoras*: groups of a dozen people, meeting weekly for two hours over three months, structured as agile workflows. In the first hour, one participant would present from the perspective of their discipline; the second hour was semi-structured debate. At the end, each participant would submit a proposal for how AI agents should be designed to form sustainable virtual societies: democratic by structure, free from drift toward totalitarianism, civil war, or the elimination of the weakest among them.

The intended output: a kind of manifesto of democratic AI, a set of principles that would raise programmers’ awareness of the power they hold every time they make a decision in code.

## why it mattered

The project was not funded. It stayed as a concept document.

What matters is that, in November 2020, before ChatGPT, before the generative-AI boom, before mainstream conversations about AI alignment, we posed a question that is now asked everywhere: **how do we design technologies that strengthen, rather than erode, democratic structures?**

That question stayed with us as the through-line of the organisation. *democraicy* was not an isolated project; it was the first articulation of a line that continues with the [Cabinet of retrofuturist curiosities](project-cabinet-retrofuturist) (2024) and with [AI4NGOs](project-ai4ngos) (2025).

## what the proposal contained

- **The problem:** technology is not an innocent tool; the implicit and explicit decisions in the code affect millions of lives.
- **The proposed approach:** deliberative mini-agoras with interdisciplinary participation.
- **The hard questions:** who decides the characteristics of AI agents? How do you avoid totalitarian drift? How do you formalise, in machine-readable terms, what we call agreement, law, or constitution?
- **The workflow:** agile principles applied to democratic deliberation. One coordinator, eleven participants. Twelve sessions of two hours each. Consistent note-taking.
- **The output:** principles synthesised into a manifesto, plus individual design proposals.

## what we learned

A concept written without funding is still a kind of evidence: it shows what was thinkable, and to whom, at a particular moment. We keep this document readable for that reason, not as an achievement.

What we did keep from the exercise was a habit: speaking about technology in the vocabulary of citizenship (agora, constitution, citizen, manifesto) rather than the vocabulary of innovation or disruption. That habit shows up in how we run workshops today.

## materials

- Original concept document (PDF, December 2020).
- The concept subsequently underpinned our work on the *Cabinet of retrofuturist curiosities* and on *AI4NGOs*.

## read alongside

- [Cabinet of retrofuturist curiosities](project-cabinet-retrofuturist): the first concrete project where AI entered the workshop as a tool.
- [AI4NGOs](project-ai4ngos): the youth-worker training on AI in civil society, run for CCIF Cyprus in Cluj.
- [Manifesto: critical technology](manifesto-critical-technology)
