Justice: The Blind Spot in AI for Good

As part of our continued work to place justice at the heart of innovation, Ronald Lenz, HiiL’s Director of Innovation, reflects on developments and gaps observed at the AI for Good Summit in Geneva.

Reflections from the 2025 AI for Good Summit

I just returned from the AI for Good Summit in Geneva. The sheer scale of the event was staggering: with over 15,000 participants and 150 exhibitors, it buzzed with optimism over the potential of AI to transform everything from climate action to healthcare. Ministers, scientists, ethicists, startup founders, and engineers spoke passionately about machine learning models that decode proteins, predict floods, and revolutionise farming. But where was Justice?

Not just legal tech demos or ethics checklists—but a serious reckoning with how AI could help close the massive global justice gap that leaves over 5 billion people without meaningful access to fair and effective legal systems. In a world marked by inequality and exclusion, it is difficult to have a conversation about “AI for Good” without talking about justice. 

The Blind Spot in AI for Good

From the opening keynote, ITU Secretary-General Doreen Bogdan-Martin set a serious tone:

“The greatest risk today is leaving the most vulnerable further behind.”

She challenged us to ask how we might bend the arc of AI toward justice. But as the summit unfolded, justice remained largely a footnote. Panel after panel focused on AI for healthcare, education, agriculture, and climate, but few asked how societies will fairly resolve disputes, protect rights, or enforce laws in an AI-shaped world.

And yet, without accessible justice systems, how can any of these advances be sustained? Justice isn’t just a sector; it’s the infrastructure of trust that all other sectors rely on. Without fair systems to resolve disputes, protect rights, and ensure accountability, even the most advanced AI health or climate solution will rest on shaky ground.

2.6 Billion Offline – and Left Behind? 

Globally, 2.6 billion people remain offline, most in rural and low-income regions. As long as AI tools assume high-speed internet or ignore local conditions, they will simply miss those who arguably could benefit from them the most. As FAO’s chief put it, “the digital divide is becoming a development divide”, leaving entire villages invisible to aid and innovation. Some innovators are pushing against the tide. MamaMate, an AI-powered digital tool, works offline, on solar power, and even “speaks directly to mothers in their own languages”. It tracks infant care, offers culturally relevant health advice, and checks in on mental well-being, all without the internet. That’s AI designed for inclusion, not just efficiency.

Such thinking offers huge potential for increasing access to justice through AI. What would an offline, solar-powered justice assistant look like? Could it help a farmer mediate a land dispute, or a mother understand her legal rights after domestic abuse? That vision is not far-fetched. With a global justice gap of 5.1 billion people, it’s urgently needed.

Small Models, Bigger Impact 

It wasn’t just about “frontier AI”. In practice, many innovators are betting on miniature AI. I heard repeatedly that tiny local models can be far more useful in emerging regions than a massive cloud model. DeepSeek’s R1, for instance, has distilled versions compact enough to run entirely on a smartphone. Its design shows how you can have powerful reasoning AI offline, on device, and open source.

This represents a revolutionary shift. It means that crucial services, from farming advice to legal aid, could eventually use AI without internet access or high fees. Several participants from developing countries agreed. For them, an “AI revolution” means low-bandwidth, on-device solutions trained on local data and dialects, not just massive models exported from elsewhere.
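To make that concrete, here is a minimal sketch of what on-device AI already looks like, using the open-source llama-cpp-python bindings and a small quantized model file placed on the device beforehand. The model path and the prompt are illustrative, not a real product; the point is that everything runs locally, with no connection and no API fees.

```python
# Minimal sketch: fully offline text generation with a small open model.
# Assumes the llama-cpp-python package is installed and a quantized GGUF
# model file was copied onto the device in advance (path is illustrative).
from llama_cpp import Llama

# Load the model from local storage; no network access is needed.
llm = Llama(
    model_path="models/small-open-model.gguf",  # hypothetical local file
    n_ctx=2048,  # modest context window to fit phone-class memory
)

# A plain-language legal question, answered entirely on device.
prompt = (
    "A neighbour has built a fence on my farmland. "
    "What are my options for resolving this dispute?"
)
result = llm(prompt, max_tokens=256)

print(result["choices"][0]["text"])
```

This is the pattern that makes offline tools possible: ship the model with the device, and the service keeps working where the network does not.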

AI as a Digital Public Good

Across panels, speakers cast open-source AI itself as a Digital Public Good or as part of Digital Public Infrastructure (DPI), highlighting DPI considerations as key to building trust and sovereignty. I found it encouraging that terms like “open source,” “public service,” and “global commons” kept popping up. It shows people are thinking about the collective ownership of AI. But again, I wondered: labelling it a public good is one thing; actually funding and governing it is another.

Researchers at ETH Zurich/EPFL unveiled an open, multilingual large-language model trained on Switzerland’s new “Alps” supercomputer. This model, freely available, supports some 1,500 languages. In other words, Switzerland is investing in public AI infrastructure rather than keeping it proprietary.

The Agents among us

On the exhibition floor, the speed of innovation was palpable. Attendees gathered around life-like AI robots and a humanoid skeleton that could respond to its environment. While these demos were dazzling, they also served as a reminder: without clear accountability, even the best AI can go dangerously off-track. Meanwhile, developers shared quiet breakthroughs in AI agents, systems that collaborate, learn, and problem-solve autonomously. Google, for example, is co-developing an open Agent2Agent (A2A) protocol, allowing different AI tools to communicate and cooperate across systems.

The implication was clear: by the time regulators finalize legislation, multi-agent ecosystems will already be deployed. We urgently need new frameworks and ways to audit these complex AI webs.
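For readers curious about what agent-to-agent cooperation looks like under the hood, here is a simplified sketch in the spirit of A2A, which builds on ordinary web standards (JSON-RPC over HTTP). The endpoint URL and the exact field names below are illustrative, not the official specification.

```python
# Simplified sketch of one agent handing a task to another over HTTP,
# in the spirit of the Agent2Agent (A2A) protocol. The endpoint URL
# and exact field names are illustrative, not the official spec.
import requests

task_request = {
    "jsonrpc": "2.0",
    "id": "1",
    "method": "tasks/send",  # ask the remote agent to take on a task
    "params": {
        "id": "task-001",
        "message": {
            "role": "user",
            "parts": [
                {"type": "text",
                 "text": "Summarise the new tenancy regulations."}
            ],
        },
    },
}

# In A2A, an agent advertises its capabilities and endpoint in a public
# "agent card"; here we assume we already know where to reach it.
response = requests.post("https://legal-agent.example.org/a2a",
                         json=task_request)
print(response.json())
```

The point of a shared protocol is that agents built by different vendors can delegate work to one another without custom integrations, which is precisely why auditability has to be designed in from the start rather than bolted on.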

Governance: from principles to power

Governance was a dominant theme. A standout session hosted by the Omidyar Network tackled the challenge head-on: how do we move from principles to protocols? Speakers like Marietje Schaake and Peggy Hicks emphasized that rights and ethics need to be coded into systems, not just pasted on after the fact. I kept hearing the same concern: we have no shortage of AI principles, but how do we make them enforceable? How do we operationalize ethics and rights into working code, into systems that truly serve all people? Justice systems offer a unique opportunity here. They deal with questions of fairness, accountability, and trust by design. If AI can be aligned with justice, it can be aligned with democratic values at a foundational level.

But as former HiiL CEO Sam Muller pointed out in an AI governance roundtable: we can’t rely solely on top-down approaches. The future may lie in federated governance: shared standards, local control, and open architectures that reflect the needs and values of diverse communities. His question was a powerful one:

“Can the 60% of the world, the countries that don’t benefit from ‘might is right’, come together to shape AI that reflects their values and interests?”

If they do, justice must be part of that conversation.

Justice is missing: Let’s bring it in

I didn’t leave Geneva frustrated. I left hopeful and energized, because the pieces are already on the table: smaller, open-source models designed for real-world constraints; fit-for-purpose infrastructure for the Global South; governance approaches that favor decentralization and inclusivity.

DLA Piper announced a new AI Law and Justice Institute as part of the UN AI for Good “law track,” convening judges, technologists, and advocates to focus on AI’s legal impact. 

At HiiL, we believe that with AI we can build an open, federated system that enables any government, organization, or community to deploy AI-powered justice services: fast, affordable, and tailored to local realities. Justice, after all, is not a luxury or a peripheral good. It’s an essential condition for health, prosperity, and climate resilience. Without it, none of the other AI breakthroughs will hold.

A Call for Next Year

So here’s my challenge for 2026: Let’s put justice at the center of the next AI for Good Summit. Let’s make it unmissable. Let’s ensure that the billions currently outside the system have a seat at the table—and tools in their hands.

AI has the potential to transform justice. But only if we choose to build it that way.