ROOST in Review: A Look Back on 2025

Written by Camille François

What We Shipped, What We’re Learning, and What’s Next

As 2025 winds to a close, we’re taking a moment to reflect on how far ROOST has come since we officially launched at the Paris AI Action Summit. Back then, ROOST and its coalition were warmly welcomed as "an overdue idea" with potential to "revolutionize trust and safety." Ten months later, we’re closing the year with strong shared technical foundations and a growing community.

This has been a year of great progress at the intersection of safety and open source, and we are incredibly grateful to everyone who has joined us to develop, maintain, and distribute open source building blocks to safeguard global users and communities.

Here’s a look back at what we built in 2025 and where we’re headed in 2026.

What We Shipped in 2025

  • Laying Foundations
    This year, we built in four ways: we open-sourced existing IP, secured source code donations from leading companies, revived orphaned but critical projects, and built new tools from the ground up. We published our short-term roadmap and outlined our more ambitious long-term goals. We’re focused on core building blocks that cover all functions of safety: detection, investigation, review, and enforcement.

  • Technical Design Committee (TDC)
    We launched ROOST’s inaugural Technical Design Committee, which gathers technical leaders from across the ecosystem to guide architectural decisions, steward shared standards, and ensure ROOST’s roadmap is shaped through transparent, community-driven governance. Open source is also a method!

  • Osprey
    We introduced Osprey, an open source investigation and incident-response tool donated by Discord. Osprey gives trust & safety teams lightweight but powerful capabilities for detecting, triaging, and responding to harmful content and community incidents.

  • Coop
    We announced Coop, an open source content review and moderation platform that we acquired from Cove. We’ve extended the technology to support end-to-end, human-in-the-loop moderation workflows, including NCMEC-compliant CSAM reporting.

  • gpt-oss-safeguard
    We partnered with OpenAI to release a critical piece of their safety infrastructure, gpt-oss-safeguard, giving everyone access to a powerful “bring your own policy” reasoning model; a short sketch of that pattern follows this list. We also hosted a hackathon with Hugging Face and OpenAI to welcome builders and partners curious to test and iterate on this model. This is an important milestone that normalizes expectations around the public’s ability to study, modify, and reuse critical safety systems. You can catch up with our panel from the Paris Peace Forum to hear a few different perspectives on this release.

  • ROOST Model Community
    We introduced the ROOST Model Community, bringing together developers, researchers, and practitioners to co-create, share, and refine open safety AI models.
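
To make “bring your own policy” a bit more concrete, here is a minimal sketch of how a team might query a policy-reasoning model like gpt-oss-safeguard with its own written policy. It assumes the open-weights model is being served behind an OpenAI-compatible endpoint (for example, a local vLLM server); the URL, model name, policy text, and label format are all illustrative assumptions, not part of the release itself.

```python
# Minimal "bring your own policy" sketch. The policy travels with each request
# as the system prompt, so changing enforcement means editing text, not
# retraining a model. Assumes the open-weights model is served behind an
# OpenAI-compatible endpoint (e.g. a local vLLM server); the URL, model name,
# policy text, and label format below are all illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

POLICY = """You are a content-policy classifier.
Policy: flag messages that solicit personal contact information from minors.
Respond with exactly one label, VIOLATION or NO_VIOLATION, then a one-line reason."""

def classify(message: str) -> str:
    """Ask the policy model to reason over one message under POLICY."""
    response = client.chat.completions.create(
        model="gpt-oss-safeguard-20b",  # illustrative; use the name your server exposes
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

print(classify("hey, what school do you go to? send me your number"))
```

The design point is that enforcement changes become policy edits rather than retraining runs, which is what makes this kind of model practical to study, adapt, and audit.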

We also spent 2025 listening to the people tasked with securing online spaces, engaging thousands of practitioners, especially those focused on child safety. We heard their challenges and perspectives at events like the INHOPE Summit, UN Open Source Week, NCMEC’s CyberTipline Roundtable, and the Child Dignity conference at the Vatican.

Three themes came up again and again: small and mid-size teams are overloaded; policy is moving faster than tooling; and the ecosystem needs infrastructure that’s interoperable, transparent, and auditable.

Thankfully, we are starting to see progress. ROOST’s awesome-safety-tools directory on GitHub maps existing and ongoing contributions to the open source safety stack, from hash matching and content classification to investigation workflows, rules engines, and AI guardrails.
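
To give a flavor of the lowest layer of that stack, here is a generic sketch of hash matching: comparing a perceptual hash of incoming media against a list of known-bad hashes under a Hamming-distance threshold. The 256-bit width, threshold, and hash values below are illustrative assumptions in the spirit of PDQ-style matching, not the behavior of any specific ROOST tool.

```python
# Generic hash-matching sketch: compare a perceptual hash of incoming media
# against known-bad hashes by Hamming distance. The 256-bit width, threshold,
# and hash values are illustrative (in the spirit of PDQ-style matching);
# real deployments use vetted hash sets and purpose-built matching services.

KNOWN_BAD_HASHES = {
    int("f" * 64, 16),  # placeholder 256-bit values, not real list entries
    int("a" * 64, 16),
}

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two equal-width hashes differ."""
    return (a ^ b).bit_count()  # Python 3.10+

def is_match(candidate: int, threshold: int = 31) -> bool:
    """Treat any known-bad hash within the distance threshold as a match."""
    return any(hamming_distance(candidate, known) <= threshold
               for known in KNOWN_BAD_HASHES)

# A hash only 4 bits away from a listed entry still matches under the threshold.
print(is_match(int("f" * 63 + "0", 16)))  # True
```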

As a moving snapshot of where open safety is headed, this map provides reasons to be hopeful—in 2025, a number of organizations made major contributions. For instance, Roblox is publishing safety models and guardrails for voice and LLM applications, and IBM is contributing both guardrails and governance tooling that connects risk taxonomies to enforceable controls.

At the same time, this map shows that some of the most urgently needed open source building blocks, especially for child safety, remain unavailable. ROOST is prepared to help organizations not only release open building blocks for safety, but also ensure those projects are properly licensed, packaged, and maintained, usable in production, and interoperable enough to compose into end-to-end workflows.

What’s Ahead

In 2026, we’ll focus on turning the foundations laid this year into durable, widely adopted safety infrastructure. By investing in open source systems and common building blocks, we aim to reduce duplication, strengthen collective defenses, and raise the baseline for safety outcomes across the ecosystem.

Among our top priorities are driving adoption of and continuing to iterate on Osprey for investigations and incident response, and maturing Coop into an end-to-end, human-in-the-loop moderation platform.

We will work to grow the ROOST Model Community as a place where developers, researchers, and practitioners co-create and refine safety AI models in the public interest (join us!). We will expand partnerships and integrations that support deploying these models at scale while keeping them accessible to smaller teams.

Finally, we will continue to formalize transparent, community-driven decision-making around architecture, standards, and roadmap priorities.


Thanks to everyone who contributed time, code, and expertise in 2025. See you in the New Year, and on our Discord to engage with the ROOST community about what’s coming next.