Announcements
Introducing ROOST's Roadmap & Inaugural Technical Design Committee

Written by

Juliet Shen

We're excited to share the initial draft of the ROOST short-term tools roadmap and to introduce our inaugural Technical Design Committee (TDC). Both represent important steps in building safety infrastructure that's community-driven and openly developed.

The ROOST Project Roadmap: Ready for Input

Today we're also releasing an initial ROOST project roadmap, which outlines the short-term development priorities for ROOST's open source trust and safety infrastructure across the DIRE framework: Detection, Investigation, Review, and Enforcement.

This roadmap does not cover every part of the open source trust and safety system we believe users need; for example, we've heard repeated requests from many users for an openly licensed, novel CSAM classifier. Instead, it focuses on the parts of the system ROOST is directly building.

What We're Building

Osprey, addressing Investigation, is a high-performance rules engine for real-time event processing and behavioral analysis. The first version (v0) is already in production at organizations like Bluesky, processing hundreds of millions of events per day. Our 2026 priorities include code-free rules management through a UI, a “shadow mode” for testing rules before they run in production, and batch processing for historical analysis, all designed to remove friction from analyst workflows and make Osprey accessible to everyone.
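To make the “shadow mode” idea concrete: in a rules engine, a shadow rule is evaluated against live events and its would-be actions are recorded, but never enforced, so analysts can validate a rule before it affects users. The sketch below is a minimal, hypothetical illustration of that pattern; the `Rule` class and `evaluate` function are invented for this example and are not Osprey's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # predicate over an event
    action: str                        # e.g. "flag", "ban"
    shadow: bool = False               # shadow rules log but never enforce

def evaluate(rules: list[Rule], event: dict) -> tuple[list[str], list[str]]:
    """Return (enforced actions, shadow-only actions) for one event."""
    enforced, shadowed = [], []
    for rule in rules:
        if rule.condition(event):
            (shadowed if rule.shadow else enforced).append(f"{rule.name}:{rule.action}")
    return enforced, shadowed

rules = [
    Rule("high_volume", lambda e: e["posts_per_min"] > 50, "flag"),
    # Candidate rule still being tested: runs in shadow mode,
    # so its "ban" is recorded for review but not applied.
    Rule("new_account_burst",
         lambda e: e["account_age_days"] < 1 and e["posts_per_min"] > 10,
         "ban", shadow=True),
]

enforced, shadowed = evaluate(rules, {"posts_per_min": 60, "account_age_days": 0})
# enforced -> ["high_volume:flag"], shadowed -> ["new_account_burst:ban"]
```

Once a shadow rule's recorded decisions look correct against real traffic, flipping `shadow=False` promotes it to enforcement without changing any other code.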

Coop, addressing Review and Enforcement, provides human-centered review infrastructure that works across different formats (i.e., actors and content) while protecting reviewer wellbeing and streamlining reporting best practices. Version 0 delivers essential review capabilities, including queue orchestration, context-rich interfaces, HMA integration for hash matching, and enhanced NCMEC reporting. The 2026 roadmap includes in-tool quality assurance, expanded search capabilities, semantic hash detection, and integrated feedback loops with Osprey.
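Hash matching, at its simplest, compares a fingerprint of incoming content against a set of fingerprints of known-harmful material, so a match can route content straight to the right queue. The sketch below uses exact SHA-256 hashes purely for illustration; production systems like HMA typically rely on perceptual hashes (such as PDQ) that tolerate small edits, and the function names here are invented for this example.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Exact cryptographic hash of the content bytes."""
    return hashlib.sha256(content).hexdigest()

# A hash list shared by a trusted source; only fingerprints are
# distributed, never the underlying harmful content itself.
known_bad = {fingerprint(b"example-harmful-payload")}

def matches_known_bad(content: bytes) -> bool:
    """True if this content's fingerprint appears in the hash list."""
    return fingerprint(content) in known_bad

matches_known_bad(b"example-harmful-payload")  # True
matches_known_bad(b"ordinary user post")       # False
```

A key property of this design is that reviewers and platforms exchange only hashes, which is what makes cross-platform hash-sharing databases practical for sensitive material.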

ROOST Model Community, addressing Detection, makes open source safety models accessible through partnerships with AI model creators. We started with gpt-oss-safeguard in partnership with OpenAI, and we are excited to expand the set of models working with ROOST. For 2026, we have a particular focus on expanding user access to sample policies for specific harm types, such as terrorist and violent extremist content (TVEC), and on enabling integrations between ROOST tools and the classifiers users are asking for.

Now it’s time for open design

This roadmap draft was informed by user and ecosystem research, partner conversations, and the ROOST team's experience in trust and safety. It is a living document that will never be complete, but it will guide ROOST engineering efforts and community project teams, and help users understand the technical vision.

Before ramping up development, we want current users, potential users, and trust and safety experts to weigh in. For example:

  • Does this prioritize the right features for the problems you're facing?

  • What's on your safety team's wishlist that isn't reflected here?

  • Where do you see gaps in the open source safety ecosystem that ROOST could help fill?

  • Where do you anticipate AI creating challenges, and does this roadmap address them?

  • Where do you think AI can help trust and safety, and does this roadmap incorporate that?

We Want Your Feedback

Please share your thoughts, suggestions, and questions through GitHub Discussions. Whether you're evaluating ROOST for your platform, coordinating across safety teams, or already using ROOST tools, your perspective will help us build infrastructure that truly serves the community's needs.

We're also hosting regular office hours for each project's working group via Google Meet, along with discussions on our Discord server. Join us to dive deeper into any aspect of the roadmap or ask questions about implementation.

Meet the Technical Design Committee

The Technical Design Committee (TDC) helps guide the overall technical direction of ROOST's projects (read more about ROOST project governance). The inaugural TDC members bring deep expertise across large-scale systems, complex child safety issues, information security, open source development, and a variety of safety issues ranging from violent extremism to fraud and scams.

Introducing the seven individuals who will lead this part of ROOST’s journey:

Hailey Elizabeth is a Staff Software Engineer at Bluesky and the Technical Lead of the Trust and Safety Engineering team. She leads the architecture and implementation of Bluesky’s automated moderation and threat detection pipelines, investigative tooling for bad actor identification, and open-source AT Protocol moderation tools. Hailey also has a background in product development with React Native and was previously a member of Bluesky’s product team.

Dr. Rebecca Portnoff is an expert on AI and child safety, responsible and ethical AI systems, and multi-stakeholder strategy for driving impact. She holds a B.S.E. from Princeton and a Ph.D. from UC Berkeley, both in computer science, and is currently the Head of Data Science & AI at Thorn. Rebecca is an MIT Tech Review 35 under 35 innovator and a Fast Company AI 20 technologist.

Sam Toizer leads the Safety Product team at OpenAI, responsible for product safeguards at the model, system, and enforcement levels. Prior to OpenAI, Sam worked at Twitter for eight years, including leading the Information Integrity team, which worked on misinformation, civic integrity, and crisis response. Sam's background is in computer science.

Shu Lei is a technology executive with two decades of experience transforming ideas into scalable software systems, with a career built at the intersection of innovation, safety, and global impact. After early roles in enterprise CRM and startup ecosystems, Shu spent the last 10 years leading Google Ads and YouTube Trust & Safety solutions: initiatives that safeguard users, reduce harmful content, foster advertiser trust, and shape the future of responsible digital platforms.

Tim Pepper is an engineer and executive with almost 30 years of experience in open source software development. Tim's work has touched many dimensions of software stack code (Linux/kernels, storage, mobile and embedded systems, power and performance, cloud orchestration, AI/ML frameworks) as well as how that software is developed and maintained (steering and conduct committees, mentorship, open source program offices, public policy, legal compliance, software supply chain security). In what free time remains, Tim dabbles at being a backyard farmer, homebrewer, woodworker, Vespa rider, and amateur triathlete.

Tom Thorley is the Senior Director of Safety & Integrity at GitHub. His responsibilities include ensuring user safety, countering harassment, detecting malware, moderating content, and eradicating CSAM and TVEC on GitHub. Prior to GitHub, Tom spent a decade at the British government's signals intelligence agency, GCHQ, where he specialized in issues at the nexus of technology and human behavior. After leaving government service, Tom became the first Director of Technology, Engineering and Solutions at GIFCT, where he worked to prevent terrorists and violent extremists from exploiting digital platforms, building GIFCT's Hash-Sharing Database and Incident Response Framework.

Naren Koneru (Co-Chair) leads engineering for Safety at Roblox. Prior to his time at Roblox, Naren spent a decade in open source software, building large-scale data management systems for some of the world's largest enterprises.

Vinay Rao (Co-Chair) is the Chief Technology Officer of ROOST (Robust Open Online Safety Tools), where he leads the development of an open-source software stack to enhance online safety. He has been in the field of safety for nearly two decades. Most recently he was Head of Safeguards at Anthropic, where he developed systems for monitoring and securing AI usage for safe deployments. Previously, he led safety teams at YouTube, Stripe, Airbnb, and Google, covering ad fraud, fraud- and credit-risk, account and API integrity, offline safety, and coordinated and adversarial abuse.

This committee brings together builders who've operated at vastly different scales—from startups to the world's largest platforms—and across centralized and decentralized architectures. They've designed systems that process billions of events daily, led teams through novel safety challenges, and contributed to open source communities. Over their 18-month terms, they'll help ROOST make critical technical decisions, evaluate technology roadmaps, and anticipate how both abuse patterns and AI-powered safety systems will evolve.

Building Together

ROOST believes safety infrastructure should be freely available, transparent and auditable, and community-governed. Achieving that vision requires input from the full spectrum of people building and operating trust and safety systems. Join us in shaping what comes next.