
Building an Agentic Team from Open Source

March 2026 · 7 min read · Ben Fider

The Discovery

I was scrolling X and came across a link to an open source project called The Agency: a community-built library of nearly 180 specialized AI agent definitions, organized into divisions the way a real organization is: engineering, design, marketing, product, project management, testing, strategy, sales, and more. Each agent has a defined role, a personality, specific deliverables, and a workflow.

I cloned the repo, copied the agent files into my Claude Code configuration, and within minutes I had access to a full cross-functional team of specialized AI agents. Not generic prompts. Purpose-built specialists that think and communicate differently based on their role.
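Each agent is just a plain text file. As a rough sketch of what one of these definitions looks like (the frontmatter fields and agent name here are assumptions, not copied from the library; check the repo's README and your tool's documentation for the exact format), a Claude Code subagent definition is a markdown file with YAML frontmatter:

```markdown
---
name: security-engineer
description: Reviews code changes for security risks before they ship
tools: Read, Grep, Glob
---

You are a security engineer. For every change you review, look for
injection risks, secrets committed to the repo, unsafe deserialization,
and missing input validation. Report findings as a prioritized list,
with file and line references as evidence for each one.
```

Installing the whole team is then just a matter of copying files into your configuration directory (paths hypothetical), e.g. `cp -r agency/agents/ ~/.claude/agents/`.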

The most interesting thing about this moment in AI is not what any one company is building. It is what the community is building together, in the open, for everyone.

What Changed

Before the agent library, I was working with a single AI that tried to be everything: developer, designer, strategist, copywriter, QA tester. It was capable, but it lacked the focused perspective that comes from a defined role.

After installing the agents, the dynamic shifted. I was no longer talking to one generalist. I was directing a team of specialists:

Role-Specific Thinking

A Brand Guardian agent reviews copy differently than a Content Creator agent writes it. A Security Engineer spots risks that a Frontend Developer does not prioritize. Each agent brings a distinct lens to the same problem, the way real team members with different backgrounds do.

Parallel Perspectives

When working on a feature, I can dispatch a UX Researcher to evaluate usability, a UI Designer to propose component patterns, and an Accessibility Auditor to check compliance, all working on the same problem from different angles. The multi-agent orchestration pattern becomes natural when the agents are already defined and ready to go.
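Fanning out like this needs no special tooling once the agents are installed; a single prompt can name the specialists and their angles. A sketch (the agent names are assumptions based on the library's divisions):

```text
Use the ux-researcher, ui-designer, and accessibility-auditor subagents
in parallel on the new checkout page:
- ux-researcher: evaluate the flow for usability friction
- ui-designer: propose component patterns consistent with our design system
- accessibility-auditor: check the page for WCAG compliance gaps
Summarize the three reports into one prioritized list of changes.
```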

Built-In Quality Gates

The testing division includes agents like the Reality Checker (which defaults to "needs work" and requires evidence for approval) and the Evidence Collector (which demands visual proof for everything). These agents create natural quality gates in the development process without adding process overhead.

How I Use It

In practice, the agents map to the work I'm already doing. The ones I use most frequently:

  • Trend Researcher for market intelligence, competitive landscape analysis, and identifying emerging patterns in AI adoption
  • Content Creator for drafting thought leadership content, LinkedIn posts, and articles
  • Brand Guardian for reviewing public-facing copy, messaging, and positioning for consistency and tone
  • UX Researcher and UI Designer for evaluating usability, proposing component patterns, and refining the user experience
  • Frontend Developer for implementation guidance, code review, and performance optimization
  • Accessibility Auditor for WCAG compliance checks, ensuring the site meets AAA standards across every page

I do not use all 180 agents. Nobody would. The value is in having the right specialist available when the task calls for it, without needing to configure or define that specialist from scratch.

The Open Source Advantage

What makes this particularly interesting is that the agent library is open source, community-maintained, and improving continuously. It started as a Reddit discussion about AI agent specialization. Within 12 hours, over 50 people had requested it. Within months, it had grown to 60+ agents across 9 divisions with support for multiple AI coding tools.

This is a pattern worth watching. The most powerful AI capabilities are increasingly being built by communities, not just companies. The Model Context Protocol servers that connect AI to external tools, the agent libraries that give AI specialized roles, the prompt engineering techniques that make outputs more reliable: all of this is being developed in the open and shared freely.

For organizations evaluating their AI strategy, this matters. The barrier to assembling a sophisticated AI-augmented workflow is dropping rapidly, not because any single vendor is lowering prices, but because the open source community is building the infrastructure and sharing it.

The Broader Pattern

This article is part of a larger story about how individual tools and integrations compound into something greater than the sum of their parts. The analytics MCP gives Claude access to usage data. The nightly automation pipeline runs the development review while I sleep. The agent library provides specialized perspectives on demand. Each piece works independently, but together they create a development environment where a solo builder operates with the breadth and rigor of a multi-person team.

None of these tools required building from scratch. The MCP servers are open source. The automation runs on GitHub Actions. The agent library was a community contribution. The pattern is: find the best open source tools, integrate them into your workflow, and let them compound.
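The nightly review piece, for instance, needs nothing more than a scheduled workflow. A minimal sketch, assuming a script entry point (the workflow name, schedule, and script path are all hypothetical):

```yaml
name: nightly-review
on:
  schedule:
    - cron: "0 3 * * *"   # run every night at 03:00 UTC
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run the AI development review
        run: ./scripts/nightly-review.sh   # hypothetical entry point
```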

The organizations that will move fastest on AI adoption are not necessarily the ones with the biggest budgets. They are the ones paying attention to what the community is building and putting it to work.

BF
Ben Fider
Founder & Owner, Framepath Partners

Build Your Agentic Capability

Interested in how open source AI tools and agent workflows can accelerate your team's capabilities?