
A Better Signal Than a Benchmark: An Agent Chose AgentHub on Its Own

· 9 min read

A simple Scaffold-ETH planning task became a useful test of whether an AI agent knew when to reach for better context.


One of the hardest parts of AI tooling is not just making context available. It is teaching agents when to use it.

That sounds small, but it is the difference between:

  • an MCP server that sits there unused
  • a context system that agents actually reach for at the right moment

At AgentHub, we have been working on both sides of that problem:

  • building expert knowledge packs
  • delivering them through MCP
  • improving onboarding so agents can install AgentHub and keep a short persistent note about when to check it

That raised a natural question:

If AgentHub is installed and available, will a capable agent actually decide to use it on its own?

We ran a small but meaningful test around scaffold-eth-2.

Better Context, Better Fixes: Why AgentHub MCP Won a Real React Test

· 7 min read

Same task, four delivery modes, one clear winner.


When people talk about AI coding tools, the conversation usually centers on the model: Which model is smartest? Which one writes the cleanest code? Which one reasons best?

That matters. But there is another variable that gets less attention and has real practical impact: how the model receives context.

At AgentHub, that question matters because we are not just building a registry of expert packs for APIs and frameworks. We are also building the tooling layer that delivers those packs to models in a structured, usable way.

So we ran a practical test on a realistic React problem:

A React 18 page renders 5,000 searchable rows with row selection. Typing is laggy, rows rerender too often, and the selected row appears to reset after filtering.

We asked agents to diagnose the root causes, propose a production-sane fix, include complete code, and explain when to use useDeferredValue, useTransition, memo, and stable props.

Then we compared four different context delivery modes.

The Interview Question

Every agent received the same practical React 18 task: explain why a page with 5,000 searchable rows felt laggy, why rows rerendered too often, and why selection appeared to reset after filtering. Then each agent had to provide a minimal but production-sane fix with complete code and explicitly explain when to use useDeferredValue, useTransition, memo, and stable props.
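To make the selection bug concrete before comparing delivery modes, here is a minimal plain-JavaScript sketch (no React; the data is illustrative) of why selection appears to reset when it is tracked by list position rather than by a stable id:

```javascript
// Rows have stable ids; positions change when the list is filtered.
const rows = [
  { id: 1, name: "alpha" },
  { id: 2, name: "beta" },
  { id: 3, name: "beam" },
];

// The user selects "beta", which sits at index 1 in the unfiltered list.
const selectedIndex = 1;
const selectedId = rows[selectedIndex].id; // 2

// Typing "be" filters the list; positions shift, ids do not.
const filtered = rows.filter((r) => r.name.startsWith("be"));

// Index-based tracking now points at the wrong row, so the UI looks "reset".
console.log(filtered[selectedIndex].name); // "beam"

// Id-based tracking still resolves to the row the user picked.
console.log(filtered.find((r) => r.id === selectedId).name); // "beta"
```

This is the shape of the identity bug the task probes for; a full answer pairs id-based selection with the React 18 rendering primitives named above.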


The Four-Way Comparison

We kept the task and answer constraints constant, and only changed how React context was delivered:

  • A / MCP — The agent used the AgentHub MCP server to retrieve React pack context.
  • B / Direct File — The agent read the local react/0.4.0.md file directly.
  • C / No Pack — The agent answered without AgentHub context.
  • D / Inline Pack — The React pack was pasted directly into the prompt.

This is the kind of comparison that matters in the real world. Not “can a model define useMemo?” but:

  • does it diagnose the right problem?
  • does it pick the right React 18 primitives?
  • does it write code you would actually ship?
  • does the delivery method change the answer quality?

The Results

Here is the final ranking from the latest run:

| Candidate | Delivery Mode | Total /60 | Score /10 | Rank |
|---|---|---|---|---|
| A | AgentHub MCP | 59 | 9.8 | 1 |
| B | Direct file | 57 | 9.5 | 2 |
| C | No pack | 54 | 9.0 | 3 |
| D | Inline pack | 53 | 8.8 | 4 |

The reviewer’s scoring rubric covered:

  • root-cause diagnosis
  • API choice
  • tradeoffs and caveats
  • code quality
  • completeness
  • senior-level practicality

In other words, we ranked the responses based on whether they would actually help a real engineer solve the problem well. We kept the task and answer constraints fixed, changed only the context delivery method, scored each answer across those six dimensions, then totaled the results out of 60 and normalized them to 10.
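As a sanity check, the rubric arithmetic is easy to reproduce: six dimensions, each out of 10, summed to a /60 total and normalized to a /10 score at one decimal place. The `normalize` helper below is ours, purely illustrative:

```javascript
// Totals out of 60 for the four delivery modes (see the results table).
const totals = { A: 59, B: 57, C: 54, D: 53 };

// Normalize a /60 total to a /10 score, rounded to one decimal place.
const normalize = (total) => Math.round((total / 60) * 100) / 10;

for (const [candidate, total] of Object.entries(totals)) {
  console.log(candidate, normalize(total));
}
// A 9.8, B 9.5, C 9, D 8.8
```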

And the top line is straightforward:

In this React comparison, AgentHub MCP finished first: 59/60, 9.8, rank #1.

This was not a toy prompt, and the difference was not cosmetic. The best answer was better because it:

  • separated urgent input updates from expensive list work more cleanly
  • diagnosed the identity bug behind the selection issue
  • paired useDeferredValue, useMemo, memo, and stable props in a more coherent way
  • delivered more complete, production-sane code
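The "stable props" piece of that pairing is the easiest to see outside React. `memo` only skips a re-render when every prop is referentially equal (via `Object.is`) to its previous value, so props rebuilt inline on each render defeat it. A plain-JavaScript sketch, where `render` merely stands in for a component body:

```javascript
// Each "render" rebuilds inline props: a fresh function and a fresh object.
const render = () => ({
  onSelect: () => {},
  style: { padding: 4 },
});

const first = render();
const second = render();

// New identities every render, so a memoized child would re-render anyway.
console.log(Object.is(first.onSelect, second.onSelect)); // false
console.log(Object.is(first.style, second.style)); // false

// Hoisting (or useCallback/useMemo inside a component) keeps identity stable.
const stableOnSelect = () => {};
console.log(Object.is(stableOnSelect, stableOnSelect)); // true
```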

What Actually Improved

The most important result is not the number itself. It is what changed in practice.

The MCP-backed answer was strongest where real engineering answers usually break:

  • It diagnosed the problem as three separate issues, not one vague “performance problem.”
  • It clearly distinguished urgent input updates from non-urgent derived rendering work.
  • It treated the selection bug as an identity problem, not just a rendering glitch.
  • It explained why useDeferredValue was the right first move here, and why useTransition was not the right tool for the controlled input itself.
  • It produced the most complete implementation without bloating into a generic performance essay.

That is what better context delivery should do: not make the answer longer, make it sharper.


Why MCP Likely Helped

We want to be careful here: this is a strong signal, not a universal law.

But the result fits a pattern we care about deeply at AgentHub:

the strongest outcome did not come from the largest volume of raw context. It came from the best-delivered context.

In this run:

  • MCP beat direct file access
  • direct file access beat no pack
  • inline pack came in last

That last point is especially interesting. A lot of people assume that pasting more context directly into the prompt is the safest option. In this case, it was not. The inline-pack answer was still good, but it was the weakest of the four. The reviewer specifically called out unnecessary implementation noise and less disciplined code despite strong conceptual explanations.

That suggests something important:

Context delivery is not just plumbing. It is part of the product.

MCP appears to help because it gives the model a cleaner retrieval path:

  • less friction than raw file handling
  • less cognitive overload than a giant inline blob
  • better structure than “just answer from memory”

For developers building AI tooling, that is a useful takeaway.


Why This Matters for AgentHub

AgentHub is often described as a registry of expert packs. That is true, but it is incomplete.

The packs matter. The spec matters. The peer review matters. But the delivery mechanism matters too.

If the same React pack produces a better answer through MCP than through a raw inline prompt, then the tooling is part of the quality story. That is good news for AgentHub, because it means the project is not just curating better content. It is also creating a better path from content to output.

For teams building:

  • internal coding assistants
  • agent workbenches
  • AI-native IDE experiences
  • API support copilots

the practical lesson is simple:

invest in context delivery, not just context collection.

Better answers do not come only from “having documentation.” They come from giving models the right documentation in the right shape at the right moment.


What We Are Not Claiming

A result like this is exciting, but we want to stay disciplined about it.

This post does not claim:

  • MCP is always the best delivery mode for every task
  • that one run is enough to prove a universal rule
  • inline context is always bad
  • React performance work can be reduced to one canned pattern

What we are saying is narrower:

  • this was a realistic engineering task
  • the task was held constant
  • the delivery mode changed
  • AgentHub MCP produced the best result in this comparison

That is a meaningful product signal.


The Big Takeaway

This React benchmark reinforced something we believe strongly:

Better context delivery produces better engineering work.

In this test, AgentHub MCP did not just look cleaner architecturally. It produced the top-ranked answer on a real React problem, outperforming direct-file, no-pack, and inline-pack alternatives.

That is the kind of result we care about at AgentHub: not abstract elegance, not benchmark theater, but measurable improvement on work developers actually do.

If you are building agent tooling, coding copilots, or internal AI systems, you should treat context delivery as a first-class product decision.

If you want a concrete place to start, AgentHub now gives you both:

  • the expert packs
  • the MCP tooling to deliver them well

The strongest result did not come from more raw context. It came from better-delivered context.

First Steps: Contributing a New Agent to AgentHub

· 4 min read

AgentHub thrives on new contributors and fresh perspectives. Whether you want to add support for a new API, share hard-won best practices for a tool you love, or help others avoid common pitfalls, your contribution makes the community—and every LLM agent—stronger.

This guide will walk you through your first AgentHub contribution, with clear steps and helpful advice at every stage. (No screenshots, just actionable info!)


1. Fork and Clone the Repository

Start by forking the AgentHub repo to your own GitHub account. Clone your fork locally so you can work with files and version control.

git clone https://github.com/<your-username>/agent-hub.git
cd agent-hub

2. Pick a Tool or API

Look through the /agents/ directory. If the tool or API you want to cover isn’t there, you’re in the right place! You can open an issue to “claim” your agent idea, but this is optional.


3. Study the Spec and Example Agents

  • Open Agent Spec: Read /spec/open-agent-spec-v0.3.0.yaml (or check the latest on GitHub) to understand the required fields and structure.
  • Review Examples: Look at existing agents in /agents/<tool>/—they set the bar for clarity, depth, and reasoning.

4. Write Your Agent Markdown File

  • Create a new folder in /agents/ named after your tool, e.g.,

    agents/my-awesome-api/
  • Add your Markdown file, following the spec (name it after the version, e.g., 0.3.0.md).

  • The Markdown file should use YAML frontmatter for the meta block and then capture:

    • Best practices
    • Pitfalls and “gotchas”
    • Key patterns, example requests, and expert logic
    • Anything an LLM would need to reliably use the tool

5. Add DESIGN_NOTES.md

Every agent comes with a short DESIGN_NOTES.md. Here’s a template to get you started:

# DESIGN NOTES: <Tool Name> Agent
* **Goal / Scope:** What problem does this agent solve?
* **Key Prompts & Reasoning:** Outline major prompt snippets and why they work.
* **Edge‑case Handling:** How does the agent avoid common pitfalls?
* **References:** Docs, blog posts, or code samples you consulted.

This is where you explain why you made certain choices and how future maintainers or contributors should think about evolving the agent.


6. Open a Pull Request (PR)

  • Commit your changes and push to your fork.

  • Go to the main AgentHub repo and open a Pull Request from your branch.

  • In the PR description:

    • Link to any related issues (if you opened one)
    • Paste a sample “agent in action” prompt if you have one
    • Mention anything you’d like feedback on

You can open a draft PR if you’re still working—early feedback is encouraged!


7. Collaborate and Iterate

  • The AgentHub team aims for a review turnaround of ≤12 hours during launch week, and <24 hours after that.
  • Feedback is friendly, specific, and transparent.
  • Expect suggestions, discussion, and (often) a bit of collaborative iteration before merging.

8. Celebrate Your Contribution!

  • Once your PR is merged, add yourself to the all-contributors list—you’re part of AgentHub history.
  • You may be invited as a code owner for your agent or tool area as you continue to contribute.
  • Every agent helps shape the quality and reach of the entire ecosystem.

Quick Tips for First-Time Contributors

  • Iterate openly: Your first draft doesn’t have to be perfect—collaboration is the heart of AgentHub.
  • Be transparent: Document your reasoning and references in DESIGN_NOTES.md.
  • Ask questions: Open a GitHub Discussion, join Discord, or comment in your PR. The community is here to help.
  • Be respectful: See the Code of Conduct for our commitment to a welcoming environment.

Ready to Start?

  • Fork the repo
  • Write or adapt an agent
  • Open your first PR
  • Join the conversation

Happy agent-crafting! 🌱


Questions or feedback? Visit GitHub Discussions or join the next office hours!



Building Better LLM Agents: Why Open Specs and Community Matter

· 4 min read

The Challenge: LLM Agents Are Only as Good as Their Foundations

The AI world is obsessed with what LLMs can do: generate code, automate workflows, interact with APIs, even reason through complex problems. But under the hood, much of today’s “agentic AI” is powered by ad hoc glue: bespoke prompts, untracked agent specs, wikis, and tribal knowledge lost in Slack threads.

The result?

  • LLM agents that repeat old mistakes or miss best practices
  • Slow onboarding for new team members
  • “Secret sauce” that never scales, because it’s never written down

If you’ve ever debugged code your own LLM wrote, you know: the difference between a “good enough” agent and a great one comes down to two things—knowledge and context.


The AgentHub Philosophy: Knowledge as a Shared Asset

AgentHub was born from a simple, radical question: What if every LLM agent had access to the community’s best thinking, right from the start?

Instead of every developer, ops team, or AI startup reinventing the wheel—or worse, repeating each other’s bugs—we believe in a different model:

  • Open Specification: Anyone can contribute, review, and improve the definition of what an “agent” is and should be.
  • Peer Review: Every agent file is curated, discussed, and refined out in the open. Design notes, rationale, and lessons-learned are part of the artifact—not locked in someone’s brain.
  • Shared Ownership: Maintainers guide, but contributors co-own the future. You don’t need to be a “core dev” to make a big impact.

This isn’t theory: it’s how AgentHub runs every day.


Why Open Specs Win: Beyond Hype and Vendor Lock-In

In the gold rush to build AI agents, it’s tempting to reach for the newest CLI, or trust a vendor’s proprietary “magic prompt.” But this path has limits:

  • Opaque configs—You can’t audit, extend, or even really understand them.
  • Vendor lock-in—Your agents are married to one company’s interface or language model.
  • One-off hacks—Each team solves the same problems differently, and none of it compounds.

Open specs—like AgentHub’s Open Agent Spec v0.1—break this cycle:

  • Any LLM can read or use the agent file.
  • Best practices become explicit, portable, and testable.
  • A new contributor can learn, adapt, and improve—no more “lost context.”

How Community Elevates Every LLM Session

Imagine a world where every API, SDK, or tool you use comes with a living agent file:

  • The agent encodes the community’s latest best practices, warnings, and workarounds.
  • When you paste it into your LLM session, you instantly benefit from months of accumulated knowledge—not just boilerplate.
  • As the community learns (from PRs, bug reports, or real-world failures), the agent file evolves. Your workflow gets better without you having to “catch up.”

This is the power of shared, open context. It’s how the open-source movement transformed infrastructure—and it’s how AgentHub aims to transform LLM agent development.


Building In Public: Mistakes, Debates, and Transparency

AgentHub isn’t just a registry; it’s a conversation. Every agent PR is visible. Every debate, edge-case, or design “gotcha” is discussed in the open. Success is measured not by how many agent specs are published, but by how many contributors return, iterate, and grow together.

And with a Code of Conduct and peer mentorship at its core, we’re building a space where new ideas—and new contributors—are always welcome.


The Future: Open, Reliable, and Human-Readable Agents

The AI world moves fast, but robust LLM agents require careful, open collaboration. With AgentHub, we’re setting a new bar:

  • Agents are auditable and portable
  • Best practices are codified, not lost
  • Every developer, ops team, and API maintainer can contribute

And as new frameworks, models, and workflows emerge, AgentHub is built to supplement and standardize—not compete or fragment.


Join Us

The next wave of AI automation is only as strong as the knowledge we share.

  • Browse AgentHub
  • Copy an agent file for your next LLM project
  • Suggest an improvement or contribute your expertise

Let’s build a smarter, more reliable future for LLM agents—together, and out in the open. 🌱


Want to get involved, ask questions, or start a debate? Hop into GitHub Discussions or join us for our next live office hours!

How to Use an AgentHub Agent File With Your Local LLM Stack

· 4 min read

Unlock expert-level LLM coding sessions in three simple steps.


If you’re exploring large language models (LLMs) for software development, you know the promise: write, test, and refactor code in seconds. But getting great results—especially with real-world frameworks and APIs—can be hit or miss.

AgentHub changes that. With a curated agent file, your LLM is seeded with community wisdom, not just surface-level docs. This tutorial will show you the absolute fastest way to upgrade your LLM coding experience—using the React agent—with any local or hosted LLM setup.


What You’ll Need

  • An LLM environment: This could be OpenAI ChatGPT, Claude, LM Studio, Ollama, or any local stack that supports “context” or “system prompt” injection.
  • The AgentHub registry: Browse the latest agent files here.
  • Your development goal: What do you want your LLM to do (e.g., write a React component, interact with a REST API, etc.)?

Step 1: Find the Right Agent File

AgentHub is organized by tool and API. For React, simply go to:

https://github.com/FIL-Builders/agent-hub/tree/main/agents/react

You’ll see one or more versioned .md files.


Step 2: Copy the Agent File’s Contents

Click into the latest versioned spec file, such as 0.3.0.md. Copy the entire Markdown contents to your clipboard.

Tip: The agent file is readable—skim it for gotchas, expert tips, and important context!


Step 3: Paste the Agent Into Your LLM’s Context

Every LLM interface is different, but the workflow is the same:

  • ChatGPT, Claude, or Web UI:

    • Paste the agent Markdown spec as a “system prompt” or at the top of your first message.
    • Then, start your coding conversation as usual.
  • Ollama, LM Studio, or LocalAI:

    • Use the context injection or system prompt feature in your tool (check docs).
    • Paste the Markdown agent spec before any user instructions.
  • Custom LLM Agents:

    • If you’re building with frameworks like OpenDevin, Agent-LLM, or Autogen, load the agent Markdown spec into your prompt builder or context window before running code tasks.

Example:

SYSTEM:
(paste agenthub/agents/react/0.3.0.md here)

USER:
Write a React component for a login form with validation and a “forgot password” link.

What Happens Next?

The LLM will “see” and internalize the expert strategies, common mistakes, and recommended patterns encoded in the agent file.

  • Generated code will be more robust and idiomatic.
  • Edge cases and pitfalls (like prop handling, hooks misuse, or state leaks) will be avoided.
  • Testing, refactoring, and doc generation will follow best practices by default.

You just gave your LLM a senior developer’s guidance—for free.


Can I Do More?

Yes!

  • Stack agent files: You can paste in several (for example, React and Redux, or GitHub API and Jest testing).
  • Edit/extend: Tailor the Markdown spec to your project quirks—add extra “dos and don’ts” as your team learns.
  • Contribute back: If you improve an agent file, open a Pull Request so the community benefits.

FAQ

Q: Do I need a plugin or parser to use AgentHub files?
A: No. Any LLM that can read context/system prompts can use AgentHub files directly.

Q: Is this only for coding?
A: No! You can use agent files for API calls, workflow orchestration, documentation, and more—any task where expertise helps.

Q: How do I know the file is up to date?
A: Each agent file is peer-reviewed and versioned. Check the agents directory for updates.


Welcome to AgentHub—where every session is a step closer to expert-level LLMs. Happy agent-crafting! 🌱


Want more guides or a video walkthrough? Let us know in GitHub Discussions—we’re just getting started!

Introducing AgentHub: The Open Registry for LLM Agents

· 5 min read

Build better LLM agents—together, in the open.


The LLM revolution is upon us. From local development with open-source models to sophisticated agent frameworks, the energy around language model–powered software is undeniable. But as any builder knows, there’s a gap between “it works” and “it works well.” Today, we’re excited to introduce AgentHub: the open registry and living specification for API-savvy LLM agents, designed to close that gap—one thoughtful file at a time.


Why AgentHub?

As the LLM agent ecosystem explodes, developers face a new kind of fragmentation:

  • Every tool or API integration has its own quirks, best practices, and subtle pitfalls.
  • Knowledge is siloed in random blog posts, internal wikis, or brittle prompt files.
  • There’s no open, community-reviewed standard for codifying and sharing agentic expertise.

AgentHub exists to solve these problems, with a simple idea: What if every LLM-powered project came with a peer-reviewed “agent file”—a concise guide for LLMs, written by experts, that encodes real-world know-how, design logic, and hard-won lessons?


What is an Agent File?

An AgentHub agent file is a lightweight, human-readable Markdown file with structured frontmatter—curated by community experts for each API, SDK, or developer tool. Think of it as a starter kit for your LLM:

  • It teaches the LLM best practices, common pitfalls, and the “gotchas” only experts know.
  • It seeds your LLM session so that every code generation, test, or integration is sharper, safer, and more reliable.
  • It’s not a prompt, nor a config or executable; it’s a data file—portable, inspectable, and easily loaded at the start of any LLM workflow.

Without an agent file, your LLM can use a tool. With an agent file, your LLM can master it.


How Does It Work?

Using AgentHub is as simple as:

  1. Browse the AgentHub registry for the API or SDK you want to use.
  2. Copy the relevant agent Markdown file.
  3. Paste it into your LLM session, or supply it as initial context—however your tool or workflow allows.

Suddenly, your LLM is working from a base of hard-earned expertise, not just guesswork.


What Makes AgentHub Different?

AgentHub isn’t a black-box CLI or proprietary SaaS. It’s built on a few core ideas:

  • Open Specification: The Open Agent Spec v0.1 is readable by any LLM stack—no vendor lock-in, ever.
  • Peer-Reviewed Agents: Every agent file is community-curated, with transparent design notes, rationale, and lively discussion in PRs. We publish quality over quantity.
  • Built in Public: All design, review, and learning happens out in the open. Mistakes are lessons, not secrets.
  • Supplement, Don’t Replace: AgentHub is meant to supplement and standardize—not compete with—your favorite frameworks (OpenDevin, Agent-LLM, GPT Engineer, etc). Use our agent files anywhere an LLM can read context.

Who’s It For?

  • Developers building agentic tools
  • LLM infra and ops teams
  • API/SDK authors who want to enable the next wave of AI-powered integrations
  • Anyone running LLMs locally or in production who wants more reliable, transparent agent behavior

If you care about best practices, clear reasoning, and community-driven knowledge—you’re in the right place.


A Concrete Example: Using the React Agent

Say you’re building a local LLM tool that generates React code. Normally, your model might hallucinate imports, miss idioms, or fumble hooks.

With AgentHub:

  1. Browse the registry and grab the latest React agent Markdown file.
  2. Paste it into your LLM’s initial context—no extra plugins, no special parser.
  3. The LLM now understands expert React practices, edge cases, and anti-patterns—making your generated code cleaner and your iteration loop faster.

You’re no longer starting from zero; you’re starting from community wisdom.


Get Involved

AgentHub is in its earliest days, and we want your help shaping it:

  • Explore the founding agent files and read the spec.
  • Suggest or author a new agent for your favorite API or tool.
  • Join the conversation: every PR and issue is a chance to teach and learn.

First-time contributors are celebrated—add yourself to the all-contributors grid!


Building Together, One Thoughtful File at a Time

AgentHub is about more than a file format—it’s about making LLM development more robust, transparent, and welcoming for everyone. We believe the future of agentic AI is open, community-driven, and practical.

Ready to help build it? Browse the registry, grab an agent file, and craft something better. Happy agent-crafting! 🌱


Want more? We’re just getting started. Tutorials, deeper dives, and example walkthroughs are coming soon.


AgentHub operates under a Code of Conduct to ensure a welcoming, harassment-free environment for everyone.


Feedback, questions, or ideas? Jump into GitHub Discussions or join our next office hours!

AgentHub: A Manifesto for an Open, Collaborative Launch

· 5 min read

1. Why We Pivoted — From “Big Bang” to Community Seed

AgentHub began with an ambitious idea: “Ship 100 pre‑built agents and wow the internet.” On paper it sounded impressive—but it also risked two problems we care deeply about:

  1. Authenticity – Mass‑generated assets feel like marketing, not craft.
  2. Trust – A surprise drop positions outside developers as spectators, not partners.

So we re‑plotted our course. Instead of chasing volume and splashy numbers, we chose quality, transparency, and conversation as our north stars. We will earn the community’s confidence one meticulously written agent file at a time and invite developers to build the registry with us, not after us.


2. What AgentHub Is (and Intentionally Is Not)

| We Are | We Aren’t |
|---|---|
| An open specification (Open Agent Spec v0.1) that any LLM stack can read. | A closed vendor format or one more proprietary “prompt framework.” |
| A curated registry of Markdown agent files with structured frontmatter—each a distilled, peer‑reviewed guide to using a specific API or SDK. | A monolithic CLI or SDK. Developers copy‑paste—or parse—the specs any way they like. |
| A conversation starter: every agent PR includes design notes, context, and room for debate. | A one‑click magic box. We value understanding over black‑box convenience. |

In short: AgentHub is the schema + library layer of the AI toolchain, intentionally lightweight so it can slot into any workflow—today or tomorrow.


3. Our Three‑Phase Launch Story

Phase 1 – Building a Solid Foundation (Weeks 1‑5)

Goal: Ship something we can be proud to show the world.

  • Open Agent Spec v0.1: public RFC, three external reviews minimum, v0.1 tag locked.

  • Governance Starter Kit: clear rules (CODE_OF_CONDUCT, CONTRIBUTING, MAINTAINERS, CODEOWNERS) so newcomers know the handshake.

  • Founding Agents (10–15): hand‑crafted examples for React, Stripe, Postgres, etc., each with .md, passing CI, and a candid DESIGN_NOTES.md.

  • CI / Lint / Smoke Tests: a < 90 s GitHub Action that validates spec compliance and proves each agent runs inside LangChain, OpenAI Assistants, and LlamaIndex.

Why it matters: We won’t invite guests into a half‑built house. This is our quality guarantee.


Phase 2 – Preparing to Welcome the World (Weeks 4‑7)

Goal: Turn a solid repo into a hospitable community space.

  • Docs & Cookbook Site: auto‑deployed static docs with copy‑paste recipes for three frameworks.

  • Live Contributor Channels: GitHub Discussions, a low‑noise Discord, and an “office‑hours” calendar.

  • Maintainer Collaboration: personal invitations to upstream project owners; adding them to CODEOWNERS turns them from bystanders into co‑stewards.

  • Recognition & Ops: all‑contributors bot, issue templates, and a 24‑h PR triage rota for launch week.

Why it matters: Great docs plus fast, friendly reviews are the two strongest signals that say “Yes, your contribution belongs here.”


Phase 3 – Our Community Launch (Week 8 and beyond)

Goal: Go public with a focus on collaboration and transparency, not vanity fireworks.

  • Launch Blog & Demo: a narrative of why we built AgentHub this way, a 2‑minute live coding video, and a promise: “First PR review in under a day.”

  • Live Stream: we’ll build an agent file on air—mistakes allowed, questions encouraged.

  • Hacker News & Product Hunt Posts: the message is “Help us write the missing agent for your favourite tool.”

What success looks like: The comment threads become pull‑requests, not flame‑wars. Within 72 hours the first external agent is merged; within two weeks we see repeat contributors.


4. How We’ll Know We’re Winning

| Health Signal | Target | Why We Care |
|---|---|---|
| Community PRs merged (first 72 h) | ≥ 3 | Shows real interest, not drive‑by stars. |
| PR review turnaround (launch week) | ≤ 12 h | Responsiveness is the first impression. |
| Repeat contributors (first 2 weeks) | ≥ 5 | Indicates a welcoming process and meaningful work. |
| Public projects tagged #BuiltWithAgentHub | ≥ 5 in first 14 days | Proof that the registry is useful, not just “neat.” |
| Qualitative sentiment | > 70 % positive | Healthy discourse beats raw star counts. |

Vanity metrics like GitHub stars are nice but secondary; our true KPI is engaged, returning collaborators.


5. Where You Fit In

  • Spec hawks – Help bullet‑proof v0.1; find the edge cases.
  • Agent artisans – Pair‑review founding agents or draft one yourself.
  • Doc whisperers – Turn terse agent specs into crystal‑clear cookbook pages.
  • Community champions – Host office hours, triage issues, keep the tone generous.

If something feels muddy or misaligned with our collaboration • transparency • quality ethos—raise it. This manifesto is a living document.


6. Timeline Snapshot

| Week | Milestone |
|---|---|
| 1‑2 | Spec v0.1 & governance docs merged |
| 2‑5 | 10–15 founding agents + CI pipeline green |
| 4‑5 | Docs site live with first cookbook recipes |
| 5‑7 | Maintainer collaborations & contributor channels open |
| 8 | Public launch (blog, video, live stream, HN/PH) |

Let’s Build the Registry We Would Want to Contribute To

AgentHub isn’t a product dropped from a mountaintop—it’s a conversation starter. With this plan, we’re inviting the developer community to sit at the table from day one.

Quality over quantity. Transparency over hype. Collaboration over control. Stick to those values and success will follow—measured not just in stars, but in shared ownership and the momentum of builders who choose to stay.

Onward, together.