manifesto v0.2.0

civ.build

A public knowledge contract where serious pages are written once, rendered for humans, and exposed to agents through explicit compact and full retrieval layers.

Compact Summary

civ.build is not a generic AI site. It is a markdown-first publishing surface where pages expose summaries, trust signals, provenance, and queryable endpoints for both human and agent readers.

The current web mostly assumes that readers are people staring at a screen. Autonomous agents can still use it, but they have to scrape layout, infer structure, and waste context on presentation noise.

civ.build is a bet on the next layer: public pages should be readable by humans and legible to agents without guesswork.

Why This Exists

This project is not trying to add random AI features to a normal site. The goal is narrower and stronger:

  • write serious pages with real ideas
  • expose them through explicit contracts
  • make freshness, confidence, and boundaries visible
  • let agents read the shortest useful layer first

That is why the site is markdown-first and why the API is built around the same content source as the human surface.

What The Contract Includes

Each serious page should, over time, provide:

  • a one-paragraph summary
  • a compact machine-friendly summary
  • key claims
  • a section map
  • updated and verified timestamps
  • confidence and intended-use boundaries
  • sources and change summaries
  • compact, full, meta, raw, search, and version routes

The point is not to sound futuristic. The point is to reduce ambiguity for both humans and agents.
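As a concrete illustration, the contract above could be carried in a single machine-readable object per page. Everything below is a sketch: the field names, route paths, and timestamps are assumptions for illustration, not the actual civ.build schema.

```python
# Illustrative sketch of the per-page contract described above.
# All field names, route paths, and values are assumptions.
page_contract = {
    "summary": "One-paragraph human summary of the page.",
    "compact_summary": "Shortest useful layer for agent readers.",
    "key_claims": ["Pages should be readable by humans and legible to agents."],
    "section_map": ["Why This Exists", "What The Contract Includes"],
    "updated": "2026-02-01T00:00:00Z",
    "verified": "2026-02-01T00:00:00Z",
    "confidence": "draft",
    "intended_use": "Background reading; not an implementation spec.",
    "sources": [],
    "change_summary": "Initial version.",
    "routes": {
        "compact": "/manifesto.compact.json",
        "full": "/manifesto.full.json",
        "meta": "/manifesto.meta.json",
        "raw": "/manifesto.md",
        "search": "/search",
        "versions": "/manifesto/versions",
    },
}

# Every retrieval layer named in the contract is present and explicit.
expected = {"compact", "full", "meta", "raw", "search", "versions"}
assert expected == set(page_contract["routes"])
```

The value of a shape like this is that an agent never has to guess: freshness, confidence, and boundaries are fields, not inferences from layout.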

The Longer Vision

The fuller long-term version of this is almost a forked internet for agents: APIs, structured data, markdown content, payment protocols, and execution environments. An agent would not need a browser; it would want a JSON payload with price, availability, a payment endpoint, and the content itself.
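A minimal sketch of that payload, assuming hypothetical field names; nothing here is an actual civ.build format, and the payment endpoint is deliberately empty because that layer does not exist yet.

```python
# Hypothetical agent-facing payload: price, availability, a payment
# endpoint, and the content itself. All field names are assumptions.
offer = {
    "id": "manifesto-v0.2.0",
    "price": {"amount": "0.00", "currency": "USD"},
    "availability": "public",
    "payment_endpoint": None,  # would point at a charge URL once commerce exists
    "content": {
        "format": "markdown",
        "compact_url": "/manifesto.compact.json",
        "full_url": "/manifesto.md",
    },
}

# An agent can make a read-or-pay decision without parsing any HTML.
needs_payment = (
    offer["payment_endpoint"] is not None and offer["price"]["amount"] != "0.00"
)
assert not needs_payment
```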

Payment rails are part of this picture. Coinbase 402, agentic wallets, Stripe's agent-facing commerce direction, and universal commerce protocols all point at the same possibility: bots are not just readers, they may become purchasers. That means a site like this is not only about content discoverability; it may eventually need pricing, entitlement, and payment logic that works for non-human clients.

That layer is not built yet, and intentionally so. The first job is to prove the thesis with real content and explicit contracts. Commerce comes after trust.

Build Direction

The next layer of work is straightforward:

  1. Publish serious content instead of placeholder pages.
  2. Expose discovery files and compact retrieval.
  3. Add versions, feedback, and usage signals so the surface can improve.
  4. Explore agent-facing payment and commerce once trust and content are solid.
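The "shortest useful layer first" idea behind step 2 can be sketched as a simple escalation order. The route names below are the same hypothetical ones used throughout this page, not a fixed API.

```python
def escalation_order(routes: dict) -> list:
    """Return routes shortest-layer-first: an agent reads the first
    entry and only escalates when it needs more detail."""
    preferred = ("compact", "meta", "full", "raw")  # illustrative ordering
    return [routes[name] for name in preferred if name in routes]

routes = {"compact": "/page.compact.json", "full": "/page.full.json"}
print(escalation_order(routes))  # compact first, full only if needed
```

The design point is that the ordering lives on the client side: the site's only job is to publish each layer at a stable, discoverable route.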

The result should feel less like "a website with JSON" and more like a trustworthy knowledge interface that agents can discover, route through, and use correctly.

Sources And Provenance

No explicit sources listed yet.

Change Summary

Added the forked-internet-for-agents framing and agent economy direction alongside the public knowledge contract thesis.

Content Hash

bda5b9f51aee6aec1dff9546045abccea7cf2ca78b3c66fee66bcceec51c06d4