Episode #46: AI Agents Build Compilers, SVG-First Apps, Markdown for Agents, and Why Code Is Now Cheap
An SVG Is All You Need: Self‑Contained, Durable, and Interactive
There’s a fun idea floating around web dev: what if an SVG file was all you needed to build an app? Imagine a single SVG that’s not just an image, but also your UI description, layout, and even some behavior, all in one portable, declarative document. Because SVG is just XML, it’s inherently structured, scriptable, and easy for tools or agents to read and transform, unlike a random canvas blob or screenshot. In a world of increasingly complex web stacks, this points toward a radically minimal counter-vision: apps where your “frontend” is basically a smart SVG that can be rendered anywhere, styled, versioned, or even edited visually like a design file.
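To make the “it’s just XML” point concrete, here’s a tiny hedged sketch in Python. The SVG snippet and the “accent” id are made up for illustration, but the mechanics are the interesting part: any script or agent can parse the document with a standard XML library and re-theme it the way you’d patch a config file.

```python
# A minimal sketch of the "SVG as app" idea: because the file is plain XML,
# any tool or agent can inspect and rewrite it with a standard parser.
# The SVG content and the "accent" id are invented for illustration.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)  # keep clean tag names on output

doc = ET.fromstring(f"""
<svg xmlns="{SVG_NS}" viewBox="0 0 200 100">
  <rect id="accent" x="10" y="10" width="180" height="80" fill="tomato"/>
  <text x="100" y="55" text-anchor="middle">Hello</text>
</svg>
""")

# "Re-theme" the app by editing attributes, just like patching a config file.
for el in doc.iter(f"{{{SVG_NS}}}rect"):
    if el.get("id") == "accent":
        el.set("fill", "steelblue")

print(ET.tostring(doc, encoding="unicode"))
```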
Code Is Still the Best Abstraction
No-code and low-code tools are great for quick wins and for empowering non-engineers, but they lock business logic into proprietary UIs, get messy as pipelines grow, make testing and versioning hard, and end up costly. The real trend is toward smaller, integrated data stacks orchestrated by code-first, declarative tools like Dagster, where your automation lives in open YAML or Python instead of a vendor’s black box.
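For a feel of what “automation as code” looks like, here’s a rough Dagster-style sketch in Python. The asset names and the toy transformation are invented, and a real pipeline would read from an actual source, but the point stands: the dependency graph lives in version-controlled code, not in a vendor UI.

```python
# A rough sketch of pipeline logic as code, in the Dagster asset style.
# Asset names and the toy transformation are invented for illustration;
# assumes `pip install dagster` and a real data source in practice.
from dagster import asset, Definitions

@asset
def raw_orders() -> list[dict]:
    # In a real pipeline this would pull from a warehouse or an API.
    return [{"order_id": 1, "amount": 42.0}, {"order_id": 2, "amount": 13.5}]

@asset
def daily_revenue(raw_orders: list[dict]) -> float:
    # Dagster wires the dependency from the parameter name, so lineage
    # is explicit, testable, and versioned alongside the rest of the code.
    return sum(order["amount"] for order in raw_orders)

defs = Definitions(assets=[raw_orders, daily_revenue])
```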
Introducing Markdown for Agents
Cloudflare just launched something called “Markdown for Agents,” and it’s basically SEO for AI agents instead of just humans. The big shift is that traffic is moving from classic search engines to AI crawlers and agents, which don’t want bloated HTML—they want clean, structured text they can parse cheaply and reliably. Feeding an AI the full HTML of a page can use 5x more tokens than the same content in markdown, which directly translates into higher cost and slower pipelines for any agent reading the web. Cloudflare’s play is simple: if an agent sends an Accept: text/markdown header, their network will fetch your normal HTML, convert it to markdown on the fly, and return that instead, along with a header that even tells you how many tokens the markdown roughly uses. This is already being used by popular coding agents like Claude Code and OpenCode, and it works today on Cloudflare’s own docs and blog. On top of that, every markdown response ships with “Content Signals” headers like ai-train=yes, search=yes, ai-input=yes, so you can explicitly say “this content is allowed for AI training, search and agents,” with more granular policies coming later. For anyone building AI agents—or running a site that wants to be discoverable by them—the takeaway is clear: start treating agents as first-class visitors and make sure your content can be served as markdown, not just HTML.
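Here’s a hedged sketch of what that looks like from the agent’s side in Python. The URL is just an example of a Cloudflare-served page, and the exact metadata header names (token count, content signals) are whatever Cloudflare documents, so the code inspects the response headers rather than assuming specific keys.

```python
# Sketch of an agent asking for the markdown view of a page via content
# negotiation. Header names for token counts / content signals are not
# hard-coded here; inspect what actually comes back per Cloudflare's docs.
import requests

resp = requests.get(
    "https://developers.cloudflare.com/",  # example page; any supporting site works
    headers={"Accept": "text/markdown"},
)

print(resp.headers.get("Content-Type"))  # expect a markdown content type if supported
print(resp.text[:500])                   # clean markdown instead of bloated HTML

# Surface the extra metadata (approximate token count, content-signal
# directives like ai-train=yes); exact header names may vary.
for name, value in resp.headers.items():
    if "token" in name.lower() or "signal" in name.lower():
        print(name, "=", value)
```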
Building a C compiler with a team of parallel Claudes
Anthropic tested what happens if you treat AI like a full dev team instead of just autocomplete. They spun up 16 Claude agents in parallel and asked them to build a new C compiler in Rust that can compile big real‑world codebases like the Linux kernel, QEMU, FFmpeg, Postgres, and Redis. Each agent worked in its own container, grabbed tasks, edited code, ran tests, and pushed to a shared repo, with a strict CI and test harness making sure they didn’t break everything. The result: around 100k lines of compiler code that actually works on multiple architectures, but is still slower than GCC and not production‑ready yet. It still relies on existing tools for some low‑level pieces, but that’s not the point—the experiment shows that with good scaffolding, AI agents can coordinate and ship large, complex systems end‑to‑end. So the simple takeaway for developers: we’re moving from “AI helps you write functions” to “AI can own entire projects,” and our new job is increasingly about setting goals, designing guardrails, and reviewing what these agent teams produce.
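This isn’t Anthropic’s actual harness, but the coordination pattern is simple enough to sketch in Python: a shared task queue, workers that claim tasks and propose patches, and a test gate that decides what lands in the shared repo. Everything below (the task names, call_agent, tests_pass) is a stand-in for the real model calls and CI.

```python
# Toy sketch of the coordination loop described above, not the real system.
# Each "agent" claims a task, proposes a patch, and only green patches
# advance the shared state; failed work goes back on the queue.
import queue
import threading

tasks = queue.Queue()
for t in ["lexer", "parser", "codegen-x86", "codegen-arm"]:
    tasks.put(t)

repo_lock = threading.Lock()
merged: list[str] = []  # stands in for the shared git repo

def call_agent(task: str) -> str:
    # Placeholder for "ask a Claude instance to implement this piece".
    return f"patch implementing {task}"

def tests_pass(patch: str) -> bool:
    # Placeholder for the CI / test harness that keeps agents honest.
    return True

def agent_worker() -> None:
    while True:
        try:
            task = tasks.get_nowait()  # claim a task from the shared queue
        except queue.Empty:
            return
        patch = call_agent(task)
        if tests_pass(patch):
            with repo_lock:            # only green patches land in the repo
                merged.append(patch)
        else:
            tasks.put(task)            # rejected work goes back for another try
        tasks.task_done()

workers = [threading.Thread(target=agent_worker) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(merged)
```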
Code is cheap. Show me the talk
Linus Torvalds once said, “Talk is cheap. Show me the code,” but in 2026 that quote is basically upside down. With LLMs, an average developer can spin up 10,000 lines of decent, working code in hours instead of weeks, so raw code has become abundant, almost commoditized. The old signals we used to judge projects—beautiful READMEs, clean architecture, perfect comments—can now be one-shot AI output, so they no longer prove real effort or expertise. That means the real bottleneck has shifted: it’s not typing or syntax anymore, it’s our ability to think clearly, design systems, and read and evaluate mountains of code to separate solid engineering from AI slop. For seniors, this is a superpower: if you can imagine, articulate, and architect well, you can compress months of work into days by steering AI tools, then spend your energy on hard trade-offs, governance, and quality. But for juniors, there’s a risk of becoming dependent on the “genie,” shipping code you don’t understand and never building the fundamentals to judge what’s good or dangerous. So the big shift is this: in modern software development, good talk—clear thinking, problem framing, and critical reading of code—is now more valuable than good code itself, because code is cheap, but trustworthy judgment and accountability are not.