Navigating the Agentic Architecture

5:05 Lena: So we’ve got the foundation—the CLAUDE.md and the planning—but I want to dig into the actual "engine" here. You mentioned subagents earlier. How do they fit into this terminal workflow? Is it like Claude is hiring extra help?
5:20 Miles: That’s a great way to put it. Think of it as a "Delegation Layer." Your main Claude session is the project manager, but if you ask it to do something massive—like "Audit the entire security of my API"—it doesn't have to clog up its own brain with every single file it reads. Instead, it spawns a subagent.
5:38 Lena: And these subagents have their own "clean" memory, right? So they don't get confused by the conversation we were having about the UI colors.
5:46 Miles: Precisely. Isolation is the whole point. You might have an "Explore" subagent that’s specialized just for reading the codebase. It’s fast, it’s read-only, and it’s often running on a cheaper model like Haiku to save you money. It goes out, finds the relevant files, and then just hands a summary back to the "Boss" Claude.
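A subagent like the "Explore" one Miles describes is typically defined as a Markdown file with YAML frontmatter under `.claude/agents/`. This is a minimal sketch, not an official template—the tool list and prompt wording are illustrative:

```markdown
---
name: explore
description: Read-only codebase scout. Use proactively to find and summarize relevant files.
tools: Read, Grep, Glob
model: haiku
---

You are a read-only research agent. Locate the files relevant to the
task, skim them, and hand back a short summary with file paths.
Never edit anything.
```

Restricting `tools` to read-only operations and pinning `model` to a cheaper tier is what gives you the isolation and cost savings Miles mentions.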
6:04 Lena: I love that. It’s like having a specialist who does all the research and then gives you the "TL;DR." But I saw something about "Agent Teams" too. Is that different from subagents?
6:14 Miles: Oh, Agent Teams are the next level. They’re still in research preview, but the idea is that you have a lead agent coordinating multiple "teammates" who can actually talk to *each other*. They have a shared mailbox and a task list. So one teammate might be building the backend, another is building the frontend, and they’re messaging each other to make sure the API endpoints match up.
6:35 Lena: That sounds incredibly powerful, but also like it could get expensive fast. If each of those agents is a separate Claude instance, those tokens must be flying.
6:45 Miles: You’re not wrong. That’s why the "Max" plans exist. If you’re a power user doing this all day, you’re looking at these subscription tiers because the API costs would be astronomical. I mean, one developer tracked ten billion tokens over eight months! On a Max plan, that’s just a flat monthly fee, but at API rates, that would have been fifteen thousand dollars.
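Miles's figures imply a blended API rate of roughly $1.50 per million tokens. That rate is a back-of-the-envelope assumption used to reconcile his numbers, not Anthropic's published pricing:

```python
# Sanity-check the episode's figures: 10 billion tokens over 8 months
# at an assumed blended input/output rate of $1.50 per million tokens.
tokens = 10_000_000_000
rate_per_million = 1.50  # assumed blended rate, not official pricing
cost = tokens / 1_000_000 * rate_per_million
print(f"${cost:,.0f}")  # → $15,000
```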
7:06 Lena: Wow. That really puts the pricing into perspective. It’s basically the difference between "pay-as-you-go" and "all-you-can-eat" for your AI team. But even with a big team, the human still has to be the one calling the shots, right?
7:19 Miles: Absolutely. You’re the architect. And the tool gives you these high-level "Primitives" to work with. You’ve got Commands—which are basically saved prompts for stuff you do all the time—and then you’ve got Skills. Skills are fascinating because they’re not just "do this," they’re "think like this."
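A Command of the kind Miles mentions is just a Markdown file under `.claude/commands/`; its filename becomes the slash command, and `$ARGUMENTS` is replaced with whatever you type after it. A minimal sketch (the frontmatter and prompt wording here are illustrative):

```markdown
---
description: Focused security review of the given files
---

Review $ARGUMENTS for security issues: SQL injection, missing input
validation, auth bypasses, and leaked secrets. Report each finding
with a file path and line reference.
```

Saved as `.claude/commands/security-review.md`, this would be invoked as `/security-review src/api.ts`.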
7:37 Lena: Like a "Security Expert" skill?
7:41 Miles: Exactly. If you load a security skill, Claude doesn't just look for bugs; it adopts a specific reasoning pattern. It starts thinking about SQL injection, input validation, and auth bypasses. It’s like giving your AI team a specialized training manual for the day.
7:55 Lena: I’m starting to see why this is so much more than just a chatbot. It’s a programmable system. And it’s all happening in the terminal, which means it has access to your actual tools. I saw it can even run shell commands and manage git?
8:09 Miles: It can. It’ll run your tests, see that they failed, read the error message, and then try to fix the code to make the tests pass. It’s that autonomous loop. And with things like "MCP"—the Model Context Protocol—it can even reach outside your computer. It can query your database, check your GitHub issues, or even look at a Figma design and try to turn it into code.
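MCP servers can be registered per project in a `.mcp.json` file at the repo root. A minimal sketch, assuming a hosted HTTP server—treat the server name and URL as illustrative, not a recommendation:

```json
{
  "mcpServers": {
    "github": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp/"
    }
  }
}
```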
8:30 Lena: That Figma integration is wild. "Build this component to match the design," and it just... does it?
8:36 Miles: It’s getting there! It reads the actual schema of the design. But this is where we have to talk about "Trust-Then-Verify." Claude is amazing, but it can be very confident even when it’s guessing. You have to be the one reviewing every diff. The tool shows you exactly what it’s changing before it saves, and you need to keep that "product manager" hat on.
8:56 Lena: Right, because if you blindly accept everything, you might end up with a working app that’s a total security nightmare. I think I read that AI-generated code can have a higher rate of vulnerabilities if you aren't careful.
9:09 Miles: It can, about two-and-a-half times higher in some cases. Usually, it's simple things—forgetting to sanitize an input or accidentally exposing an API key. That’s why those "Hooks" are so important. You can set up a hook that says, "Every time Claude writes a file, run a security scan." It’s an automated guardrail.
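The guardrail Miles describes lives in hook configuration, for example in `.claude/settings.json`. A minimal sketch that runs a scanner after every file write—the `semgrep` invocation is an assumption, so swap in whatever scanner your team uses:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "semgrep scan --quiet --error ."
          }
        ]
      }
    ]
  }
}
```

The hook command receives details of the tool call (including the file that was written) as JSON on stdin, so a small wrapper script can scan just the changed file instead of the whole repo.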
9:29 Lena: So you’re building a system that’s not just fast, but also self-correcting. It feels like the ultimate "vibe" is one where you’re just steering the ship while the AI handles all the heavy lifting in the engine room.
9:42 Miles: That’s the dream. But to get there, you have to be willing to drop out of "vibe mode" when things get serious. For complex business logic or security-critical paths, you still want to be very explicit. Vibe coding is for the velocity; engineering is for the reliability.