4. Who's Accountable for the Algorithm

7:35 Lena: It's funny, Miles. Whenever we talk about accountability, everyone starts pointing at someone else. The engineers point at the data scientists, the data scientists point at the product managers, and the product managers point at legal.
7:50 Miles: It’s the classic "Spider-Man pointing at Spider-Man" meme! But in a responsible AI world, that doesn't fly. You need a RACI matrix—identifying who is Responsible, Accountable, Consulted, and Informed for every single move.
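To make the RACI idea concrete, here is a minimal sketch in Python. The lifecycle stages, role names, and assignments are illustrative assumptions, not a prescribed standard:

```python
# Hypothetical RACI matrix for an AI project lifecycle.
# Stages, roles, and assignments are illustrative; adapt to your own org chart.
RACI = {
    "data_collection": {"R": "Data Scientists", "A": "AI Governance Manager",
                        "C": "Legal",           "I": "AI Ethics Board"},
    "model_training":  {"R": "ML Engineers",    "A": "AI Governance Manager",
                        "C": "Data Scientists", "I": "Chief AI Ethics Officer"},
    "bias_evaluation": {"R": "Data Scientists", "A": "AI Ethics Board",
                        "C": "ML Engineers",    "I": "Board of Directors"},
    "deployment":      {"R": "ML Engineers",    "A": "Product Managers",
                        "C": "AI Ethics Board", "I": "Legal"},
}

def accountable_for(stage: str) -> str:
    """Return the single Accountable party for a given lifecycle stage."""
    return RACI[stage]["A"]

print(accountable_for("bias_evaluation"))  # -> AI Ethics Board
```

The useful property is that every stage has exactly one "A": when something goes sideways, there is no Spider-Man meme, just a name.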
8:05 Lena: I've seen those in corporate settings before, but how does that actually look for an AI project?
8:10 Miles: Well, the taxonomy of roles is getting much more specialized. At the top, you’ve got the Board of Directors and the Chief AI Ethics Officer. They’re setting the tone and the strategy. Then you have the AI Ethics Board—this is the cross-functional group that evaluates the high-risk decisions. They’re the ones who deliberate when principles conflict—like when transparency might compromise security.
8:32 Lena: Oh, I hadn't thought about that. Sometimes you can’t be 100% fair and 100% accurate at the same time, right?
8:39 Miles: Exactly! Those are the trade-offs that an Ethics Board has to navigate. It’s not always a clear-cut "right versus wrong." Sometimes it’s "right versus right." And then, down at the functional level, you have the AI Governance Manager and the ML Engineers who are actually embedding these practices into the development lifecycle.
8:58 Lena: So, it’s not just a "check-the-box" at the end. The engineer is thinking about bias while they’re actually writing the code.
9:05 Miles: Precisely. And the key is that every AI system needs a "named owner." Someone who is answerable if things go sideways. The U.S. Office of Management and Budget has even started mandating specific governance structures for federal agencies. It’s becoming a standard expectation.
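One lightweight way to enforce the "named owner" rule is to make ownership a required field wherever systems get registered. A hypothetical sketch (the ModelRecord schema is invented for illustration, not any agency's actual format):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    """Registry entry: a system cannot be registered without a named owner."""
    name: str
    owner: str        # the named individual answerable for this system
    risk_tier: str    # e.g. "high" triggers Ethics Board review
    registered: date = field(default_factory=date.today)

    def __post_init__(self):
        if not self.owner.strip():
            raise ValueError(f"Model {self.name!r} must have a named owner")

# Registration fails fast if accountability is left blank.
record = ModelRecord(name="loan-scoring-v2", owner="J. Rivera", risk_tier="high")
```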
9:20 Lena: What about the role of the "Human in the Loop"? Is that a specific job title, or is it more of a process?
9:26 Miles: It’s more of a process, but it requires specific authority. Take the "Robodebt" case in Australia—they had human operators, but the management prioritized speed over accuracy, so the humans just rubber-stamped the AI's mistakes. That’s not oversight; that’s just a facade.
9:42 Lena: That’s a chilling thought. So, for oversight to be real, the person has to actually have the power—and the time—to say "no" to the machine.
9:50 Miles: Exactly. They need the authority to override, repair, or even decommission a system if it’s causing harm. And they need "context persistence", a record of what the AI was thinking, so they aren’t just guessing why it made a certain choice.
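Here is a sketch of what "context persistence" plus override authority might look like in code. The DecisionRecord fields and the override flow are hypothetical, meant only to show that the human's "no" lands on the same record as the machine's reasoning:

```python
from dataclasses import dataclass, field
from typing import Optional
import json
import time

@dataclass
class DecisionRecord:
    """Context persistence: what the model saw, what it scored, and why."""
    case_id: str
    inputs: dict
    score: float
    top_factors: list                 # feature attributions driving the score
    timestamp: float = field(default_factory=time.time)
    override: Optional[str] = None    # populated when a human says "no"

def human_override(record: DecisionRecord, reviewer: str, reason: str) -> DecisionRecord:
    """A reviewer with real authority reverses the decision, on the record."""
    record.override = f"{reviewer}: {reason}"
    return record

rec = DecisionRecord("case-001", inputs={"income": 42000}, score=0.91,
                     top_factors=[("zip_code", 0.40), ("income", 0.30)])
rec = human_override(rec, "L. Ortiz", "zip_code dominates; possible proxy, re-review")
print(json.dumps(rec.__dict__, indent=2))
```

Because the record keeps the model's top factors alongside the override, the next reviewer is not guessing why either party, machine or human, made its call.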
10:03 Lena: It sounds like we’re building a whole new "org chart" just for AI. But I guess that’s necessary when the stakes are this high.
10:10 Miles: It really is. And once you have the people in place, you have to give them the tools to see what’s actually happening inside the models. You can’t govern what you can’t see.
10:19 Lena: Which brings us to the whole "black box" problem. If I’m on an ethics board, how am I supposed to understand what a complex neural network is doing?
10:28 Miles: That’s where transparency and explainability tools come in. Things like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) help "un-box" the decision. They show you which features, like age or income, were the biggest factors in a specific prediction.
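For a concrete feel, here is a minimal sketch of the SHAP workflow on a toy scikit-learn model. The data, feature names, and the planted "zip_code" effect are invented for illustration; LIME follows a similar explain-one-prediction pattern:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy data with three features, where "zip_code" secretly drives the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 2] + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X[:1])     # attributions for one prediction

feature_names = ["age", "income", "zip_code"]
for name, val in sorted(zip(feature_names, sv[0]), key=lambda p: -abs(p[1])):
    print(f"{name:>8}: {val:+.3f}")   # zip_code should dominate the list
```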
10:41 Lena: So, instead of a shrug, the AI gives you a chart that says, "I made this call because of these three things."
10:50 Miles: Right! And then the human can look at that and say, "Wait, why is 'zip code' such a huge factor here? Is that just a proxy for race?" That’s where the real governance happens. It’s the conversation between the data and the human values.
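That proxy question can even be automated as a first-pass screen before human review. A hypothetical sketch; the watchlist and threshold are invented for illustration, and the attributions are the kind produced by the SHAP example above:

```python
# First-pass screen: surface high-attribution features that sit on a
# watchlist of known proxy risks, so a human reviews them before sign-off.
PROXY_WATCHLIST = {"zip_code", "surname", "school_district"}

def flag_proxy_risks(attributions: dict, threshold: float = 0.2) -> list:
    """Return watchlisted features whose |attribution| exceeds the threshold."""
    return [name for name, val in attributions.items()
            if name in PROXY_WATCHLIST and abs(val) > threshold]

print(flag_proxy_risks({"age": 0.05, "income": 0.12, "zip_code": 0.41}))
# -> ['zip_code']  ... exactly the factor Lena's reviewer should question
```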