Transcript
Tracy Bannon: How many of you have watched the 1940s movie from Disney called Fantasia? It was classical music, hand-drawn animations, no voiceovers at all. I think there are three or four different stories. My favorite story is The Sorcerer's Apprentice. If you remember The Sorcerer's Apprentice, I say as I grab the magic wand, the sorcerer finishes his work for the day, puts down the hat, pushes the spell book aside, and leaves the room. Who does he leave behind? He leaves behind the apprentice. The apprentice spies that and gets very excited. He walks over, and remember, he has a lot of work to do. He has to fill a cistern with water. He's got other tasks that he needs to accomplish. Nobody can blame him for walking over and putting the hat on his head. Nobody can blame him. There's no denying that excitement. He's got something that he thinks is going to help him with what he needs to accomplish.
If you go back and watch that little blurb, YouTube it, study what happens with the apprentice. Yes, he enchants a broom and he teaches the broom to walk upstairs and downstairs, and get the water and bring it back. If you watch him, he's giddy. He's teaching this animation, this automation, to do work. Who can blame him? What's interesting about this is that after a while he's watching his creation. He's watching his automation, his animation. He sits back, starts to yawn, and then he falls asleep. He has these wonderful dreams of grandeur. He's commanding the seas. He's commanding the stars. Everything is going really well until he wakes up and he's floating. He's absolutely floating. He panics. What does he do? A little morbid. He takes an axe. He tries to chop the broom. He tries to stop it. Every splinter becomes another animated broom.
Now the chaos, now the issue is getting worse and worse. Then, what happens? The sorcerer comes in, takes the hat, calms the waters, fixes the situation. Nobody can really blame the apprentice for looking at that and saying, there's groundbreaking potential. It also had limitations and challenges. Does this sound familiar? I'm sure it does. I'm sure you're seeing this unbridled excitement at work. I see it. I see it with researchers that are doing cool stuff on their desktops. I see it with at-scale enterprises. I work with the government; you can imagine the size and shape of some of those organizations. They're just excited by this. It's not a bad thing. It's just a bit unbridled.
Background
My name is Tracy Bannon. I go by Trac. I'm a software architect. I'm a hands-on engineer. I'm also a researcher with a company called the MITRE Corporation. Every day I'm touching AI in some way, shape, or form. For today, we're talking about one kind of AI. Think about all of the AI that's been in our production environments for decades. We've been using AI in software engineering for decades. It's not brand new. What's new is what we're going to talk about: generative AI, language models. Same thing with agents. I'm sure there are a couple of people of a certain age who had agents crawling their infrastructure environments, tracking configuration management and changes. We're going to be talking about agents specifically that sit atop LLMs. I'm going to talk a little bit about Google's blog post on what an AI agent is, just to level set.
Really, where they went with this, the description is going from bot to assistant to agent. This is on their what-makes-an-agent page. What I like about it is it's very simple. We all have experienced that bot. Think of it in the SDLC. There is a failure that triggers a Jira ticket getting created. Maybe something gets posted to Slack. Pretty deterministic. It's not making any kinds of decisions. It's not very autonomous at that point. It has a trigger. If I look at the same thing from an assistant point of view, it might be watching the logs. Then it may go as far as to determine root cause analysis, RCA. Then it may suggest a change, might even make the change, but ultimately, it's waiting for the human to take ownership, to do the verification. When we get into agentics, the kinds of agents we're talking about, we're talking about those that can make decisions, that can act, that are given autonomy. AI and agent are overloaded terms. This is what we're going to talk about. Different behaviors. Different kinds of risks.
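To make the distinction concrete, here is a minimal, self-contained sketch of that same CI-failure example at each of the three levels. Every name and stubbed call here is a hypothetical illustration, not any vendor's API:

```python
def create_ticket(summary: str) -> None:
    print(f"[jira] created ticket: {summary}")      # stand-in for a real ticketing API

def ask_model(prompt: str) -> str:
    return f"(model output for: {prompt[:40]}...)"  # stand-in for a real LLM call

def bot(event: dict) -> None:
    """Bot: deterministic trigger -> action. No decisions, no autonomy."""
    create_ticket(f"Build failed: {event['job']}")

def assistant(event: dict) -> str:
    """Assistant: watches, diagnoses, suggests; a human still owns the change."""
    rca = ask_model(f"Root-cause these logs: {event['logs']}")
    fix = ask_model(f"Propose a patch for: {rca}")
    return f"AWAITING HUMAN APPROVAL: {fix}"        # nothing ships until a person verifies

def agent(event: dict) -> None:
    """Agent: plans and acts on its own, within whatever autonomy it was granted."""
    plan = ask_model(f"Plan remediation for: {event['logs']}")
    for step in plan.split(";"):                    # the agent chooses its own steps...
        print(f"[agent] executing: {step}")         # ...and executes them itself

bot({"job": "ci-build-42", "logs": "NullPointerException in OrderService"})
```

Running the same event through `bot`, `assistant`, or `agent` makes the jump in autonomy, and therefore in risk, easy to see.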
Why AI, Why Agents?
I'm going to give you some pros and maybe some cons about why AI and why AI agents, because I want you to walk back and I want you to have data. I want you to be able to push back appropriately when it's time to push back and charge forward when it's time to charge forward. I've bucketed the why AI and why AI agents into four groups. The first one: I need to increase my throughput, I need to increase productivity. I think it was McKinsey's State of AI report: 80% of respondents want to have increased productivity, increased throughput. DORA, interestingly enough, same number: 80% of the respondents say they want increased efficiency. They want more throughput. At the MITRE Corporation, I led an independent survey. I found 75% wanted to have increased productivity. They perceived that there was greater productivity.
What we're finding with productivity measures is that many of them are based on humans reporting what they believe. Not based on real data per se. Not yet. We're at that point where we're going from qualitative to quantitative. Are you familiar with the report that came out from METR? Essentially, they went and looked at the productivity gains. They were actually looking at the tickets that were being created, the changes that were being made. It was a very well-crafted experiment. 19% reduction in productivity. It went down. When they said to the folks, we looked at your data, you're 19% less productive. Then they surveyed them. They still said, I'm 20% more productive. There is perceived productivity. Just something to be aware of. It doesn't mean that this isn't groundbreaking. It just means know what the limits and the challenges are.
Let's talk about code quality. Who doesn't want higher code quality? That same DORA report, 59% of the people, I need my code quality to go up. That same report, 10% decrease in stability of the codebase. I want increased quality, yet the stability of the codebase is going down. Any idea why that is? It's because when I am working on a single code snippet, a single area, the quality of that one place may be going up. The complexity of stitching together all of our contributions is where the stability is going down, for now. You're going to hear me say that a couple of times: for now, today, this is where we are at this moment.
If you look at the GitClear report — GitClear, if you haven't been tracking them, they actually watch GitHub — they found that maintainability is down 50%. They have found that copy-paste is up 50%. Refactoring is down. All of those things help us balance the reasoning. I want it for these reasons, too. I want it to bring higher quality and consistency; it's just not a silver bullet. How about number three? Orchestrating complex multi-step workflows. Think about our SDLC. Think about the DevOps continuum, pretty well documented. Doesn't that seem like the perfect workflow to automate using agents? Doesn't it seem like the perfect thing to orchestrate? I think it's very laudable.
As a matter of fact, I've been watching the stuff that Patrick Debois has been doing. He works a lot with Tessl. I've been watching what my friend Wes Reisz has been doing. There's a lot of goodness that's there, and there's more on the horizon to come. Right now, we're in that growth area. The final thing that's on here is humans. You notice that it says extend human expertise. It does not say reduce headcount. The reason it doesn't say that is that multiple reports, multiple interviews, multiple studies are happening now. Executives are saying, I don't need to cut headcount; what I need are specialty capabilities. I need these special skills. I can't get the special skills using traditional methods.
Traditional method, I hire you. I put you through training. I give you on-the-job time to gain that experience. We just don't have that. Things like data science, things like data wrangling, things like security scanning, those types of capabilities, they're looking to add, not take away from headcount. They're looking to help their people through AI. Anybody notice that I did not say the word speed? There's a reason for that. I'll come back to that a little bit later.
AI Agent Autonomy Patterns Across the SDLC
Wes said, "I don't agree with that model". I said that's fine. He quoted George Box, statistician, "All models are wrong, but some are useful". I'm saying that because this model may strike you as not right. You may disagree with it, and that's ok. This is how I'm framing the conversation. This actually comes from one of my clients directly. This is published. This is out on the internet. I did not invent this. I'll give it to you. Where all of us have started is pattern 1. This is where we have the AI assistant, helping us to do that code snippet. Naturally, as we go to the right, my left, it becomes a teammate. We heard about adding a single agent and what that looks like.
If you think about it, it has very clear boundaries. The task is very well defined: where the human hands off to the agent, where the agent hands off back to the human. A lot of verification when it comes to that. This is that collaboration. This is where people are starting to feel uncomfortable. The next is multi-agent. This is so super exciting. Imagine this across the SDLC. We heard Paulo talk about how he's knitting these things together. We're going to see more and more of this kind of orchestration, where many different steps, many different tasks, many different subtasks are all being orchestrated. This is where the humans become more important. Because as we go across this autonomy continuum, we need more verification, not less. Hear me again. We need more humans, not fewer humans. Take that back home. The reason that we need more is that verification. That's where we are today. Is that where it's going to be in two, three, five years? I don't know. I imagine there are going to be other dramatic changes, but that's where we are today.
The reason for the goofy gold stars, aside from the Fantasia theme, is that pattern 4 is where everybody wants to be. I call this the software flywheel. This is truly where you have agentic orchestration, where you have an autonomous system that is able to look at its own telemetry and diagnose what capabilities might need to be deprecated: that feature's not getting used, there's a patch that needs to be dealt with over here. It can make the patch. It can deploy the patch. It can do it all without humans. I need to tell you that this is not AGI. That's not what I'm talking about here. There's a fellow who works with me now. His name is Dr. Mikel Rodriguez. He went from MITRE to Google and just recently came back to us from there. He said, Trac, you need to make sure that people know that when you're talking about this flywheel concept, you're not talking about AGI. You're talking about the ultimate amount of pulling humans out of the mix, but still having a stop button for the humans to get after.
Patterns don't give us a full vision of what's going on. Let's zoom out a little bit and look at what happens as autonomy grows. Two axes: decision-making complexity and autonomy, or what I now hear autonomy being called, operational independence. Everybody is getting tired of autonomous cars, so they're saying operationally independent. As we go from the bottom left and we start to go up and to the right, things increase. Yes, the autonomy is increasing from the AI-assisted tool as I go up towards that software flywheel, but it increases the amount of observability that's necessary. It increases the amount of governance. Again, it increases the amount of human verification that's necessary. It also increases the amount of architectural discipline that you need.
Architectural Amnesia
I love this picture, because every step up and to the right promised more capabilities. It promised acceleration. It promised more leverage. That momentum causes pressure. That pressure causes us to unintentionally abandon the things that we know are the leading practices: the hard lessons that we've learned, the architectural decision-making, the engineering rigor. We seem to forget. I'm not saying that people are making bad choices. I'm saying they're not making choices at all. We've learned things from Agile adoption. We've learned things from DevOps, DevSecOps, from cloud. We've learned a lot. We need to not forget that. I see a lot of people skipping over analysis. I see them skipping over taking the kinds of measurements that'll help. Is speed causing amnesia? I would say no. Speed's a symptom. I was talking to a very good friend of mine. We've worked together for a couple of decades. His name is Siva Muthu. He's the head of software engineering for Deloitte Consulting U.S. He said something very simple and very profound.
I'm going to share it. "Trac, it's not speed. It's reckless speed". That's simple. That is the crux of this. It's going after it in a reckless way. Speed gets blamed because it's visible. The amnesia that I talked about is what happens when we rush past making those smart decisions. What's really driving it? Four antipatterns. Productivity theater. I'm sure all of you have faced productivity theater. Chasing the visible activity. How many tickets are closed? I've seen, how many prompts are people using per day, per person? Check this out. Anybody who has ever dealt with this in the past, it's going to make you drink a glass of wine. Lines of code. Do you know that people are now counting lines of code again? What? Perfect if we're using GenAI. Just make it bigger.
When people game metrics, that's where the architectural memory starts to really fade. Let's talk about tool-led thinking. It's exactly what it sounds like. It's when the tool becomes the center of all that you're doing. Think back. Put on your wayback machine hat here. Do you remember Service-Oriented Architecture? You remember SOA? We were going to drop webMethods right into the middle of everything, and suddenly my entire architecture was bent around that. The way that we're going after tool-led thinking right now is doing the same thing. It's a form of Conway's Law. I just get the tool and then I rejigger my processes, rejigger my architecture. Definitely not something that we want to ignore.
Another antipattern is cognitive overload. Believe it or not, AI was supposed to help us with our cognitive overload. What has it done? It's given us more tools, more policies, more repos. How many pictures? I was a fangirl today taking pictures. Do you know how much stuff I have to go look up? Because all this stuff, it's going that fast. It's happening that fast. The first time you heard of cognitive overload may have been through Team Topologies. It may have been through Skelton and Pais talking about mental bandwidth going down with the more tools and the more business domain that you lump onto people. It's a real thing.
All of it feeding into decision compression. This one's boring because it's exactly what it sounds like. It means I have to make decisions really fast. Sometimes when I make a decision so fast, I'm actually not making a decision. These are the four antipatterns. I realized something earlier today when I was thinking about productivity theater. Did you guys listen to Mallika's talk when she mentioned benchmark theater? I thought that that was so relevant.
Amnesia causes debt. We saw the forces behind amnesia. This next point is unavoidable. This isn't just about bad code. This is about every decision that we don't make. It's debt. Agents in the pipeline are generating and acting faster than the humans can process it. The reason that happens is that we are giving so much over to the AI, so much over to the agents, that we're not able to keep up. Debt starts growing at machine speed. Do you remember The Sorcerer's Apprentice? One broom multiplied into chaos. With agents, it can be worse, because one ungoverned agent can cause all sorts of simultaneous issues, like all of these. I'm not going to go through each one of these types of debt. I'm going to tell you a story that happened this summer. Anthropic, summer of 2025, Claude Code. How was it used? Probably the .md file was set up. It said, you are going to be doing a security scan. You're doing some network evaluation. What did it do? It started to make decisions autonomously. It did VPN scanning. It found endpoints.
Then it found credentials. Once it found credentials, it elevated credentials. Identity. Then it moved laterally into other systems, if it could. This hit 17 different organizations: healthcare, government, emergency services. Seventeen different groups. It went further. Once it moved laterally, if it could find the financials, then it made decisions on how much it could extort. Then it created custom extortion notices. All of those types of debt, one little agent. Anthropic probably said it best. They said, the actor sophistication is no longer equal to the attack complexity. I can have one person who can come up with that very small definition and set these things loose with the backing of all of this amazing technology and do some pretty gnarly things.
At scale, debt becomes a wave, or better said, it becomes a tsunami. That GitClear study that I mentioned: a dramatic rise in the amount of copy-paste and code duplication, a drop in refactoring. Every one of the antipatterns is multiplying debt more and more. You can see what Forrester is projecting. The higher that we move on that continuum that I showed you earlier, the faster the debt accumulates as well.
Avoiding Amnesia and Debt - Returning to Fundamentals
From here forward, big sigh, we're not going to define the problem anymore. We're going to talk about what we can do to reduce or avoid amnesia and reduce debt by getting back to the fundamentals. What's actually going to get us out of this? Governance. Yes, I know, eye rolls. Don't gasp. Don't get too upset. This is true. Because governance, I'm not talking about draconian bureaucracy. I'm not talking about that. I'm talking about having just enough governance, just enough minimum viable governance to build trust. That trust is the trust from your end users. That trust is the trust of your organization and the leadership. That's your trust in your value chain and in your own tools. When we think about trust, it's lineage, it's accountability, it's traceability, so you know what happened, where it came from, and who did the acting. If any of these three aren't there, trust will start to collapse. Your governance is your scaffolding. Let's double down.
If governance is how we earn trust, discipline is how we keep it. Tradeoff analysis, that's going to help us defeat that tool-led thinking. It guides your entire value stream. Measure value, not velocity. When I did a dry run with a couple of friends, they said, you are such a hater on story points and on burndown. I am, because those things belong inside Agile teams; those are the things that the teams need to worry about, not the overall organization. We need to be measuring value over velocity. What's bringing value to the end users? AI creates debt, so manage the debt. Surface it, know about it.
The way I usually describe this to folks who are new to technology debt is to talk about financial debt. You know where all your money is, and you know where all the debt is. You also know when you have made decisions to take on that debt, because not all debt is bad. We need to manage that debt. It's a core discipline. This is nothing new. Continuous feedback loops. The more that autonomy increases, the more verification that's needed. I should have put an algorithm up here that said, autonomy plus verification equals more humans, because that's what this means in this case. It means we truly need more humans in the loop for now.
We'll touch on this. Tradeoff analysis, if you're not familiar with it, then get familiar with it. A decision that optimizes one thing at the expense of another is a tradeoff. It isn't a binary decision, but it is in fact a tradeoff that we need to understand. You need to be asking a minimum set of questions that helps you determine, is the juice worth the squeeze? Do you have to have a big team to do this? No, it can be you alone. It can be you and the AI. It can be you giving a friend a call. There are a lot of different tradeoff items up here, but I want to point you to the very bottom one, because I spend a lot of time now focused on the humans. I work with cognitive scientists, behavioral experts, and this one probably trumps the others. When you are doing your tradeoff analysis, make sure you ask, is this going to help my team, is this going to help people, or is this going to hurt them? Because good decisions mean that you have to make intentional tradeoffs.
Once you make the decision, what do you need to do? Write it down. Yes, back to basics. If you follow me, if you talk with me often, you'll find that I talk about ADRs a lot. Here's what you need to know. They're important because they're your crucial decisions. I actually push this down in the organization so that people are writing things down and learning, as they grow, to write these things down. I want to know why the decision was made and what alternatives were considered. Also, what's going to be the trigger point where I have to go back and look at it again? Because that matters. That changes things up. Why are ADRs so important? Maybe the most important thing is that bottom one, defensible decision-making. Meaning, something's going to happen. It could be a security breach. It could be a data escape. It could be a test escape that makes it into your production environment. Something's going to happen. You want to be able to go back to that and say, here's why we made that decision. You want to turn it from a witch hunt into a collaboration. Someone asked me the other night, can you give an example of an ADR that saved your butt, and I will.
The ADR was that we were told, in specific, there was a piece of software that we had to use. I can't go into very much detail, but it was a mandate. It was obvious that there were some other reasons that we couldn't write down in the ADR, but simply having recorded that this person, in this role, mandated that we use this software, along with the other things that we considered, meant we could show that ultimately a decision was made for us. It was good to have that when a little bit of foo hit the fan. Your unrecorded tradeoffs, guys, do become more accumulated debt.
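If you've never written one, a minimal ADR skeleton, loosely in the style of Michael Nygard's widely used template plus the revisit trigger Trac calls for, might look like this. The contents are illustrative, not her actual ADR:

```markdown
# ADR-012: Adopt the mandated package X for capability Y

- Status: Accepted
- Date: 2025-06-01

## Context
The constraints and forces in play, including any mandate and who issued it.

## Decision
What we chose, and who made the call (name the person and the role).

## Alternatives Considered
What else we evaluated, and why each was set aside.

## Consequences
What gets easier, what gets harder, and what debt we knowingly take on.

## Revisit Trigger
The event or date that forces us to re-evaluate this decision.
```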
When it comes to measuring what matters, I'm always going to tell you to go after product quality. Always. I'm always going to tell you to go after stakeholder value. Always. Let me draw your eyes to two things that have to do with humans. Team dynamics. Measure the team dynamics. Measure burnout, for example. An example of a burnout signal, somebody is not taking their PTO. Another example, they're working late every night for longer than one or two nights. It's a long haul. That's burnout. You need to be looking for that. Take care of your people. There are lots of new indicators that have to do with human-machine teaming. The one that I would bring up is calibrated trust. I work with two women who are fantastic.
One is named Dr. Cindy Dominguez, and the other is Patty McDermott. They are thought leaders in the space of human-machine teaming. Their work has been influential with autonomous vehicles, with robotics, and now we're taking all of that and applying it to generative AI and to the Software Development Life Cycle. Calibrated trust means understanding, how much does the person trust it versus how much should they trust it? That should is a balance of the reliability, the efficiency, the effectiveness, the correctness. We need to balance those things off. Minimum viable governance: the governance that you need should match the autonomy that you have. It increases as you go from the bottom left to the top right. This is governance versus autonomy. Bottom left, you have an AI-augmented tool. Yes, a little bit of governance. Not the biggest amount of governance, but some.
As you go up to that software flywheel where people want to go, where you have as much autonomy as possible, humans outside the loop, you need as much governance as possible. As much governance as is relevant to the decisions that are being made. When I first made this slide, I put that red box as a joke. In the last month, I've had two different organizations who believe that they need a vast amount of autonomy and their governance capabilities are almost nothing. I'm going to tell you the same thing that I told them, don't do that. What I've talked about so far is nothing new. Not really. Maybe packaged a little bit differently. Maybe applied a little bit differently. I want you to think about what you should do or what you would do. You have agents in your environment. They have overly broad permissions. They're crossing boundaries that you didn't intend them to. You can't trace what they've touched. They're making decisions without verification. Are those AI problems? No, they're not AI problems. They are governance problems that we know how to solve.
Here are different types of governance that address many of the different types of debt. What's important about this, if you notice it, this is all stuff that you know how to do. Agent identity. We have human identity and service account identity. We know how to do that. We have boundaries. We segment. Anybody here do microservices? Yes, you've considered segmentation. Traceability, we do this in our data pipelines. Validation. The work is not to create new types of governance. It's remembering to apply what we know. It might be in a slightly different or creative way, but it's using what we already know.
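As a sketch of that "apply what we already know" point: an agent can get the same kind of scoped, auditable principal we already give service accounts. All the names here are hypothetical illustrations, not a specific product's API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentPrincipal:
    agent_id: str                                     # identity: who is this agent?
    owner: str                                        # accountability: the human/team behind it
    allowed_scopes: set = field(default_factory=set)  # boundaries, like microservice segmentation
    audit_log: list = field(default_factory=list)     # traceability, like a data pipeline

    def act(self, scope: str, action: str) -> bool:
        allowed = scope in self.allowed_scopes        # validation before the action happens
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(
            f"{stamp} {self.agent_id} {'DID' if allowed else 'DENIED'} {action} in {scope}")
        return allowed

reviewer = AgentPrincipal("code-review-agent-01", owner="team-payments",
                          allowed_scopes={"repo:payments:read", "pr:payments:comment"})
reviewer.act("pr:payments:comment", "post review comment")  # allowed, and logged
reviewer.act("repo:identity:write", "push commit")          # denied, and logged
```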
Before you start on the agent thing, get your governance stack in order. I intentionally kept this as a super-simple diagram. This is deliberately simple, because as architects we all love our layer charts, our Jell-O charts. Everything starts here. Everything starts with identity. That is the foundation of it all. That's the foundation of governance over your AI and your AI agents. If an agent doesn't have a real identity, every other control above it is very fragile. Once you get the identity, then you can enforce the boundaries. Then you can monitor. Then you can validate. Accountability can only happen if you know the identity.
Why'd I bring this? To show you that governance is concrete. It is not all just hand-wavy garbage. Go ahead and take this. You can download it. Apply it in your environment, and see what's relevant. Notice that no matter what, the foundation's the same as it was on the simpler version. The foundation is identity. This one's just fun. Treat agents just like humans in the system. We've been dealing with human accounts, authorization and access. We've had service accounts. We know how to do this. Some people get really upset with this statement. I'm not talking about replacing humans. That's not part of my discussion. I'm saying we have the technology and the techniques, so track the agents. We want to use agentics. We want to use these technologies. They're incredible. Why does identity come first? You get the call at 3 a.m.: an agent got compromised. Can you answer these three questions immediately? What can it access? What has it done? How do you stop it? If you can't, I guess you're in a pickle.
Right now, we're no longer governing just what executes. We're now governing things that are making decisions. If you get the identity right, everything else is going to follow. I heard folks ask at the end of Paulo's conversation about identity and how he's handling it, and his answer was a mix of, we're using what we already have, and, it depends, we're figuring it out. I'm going to give you a minimum viable identity pattern. I'm an architect. I do patterns. I'm giving you this pattern that you can apply. I'm not telling you that these are three capabilities that you can go out tomorrow and write a check for and install. Do people write checks anymore? Fine: that you can go out tomorrow and charge and bring back.
These are three non-negotiable capabilities that you need. You need to have an agent registry. You need to have an AI gateway. Meryem talked about the importance of having a policy enforcement point. Super important. And you need a delegation framework. What does that give you? Who's acting, are they allowed to act, and on whose behalf? Super important. These three things. Do these three things.
I wanted to make it a little bit more actionable for you, take it just a step further, but I have to be super careful. I could not give you actual drawings. I could not bring you code samples because of who I work for. Sometimes I work for the U.S. government. Sometimes I work for her allies. I can't show you real production, but I can show you a little bit more of a concrete implementation. Here's how it works. The user pings the agent. The agent doesn't go straight to the model. The agent goes to the policy enforcement point. The gateway checks the registry to see whether or not that is an active, real, non-revoked agent. You have a revocable status. If it's misbehaving, you toggle the revocation.
Then it goes to the delegation framework. On whose behalf is this agent acting? Whose authority was actually granted for it? Is it acting on its own behalf? Those are just like service accounts. What decisions have been made? It's only after all of those checks that it gets access to the model. Now, I put a model on here. Agents also get access to tools. The same pattern applies, whether it's a model or whether it's a tool. Every request gets validated. We need to understand what it's authorized to do. Has the authority been delegated to it on behalf of somebody else? Everything is getting audited as we go. This pattern stays the same, but your implementation is going to vary.
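Here is a minimal, self-contained sketch of that flow, with all names hypothetical: a registry with revocable status, a delegation framework recording on whose behalf an agent acts, and a gateway that checks both, and audits everything, before any model or tool call goes through.

```python
from dataclasses import dataclass

@dataclass
class RegistryEntry:
    agent_id: str
    active: bool = True                  # toggle to revoke a misbehaving agent

@dataclass
class Delegation:
    agent_id: str
    on_behalf_of: str                    # the human or service whose authority was granted
    granted_actions: set

class Gateway:
    """Policy enforcement point: nothing reaches a model or tool without passing here."""

    def __init__(self):
        self.registry = {}               # agent_id -> RegistryEntry
        self.delegations = []            # Delegation grants
        self.audit = []                  # everything gets audited as we go

    def register(self, agent_id: str) -> None:
        self.registry[agent_id] = RegistryEntry(agent_id)

    def delegate(self, agent_id: str, principal: str, actions: set) -> None:
        self.delegations.append(Delegation(agent_id, principal, actions))

    def revoke(self, agent_id: str) -> None:
        self.registry[agent_id].active = False          # the 3 a.m. stop button

    def request(self, agent_id: str, action: str) -> bool:
        entry = self.registry.get(agent_id)
        if entry is None or not entry.active:           # registry check: real, active, non-revoked?
            self.audit.append(f"DENY {agent_id}: unknown or revoked")
            return False
        grant = next((d for d in self.delegations
                      if d.agent_id == agent_id and action in d.granted_actions), None)
        if grant is None:                               # delegation check: allowed, on whose behalf?
            self.audit.append(f"DENY {agent_id}: no delegation for {action}")
            return False
        self.audit.append(f"ALLOW {agent_id}: {action} on behalf of {grant.on_behalf_of}")
        return True                                     # only now does the model/tool call proceed

gw = Gateway()
gw.register("triage-agent-07")
gw.delegate("triage-agent-07", "oncall@example.org", {"model:summarize-logs"})
print(gw.request("triage-agent-07", "model:summarize-logs"))  # True: allowed and audited
gw.revoke("triage-agent-07")
print(gw.request("triage-agent-07", "model:summarize-logs"))  # False: revoked at the registry
```

The same check applies whether the target is a model or a tool; only the action string changes, which is exactly the point of the pattern.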
I talked to someone the other day who's implementing this with an MCP server. Who said that MCP is just a system, it's just an API point? Take that stack, the big stack that I showed you, take the identity piece of it, and adapt it to your context. I'm not talking about model context here. I'm talking about your unique business or mission context. Make it what you need it to be. Choose it based on your autonomy level, where you are on that continuum. What's your existing infrastructure? Where are you at today? You need to forecast a little bit into the future, but don't plan out for multiple years, because things are changing. Also base it on your team's capabilities. Remember, we're not just governing runtime, the way we used to worry about this only at runtime. We're worried about this across the entire SDLC. Think about your entire value chain, your entire tool chain, as an attack vector now, as more risk than what we had before.
Not Magic - Engineering
It's not magic. It's just engineering. As architects, we hold the hat. It's our job, because we hold the hat, to prevent architectural amnesia from happening in the organization. It's our job to design governed agents. It's my job, and it's your job, to make sure risk and debt are explicitly known, and that tradeoffs are being made. It's also our job to say, whoa, and to pursue autonomy when it's the right choice, when it truly brings value. You know how to do all this. Don't let AI make you forget. I know there's just one hat up here. I would love to be able to say to you, everybody, reach underneath your chair and pull out your hat, because architecture is a team sport.
It is no longer DoDAF, TOGAF, somebody in the ivory tower where you had to sign up six months in advance to get an audience so that you could bring all of the copious papers that matter. No. There's centralized guidance, and there's decentralized execution. It's a team sport. You need mixed perspectives and mixed roles. You need people of different tenure. I need architects in training, AITs. We need to do this. I need to make sure that I'm pairing people with technology that will help them and not hamstring them. Practice all four of those disciplines, but do it together.
The most important part about architecture and architecture as a team sport is that we must bring together cognitively diverse perspectives. We need different voices at the table. If it's just that one person over there who's making all of those decisions and we're not bringing the collective mentality and collective understanding together, then we're not harvesting the best examples, the best outcomes. This is the exciting time. We have the hat. We need to share the hat. If we're going to surf this and navigate this and make it through the tsunami together, it's really important, gang.
Lessons Learned from The Sorcerer's Apprentice
These are my personal lessons that I learned from watching The Sorcerer's Apprentice over and over, because it was just so fun. This is true. Power without discipline is chaos. If you scale the magic without boundaries, what happens? Water floods the whole system. Autonomy without accountability breaks down that trust. What happened? The sorcerer came in, took the hat from Mickey, the apprentice, and fixed the situation. We don't want to have to fix any situations.
Call to Action
I'm going to give you some marching orders. Here's your call to action. I need you to go back and inventory your agentic debt, if you have any. If not, be prepared for that. Make sure that you're defining what your identity control plane is going to look like. Put it into place. Do your first pilots with this. Make sure that you are the voice saying, no, we're not quite ready for that; give us six weeks until we evaluate XYZ before we bring in autonomy without governance. Here's one thing I need from you. In that role that I have with the MITRE Corporation, we are a federally funded research and development center.
The thing that I am chartered to do is to remove the friction between business, government, and academia by being knowledgeable and by drawing out and sharing what I learn. That's why I'm here: to share this information and draw it forward. Ping me. Let's have a conversation. How are you preparing for AI-native delivery? How about you personally, what skills are you working on? I'd love to hear your lessons learned. I don't just want to hear the good stuff. I want you to give me a call, and let's talk about the messy things. It helps me to understand where to help other people look, and to bring those new lessons forward, the new use cases that you have. Just call me up. Or, you can contact me at any of these locations.