AI won’t fix your organization. It will expose it.
Matt Henry
Chief Executive Officer

Matt Henry is Chief Executive Officer of Chatham Financial. He leads the firm’s global platform at the intersection of capital markets expertise and technology, helping clients navigate complex financial decisions with greater clarity and precision.
Summary
As AI increases speed and scale, it exposes misalignment, reinforces the need to focus on outcomes over processes, and makes it critical to distinguish between differentiated work and everything else. Leaders who succeed will align faster, define their true value more precisely, and concentrate their efforts on the few areas where they create meaningful advantage. AI is not a technology story. It is a human story.
Three leadership disciplines for the AI era – and other thoughts from an hour-long conversation with best-selling author Daniel Pink.
The more I experience AI, the less I believe the widespread adoption of LLMs is mainly a technology story. It is a work redesign story. A management story. And, pretty quickly, a human story.
In the 1960 Harvard Business Review article “Marketing Myopia,” economist Theodore Levitt warned that companies (and sometimes entire industries) decline when they define themselves by a particular product or process instead of the job customers hire them to do. Engineer and management expert Michael Hammer famously made the companion point 30 years later – in “Reengineering Work: Don’t Automate, Obliterate” (1990 HBR) – arguing that when a process is wrong, automation simply helps an organization fail faster. Those two ideas, separated by decades, now converge in the AI era.
Both articles came up in my recent conversation with #1 New York Times best-selling author Daniel Pink, for Chatham Financial’s "Edge in Uncertainty with Matt Henry" series. During our chat, I asked him about the next book he might write. His answer was revealing. Dan said he was actually starting to write plays and explore video-first content creation. Why? Because he doesn’t see himself as being in the business of writing books. He’s in the business of changing how people think and helping them apply behavioral science. Books were a vehicle for that. They may not always be the best vehicle.
That exchange captures the leadership challenge of AI. AI might help you write books better (or at least faster), but it won’t tell you if another book is the best medium for your distinctive strategy. It will not clarify purpose. It will not choose priorities. It will accelerate whatever direction a person or organization is already moving.
This leaves leaders in the AI era with three jobs: align the system, define the real work, and focus the newly found leverage where it matters most.
1. AI is about to expose the cracks in your system
Imagine a team trying to cross a desert blindfolded.
Everyone starts in roughly the same place, with the goal of finishing in roughly the same place. Everyone agrees they are heading west. But two groups’ headings differ by 5 degrees. Not 50 degrees. Not opposite directions. Just 5 degrees.
At walking speed, the problem is real but recoverable. If each group walks 2 miles an hour for 8 hours, each travels 16 miles. A 5-degree error leaves them roughly 1.4 miles apart. That is frustrating. It creates rework. But if they regroup at the end of the day, they can recover.
Now change one thing: speed. Instead of walking 2 miles an hour, they are moving 500 miles an hour.
After 8 hours, they have each traveled 4,000 miles (for reference, the Sahara is approximately 3,000 miles long, east to west). That same 5-degree error now leaves them roughly 350 miles apart.
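The numbers above are just the geometry of a small angle: two equal legs separated by 5 degrees form a chord of length 2·d·sin(2.5°). A quick sketch in Python (assuming both figures use the full 5-degree divergence between the two groups) confirms them:

```python
from math import sin, radians

def drift_apart(speed_mph: float, hours: float, divergence_deg: float) -> float:
    """Straight-line gap between two groups whose headings differ by
    `divergence_deg` degrees, after each travels the same distance."""
    distance = speed_mph * hours
    # Chord of the angle between the two equal-length paths.
    return 2 * distance * sin(radians(divergence_deg / 2))

print(round(drift_apart(2, 8, 5), 1))   # walking pace: 1.4 miles apart
print(round(drift_apart(500, 8, 5)))    # jet speed: 349 miles apart
```

Same error, same duration; only the speed changed, and the gap grew 250-fold.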
That is what AI does inside organizations.
It does not improve navigation. It increases velocity. And that is Hammer’s point in modern form: when the governing assumptions are wrong, technology does not repair them. It scales them. The 500-mile-an-hour drift is what “failing faster” looks like in practice.
A great deal of organizational friction used to hide this problem. Slow processes hid confusion. Bureaucracy often masqueraded as necessity. Hand-offs hid the fact that no one truly owned the outcome. Teams could spend weeks drifting in slightly different directions, but because they were slow, they could often recover and reorganize before the system was forced to notice.
AI strips away some of that disguise.
Finance leaders understand leverage intuitively. AI is simply a new form of leverage for knowledge work. If the underlying asset is sound, leverage helps. If the underlying asset is unstable, leverage magnifies the instability.
The same is true here. If a leadership team is aligned, AI can make it far more effective. If a leadership team is not aligned, AI lets different factions move faster in different directions. If a manager cannot distinguish important work from busywork, AI helps the team produce more busywork (or “workslop”). If ownership is fuzzy, AI does not resolve the fuzziness. Leverage makes conflict more dramatic, and it makes avoided conflict more costly to unwind later.
While tempting, the first question cannot be, “How do we integrate AI everywhere in our organization?”
The first question must be, “Where are we unclear now?”
AI will not remove the need for management. It will levy a new, leveraged tax on bad management.
Before next week begins, look at your calendar and cancel one meeting that exists mainly as a ceremony around ambiguity. Replace it with a named decision, a named owner, and a deadline. That one act will tell you more about your organization’s AI readiness than another pilot project ever will.
2. Know the job behind the job
The second discipline is more basic than AI strategy.
Leaders need to know what business they are actually in.
Not the current format. Not the inherited org chart. Not the visible deliverable. The real job.
That is what made Daniel Pink’s answer so clarifying. If you think your job is “write books,” then you will defend books. If you think your job is “change how people think and help them apply ideas,” then you are free to use whatever medium does that best. That difference sounds semantic. It is not. It is illuminating.
Levitt’s great insight was that companies get into trouble when they confuse the product with the value. American railroad companies thought they were in the train design business rather than the transportation business. They thought they would win by having the fastest train, but forgot they were actually competing against any transportation, including cars, planes, trucks, boats, the telephone, and the combination of multiple forms. They defined themselves by the medium instead of the customer need.
In finance, very few people are actually in the business of making decks, writing memos, or updating models. Those are vehicles. The real work is judgment, decision quality, capital allocation, risk management, and trusted advice.
The same is true in law. A law firm is not fundamentally in the business of producing legal documents. It is in the business of helping clients pursue opportunity without taking avoidable legal risk. If documents become abundant, that does not eliminate the need for legal judgment. It makes the distinction between document production and legal judgment even more pronounced and impossible to ignore.
AI will be brutal to anyone who confuses the task with the value.
If the thing you think makes you distinctive is really just an intricate process you have perfected over time, AI will likely make that process cheaper, more abundant, and easier to imitate. What remains scarce is judgment, synthesis, taste, trust, and accountability.
So, what is it, exactly, that you do?
There is a second question hidden inside this one, and it is going to become uncomfortable for many firms:
Is the core work actually a team sport?
In some businesses or business units, a team is necessary to create value. The coordinated output of specialists is the product. In those settings, AI increases the premium on clear coordination.
But in other businesses, the product is fundamentally the person, a singular talent: their judgment, reputation, taste, synthesis, and voice. The team may amplify that value, but it does not create it.
The more creative, judgment-heavy, or trust-based the work, the more honest leaders will have to be about whether the institution is truly creating value or merely surrounding talent with process.
Where teams are absolutely necessary, alignment is the only answer. Where teams are unnecessary, they are likely to be replaced by an AI-enabled individual.
Do you actually need a team, or are you confusing overhead with collaboration?
That is not a rhetorical question. It is one of the defining organizational questions of the next few years.
Challenge yourself or your leadership team to describe what you actually do in ten words or fewer. A railroad company shouldn’t say, “We run trains”; it should say, “We move people and goods from A to B.” Daniel Pink wouldn’t say, “I write books”; he would say, “I help leaders understand and apply behavioral psychology.” If the answers describe a process, a tool, or a vehicle, throw them out and keep trying until you land purely on the outcome.
3. Double down on alpha. Find a partner for everything else.
The third discipline is focus.
The first phase of AI is intoxicating. People realize they can suddenly do much more, much faster. More analysis. More content. More experiments. More projects. And the natural conclusion is obvious: now we can do everything.
But that illusion does not last long.
Because AI, executed right, is not only making you more powerful. It is making your competitors more powerful too.
That means the advantage is not going to come from doing more things. It is going to come from being much clearer about which few things actually matter.
In finance language, AI requires that firms focus on their alpha and find partners for their beta.
Every firm needs to ask a brutally simple question: What is the work only we can do?
That is alpha.
It is the judgment, relationship, wisdom, capability, or integrated output that makes the firm worth choosing.
Everything else may still be necessary. But necessary is not the same thing as distinctive.
Everything else is beta.
Beta is not unimportant. It is simply not where enduring advantage comes from. And even within alpha work, the goal is not to drape AI over every existing step. Hammer’s lesson still applies: rethink the process, not just the pace. Distinctive work should be redesigned, even reconsidered entirely, not merely accelerated.
It is tempting to use AI to expand the range of processes a company performs. The problem is that your beta is someone else’s alpha.
If another firm has built its entire business model around an activity you are doing on the side, you are likely to lose in that head-to-head contest. You will lose on cost. You will lose on speed. You will often lose on quality. Most of all, you will lose on attention, because they wake up every day trying to get better at the thing you keep doing out of inertia.
Winning firms will not simply be the ones that “add AI” or bolt a Chief AI Officer onto the current org chart. Rather, they will redesign the org chart to emphasize their alpha and partner on their beta.
That last point matters. Outsourcing processes to a lower-cost country is not the right analogy for partnering on your beta. The better analogy is a partner who is essential to accomplishing your alpha but whose skills complement your own: think of the relationship between a great actor and a great agent. Close. Interdependent. Strategic. An amazing agent can’t conjure the next Michael B. Jordan, but Jordan wouldn’t be the star he is today without the creative partner (Phil Sun) who let him focus on his alpha.
In the AI era, leadership is increasingly the discipline of deciding what not to do. If you find your organization resisting the hand-off, you aren't protecting your alpha; you are just protecting your overhead.
Identify three necessary, time-consuming internal processes that do not contribute to your distinctive alpha. Next, try to list the specialist partners whose entire business model is built around executing those exact tasks. If you cannot name a partner for those processes, you have a new assignment: go find one, or write a one-sentence business plan for the company that should exist to take this beta off your hands.
The hardest part is human
None of this is hard because the tools are weak.
It is hard because redesign collides with incentives, status, compensation, and identity.
Organizations rarely redesign work quickly when the redesign threatens the people currently benefiting from the old design. That is not a character flaw or necessarily an indictment of top-level management. It is simply how most institutions behave. People are being asked not merely to use a new tool, but to participate in a redefinition of what is valuable and in some cases what their own role is for. Resistance is to be expected.
Leaders who frame AI adoption as a simple upskilling challenge are missing the deeper issue.
The winners will not merely teach people how to use AI. They will create clarity about what outcomes matter, what work is core, what requires human judgment, what should be automated, what should be eliminated, and what should sit with specialist partners.
The AI conversation, done properly, stops being a technology conversation very quickly. It becomes a conversation about leadership, incentives, motivation, judgment, and wisdom.
The leaders who do best in this next phase will not simply be the ones with the most tools, the best prompts, or the highest volume of agents. They will be the ones with the most coherence. They will align faster. They will know what business they are actually in. And they will focus their new leverage on a very small number of things that matter.
AI will not save a confused organization.
It will simply reveal it faster.
Want to learn more?
Contact our team to discuss how Chatham can help with your treasury and risk management needs.