The Non-Technical Skill that Accelerates AI Adoption
And How to Build It in Your Organization
Most companies think their AI adoption problem is a technology problem. Get the right tools, hire the right AI people, run the right pilots, and you’ll succeed. But the companies I see making real progress share something different, and it’s not their tech stack.
It’s their ability to think about work in terms of data, actions, outputs, and state.
I call this modular process thinking, and the lack of it is the actual bottleneck in most AI initiatives. Not the models, not the tools, not executive buy-in. The inability to describe what your business does at the level of specificity that AI requires.
The numbers bear this out. McKinsey’s 2025 State of AI survey found that nearly two-thirds of organizations are still stuck in the pilot phase. S&P Global found that 42% of companies scrapped most of their AI initiatives in 2025, up from 17% the prior year. These aren’t technology failures. The models work fine. These are failures to connect AI capabilities to the actual work of the business at a level of detail that makes automation possible.
What Modular Process Thinking Actually Is
When most people describe a business process, they are really describing people and how they hand things off. Something like: “Sarah handles lease renewals. She checks the details of the account, sends the renewal notice, and informs accounting.” That’s not a process description. That’s a description of a person.
Modular process thinking means decomposing that into: what data does Sarah look at? What fields, checked against what criteria? What are the decision rules? What happens when the data is ambiguous or missing? Where does the output go? What state does the system need to maintain?
This is not how most people naturally think about work. It's not engineering (which thinks in code and systems), not operations (people and workflows), not product (features and user needs). Process thinking at the data level sits in a gap between all of these. And, in my view, it's the most important factor in whether an AI initiative will move past the pilot phase.
Two Approaches: Organic or Goal-Oriented?
I’ve been watching two very different approaches to developing modular process thinking and successfully adopting AI play out in organizations lately.
The first, the organic approach, works when you have strong internal tech capability. You give people tools and time to experiment, find the natural adopters, assign a leader to coordinate AI knowledge-sharing, and set metrics. That designated team brings the modular process and AI thinking, and SMEs within the business areas bring the operational understanding. Together, these stakeholders can organically choose goals and spread modular process thinking throughout the organization. Critically, your technical team builds tools for other departments, not just for themselves.
At Moxiworks, this is what we did. Engineering built AI tools that support cross-team workflows (like a user-story-writing bot that plugs into Jira, systems that make internal knowledge more accessible, and targeted tools that remove tedious steps from operational processes). We've had an assigned leader coordinating AI adoption across the company. We look at metrics and adjust as we discover issues. The adoption has been organic but directed: multiple individual teams solving real business problems that pain them specifically. This brings a real desire to succeed, which, in turn, builds muscle memory across the different departments in parallel. The knowledge of how to build modular, AI-friendly processes spreads all by itself.
But, again, this approach requires a technically capable team, an executive mandate with real backing, dedicated time, and organizational patience. Most companies don’t have all of these (especially if they’re just starting now).
The second approach, the goal-oriented one, is what I think most companies, especially outside of tech, should actually do. Instead of broad experimentation, you pick specific goals with the help of someone who understands the state of AI tech and what projects are likely to actually succeed. You choose a department or process and systematically decompose it: understand the current workflow, identify where AI will have the most impact, decide what risks you’re willing to take, rebuild the process, and put quality controls in place.
This is more structured and less exciting. But it doesn't require a strong internal tech team, and it doesn't pull nearly as much time away from operational teams. And if done well, it can be faster.
For example, if you want to improve your customer support efficiency, you can go through your support processes end-to-end, evaluate tools, and decide where to plug them in. You can proactively choose risk tolerance and staffing, and adjust the whole department accordingly. Once the execution is done, you will have a working AI project that is delivering value, and it can serve as a model for the rest of the organization. Goal-oriented adoption is a great way for less tech-ready organizations to get a foot in the AI door. Once you have that initial foothold, organic adoption can start.
The “Eternal Pilot” Gap
My suspicion is that most companies without the needed level of AI readiness are trying the organic approach anyway, because it feels more natural. They hand out ChatGPT subscriptions, run some workshops, and hope adoption spreads. But without technical support, dedicated time, and a clear mandate, the experimentation stays scattered. Nothing compounds because there’s no systematic learning.
The WRITER Enterprise AI Survey supports this: organizations have super-users delivering extraordinary results, but no mechanisms to spread those practices enterprise-wide. Individual productivity gains are real, but nothing connects them to business outcomes in a systematic way.
Another version of this challenge: a company I recently spoke with had connected their entire corporate dataset to an AI system and was trying to get employees and vendors to build AI capabilities against it. The infrastructure was solid, but when I asked what specific things they wanted to do differently with AI, they couldn't answer clearly. They had the pipes but didn't know what they wanted to flow through them. And they didn't have the internal modular process thinking or AI execution abilities needed for this kind of approach to succeed.
In both cases, it boils down to this: if no one can describe the specific goals, the specific actions they want AI to take, the specific data it needs, and the specific output they expect, then the organization is not ready to build.
Starting the Flywheel
There are two hard parts about getting the modular process flywheel going:
First, people who execute human-driven processes (like Sarah from earlier) need to be brought together with people (system designers, technologists, consultants) who can tease the modular process out of that intuitive understanding and set up a plan to drive results.
Second, and even more challenging, modular process work doesn’t look like AI adoption. It looks like tedious process documentation. Sitting with operators, asking repetitive questions, drawing diagrams. No one gets excited about this. No one wants to present it to the board.
This creates real tension with stakeholders. AI is supposed to make things faster. So when your AI initiative looks like weeks of process mapping followed by a modest automation that saves two hours a day at first, it’s hard to maintain enthusiasm.
What stakeholders don’t always see is that the team just learned how to think about their work in a way that makes the next project faster, and the one after that faster still. The decomposition work transfers. The first process takes months. The fifth might take days.
But you have to survive the first few to get there. And that requires honest expectation-setting from the beginning.
The Real Divide
The divide between companies that successfully adopt AI and those that don’t is not about technology, and it’s not about which approach they pick. It’s about whether the organization can think about its own work at the level of specificity that AI requires.
The companies doing this well often look the least impressive from the outside. They’re solving boring problems with targeted tools. They’re mapping processes in ways that don’t demo well. They’re building knowledge through real work, not through strategy decks about “the future of AI.”
But they’re the ones where the flywheel is actually turning. And once it’s turning, it’s very hard for everyone else to catch up.