
    The AI Savvy Readiness Framework: A Six-Pillar Assessment for Mid-Market CEOs

    By Shawn Moore · 11 min read · US / Canada

    The AI Savvy Readiness Framework scores six pillars — strategy, data, infrastructure, governance, talent, and culture — on a four-stage maturity scale. Designed for mid-market companies ($10M–$1B revenue), it surfaces structural blockers before capital is committed to pilots and produces a sequenced remediation roadmap.

    Why most AI readiness tools are wrong for mid-market companies

    The frameworks that dominate Google results for "AI readiness assessment" were not built for you. They were built for the enterprises that funded their creation — companies with a chief data officer, a twelve-person data platform team, and three years of cloud migration already behind them. Applied to a 300-person professional services firm, or a 900-person manufacturer, or a $120M SaaS scaling into the US, those frameworks overstate the gap, prescribe the wrong sequence, and almost always recommend tooling the company cannot absorb.

    I have spent thirty years operating inside this band. ThinkProfits has worked with more than 250 companies between $10M and $1B in revenue since 1996. The pattern is consistent. Mid-market companies are not small enterprises. They are a different operating system. Their governance is informal, their data is spread across four SaaS vendors nobody audited, their talent bench is thin, and their culture can pivot on a single all-hands email. An AI readiness model that ignores those realities produces a score that looks defensible on a slide and wastes a year in practice.

    The AI Savvy Readiness Framework was built from the mid-market up. It assumes you have a CEO who reads the numbers, a COO who runs the day, a finance lead who wants a defensible capital request, and a technology partner who is stretched. It scores six pillars, each one on a four-stage maturity scale, and it tells you which pillar to fix first — which is almost never the one your team wants to fix first.

    The six pillars

    Six pillars, scored zero to four. A pillar at zero is absent. At one, it is ad hoc. At two, it is defined but inconsistent. At three, it is measured. At four, it is a competitive advantage. The useful score for most mid-market companies is the lowest pillar, not the average — because the lowest pillar is what your next pilot will break on.
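    The "lowest pillar, not the average" rule is simple enough to express in a few lines. The sketch below is illustrative only — the pillar names come from this framework, but the scores and the stage labels are a simplification of the maturity definitions above, not an official scoring tool.

    ```python
    # Illustrative sketch of the binding-score rule: the score that matters
    # is the lowest pillar, not the average. Stage labels paraphrase the
    # 0-4 maturity definitions in the text.

    STAGES = {0: "absent", 1: "ad hoc", 2: "defined", 3: "measured", 4: "advantage"}

    def readiness(scores: dict[str, int]) -> dict:
        """Return the average, the binding (lowest) pillar, and its stage label."""
        pillar, low = min(scores.items(), key=lambda kv: kv[1])
        return {
            "average": sum(scores.values()) / len(scores),
            "binding_pillar": pillar,
            "binding_score": low,
            "binding_stage": STAGES[low],
        }

    # Hypothetical scores for a six-pillar assessment
    example = {"strategy": 2, "data": 2, "infrastructure": 3,
               "governance": 1, "talent": 2, "culture": 2}
    result = readiness(example)
    print(result["binding_pillar"], result["binding_score"])  # governance 1
    ```

    Note the divergence the rule is designed to expose: this profile averages a respectable 2.0, but the binding score is a stage-one governance pillar, and that is what the next pilot will break on.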

    1. Strategy

    Strategy scores how clearly AI is tied to two or three specific business outcomes the CEO can defend to a board. At stage one, the company has a "we're doing AI" posture with no target. At stage two, someone has written a list of use cases. At stage three, every active AI initiative maps to a named P&L line with a quarterly target. At stage four, the strategy actively kills use cases that do not clear the bar — which is the single hardest discipline to install.

    The decision rule for this pillar is brutal. If your CFO cannot, unprompted, state the expected dollar outcome of your current AI work in under thirty seconds, your strategy pillar is at most stage two. The failure mode to watch for is the "innovation theater" pattern — six pilots running in parallel, each sponsored by a different VP, none of them defended by a single number. That pattern burns capital for eighteen months and ends with a board frustrated enough to shut the whole program down.

    2. Data

    Data scores whether the information AI needs to be useful is reachable, clean, and legally usable. The failure mode here is not "our data is bad." Everyone's data is bad. The failure mode is that nobody has mapped which data a specific use case actually requires, and nobody has priced the cleanup. A stage-two data pillar has a catalog. A stage-three data pillar has a catalog plus an honest assessment of which systems would need re-platforming to support the top three use cases. A stage-four data pillar has already done one of those re-platforms and proven the ROI.

    In one engagement with a 400-person distributor, the team was three months into a forecasting pilot when we found the training data was pulled from a reporting layer that aggregated nightly and lost every transaction under $500. The model was accurate on paper and useless in practice. Data readiness is where most pilots quietly die — not in a postmortem, just in a budget that does not get renewed.

    3. Infrastructure

    Infrastructure scores the ability to run, monitor, and pay for AI without surprising your finance team. For mid-market companies the question is almost never "can we build this?" It is "can we operate this next year when the vendor raises prices 40% and the consumption bill is now a material line item?" Stage two means you have something running. Stage three means you know what it costs per transaction, who is on call when it breaks, and what happens if the vendor discontinues the model. Stage four means the cost curve is under active management, not reactive.

    The failure mode is vendor sprawl. A mid-market company does not need seven AI vendors. It needs two, maybe three, chosen deliberately, with exit clauses negotiated before the first dollar is spent. Companies that skip this end up with a shadow stack nobody owns and an annual renewal cycle that pulls sixty executive hours.

    4. Governance

    Governance scores whether the company can say "no" to an AI use case, and whether it can explain why, to a regulator, a customer, or a plaintiff's attorney. At stage one, governance is an unwritten "the CEO will decide if it comes up." At stage two, there is a policy document nobody references. At stage three, there is an AI intake process every project runs through, with a named risk owner. At stage four, governance is fast enough that it accelerates rather than blocks — which is the version most companies never build.

    The contrarian point on governance: install it before you pilot, not after. Every mid-market company I have watched install governance after an incident has taken six to nine months longer to recover than the ones that installed it upfront. In Canada this means a documented PIPEDA posture and, for financial services, a read on OSFI guidance. In the US, NIST AI RMF alignment is the defensible baseline; SEC and sector regulators layer on top.

    5. Talent

    Talent scores the depth of the bench that can actually operate AI inside the business. It is not the headcount of your data science team. For a mid-market company, talent at stage three looks like this: one senior technical owner, two internal translators who sit in the business and can specify a use case properly, and a working relationship with one external partner who does not try to sell you unnecessary platforms. Stage four adds a training cadence that lifts the next layer of managers from consumers to specifiers.

    The failure mode is the hero hire. Companies hire one person from a big-tech AI team, pay them to lead the transformation, and discover six months in that the person never operated inside a mid-market P&L and cannot make the internal political case for the decisions they are recommending. The talent pillar is about the system, not the hire.

    6. Culture

    Culture is the pillar every company rates itself highest on and almost always lies about. The honest question is: when an individual contributor, unprompted, uses AI to cut three hours out of their week, does your company celebrate it, ignore it, or quietly worry about it? At stage two, policy is neutral. At stage three, the company measures and publishes AI-driven productivity gains by team. At stage four, the compensation system rewards them.

    The failure mode is a cultural undercurrent of fear — staff using AI in secret, hiding their use from managers, producing quality improvements the company cannot see or replicate. That is the most expensive form of shadow AI, and it shows up in engagement surveys long before it shows up on a P&L.

    How to score each pillar (0–4 maturity scale)

    The scoring conversation belongs in a single room. CEO, COO or CSO, CFO, CIO, and one outside facilitator. Two hours. Each pillar gets one page. The page lists the stage definitions, one decision rule, and one worked example. The room debates each pillar in order and lands on a number. The outside facilitator's job is to push back on ratings that are more aspirational than real.

    The output is a radar chart, but the radar chart is not the deliverable. The deliverable is the sequence. Which pillar blocks your next pilot? Which pillar can you lift one full stage in a quarter? Which pillar requires a twelve-month roadmap? Without that sequence, the score is decoration. With it, the CEO has a defensible plan to take to the board, to the bank, and to the top twelve employees whose trust is the real currency of transformation.

    Worked example — a 300-person professional services firm

    A Canadian professional services firm, roughly C$85M in revenue, two offices, seventy professional staff, came to us after a frustrating year. They had piloted three generative AI tools. Two were abandoned. The third was still running but usage was concentrated in eleven people and shrinking. The CEO wanted to know whether to press forward or pause.

    The assessment landed at: Strategy 2, Data 2, Infrastructure 3, Governance 1, Talent 2, Culture 2. The infrastructure score was the highest because their CIO was unusually good. The governance score was the lowest because they had piloted first and written the policy later, which meant every new use case had to re-argue the entire posture. The sequenced remediation was: Governance to 3 in sixty days (one week of policy work, thirty days of review cycles, thirty days of training). Strategy to 3 in ninety days by killing four of the six use cases and doubling investment behind the two with a defensible P&L tie. Data and Talent addressed in parallel over the following two quarters. Culture left alone — the lift comes from fixing the other pillars first.

    Eight months later, two of the surviving use cases had returned roughly 3.2x their total program cost, and the firm had published its first internal AI productivity report. The CEO's note to the board was four pages long and contained no slides.

    Common remediation sequences

    Three sequences cover most mid-market companies. The first — governance-first — applies when you have already piloted and the organization is nervous. Lift governance to stage three in sixty days, then revisit strategy. The second — strategy-first — applies when nothing has been piloted and the executive team is aligned. Force the strategy pillar to stage three before any tooling decision. The third — data-first — applies when two or three high-value use cases have been identified and scoped but stall when the team hits the data layer. Data work is the longest and least glamorous sequence and is where most companies underestimate by a factor of two.
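    The triage between the three sequences can be sketched as a decision function. The boolean inputs below are deliberate simplifications of the qualitative conditions described above — a real engagement resolves each one through conversation, not a flag.

    ```python
    # Hypothetical sketch of the remediation-sequence triage described in
    # the text. The boolean inputs stand in for qualitative judgments.

    def remediation_sequence(already_piloted: bool,
                             org_nervous: bool,
                             exec_aligned: bool,
                             use_cases_scoped: bool) -> str:
        if already_piloted and org_nervous:
            return "governance-first"   # lift governance to stage 3 in ~60 days
        if not already_piloted and exec_aligned:
            return "strategy-first"     # force strategy to stage 3 before tooling
        if use_cases_scoped:
            return "data-first"         # longest sequence; budget 2x your estimate
        return "run the six-pillar assessment first"

    print(remediation_sequence(already_piloted=True, org_nervous=True,
                               exec_aligned=False, use_cases_scoped=False))
    # governance-first
    ```

    The order of the checks matters: a company that has already piloted and spooked its organization goes governance-first even if it also has scoped use cases stalled at the data layer.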

    What you should not do: address every pillar simultaneously. It is the single most common pattern in transformation plans written by large consultancies and the single most reliable way to produce no measurable change in twelve months.

    How this framework differs from Microsoft's and Gartner's AI maturity models

    Microsoft's AI Maturity Model and Gartner's AI Maturity Model are good tools for large enterprises with dedicated AI functions and the organizational slack to run three-year maturity programs. Both assume a level of baseline infrastructure, a named chief data officer, and a governance apparatus that does not exist in most mid-market companies. Applied to a 400-person business, both models tend to produce low scores across the board, which is accurate but not actionable — the company cannot close ten gaps at once.

    The AI Savvy Readiness Framework is not a replacement. It is a right-sized alternative for companies that need a defensible starting point and a sequenced plan, not a multi-year maturity ladder. Companies that outgrow it — and some do, around 2,000 employees — can and should graduate to a full enterprise model. That is the point.

    Download the self-scoring worksheet

    The worksheet walks a leadership team through each pillar in about an hour and produces a scored radar chart plus a recommended remediation sequence. It is free. It is designed for self-facilitation, though most teams we have worked with find an outside facilitator lifts the quality of the conversation noticeably.

    Next step

    If your team has already done the self-assessment and the lowest pillar is not obvious, or if the lowest pillar is obvious and you want a second read before committing the next quarter's capital, book a strategic conversation. Bring your scored worksheet. Ninety minutes. We will debate the sequence and you will leave with a plan defensible to your board.
