Don't let the tool
define your institution.
Federal institutions are past "should we use AI?" and deep into "we already are, everywhere, and none of it is coherent." Imago Forge helps you get ahead of it — before the tool's defaults become yours, or after, when it's time to build something that actually fits.
The tool is making decisions your institution never made.
Every AI system ships with assumptions baked in — about how decisions get made, what gets prioritized, what gets ignored. Without deliberate customization, those assumptions quietly become yours.
Two ways in. One honest conversation first.
Most clients start with one and grow into the other. Where you begin depends on where you are.
Before you sign anything.
You're evaluating vendors, framing a requirement, about to issue an RFP, or trying to figure out whether to build or buy. You want a second set of eyes from someone who's been on both sides of the table.
Pattern recognition work. Jeremy has watched enough federal AI procurement go wrong — from inside agencies, from the vendor side, and as an analyst — that he catches the expensive mistakes before they get made.
- Vendor evaluation and the questions you don't know to ask
- Requirements framing that protects you in the contract
- Build vs. buy analysis grounded in federal reality
- OMB M-25-22 acquisition readiness
- Leadership alignment before the purchase, not after
Build a model that actually fits.
You're already using AI in multiple ways and need to make it coherent — a real operating model built around your mission, values, and accountability structure, not a framework off the shelf.
Operating model design. Not a governance deck. Not a compliance exercise. A working model that defines how AI operates inside your specific institution — who owns it, how decisions get made, and how it stays yours.
- Use-case mapping to mission and institutional values
- Accountability and decision authority structure
- Human oversight design that actually holds
- Cross-program coherence and interoperability
- Pilot design with measurable, defensible outcomes
A conversation before the RFP
One call to sense-check the approach, the vendors, the framing. Often catches the expensive mistake before it's made.
Shaping the acquisition
Requirements development, vendor evaluation, contract language. The work that determines whether the tool serves the institution or the other way around.
Building the operating model
A defined, governed, mission-aligned AI operating model. The work that makes AI coherent across the institution instead of scattered across programs.
He's been on every side of this transaction.
The value isn't that he's seen a lot of federal AI. It's that he's seen it as the analyst telling agencies what to buy, as the vendor watching how they actually use it, and as the practitioner inside the building when it goes wrong.
Jeremy Wilcox
Founder, Imago Forge
Jeremy spent two decades moving between the three roles that almost nobody occupies at the same time: federal technology analyst, AI vendor, and government practitioner. That triangle is what Imago Forge is built on.
At Forrester, he advised federal agencies on what to buy — and watched institutions make expensive, irreversible decisions without the right framework. On the vendor side at C3.ai and Accenture, he saw how those same institutions actually used what they bought, and how rarely the tool matched the institution's real needs.
Inside government at DHS and detailed through USDS to DOJ, he saw the deployment side: what happens when AI meets a real institution, real workflows, and real accountability pressures.
What makes this different.
The large firms have AI governance practices. They also have billions invested in their own platforms and a strong incentive to sell you their stack.
Scale, frameworks, and their platform
- ✗ NIST RMF decks applied generically to every client
- ✗ Governance as compliance theater, not operating design
- ✗ Partners who've never been inside a federal agency
- ✗ Incentivized to sell you their platform and their bench
- ✗ Senior attention in the pitch, junior team on the work
Built around your institution, not ours
- ✓ Operating model designed to your mission and values, not a template
- ✓ Governance that reflects how your institution actually decides things
- ✓ Advisor who's been inside DHS, USDS, Forrester, and C3.ai federal
- ✓ No platform to sell — fully aligned incentives from day one
- ✓ Jeremy on every engagement, not handed to someone else
One conversation before the decision costs you.
Whether you're evaluating vendors, framing a requirement, or trying to make sense of AI that's already sprawling across your institution — the right starting point is a direct conversation with someone who's been on every side of this.