How multi-billion dollar companies use AI

Today, I’m posting an interview with Mike Whitaker, SVP of Strategy Enablement and Innovation Services at ICF.

These interviews are less about promoting the products or businesses and more about the people behind them.

If you’re a founder or professional using AI for work and would like to share your story, reach out here.

Can you briefly describe your role and the company you work for?

I’m Mike Whitaker (everyone calls me “Whit”). I serve as Senior Vice President of Strategy Enablement and Innovation Services for ICF. ICF is a global professional and technology services firm that serves a roster of government clients with social and environmental missions, energy utilities, and commercial clients. Our purpose is to build a more prosperous and resilient world for all. We leverage deep domain expertise in areas such as public health, climate science, energy systems, social programs, community resilience, and disaster recovery, combined with strong cross-cutting capabilities in technology modernization, digital transformation, analytics, and engagement to help clients design and implement programs that advance their mission or business objectives.  ICF has an annual revenue of ~$2 billion with 9,000+ employees.

In my role, I lead our centralized Innovation Services Team. We are an agile, rapidly deployable team of internal consultants and doers focused on accelerating the innovation of ICF’s capabilities, offerings, and working methods to remain a vibrant, growing enterprise. Our team of 12 has backgrounds in human-centered design, design thinking facilitation, user experience, product leadership, data science and visualization, AI, service blueprinting, and organizational design. We focus our work on the white spaces of value creation not clearly owned or resourced by other parts of the enterprise. We partner with business leaders to convert concepts for innovation into client work while maximizing the time value of engagement for their staff during the innovation life cycle.

How has your company approached integrating AI into its processes and workflows? Was there a specific strategy or plan in place?

ICF primarily serves highly regulated institutions (governments and utilities). We do not have the same freedom to experiment and fail as a startup might, particularly when it comes to things like data protection, output accuracy, and mitigation of potential AI bias. As GenAI momentum built last year, I partnered closely with our Chief Technology Officer to formulate our strategy for advancing AI in the coming year. 

A critical first step in this process was generating executive team alignment on both the opportunity in front of us and the risks we needed to mitigate. Such alignment is essential. While AI’s potential is exciting, organizations that rush to incorporate GenAI without understanding the data needed for its success are set up for failure – a concern 3 in 4 federal mission leaders share. 

To ensure alignment and take a holistic view of the opportunity, we established the following three design principles for advancing AI:

  1. Use existing organizational structures with clearly defined roles and responsibilities instead of creating new structures. 
    • We did this because new structures can add friction and be under-resourced. If we want to use AI widely across the company, our processes need to be embedded into existing workflows and org structures.
  2. Decentralize what we can; centralize what we must – and ensure alignment on the value of the centralized elements.
    • ICF is an incredibly diverse company. We want AI in the hands of our domain and technology experts – and rightfully so: 88% of federal IT professionals state that digital modernization efforts that do not include domain experts are doomed to fail.  It would be impossible to identify and advance use cases centrally at the pace and scale that will be required. However, we also need to control for risk, so we decided to keep centralized specialized risk mitigation expertise such as intellectual property reviews, AI tool contract term approvals, client contract reviews, and data protection risk mitigation recommendations. 
  3. Bias towards action and learning; progress is more important than organizational boundaries. Have the hard conversations and keep moving. 
    • Initiatives in large companies can stall based on organizational boundaries – who owns what, who can make decisions, etc. We set up some temporary structures related to GenAI program leadership that spanned organizational boundaries and used a fusion team concept to organize resources. The concept with fusion teams is that when you know there will be a repeated set of work required in a new area that doesn’t neatly align to organizational boundaries, form the cross-disciplinary team once, allow that team to figure out how to work best together, and then bring work to the team (as opposed to re-forming teams each time a new piece of work comes in).

Given your approach of decentralizing what you can while maintaining centralized risk mitigation, how have you empowered employees to use AI?

We have taken a multi-layered approach to empowering employees to use AI.

  1. We developed a Responsible AI Use policy that clearly outlines the expectations for when employees can freely use AI technologies and when they need to seek further guidance to control for risk. As part of this process, we established a Responsible Generative AI Decision Process that defines when an employee’s use of AI is in the “express lane” and does not require further review and when they need to seek further legal, contractual, or data protection reviews. The risks we are focused on mitigating are when GenAI is used to materially produce project deliverables (ensuring we maintain ability to generate and transfer IP, quality meets our standards, and contractual terms allow for use of AI), inputting non-public data into an AI tool (ensuring we don’t inappropriately expose client or internal data to AI tools that would then use that data for further training), and processing personally identifiable data (ensuring our data protection and privacy standards are maintained). 
  2. The centralized team deputized a set of business GenAI leads who sit in our business units. The leads serve as the first line of guidance for employees seeking to use GenAI. The distributed GenAI leads are backed by the centralized team that provides guidance, connects them to centralized reviews, and builds the community of practice across all the GenAI business leads to increase the pace of organizational learning.
  3. Under the GenAI program, we have sub-teams focused on Enabling Client Delivery and Unleashing the Power of an AI-enabled Workforce. The Enabling Client Delivery workstream works with the businesses to identify and prove the value of AI use cases for client projects. The Unleashing the Power workstream works to drive adoption of AI across the workforce.
  4. We maintain a regularly updated list of approved AI tools and use cases. For tools not currently on the approved list, we have established a legal review express lane to assess contractual terms and determine whether the tool can be fully approved or only conditionally approved for certain use cases.
  5. We have built a series of trainings including videos on prompt engineering and a prompt library for specific workflows, developed a capability catalog to help employees find pre-built AI accelerators, and have a small team of AI consultants who can sit directly with domain leads to assess their use cases, determine how AI might be used, and build prototypes and testing protocols with them to confirm the value. 
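The Responsible Generative AI Decision Process described in item 1 can be sketched as a simple rule set. This is a hypothetical illustration only – the class and function names are my assumptions, not ICF’s actual implementation – but it captures the three risk triggers the interview names:

```python
from dataclasses import dataclass

@dataclass
class GenAIUseCase:
    """Hypothetical description of a proposed GenAI use (field names are illustrative)."""
    produces_deliverables: bool  # will GenAI materially produce project deliverables?
    uses_nonpublic_data: bool    # will non-public client or internal data be input?
    processes_pii: bool          # will personally identifiable data be processed?

def required_reviews(use: GenAIUseCase) -> list[str]:
    """Return the reviews a use case triggers; an empty list means 'express lane'."""
    reviews = []
    if use.produces_deliverables:
        reviews.append("legal/contractual review (IP transfer, quality, AI terms)")
    if use.uses_nonpublic_data:
        reviews.append("data protection review (no exposure to model training)")
    if use.processes_pii:
        reviews.append("privacy review (data protection standards)")
    return reviews

# A brainstorming session using only public data sails through the express lane:
print(required_reviews(GenAIUseCase(False, False, False)))  # []
```

Encoding the triggers this explicitly is what lets the distributed GenAI leads give a fast first answer while routing only the risky cases to centralized reviews.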

What AI tools are currently approved?

The following tools have been approved for broad employee use:

  • Microsoft Copilot: the version embedded in the Bing browser (formerly Bing Chat Enterprise) is available to all employees. We have also been running a pilot with a subset of a few hundred employees for the full Microsoft 365 Copilot suite. We will be increasing those full licenses throughout the year based on promising early results.
  • GitHub Copilot: approved to assist with code writing, documentation, and idea generation.
  • Sales Copilot: brings CRM data and AI-powered intelligence into Outlook and Teams workflows.
  • Adobe Firefly: part of Adobe Creative Cloud used to create initial drafts of high-quality creative content that can then be modified for final deliverables.
  • Mural: digital whiteboarding app that we use to conduct many remote and hybrid design thinking workshops with embedded AI capabilities for ideation and synthesizing inputs.

The following tools have been conditionally approved for specific use cases:

  • PMI Infinity: researching project management best practices and requirements.
  • Perplexity AI: in-depth scientific research using publicly available sources.

We have 20+ additional tools and use cases currently being assessed and expect the list to grow.

How are you using AI?

As part of the AI strategy, we have classified AI use in three categories:

  • Taker: applying technology that is available off-the-shelf to our use cases
  • Shaper: embedding AI into our systems, tools, and operational software
  • Maker: building new AI capabilities from scratch

We are focused on the first two categories. Early on, more of the use cases have been Shaper, where we have made custom adaptations to off-the-shelf tools to make them more useful. We hope more of the work shifts to Taker applications over time.

Our theory is that as AI technology becomes more widely available, domain expertise will be an increasingly important differentiator in generating the value required to achieve mission outcomes. Our expert-in-the-loop approach to deploying AI with clients includes five main service areas:

  • Readiness and strategic vision
  • Product strategy and use case development
  • Training and change management to build an AI-ready workforce
  • Risk and governance 
  • Design and development including getting data AI-ready

As an example, federal agencies partner with ICF and use a tool we have created to process comments that are submitted by the public during U.S. Federal Rulemaking processes. AI can play a valuable role in processing thousands of comments and identifying the distinct points that must be responded to for a regulation to advance. The AI solution requires a data ingestion pipeline, automated infrastructure, and prompt automation / pre-processing. Domain expertise is required to apply the AI and interpret the outputs appropriately, both to identify what constitutes a substantive comment and to recommend how to respond in the context of the regulation and what the agency is trying to achieve.
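The pipeline described above – ingestion, pre-processing, prompt automation, and an expert reviewing the model’s output – can be sketched in a few lines. Everything here is a hypothetical illustration: the function names and the stubbed-out model are my assumptions, not ICF’s actual tool.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    """One public comment pulled in by the ingestion pipeline."""
    comment_id: str
    text: str

def preprocess(raw: str) -> str:
    """Pre-processing step: normalize whitespace before prompting the model."""
    return " ".join(raw.split())

def build_prompt(comment: Comment) -> str:
    """Prompt automation: wrap each comment in a fixed instruction template."""
    return (
        "You are reviewing a public comment on a proposed federal rule.\n"
        "List the distinct substantive points that require a response.\n\n"
        f"Comment {comment.comment_id}:\n{preprocess(comment.text)}"
    )

def triage(comments: list[Comment], llm) -> dict[str, list[str]]:
    """Run every comment through the model. A domain expert then reviews the
    extracted points in the context of the regulation before responding."""
    return {c.comment_id: llm(build_prompt(c)) for c in comments}

# With a stubbed model, the pipeline shape can be exercised end to end:
fake_llm = lambda prompt: ["point: requests a longer comment period"]
result = triage([Comment("C-001", "Please  extend  the  deadline.")], fake_llm)
```

The design point the interview makes is in the last step: the model extracts candidate points at scale, but the expert-in-the-loop decides what is substantive and how the agency should respond.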

As another example, we have worked with multiple federal agencies to use AI to significantly reduce the time required to review hundreds of program activity and evaluation reports, and to combine those reviews with domain expertise to develop go-forward recommendations that improve mission outcomes such as suicide prevention, overdose protection, and reduction of animal testing in biomedical research. Across these use cases, AI can reduce manual document review time by hundreds to thousands of hours, enabling the domain experts to focus more time on higher-value activities such as interpreting the outputs and working with the clients to make program adjustments that improve outcomes.

What’s the focus area for your team over the next few months?

We are focused on connecting our domain experts to the power of emerging AI technologies. We are building our capacity to engage business leads around the work they are delivering for clients and where we have the greatest opportunities to introduce AI in a scalable and repeatable way. Finding those opportunities requires our team to provide consultants who can translate how a job is currently delivered into a workflow that can leverage AI technology. Those consultants also need to understand the constraints our client teams face in terms of client expectations and contractual terms. We then need to go beyond envisioning possibilities to building the prototypes and developing the tests that validate the quality of output and value gained. My team just hired two additional consultants. We look not only for candidates with a technical background in computer science, data science, or AI, but also for those who have a consulting mindset and a demonstrated interest in using their expertise to solve society’s most pressing challenges.

For other companies looking to adopt AI, what do you recommend they do?

There is a sense of urgency to do something as quickly as possible. However, lessons from past technological advancements show that change is usually slower in year 1 than people think and greater in 5-10 years than they initially imagine. I advise other companies who want to deploy AI in a responsible, human-led way to take a few foundational steps that may feel like they slow you down at first but will ultimately position you better for the long game. 

  • Generate and work to maintain executive alignment on your AI strategy.
  • Establish robust processes to control risks to allow moving forward with confidence.
  • Empower a small team of AI consultants to work with your front line and back-office staff to identify and advance use cases that have potential for scalable value. 
  • Ensure you have doers with available time to build prototypes that validate value and build confidence before scaling solutions. Your teams are busy with their day-to-day jobs, and progress will be slow or incremental unless you can deploy additional specialized capacity.
