
How to Structure an AI Programme at Your Organisation

Adam provides a tactical guide on structuring an AI programme, focusing on governance, legal, and security strategies for organisational success.

By Adam Vagley 18 Dec 2024

As someone who has been consulting for large companies for two decades, I have never before seen the excitement that I'm seeing around AI. Typically slow-moving companies are racing to find ways to leverage AI and reap the benefits.

This is, nonetheless, a quickly evolving space. As project and programme managers, you will likely get pulled in to help get these efforts off the ground and guide them as they move forward. 

As you think about how to structure things, here's a tactical guide to help. 

The Right Governance Structure is Critical

Any experienced project manager knows how important strong governance is to a project's success.  

As you think about standing up an AI programme, it's really important to strike the right balance: control over scope, risks, and budgets without stifling the ability to experiment and learn.

One option that has worked well is centralising core governance functions but pushing execution to the individual business group level.  

[Figure: Centralised AI Programme Governance]

This centralised body wears two hats: 

  1. It sets AI standards and policies for the organisation, develops programme-level project management and change management processes, and approves projects and funding. This way, everyone is following the same playbook, and you avoid the risk that someone in one group is duplicating the work of a team in another. 
  2. It evaluates proposed projects through the lens of risk to the organisation. 

Given the functions of this governance body, it needs representation from your business organisations, IT, Legal, Risk, Information Security (InfoSec), and Vendor Management.  

The dynamic of these different areas looks something like this: 

  • Senior leaders from each business group act as sponsors of the proposed projects and are accountable for outcomes. 
  • IT partners with the business to implement the solution. 
  • Legal, Risk, and InfoSec assess the impacts of the solution on the organisation's overall security and risk posture. 
  • Vendor Management provides guidelines for any vendor solutions considered for evaluation.

An experienced programme manager facilitates standing up this body, runs its meetings, and makes sure the business group teams are following the playbook. 

First, Do No Harm: Why You Need Legal and Information Security

Building on the above governance structure, I want to emphasise the important roles of Legal and InfoSec in helping navigate the adoption of AI tools.  

First, let's talk about Legal. 

In short, your legal team needs to be involved to protect your organisation. This is even more important if you operate across different states or countries, as regulations are starting to form in many jurisdictions, with more to come. You need to ensure whatever solutions you deploy do not run afoul of any newly minted laws. 

There may also be employee policies or guidelines that need to be developed for the use of the AI solutions being explored. 

Data security and ownership are other important parts of the puzzle that should involve a legal review. Vendor contracts may include clauses about whether the vendor will use your data to train their model, or whether they can see your prompts or output. This is not something you or IT should navigate alone. 

Next, let's talk about our friends in InfoSec. They're another important part of the flow since they need to understand what security risks a solution might pose.  

For example, people might try to use these tools maliciously. Can your security team see what sort of queries people are submitting?  

And AI can make it easier to expose sensitive data. Is data appropriately locked down?  

Some solutions might let people exfiltrate data or otherwise set your organisation up for a cyber attack.  
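
To make the visibility point concrete, here's a minimal sketch of routing every model query through a single audited choke point so your security team can review what people are submitting. The call_model placeholder and the log fields are illustrative assumptions, not any specific product's API.

```python
import logging
from datetime import datetime, timezone

# Dedicated audit logger that InfoSec can monitor for risky prompts.
audit_log = logging.getLogger("ai.audit")
logging.basicConfig(level=logging.INFO)

def call_model(prompt: str) -> str:
    """Stand-in for whatever AI service your organisation actually uses."""
    return f"(model response to {prompt!r})"

def audited_query(user_id: str, prompt: str) -> str:
    """One choke point that records who asked what, and when."""
    response = call_model(prompt)
    audit_log.info(
        "user=%s time=%s prompt=%r response_chars=%d",
        user_id,
        datetime.now(timezone.utc).isoformat(),
        prompt,
        len(response),
    )
    return response

print(audited_query("jdoe", "Summarise the Q3 sales figures"))
```

The design point is simple: if every query flows through one wrapper, security reviews and data-loss controls have a single place to live.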

Needless to say, you don't want to be the PM whose project put the company at risk.  


AI Initiatives Should Be Use Case-Based

Every solutions vendor out there is baking AI into their products. You could waste lots of time and money chasing after cool-sounding products that don't really serve a business purpose for your organisation. 

Some solutions will be vendor-provided, and some will be custom-built in-house, but ultimately, the focus should be on use cases anchored to business objectives. 

What does that look like? 

Vendor/Product focus: I want to evaluate Workday's AI features for candidate reviews. 

Use case focus: Streamline the talent recruitment process by automating review and shortlisting of candidate resumes. 

In this example, Workday may be the solution regardless, but the specific business outcome and value are clearly articulated in the latter approach: it's not about a tool; it's about an objective. 

As people in various parts of the organisation have ideas for where AI might add value, those should get bubbled up to the business group's sponsor. (Business groups may want their own criteria for defining what goes to the central governance committee for approval.) 

The Process Should Be Optimised for Risk Management and Speed

As mentioned above, there's a fine balance you're aiming to strike as you set up an AI programme. You need to balance controls with enabling your people to experiment, learn, and deliver value.  

This means your business groups really need to think about the intent behind each proposal, not just its label. Some might argue machine learning-focused projects don't need to go through this governance process. But that line is increasingly blurry, and people don't always use the terminology correctly. So, you may want AI-adjacent asks like machine learning to fall under the governance committee's purview as well.  

Just as most PMOs have an intake or business case process to justify projects, you should have a standard form so that use cases can be evaluated consistently (a sketch of such a form follows the list below). This form should include things like: 

  • Scope of the initiative
    • What are you trying to learn or deliver? 
    • How is it different from how the business works today? 
  • Expected business outcomes 
    • How will you measure success?
    • Sometimes, it may be important to estimate the potential dollar value provided. This can be hard to come up with, but something indicative is still useful. For example, is it going to make 10 employees 100% more efficient, or make 1,000 employees 50% more efficient? Some external vendors may have their own studies on outcomes — while these should be taken with a grain of salt, they can still be a useful reference point. 
  • Cost to evaluate or implement the solutions (or to determine what an appropriate solution might be) 
  • Sponsors/owners of the initiative  
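
To make this concrete, here's a minimal sketch of what such a standard intake form could look like as a data structure. The field names are illustrative assumptions mirroring the list above, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseIntake:
    """Hypothetical intake record mirroring the fields listed above."""
    title: str
    sponsor: str                        # accountable business-group leader
    scope: str                          # what you are trying to learn or deliver
    change_from_today: str              # how it differs from how the business works now
    success_measures: list[str] = field(default_factory=list)
    estimated_value: str = ""           # indicative, e.g. "1,000 employees 50% more efficient"
    evaluation_cost: float = 0.0        # cost to evaluate or implement the solution

# Example based on the recruitment use case above.
intake = AIUseCaseIntake(
    title="Automated resume shortlisting",
    sponsor="Head of Talent Acquisition",
    scope="Streamline recruitment by automating review and shortlisting of resumes",
    change_from_today="Recruiters currently screen every resume manually",
    success_measures=["Time-to-shortlist", "Recruiter hours saved"],
    estimated_value="Make 10 recruiters roughly twice as efficient",
    evaluation_cost=25_000.0,
)
print(intake.title, "-", intake.sponsor)
```

Whether this lives in a form tool, a spreadsheet, or a ticketing system matters less than every business group filling in the same fields.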

Here's how this would look moving through the governance structure: 

[Figure: AI Programme Governance - Step-by-Step Process for Evaluating and Approving AI Use Cases]

As the project team hits certain milestones, you may want them to revisit the programme governance forum to ensure continued alignment. 

Again, these are high-level stage gates and should be tailored to fit how your organisation works and what your AI programme is trying to achieve. 
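
The exact gates belong to your organisation, but as a rough sketch of how stage gates like the ones in the figure might be written down, here's an illustrative flow. The gate names are assumptions pieced together from the roles described earlier, not a prescribed process.

```python
from enum import Enum, auto

class Gate(Enum):
    """Illustrative stage gates; tailor these to your organisation."""
    INTAKE = auto()           # use case submitted on the standard form
    RISK_REVIEW = auto()      # Legal, Risk, and InfoSec assessment
    APPROVAL = auto()         # governance committee approves scope and funding
    PILOT = auto()            # business group and IT run the experiment
    MILESTONE_CHECK = auto()  # revisit the governance forum at key milestones
    SCALE_OR_STOP = auto()    # expand the solution, or wind it down

# Allowed transitions; note the loops back for rework and re-approval.
NEXT = {
    Gate.INTAKE: [Gate.RISK_REVIEW],
    Gate.RISK_REVIEW: [Gate.APPROVAL, Gate.INTAKE],
    Gate.APPROVAL: [Gate.PILOT],
    Gate.PILOT: [Gate.MILESTONE_CHECK],
    Gate.MILESTONE_CHECK: [Gate.PILOT, Gate.SCALE_OR_STOP],
}

print(" -> ".join(g.name for g in Gate))
```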

The Landscape is Changing Quickly: Plan Accordingly

The tech itself is changing at light speed. Even Microsoft seems to be throwing half-baked products over the fence and then iterating. Finding up-to-date documentation on controls and settings can be a thankless hamster wheel. 

What that means for your users is that how something works today may be different tomorrow. As you think about your messaging, you need to set expectations appropriately: it's early days; things will change; we're learning together; we still think these products have a lot of value, which is why we're giving them to you. 

How you frame this could mean the difference between your users getting frustrated versus understanding it's all part of the experience of adopting a brand-new technology.