
The biggest mistakes companies make when onboarding AI

A digital compliance expert weighs in on what companies are doing wrong with AI implementation.



The pressure to be an AI-first company has reached a fever pitch. But in the rush to adopt the new tech, companies might be forgetting to ask an important question: What’s getting overlooked and left behind? If you ask Amy Worley, managing director and head of management consulting firm BRG’s digital compliance practice, the answers are privacy, security, and money.

CFO Brew asked Worley some of the deep AI questions CFOs might whisper to themselves at night when they can’t sleep, as they hurtle toward an AI future.

This interview has been edited for length and clarity.

What in the AI landscape is concerning you?

What I’m seeing is that a lot of senior leaders are pushing for rapid adoption. There is a widespread view that adoption of AI will create efficiencies and drive down costs. And so for a lot of our clients, CEOs, COOs, [and] boards have given the directive, “Do AI and do it right now.”

For many organizations, this requires a level of data governance that they may not have had before, and so there are a lot of change management and organizational readiness issues coming up. And then also, because this is a new area, legally, there are a whole lot of questions about how to appropriately manage risk, and how to even really talk about quantifying that risk and setting the controls to mitigate it.

What are the big mistakes that you’re seeing companies make?

A lot of the companies I’m seeing are underestimating how much time and effort it’s going to take to get their data into the right format, in the right locations, and managed in a way that lets them get the value out of AI and pay back the investment in purchasing different AI systems.

That’s not exactly a privacy mistake, though, is it?

No, it’s a more general mistake that leads to privacy problems. Let me give you an example. There are a lot of cool tools on the market that will use an agent to do enterprise search and drive efficiency, and most of these tools default to your organization’s existing access controls, deletion policies, and networks. Many organizations haven’t really focused on that for unstructured data, so their SharePoint environment might not have tight access controls or tight user permissions. Google Workspace might not have those. You put the AI on top of that, and all of a sudden you may be pulling up unstructured data that contains personal information or sensitive personal data about people, and you didn’t intend that.
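To make that failure mode concrete, here is a minimal sketch of a permission-aware search layer that only surfaces documents the requesting user could already open and flags likely sensitive personal data for review. The document store, group-based access controls, and PII pattern below are hypothetical illustrations, not any particular vendor’s tooling.

```python
# Minimal sketch: an enterprise-search agent that indexes everything inherits
# whatever loose permissions already exist on unstructured stores. This filter
# respects existing access controls and flags apparent personal data instead
# of silently surfacing it. All names and patterns here are illustrative.
import re
from dataclasses import dataclass, field

PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g., US SSN-style strings

@dataclass
class Document:
    path: str
    text: str
    allowed_groups: set = field(default_factory=set)  # existing access controls

def searchable_corpus(docs, user_groups):
    """Return only documents the requesting user could already open,
    skipping any that appear to contain sensitive personal data."""
    results = []
    for doc in docs:
        if not doc.allowed_groups & user_groups:
            continue  # respect existing permissions instead of indexing everything
        if PII_PATTERN.search(doc.text):
            continue  # in practice, route to redaction/review rather than surface it
        results.append(doc)
    return results

docs = [
    Document("hr/salaries.docx", "SSN 123-45-6789", {"hr"}),
    Document("eng/roadmap.md", "Q3 platform plans", {"eng", "all-staff"}),
]
print([d.path for d in searchable_corpus(docs, {"all-staff"})])  # -> ['eng/roadmap.md']
```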


So what’s your advice to companies?

Governance, governance, governance. We are quickly changing the structure of how data governance works in organizations…AI governance is cross-functional, and so it is requiring multiple stakeholders to be at the table for the entire AI lifecycle.

If you’re telling me you want to use agentic AI on a database where there are existing privacy controls in place, that is a much faster yes. But if you tell me you want to set up an agentic AI to improve worker efficiency and give it access to a bunch of unstructured data, that is a much longer step-by-step process. And we’re going to look at things like access controls…The right way to do it is then to go back and test, at some designated point in the future, to make sure those controls actually got implemented…It isn’t going to sit with just one team. It’s going to require a cross-functional team.

Where are the surprise costs coming from?

One is the money cost, in that the price tag for purchasing and implementing an AI system may look like it’s $X, but once you do the actual implementation work to get everything assessed, to get the controls in place, and to get it integrated appropriately, the real number is higher. Often, redaction tools and things like that can be an upcharge. You buy a privacy package, and it can be more expensive than you think. So it can straight up just cost more.

And the risk cost?

The risk is incredibly amorphous…I would call it old-fashioned risk in a new package…If we train AI models on old information, we can bring in old biases. And so we just want to make sure that we’re not doing that.

One of the things I get asked by my CFO direct report people is to “put a number on this risk.” And that is really hard to do for AI, in a way that’s very different from data protection. So for privacy, I can pull up five years’ worth of opinions, whether it’s class actions or supervisory actions, and I can say, “Okay, well, the monetary regulatory risk is X.” And you really can’t do that yet here, but I expect that we’re going to see some creative lawsuits, like we’ve seen from class action plaintiffs’ lawyers bringing privacy lawsuits based on cookies and tracking technologies and things like that.
