People, in general, don’t care about privacy. They like the idea of their data being safe, but their actions tell another story. They’ll click on any link and sign up for the shadiest free tools without giving it a second thought. If you’re not paying for a service, it means you’re the product, and people have shown over and over again that they are OK with that.
AI tools take this careless behaviour to the next level. I think it’s safe to assume that ChatGPT hears more confessions than the average priest. People freely share medical and financial information with their LLM.
This carelessness is one of the main reasons companies are reluctant to jump on the AI train. But demand for these new tools is sky-high, and employees are experimenting, whether the CISO/DPO likes it or not. The mismatch between sanctioned supply and employee demand leads to an explosion of dangerous shadow IT.
Confidential files are summarized by Claude. Someone is using Cursor to generate database scripts with production credentials. Company secrets are discussed on Zoom calls with free AI notetakers. People are careless.
Large enterprises are afraid of the repercussions of data leaks, but by blocking AI innovation, they make things worse. If a company moves too slowly, its employees will try to catch up with unsanctioned productivity tools.
This is an accident waiting to happen. Somewhere at a large insurance firm, an entrepreneurial clerk is tired of waiting for the IT team to deliver a much-needed feature. She decides to use Google Firebase Studio to build a tool that helps her weed through thousands of application forms. She doesn’t know what encryption is and has no idea what happens to the uploaded data.
It’s not a far-fetched scenario, and it creates an expensive problem.
The way forward for companies is to remain vigilant about privacy and security while leaning hard into the AI tools they can safely adopt.
Microsoft’s Azure cloud is in the business of selling you safety and compliance. Organizations that can’t send their confidential data to OpenAI can spin up their own Azure OpenAI instance instead. It’s the same product, with a different data privacy agreement. Microsoft’s legal team guarantees your data is safe.
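The switch is mostly configuration. Here’s a minimal sketch, assuming the official openai Python SDK (v1+); the endpoint, key, and deployment name are placeholders you’d get from your own Azure portal:

```python
# Minimal sketch: pointing the standard OpenAI SDK at a private
# Azure OpenAI deployment instead of api.openai.com.
# Endpoint, key, and deployment name below are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://your-resource.openai.azure.com",  # your own Azure resource
    api_key="...",            # key from the Azure portal, not an OpenAI key
    api_version="2024-02-01",
)

# Same chat-completions interface; "model" refers to your Azure deployment name.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this confidential memo..."}],
)
print(response.choices[0].message.content)
```

Same code, same models; the data simply stays inside your own Azure tenant, under Microsoft’s enterprise terms.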
Google does the same with its enterprise Google Workspace offering. You can use Gemini and NotebookLM while the Googlers guarantee your data is not used for training.
As new AI products flood the market, incumbents roll out their own versions. And that makes all the difference for reluctant companies. They won’t be on the cutting edge, but at least they can safely ride the wave of productivity improvement their employees demand.
Enterprise-grade contracts aren’t sexy, but they are the path to mass adoption of new technology. They are also a much-needed defense against dangerous shadow IT.
Often, a legal solution is better than a technical one.