
Your employees are using AI. Where’s your policy?
Experts say virtually every company in the modern tech space is using artificial intelligence in some manner, from grammar editing to agentic systems, though some use it more heavily than others. However, many organisations lack guidelines that define the development, deployment, and use of AI systems.
Ross Pambrun, CEO of Calgary's The Memphis Group and a futurist keynote speaker with the Speaker's Bureau of Canada, says that if a company is not using AI, it is already falling behind the rapid innovation and expansion of the Canadian tech space.
"As much as many of the CEOs out there say, 'I don't think we're using artificial intelligence,' I can guarantee people in their company are," says Pambrun. "With bring your own device, bring your own AI to work: The question isn't whether a company is using artificial intelligence or whether the employees are using AI, it's whether it has the governance to ensure that it's being used safely and ethically."
Much of AI is designed to train on information relevant to its purpose; in other words, it learns from queries and becomes more informed. Each LLM has different terms and conditions, which may give its provider access to the information entered into queries. This varies from tool to tool.
Ramy Nassar, author of the AI Product Design Handbook and former Head of Innovation for Mattel, says that whether it's ChatGPT, Gemini, or another platform, if the version is free, you're probably paying with your data. This brings significant risks at the enterprise level.
Nassar says AI has magnified the potential for confidential data to be accessed by third parties. "You would never take company financials from last year...You would never paste that into Google, but you might paste that into ChatGPT."
Nassar's Scenario
If the head of technology at a company with 1,000 staff buys a premium version of an LLM for a select group of employees, what happens when the rest of the staff see those employees using the tool? They might log in with their own accounts, possibly free versions that make use of their data.
Person A, using the enterprise-licensed version of a product, has far more data security than colleagues on a free subscription.
"It is, relatively speaking, safe for person A to put company or proprietary intellectual property information into this tool and analyse it or query it than person B…who is essentially putting company proprietary information into the hands of private for-profit companies," says Nassar.
Pambrun says that at an agency level, each department needs guidelines to evaluate the innovation AI could bring and to regulate low-risk use.
"The greatest risk for most companies is operational. So the nature of readiness means that they are participating in accepting that the CEO and the C-suite level, is aware and is ready to participate," says Pambrun. "That adaptiveness means we need to start building a framework."
MIT study says people are becoming more dependent on LLMs
A recent study from the Massachusetts Institute of Technology examined the effects of LLM use on the brain and found that brain connectivity systematically decreased with the amount of external support from these tools.
"What happens when people accelerate their performance, their productivity, their job…using AI, and they don't validate what it's doing. Who is accountable for the outcome of it?" says Nassar. Most court rulings state that it is still the employee or company that used the tool.
A 2023 lawsuit by the US Equal Employment Opportunity Commission found iTutorGroup liable for using recruitment software that would "automatically reject female applicants aged 55 or older and male applicants aged 60 or older." The company was required to pay US$365,000 to rejected applicants.
The government's using AI too
This issue has also been raised within government as AI becomes more dominant. In March, the Government of Canada launched its "AI Strategy for the Federal Public Service 2025-2027," which outlines primary guidelines for the use of AI in official government business.
It states that while AI has been used for decades, recent advancements in generative AI have drawn public scrutiny. "Existing and future AI systems must therefore be appropriately governed, with guidance, policy, and training in place to manage risk, address challenges, and uphold human rights, public trust, and national security," the strategy states.
Pambrun says businesses are desperate for direction on AI use that is ethical, accessible, and appropriately licensed. He pointed to the vast number of policies that protect companies from phishing emails and other cybersecurity incidents, while AI remains unaddressed.
"The moment you don't have a policy that says, I'm concerned about a communication I had and I didn't report it, all of a sudden you've created a back door where somebody else can be stealing all of your information or creating operational risks and threats."
Transparency is key
As LLM technology accelerates, so do the industries that use it. Nassar says what a company decides to put in an AI policy today might change. It's about transparency, but also about alignment on what best benefits a company's operations at that point in time.