From National AI Policy to Company Reality
- Julie Lavergne
- Feb 19
- 5 min read
At first glance, an AI policy conference can feel far removed from the day-to-day realities of running a small business. National frameworks, global governance, democracy, youth protection are all important topics, yet they seem distant from questions like: How can we use GenAI / AI effectively in our business? Should we? And under what conditions?
The conference did more than shed light on AI at large; it showed that policy, at its core, follows the same logic at every scale. The goal of AI isn’t AI itself; it’s the human and societal outcomes it’s meant to serve. Hence, any AI policy needs to be clear on:
who decides,
based on what understanding,
and with what responsibility.
AI policy, at any level, is less about restriction and more about a way to create clarity, manage risk, enable responsible innovation, and stay anchored in long-term human outcomes.
A Few Big Themes from the Day
The Mila - Quebec Artificial Intelligence Institute AI Policy Conference brought together researchers, policymakers, educators, and civil society leaders from around the world to explore how AI should be governed. While the discussions were wide-ranging, several themes surfaced repeatedly.
Governance is about decisions, not technology.
A recurring message was that AI does not “happen” to us. “It’s not the weather,” as Virginia Dignum, Professor of Responsible Artificial Intelligence and Director of the AI Policy Lab at Umeå University, cleverly put it. AI is designed, deployed, and used through human choices. The core governance question is not what AI can do, but who decides how it is used, by what rules, and in whose interest.
AI is not the goal. Human benefit is.
Several speakers made this point explicitly: AI is a means, not an end. Policy discussions become distorted when technological advancement itself becomes the objective, rather than the social, economic, or human outcomes it is meant to serve.
Building on this, panelists including Lynnsey Chartrand and Alejandro Mayoral Banos, PhD, cautioned against defaulting to externally imposed models of AI development. When capital-market logic (speed, scale, and return) becomes the primary driver, other forms of knowledge risk being sidelined.
The concern raised was not abstract. Relying uncritically on models, assumptions, and systems designed elsewhere can amount to a form of knowledge colonization, where local context, culture, and ownership are treated as secondary rather than essential.
The real gap is not technological—it’s decision-making capacity.
Several speakers noted that technical capability is advancing faster than the ability of governments, schools, and organizations to make informed decisions about it. This gap shows up as confusion, fear, or overly simplistic rules.
As Elyas Felfoul (WISE Partnerships & EdTech Accelerator; Brain Trust XPRIZE; Strategic Committee, UM6P DeepTech Summit; Mila AI Policy Fellow, AI for Learning in MENA, the Middle East and North Africa) put it:
“Education systems were designed for a stable economy, slow change, and predictable careers. That world no longer exists.” The same could be said of many of our organizational systems.
Across panels, the issue wasn’t a lack of tools or frameworks, but a lack of shared AI literacy, especially among leaders. Without a common baseline of understanding, policy becomes symbolic rather than practical: something written to signal responsibility rather than to guide real decisions.
Trust and agency are fragile.
Across discussions ranging from youth and democracy to workplace automation, speakers returned to the question of human agency. When AI systems begin to shape decisions, relationships, or attention without being well understood, trust can erode quickly.
Work shared by Helen Hayes on conversational AI illustrated this shift clearly: tools that once functioned primarily as information systems are increasingly relational systems — tools people talk to, confide in, and rely on over time, raising new questions about dependency, agency, and where human judgment sits in the loop.
Capability building is the long game.
Rather than predicting specific job outcomes, several panelists — including Laurent Charlin and Namir Anani — emphasized building durable human capabilities that allow people and organizations to adapt as technologies evolve.
Translating Big Policy Questions into Company-Level Ones
Different scale. Same logic. Here are five considerations.
1. Who gets to decide how AI is used?
At the conference: Governance discussions repeatedly came back to decision rights: who sets the rules, who is accountable, and how power is distributed when AI systems influence outcomes.
Inside a company: This shows up quickly as ambiguity. Is AI use a personal productivity choice? A manager decision? An IT or legal call? When no one is clearly responsible, decisions happen by default rather than design.
An internal AI policy often needs to answer a simple but uncomfortable question: who decides what “responsible use” actually means here?
2. What level of understanding do decision-makers need?
At the conference: Speakers highlighted a growing disconnect between those making policy decisions and those who truly understand how AI systems work or don’t work. That gap undermines good governance.
Inside a company: Leaders don’t need to be technical experts, but they do need enough understanding to ask better questions. Otherwise, policies become either overly restrictive (“ban it”) or overly permissive (“use common sense”), neither of which helps employees navigate real situations.
3. What are we actually trying to achieve by using AI?
At the conference: There was a strong emphasis on balance: protecting people and democratic values without stifling innovation or beneficial use.
Inside a company: This becomes a practical question:
Are we adopting AI to reduce friction for people?
To improve decision quality?
To move faster because “everyone else is”?
When the goal isn’t clear, AI adoption tends to follow the loudest external signals (vendors, competitors, headlines) rather than internal needs. That’s how tools get adopted without anyone being able to explain what problem they were meant to solve.
4. How do we handle uneven impact and vulnerability?
At the conference: Youth, marginalized communities, and unequal access to technology were central concerns. AI does not affect everyone equally.
Inside a company: Not all roles are impacted the same way. Some employees gain efficiency; others worry about deskilling, surveillance, or being judged by opaque systems.
An internal AI policy often needs to acknowledge this unevenness explicitly, rather than assuming a one-size-fits-all approach. Silence here tends to breed anxiety.
5. How do we keep policies relevant as things change?
At the conference: Many speakers emphasized that AI policy must be adaptive. Static rules quickly become obsolete as technology evolves.
Inside a company: This is where many internal policies struggle. They are written once, approved, and then quietly ignored as tools and practices change.
A more useful approach treats AI policy as a living framework: principles, decision criteria, and review points—rather than a fixed list of dos and don’ts.
Where This Leaves Us
Canada’s AI regulatory landscape is still evolving. Federal legislation is in flux, and organizations are navigating a patchwork of privacy, human rights, and voluntary guidance. That uncertainty can make it tempting for small businesses to wait.
But waiting doesn’t eliminate responsibility; it just means decisions get made quietly, one tool and one use case at a time, without a shared understanding.
Internal AI policies matter not because they give leaders the appearance of “control”, but because organizations need a way to talk about AI use with clarity and trust. Done well, they are less about restriction and more about helping people exercise judgment in a changing environment.
One of the most memorable moments of the day came from Audrey Tang, Taiwan's Cyber Ambassador and former Minister of Digital Affairs. Their approach to governance felt notably different: less about restriction and more about creating the conditions for better collective decisions.
Instead of defaulting to heavy controls, Tang shared examples of civic participation, rapid experimentation, and simple, practical interventions that made systems more transparent and responsive. It was a reminder that policy doesn’t have to slow things down — done well, it can expand what’s possible.
At every level, from national policy to a ten-person company, the challenge is the same: not to run faster without steering, but to build the capability to decide well as the terrain keeps shifting.


