
Dear Public Service Confidential,

The powers that be seem hellbent on pushing us as public servants to implement artificial intelligence in our daily tasks as much as possible. I know we have a reputation for moving too slowly, but I can’t help but worry that we’re moving too fast on AI. Maybe a bit more risk aversion would be a good thing when it comes to this technology. Are we relying too much on private companies to contract these tools and store sensitive information? And how can we be assured that these systems won’t result in sensitive data going somewhere it shouldn’t when part of the way these tools grow and thrive is by using the information we give them?

— a hesitant public servant
You’re not wrong to feel uneasy, and you’re not alone. Across the federal public service, the pressure to get on board with AI is real and it’s coming from the top. When something lands in a mandate letter, it tends to cascade quickly into departmental plans, workplace tools and day-to-day expectations. So, if it feels like AI has gone from optional curiosity to institutional priority remarkably quickly, that’s because it has.

But hesitation doesn’t mean you’re behind. It probably means you’re paying attention. Public servants are stewards of sensitive information and public trust, so a reasonable degree of caution here is prudent. And the tension between innovation (“move faster”) and risk management (“be cautious”) is, in many ways, a central governance challenge of the moment. In the familiar public service tradition of offering fearless advice before delivering loyal implementation, much of the mantle of reconciling this tension falls to the public servants tasked with translating policy ambition into practice.

So, is speed itself the real problem? Not exactly. How these tools are introduced, governed and used matters far more. Poorly governed adoption creates risk regardless of pace. But ignoring AI altogether won’t solve much either. More often, it creates “shadow use,” where employees turn to unapproved tools without safeguards or oversight. The riskiest AI use isn’t inside approved systems — it’s what happens quietly in browser tabs at the corner of someone’s desk.

Ideally, AI adoption should look less like wholesale rollout and more like controlled experimentation: testing low-risk use cases, learning where these tools genuinely add value, and building governance as implementation evolves. However, meaningful sandboxing requires time, governance capacity, strategic patience and sustained investment, all of which can feel impossible in an environment increasingly defined by doing more with less. That reality doesn’t eliminate the need for caution. If anything, it makes thoughtful implementation and use even more important.

Your concerns about private companies are well placed, too. Most AI tools are being developed by a small number of large, mostly foreign private-sector firms — the same firms governments are now, perhaps more uncomfortably than ever, reliant on. That raises legitimate questions about data sovereignty, vendor lock-in, procurement integrity and long-term control over public sector capabilities.

To its credit, the federal government’s nascent AI strategy for the public service, alongside Treasury Board Secretariat guidance, does reflect an effort to establish guardrails around AI use, privacy, procurement and accountability. That work remains uneven and may continue to lag behind ambition, but it signals that governance is being treated as essential, even if parts of it are already playing catch-up.

It’s also worth distinguishing between different kinds of AI use. There’s an important difference between pasting sensitive information into a publicly available chatbot and using enterprise AI tools approved by your department within secured government systems. The former presents obvious privacy and cybersecurity concerns, while the latter is typically governed by stricter contractual, technical and policy safeguards. In many enterprise-grade systems, those controls are specifically designed to prevent sensitive organizational data from being broadly repurposed to train public-facing models. That doesn’t make approved systems risk-free either, but it does make them very different from a free-for-all. And that distinction matters because it reflects the broader reality of modernization: the public service needs to safeguard against risk and evolve in line with expectations for greater efficiency and improved service delivery.

In practice, responsible public sector AI use tends to follow a few common-sense principles: don’t input sensitive information into unapproved systems, keep humans accountable for outputs, verify results carefully, be transparent about where and when AI is being used, and treat AI as a support tool, not a decision-making one.

A useful analogy: AI is less like an oracle and more like a self-assured intern or junior analyst — useful for drafting, summarizing, formatting and brainstorming — but also prone to error, factual mistakes, shallow reasoning and a somewhat sycophantic tendency to tell you what sounds good rather than what’s necessarily true. While it may save time on certain processes, judgment, accountability and final decisions should remain firmly with you.

So, where does this leave you? Stick to approved tools. Avoid entering sensitive information into unvetted platforms. Start with low-risk administrative tasks. Treat outputs as first drafts, not final answers. And keep your skepticism intact — AI hype is real, and not every use case adds value.

The public service doesn’t need blind adopters of AI. It needs thoughtful professionals willing to engage carefully, understand limitations, and speak up when governance, privacy, or public trust may be compromised. Right now, thoughtful skepticism may be one of the public service’s greatest strengths.

— Jacob Danto-Clancy, Public Service Confidential
Jacob Danto-Clancy is a senior policy analyst at the Institute on Governance, working on public sector governance and institutional performance. He has written and advised governments on AI, digital modernization, and emerging technology issues.
Are you a public servant with questions about your workplace? Fill out our web form or write to us anonymously at [email protected] and we’ll pick our favourites to send to an expert columnist. No gripe is too small. No topic is too big.
Public Service Confidential is an advice column, written for the Ottawa Citizen by guest contributors Scott Taymun, Yazmine Laroche, Daniel Quan-Watson, Victoria De La Ronde and Chris Aylward. The information provided in this series is not legal advice and should not be construed as legal advice.