Faros AI, The AI Engineering Report 2026
Faros reports that, across telemetry from 22,000 developers in 4,000 teams over a two-year window, AI adoption increased throughput while the incidents-to-pull-request ratio rose 242.7%, bugs per developer rose 54%, and median review time rose 441.5%.
Why it matters: AI-assisted delivery needs review, tests, monitoring, and support. Speed without operating discipline is not the point.
- Last checked
- 2026-05-04
- Confidence/caveat
- Strong for AI-assisted software delivery; not a promise about every workflow.
Microsoft 365 Copilot privacy and security documentation
Microsoft states that Microsoft 365 Copilot uses content in Microsoft Graph that the user has permission to access, and that prompts, responses, and Graph data are not used to train foundation LLMs.
Why it matters: Permissions hygiene becomes AI hygiene. If SharePoint is messy, AI search will be messy too.
- Last checked
- 2026-05-04
- Confidence/caveat
- Strong vendor documentation; safe use still depends on tenant permissions and configuration.
Office of the Privacy Commissioner of Canada, AI, privacy, and your business
The OPC says AI and generative AI are fueled by large-scale data collection, including personal information, and organizations should protect personal information entrusted to them.
Why it matters: AI adoption touching client or personal information is a privacy and governance project, not just a tool trial.
- Last checked
- 2026-05-04
- Confidence/caveat
- Strong Canadian privacy authority; applies broadly across sectors.
CPABC guidance on AI and the Code of Professional Conduct
CPABC warns registrants to avoid confidential information in AI queries, review AI output carefully, and document tool details, inputs, outputs, and professional skepticism when AI assists work.
Why it matters: Accounting AI workflows need defensible review and documentation, not just faster workpaper drafting.
- Last checked
- 2026-05-04
- Confidence/caveat
- Strong BC professional-body source for accounting; always check current Code wording.
CIRO Compliance Report for 2026
CIRO says cybersecurity remains a key business risk for dealers and that firms must protect clients' personal information, assets, critical systems, and applications.
Why it matters: Financial AI workflows need controls around client data, communications, incident readiness, and third-party providers.
- Last checked
- 2026-05-04
- Confidence/caveat
- Strong regulator source for dealers; financial firm obligations vary by registration and business model.
OSFI technology and cyber risk self-assessment tool
OSFI says cyber threats and evolving technologies increase risks to resilience and stability, and its tool helps assess maturity, preparedness, control gaps, and remediation opportunities.
Why it matters: AI and automation should strengthen control evidence, not create another unmanaged technology risk.
- Last checked
- 2026-05-04
- Confidence/caveat
- Strong official source for federally regulated institutions; apply carefully outside FRFIs.
Autodesk Construction Cloud
Autodesk describes construction workflows for document management, AI, model coordination, project management, RFIs, submittals, and daily reports.
Why it matters: Engineering automation guidance can be concrete about project records, field-office coordination, and document workflows.
- Last checked
- 2026-05-04
- Confidence/caveat
- Useful vendor source for workflow categories; not independent ROI proof.
Caseware Cloud Audit Software
Caseware positions cloud audit around automated audit workflow, relevant documents and procedures, reviewer collaboration, and AI-assisted compliance context.
Why it matters: Accounting automation should meet workpaper, review, and compliance realities instead of staying at generic productivity advice.
- Last checked
- 2026-05-04
- Confidence/caveat
- Useful vendor source for audit workflow categories; not independent ROI proof.
Engineers and Geoscientists BC, Practice Advisory: Use of Artificial Intelligence (AI) in Professional Practice
EGBC says engineering and geoscience professionals must assess and manage harm from AI tools, remain professionally responsible for AI-assisted work, and meet documented checking, direct supervision, document retention, and independent review obligations under the Bylaws. Documented checks should record the AI version used, inputs and outputs, and validation steps when outputs may vary from use to use. Records must be retained for at least 10 years after a project ends or after a document is no longer in use.
Why it matters: This is the BC regulator's own bar for AI use in engineering practice. Firms that cannot show how they meet it are exposed at practice review, complaints, or insurance renewal. It also sets the documented-checks pattern that the field guide's evidence loop is meant to operationalize.
- Last checked
- 2026-05-05
- Confidence/caveat
- Strong BC professional-body source for engineering and geoscience; firms in other jurisdictions should also check PEO and other provincial advisories that follow EGBC's pattern.
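EGBC's documented-checks expectation maps naturally onto a small structured record. A minimal sketch in Python, assuming hypothetical field names and values (the Bylaws and advisory do not prescribe a schema; this only illustrates capturing the tool version, inputs, outputs, and validation steps the advisory names):

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class DocumentedCheck:
    """One documented check for an AI-assisted work product.

    Field names are illustrative, not prescribed by EGBC; the advisory
    asks that the AI version, inputs, outputs, and validation steps be
    recorded when outputs may vary from use to use.
    """
    project_id: str
    tool_name: str
    tool_version: str
    inputs: str                  # prompt or source-data reference
    outputs: str                 # reference to the AI output as filed
    validation_steps: list       # what the responsible professional checked
    reviewer: str
    check_date: date
    retain_until: date           # at least 10 years after project end

    def to_json(self) -> str:
        record = asdict(self)
        record["check_date"] = self.check_date.isoformat()
        record["retain_until"] = self.retain_until.isoformat()
        return json.dumps(record, indent=2)

# Hypothetical example entry
check = DocumentedCheck(
    project_id="P-2026-014",
    tool_name="ExampleLLM",      # placeholder tool name
    tool_version="4.1",
    inputs="prompt-log/2026-05-05-rfi-draft.txt",
    outputs="workpapers/rfi-draft-ai-v1.docx",
    validation_steps=[
        "compared against geotechnical report",
        "independent review by second professional",
    ],
    reviewer="J. Doe, P.Eng.",
    check_date=date(2026, 5, 5),
    retain_until=date(2036, 12, 31),
)
print(check.to_json())
```

However a firm stores these records, the point is that each field above answers a question a practice reviewer could ask; a shared log or workpaper template serves equally well.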
Office of the Information and Privacy Commissioner for BC, Personal Information Protection Act (PIPA)
The OIPC says PIPA regulates how private-sector organizations in BC collect, use, and disclose personal information. PIPA applies to organizations in BC that handle personal information, including employee data of provincially regulated organizations. Where PIPEDA does not apply, PIPA does. Organizations that transfer personal information outside BC must ensure comparable protection.
Why it matters: Most BC professional-services AI workflows touch in-province personal information that falls under PIPA, not only PIPEDA. Vendor due diligence, cross-border transfer review, and breach response all need to be measured against the BC standard.
- Last checked
- 2026-05-05
- Confidence/caveat
- Strong BC privacy authority; organizations that are federally regulated, or that fall under PIPEDA's commercial-activity rules across borders, should review whether PIPEDA also applies.
Canada Revenue Agency, Information Circular IC05-1R1 Electronic Record Keeping
The CRA says electronic records must be readable, accessible to CRA officers on request, properly backed up, and retained for at least six years from the end of the last tax year to which they relate. AI-assisted workpapers and supporting records still need access, integrity, and retention controls.
Why it matters: This is the rule against which an AI-assisted workpaper would be measured if CRA audited it. Firms introducing AI without preserving source records, AI version, prompt, output, and human-review action create audit and disciplinary exposure that workflow design can avoid up front.
- Last checked
- 2026-05-05
- Confidence/caveat
- Strong federal source for tax records. Public Company Accounting Oversight Board (PCAOB)-equivalent assurance and listed-issuer audits operate under separate and longer retention rules; firms doing assurance work for SEC or Canadian Public Accountability Board (CPAB)-regulated entities should layer those on top.
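IC05-1R1's six-year clock runs from the end of the last tax year to which the records relate, not from when a file was created. A minimal sketch of that calculation, assuming a December 31 year-end (fiscal year-ends, late-filed returns, and CRA permission to destroy records early all change the answer):

```python
from datetime import date

def cra_minimum_retention_end(last_tax_year: int) -> date:
    """Earliest date a record may be destroyed under the six-year
    rule in IC05-1R1: six years from the end of the last tax year
    to which the record relates. Assumes a Dec 31 year-end; this
    is a floor, not a complete retention policy.
    """
    return date(last_tax_year + 6, 12, 31)

# A workpaper relating to the 2025 tax year must be kept
# through at least the end of 2031.
print(cra_minimum_retention_end(2025))  # 2031-12-31
```

Applying this to AI-assisted workpapers means the source records, AI version, prompt, output, and human-review action all carry the same retention floor as the workpaper they support.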
British Columbia Securities Commission, AI fraud and adviser-use guidance
The BCSC says AI is being used to generate fake identities, deepfake testimonials, and chatbot-driven investment scams targeting BC investors, and runs avoidAIscams.ca to help investors recognize them. BC-registered firms still need books and records, communication supervision, and client-information protection when AI is involved.
Why it matters: AI is not just an internal-productivity question for BC financial firms. The same technology is being used against their clients, which raises supervision, communication review, and client-education expectations on the firm side.
- Last checked
- 2026-05-05
- Confidence/caveat
- Strong BC provincial securities regulator source; firms registered in multiple provinces should also check OSC, AMF, and other CSA member positions.
Ontario Securities Commission, AI Innovation Office
The OSC has been one of the more active Canadian securities regulators on AI advisory issues, making its AI Innovation Office useful context for firms that operate across provinces or answer national due-diligence questions.
Why it matters: BC financial firms often answer client, vendor, and compliance questions shaped by the broader Canadian securities-regulator conversation, not only by one local webpage.
- Last checked
- 2026-05-05
- Confidence/caveat
- Useful cross-province securities context; BC firms should still prioritize BCSC and CSA obligations that apply to their registration category.
ISO/IEC 42001:2023, Information technology – Artificial intelligence – Management system
ISO/IEC 42001 specifies requirements and guidance for establishing, implementing, maintaining, and continually improving an AI management system, including risk identification, impact assessment, controls, and monitoring across the AI lifecycle.
Why it matters: Consulting firms that operate AI-touched workflows for clients are starting to be asked to organize AI governance evidence against ISO/IEC 42001. Even without certification, its structure helps firms answer procurement and SOC 2 readiness questions.
- Last checked
- 2026-05-05
- Confidence/caveat
- Strong international standard; certification is optional and most consulting firms will adopt without certifying initially.
NIST AI Risk Management Framework (AI RMF 1.0)
NIST defines a voluntary framework to map, measure, manage, and govern risks of AI systems across their lifecycle. It names trustworthy-AI characteristics including validity, reliability, safety, security, resilience, accountability, transparency, explainability, interpretability, privacy enhancement, and fairness.
Why it matters: NIST AI RMF is one of the most-referenced North American frameworks in client AI questionnaires. Consulting firms whose evidence packets reference it can answer those questionnaires more quickly and credibly.
- Last checked
- 2026-05-05
- Confidence/caveat
- Voluntary framework; gives common vocabulary but does not impose audit requirements on its own.
Microsoft, Data Residency for Microsoft 365 Copilot
Microsoft documents data-residency commitments for Microsoft 365 Copilot workloads, including Canadian tenant geography and Advanced Data Residency considerations.
Why it matters: BC firms evaluating Copilot need to separate tenant storage commitments from permissions hygiene and inference-region commitments.
- Last checked
- 2026-05-05
- Confidence/caveat
- Strong vendor documentation; tenant geography and add-on coverage still need tenant-specific verification.
Microsoft, in-country data processing for Microsoft 365 Copilot
Microsoft has announced in-country processing plans for Microsoft 365 Copilot in named countries, with timing for Canada included in the roadmap statement.
Why it matters: Storage at rest and inference processing are different questions. Firms need the distinction before treating Copilot as fully Canada-resident.
- Last checked
- 2026-05-05
- Confidence/caveat
- Vendor roadmap statement; procurement decisions should verify current availability before relying on it.
OpenAI, Expanding data residency access to business customers
OpenAI describes data-residency availability for business customers and region selection during new workspace or API project creation.
Why it matters: ChatGPT Enterprise, Edu, Business, and API residency settings are procurement controls, not a reason to use consumer ChatGPT for client data.
- Last checked
- 2026-05-05
- Confidence/caveat
- Strong vendor documentation for workspace/project provisioning; existing workspace migration constraints need current verification.
OpenAI, Business data privacy, security, and compliance
OpenAI says business customer data is not used to train models by default for its business and API offerings.
Why it matters: The no-training commitment applies to business-grade products, which is a key boundary for AI acceptable-use policies.
- Last checked
- 2026-05-05
- Confidence/caveat
- Strong vendor documentation for business products; consumer-plan settings differ.
Anthropic, Regional compliance and data residency
Anthropic documents regional compliance and data-residency considerations for Claude deployments, including enterprise deployment paths.
Why it matters: Canadian firms evaluating Claude need to distinguish direct Anthropic API use from AWS Bedrock or Google Vertex AI deployments in Canadian regions.
- Last checked
- 2026-05-05
- Confidence/caveat
- Vendor documentation; deployment path matters because direct API and cloud-marketplace deployments can differ.
Google Cloud, Canadian data residency for Gemini
Google announced Canadian data residency at rest and during machine-learning processing for Gemini-related deployments.
Why it matters: Google may be the most complete Canadian-residency path for firms already using Google Workspace or Vertex AI, but configuration still matters.
- Last checked
- 2026-05-05
- Confidence/caveat
- Vendor announcement; Workspace and Vertex AI configuration still need tenant- and project-level verification.
Google Workspace, Digital Data Sovereignty
Google Workspace describes data-sovereignty and data-region controls for eligible Workspace customers.
Why it matters: Workspace AI residency and training commitments only help when the customer is on the right tier and the admin controls are configured.
- Last checked
- 2026-05-05
- Confidence/caveat
- Vendor documentation; plan tier and admin configuration determine which controls are available.
ADP Research Institute, Today at Work Issue 3
ADP Research found that 19% of professional-service workers report using AI tools daily, while 17% have never used AI at work. The report notes that the accounting profession lags the broader knowledge-worker average.
Why it matters: A defensible reference point for where the profession actually is, rather than vendor talking points.
- Last checked
- 2026-05-05
- Confidence/caveat
- Survey self-reporting; underlying figures vary by sector and role.
NContracts, Investment Advisers and AI 2025 Compliance Report
NContracts reports that 5% of investment-adviser firms use AI for client-facing interactions and 40% use it internally, while 44% have no formal testing or validation of AI outputs.
Why it matters: Quantifies the governance gap that the field guide is designed to close.
- Last checked
- 2026-05-05
- Confidence/caveat
- Compliance-vendor source; figures are from a survey of US RIAs and may differ for Canadian-registered advisers, but the directional gap is consistent with CIRO and OSFI guidance.
SEC Investor Advisory Committee, AI Disclosure Recommendation
The SEC IAC noted that while 60% of S&P 500 issuers view AI as a material risk, only 40% provide any AI-related disclosure and just 15% disclose board oversight of AI.
Why it matters: Even at the largest end of the market, governance disclosure lags adoption. Smaller firms should not assume larger ones have figured this out.
- Last checked
- 2026-05-05
- Confidence/caveat
- US listed-issuer data; private BC firms are not subject to these disclosure rules but are increasingly asked the same questions by clients and insurers.