A briefing for managing partners and law firm leaders on current developments affecting how firms are governed, priced, staffed, and run.
March reporting points to a more serious phase of legal AI adoption.[1][8] Firms are moving beyond casual experimentation and starting to build AI into the way they operate. That shift reaches well past technology purchasing. It affects supervision, pricing, quality control, staffing, vendor oversight, and leadership responsibility.
The practical issue is whether the firm’s management systems are keeping pace with changes in the economics and risk profile of legal work.
Legal management developments this month bear this out. Firms are no longer just trying tools in isolated pockets. They are starting to assign executive responsibility and tie AI use to real workflows, with clearer ownership than before.
A notable example came from Herbert Smith Freehills Kramer, which described a model that puts AI oversight under a global chief AI officer and pairs lawyers with forward-deployed legal engineers.[6] Most Canadian firms will not copy that structure directly. The signal still matters. AI is moving closer to firm leadership.
This matters most in firms where informal experimentation is already happening. Once tools influence drafting, research flow, client communication, or matter economics, informal use becomes a management issue. Leaders need a clear view of where AI is being used and whether that use fits the firm’s confidentiality and quality expectations.
Recent ABA commentary and Legalweek reporting point in the same direction.[2][3] Firms are expected to show that AI use is controlled and credible. They need approved tools, workable rules, proper review standards, and a clear process for bringing in new products.
In management terms, AI governance now sits beside quality control and risk management. A short policy by itself will not help much if lawyers do not know which tools are approved or when human review is mandatory.
For smaller and mid-sized firms, the answer may be simpler than in global firms, but it still needs structure. The policy needs an owner. Product evaluation needs an owner. Exceptions need a decision-maker. Without that clarity, adoption becomes fragmented and the firm’s risk profile becomes harder to judge.
As AI reduces time on routine and mid-level tasks, the link between hours worked and value delivered becomes harder to defend in some categories of work. Recent conference coverage suggests this will strengthen the case for hybrid pricing and tighter matter scoping.[4]
The risk is not only discount pressure. It also includes pricing AI-assisted work as though the underlying cost structure has not changed. That can create client friction and erode margin even while lawyers feel extremely busy. The management challenge is to see where the pricing model is lagging behind the workflow.
For many firms, the next step is to review which work types are becoming more predictable and decide where pricing needs to move first.
Court sanctions remain a reminder that supervision still matters in AI-assisted work. Reuters reported a March appeals court decision imposing a $30,000 sanction after fake citations appeared in a filing.[5] For firm leaders, every AI-related mistake is also a process failure that exposes weaknesses in training and review.
That matters because the leadership response should not focus only on the individual lawyer involved. The larger question is whether the firm has given people clear expectations and effective review practices. If not, the risk is systemic rather than isolated.
The strongest management response is usually procedural. Define when AI assistance is allowed and what level of review is required for different forms of work product.
As firms adopt more AI-enabled workflows, information governance becomes more important. Practical Law commentary highlighted data sovereignty and security as priority concerns for 2026, while Legalweek reporting emphasized systems that are governed and defensible.[3][7]
The management burden is to govern a growing mix of tools and vendors. A firm may now rely on several connected systems, each touching sensitive information in different ways. Leadership needs a view across that stack.
That means vendor review and confidentiality controls become operational issues. In practice, firms that move fastest without creating unnecessary risk are often the firms that simplify tool choice and apply consistent rules across the approved environment.
Start by reviewing whether the firm has a real governance structure rather than a growing collection of tools. Then look at practice areas where automation is reducing time faster than pricing is adapting. Tighten verification standards for AI-assisted work and reassess vendor oversight across the firm’s technology environment. Training should focus on judgment and responsible use.
Canadian hiring commentary points to continued demand for legal operations, compliance, technology, employment, and advisory roles. Robert Half also reports skill gaps in legal technology and workflow capability.[10] That suggests adaptability and judgment are becoming more valuable alongside technical legal skill.
For managing partners and executive directors, this has direct implications for recruitment and development. Hiring for technical excellence alone is no longer sufficient. Firms increasingly need people who can work well inside redesigned processes and exercise sound judgment in AI-assisted environments.
Reuters also reported a proposal that would require practice-based AI training in California-accredited law schools.[9] Even where that proposal does not apply, it reflects a broader direction of travel. AI competence is moving toward baseline capability.
That development matters because it changes the leadership agenda. Firms that wait for outside pressure before building internal capability may find themselves behind on productivity and client confidence.
Firms that respond well are likely to treat AI capability as part of professional development and management discipline, not as a side project for a few enthusiasts.
This issue draws on March 2026 reporting and commentary selected for management relevance. Source notes are listed below:
This newsletter was created with the assistance of artificial intelligence.