The Ethical Duty to Understand AI: What Your State Bar Expects (and What's Coming)
The question for the legal profession is no longer whether lawyers can use AI. It's whether they have a professional obligation to understand it. The answer, with increasing clarity from state bars, ethics committees, and courts across the country, is yes. And the consequences of ignorance — from sanctions to malpractice exposure to lost client trust — are becoming impossible to dismiss.
As of mid-2025, at least 40 states have adopted the ABA's technology competence amendment to Model Rule 1.1, and more than a dozen have issued formal guidance specifically addressing AI use in legal practice. The ethical ground is shifting rapidly — and firms without a clear AI governance strategy are already behind.
This isn't a theoretical concern. When a New York federal judge sanctioned attorneys in Mata v. Avianca for submitting AI-fabricated case citations, it made headlines — but the underlying principle was decades old. Lawyers have always been responsible for the accuracy of what they file. What's new is the scale and subtlety of the risk, and the growing expectation from regulators that attorneys affirmatively understand the tools they use.
Here's a comprehensive look at where the ethical obligations stand today, what's coming next, and how to build a compliance posture that protects your clients, your license, and your competitive position.
ABA Model Rule 1.1, Comment 8: The Competence Duty Already Covers AI
The foundation of the AI ethics conversation isn't a new rule — it's Comment 8 to ABA Model Rule 1.1, amended in 2012. That comment states that maintaining competence requires a lawyer to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology."
For over a decade, "relevant technology" meant things like e-discovery platforms, cloud storage, and encrypted communications. Today, it unambiguously includes AI — particularly generative AI tools used for legal research, drafting, contract review, and analysis.
What "Relevant" Means in the AI Context
The word "relevant" is doing significant work in that comment. It doesn't require every attorney to become a machine learning engineer. But it does require understanding several things about any AI tool you use in practice:
- How the tool generates output: You don't need to understand transformer architectures, but you need to know that large language models can produce plausible-sounding but fabricated content — including fake case citations.
- Where client data goes: Does the tool retain inputs? Are prompts used to train future models? Is data stored in a jurisdiction that creates regulatory exposure?
- The limitations of the tool: Understanding what the AI is good at (first-draft generation, pattern recognition, summarization) and what it's not (nuanced legal judgment, jurisdiction-specific analysis without verification).
- Your verification responsibilities: Every AI output used in legal work must be independently verified. The tool assists; it does not substitute for professional judgment.
The duty of competence, in short, now includes a duty of technological literacy — at least sufficient to make informed decisions about tool selection, use, and oversight.
The Emerging State Bar Landscape: Who Has Weighed In and What They're Saying
The state bar response to generative AI has been remarkably swift by the standards of legal ethics rulemaking. While the specifics vary, the common themes are striking in their consistency.
California (2024 Practical Guidance)
The California State Bar issued practical guidance emphasizing that attorneys must understand the capabilities and limitations of generative AI, verify all AI-generated content, and maintain confidentiality protections. The guidance specifically warns against using consumer-grade tools with client data.
Florida (Ethics Opinion 24-1)
Florida's Bar issued a formal ethics opinion requiring attorneys to understand AI tools before using them, verify AI-generated work product, protect client confidentiality in tool selection, and consider disclosure obligations on a case-by-case basis.
New York (Multiple Court Orders)
Several New York courts now require attorneys to certify whether AI was used in filings. Standing orders issued by individual judges in the Southern District of New York have been widely influential, and the state bar has issued guidance reinforcing the duty to supervise AI use.
Texas, New Jersey, and Beyond
Texas issued a generative AI order for state courts. New Jersey's ethics committee has addressed AI in advisory opinions. Multiple other states — including Illinois, Colorado, and Pennsylvania — have either issued guidance or formed task forces.
The Four Common Themes Across Jurisdictions
Despite differences in format and specificity, virtually every state bar that has weighed in emphasizes the same four pillars:
1. Competence: Understand the tool before you use it. Know its strengths, limitations, and failure modes. This includes understanding the difference between retrieval-augmented generation (RAG) and unconstrained language model generation.
2. Confidentiality: Client data cannot be exposed to tools that lack adequate protections. This means evaluating data handling practices, training policies, and contractual commitments before uploading any client information.
3. Supervision: AI output must be reviewed by a qualified attorney. The AI is a tool, not a delegate — and the attorney remains fully responsible for the work product.
4. Disclosure: The trend is firmly toward transparency about AI use, particularly in court filings and in the context of the attorney-client relationship.
Confidentiality Under Rule 1.6: Why Tool Selection Is an Ethical Decision
This is where the rubber meets the road for most legal teams — and where the greatest risk currently lies.
Model Rule 1.6 requires lawyers to make "reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client." When you paste client facts into a consumer AI chatbot, you are potentially disclosing confidential information to a third party with no contractual obligation to protect it.
Consumer AI vs. Enterprise AI: The Distinction That Matters
Not all AI tools carry the same confidentiality risk. The critical differences lie in data handling, model training, and contractual protections:
| Factor | Consumer AI Tools | Enterprise / Purpose-Built Legal AI |
|---|---|---|
| Data used for training | Often yes, unless opted out (terms change frequently) | Contractual no-training commitments |
| Data retention | Typically retained for service improvement | Customer-controlled, with deletion policies |
| Encryption | Varies; often limited to transit | At rest and in transit; client-side options |
| Contractual protections | Consumer TOS (unilaterally modifiable) | Enterprise API agreements with negotiated terms |
| Audit capability | None | Usage logs, access controls, audit trails |
| Compliance certifications | Rarely relevant for legal use | SOC 2, GDPR compliance, industry-specific certifications |
The ethical question isn't "should I use AI?" — it's "does this specific tool meet the 'reasonable efforts' standard required by Rule 1.6?" Consumer tools, by their nature, generally do not. Purpose-built platforms with enterprise data agreements can.
White Shoe AI was built with this distinction as a foundational design principle. Enterprise API agreements ensure that client data is never used to train models. Client-side encryption, no-retention defaults, and audit-ready data handling give legal teams the documentation they need to demonstrate Rule 1.6 compliance. You can explore the platform's architecture in detail to see how these protections are implemented.
Disclosure Obligations: When, What, and How
Disclosure requirements around AI use are evolving quickly, and the landscape is uneven. But the trajectory is clear: transparency is becoming the default expectation.
Disclosure to Courts
Multiple federal and state courts now require or strongly encourage attorneys to disclose AI use in filings. The requirements range from standing orders (like those in the Northern District of Texas and the Southern District of New York) to certification requirements embedded in local rules. Even where no formal rule exists, the general duty of candor to the tribunal (Model Rule 3.3) may require disclosure in certain circumstances — particularly where AI substantially contributed to legal arguments or research.
Disclosure to Clients
Under Model Rule 1.4 (Communication), attorneys must keep clients reasonably informed about the means used to achieve their legal objectives. If AI tools are playing a material role in research, drafting, or analysis, many ethics commentators now argue that clients should be informed — especially when the use of AI affects billing or the nature of the work product.
Practical Disclosure Guidance
- Add AI use provisions to engagement letters: Include a section explaining that the firm uses AI tools for certain tasks, what safeguards are in place, and that all AI-assisted work is reviewed by qualified attorneys.
- Comply with court-specific requirements: Before filing in any jurisdiction, check for standing orders or local rules requiring AI disclosure. Maintain a running list by jurisdiction.
- When in doubt, disclose: Over-disclosure carries minimal risk. Under-disclosure carries significant risk — both to client trust and to your standing with the court.
Supervision Duties: Rules 5.1 and 5.3 Apply to AI
Managing partners, supervising attorneys, and anyone with managerial authority in a legal practice have specific obligations under Model Rules 5.1 (supervision of lawyers) and 5.3 (supervision of nonlawyers). Most state bars now interpret "nonlawyer assistance" to include AI tools — meaning the firm has an affirmative duty to ensure that AI is used competently and ethically.
This isn't a matter of individual best practices. It requires firm-wide policies and infrastructure.
What Firm Leadership Must Put in Place
- Approved tool list: Designate which AI tools may be used for client work, with documented rationale for each selection — including data handling verification.
- Acceptable use policy: Define what types of work AI can assist with, what review is required, and what client data (if any) may be entered into which tools.
- Training program: All attorneys and staff who use AI tools must receive training on proper use, limitations, and ethical obligations. Document attendance and content.
- Output review protocols: Establish clear requirements for human review of all AI-generated work before it's used in any client-facing context.
- Periodic review: AI tools and policies should be reviewed at least quarterly as both the technology and the regulatory landscape evolve.
The White Shoe AI resource library includes templates and guidance for developing these policies — because we believe that responsible AI adoption requires institutional infrastructure, not just individual effort.
The Practical Compliance Checklist
Whether you're a solo practitioner or a general counsel managing a growing in-house team, here's what you should be able to document today:
Tool Selection & Vetting
Documented evaluation of each AI tool's data handling practices, contractual protections, training policies, and security certifications. Evidence that the tool meets "reasonable efforts" standards under Rule 1.6.
Data Handling Verification
Written confirmation from the vendor regarding data retention, training exclusion, encryption standards, and geographic data storage. Enterprise agreements with negotiated confidentiality terms.
Output Review Process
Defined workflow for human review of all AI-generated content. Citation verification protocols. Clear assignment of responsibility for final work product accuracy.
Client Disclosure Templates
Updated engagement letter language addressing AI use. Matter-specific disclosure procedures. Court filing certifications where required by local rule.
Training Records
Documentation of AI training provided to all attorneys and staff. Content of training, attendance records, and frequency of updates.
Acceptable Use Policy
Written firm policy specifying approved tools, permitted use cases, prohibited inputs, review requirements, and consequences for non-compliance.
What's Coming: The Regulatory Wave Beyond Ethics Rules
State bar ethics obligations are only the beginning. A broader regulatory framework is taking shape that will layer compliance requirements on top of professional duties.
The Colorado AI Act (Effective February 2026)
Colorado's SB 24-205 is the most comprehensive state-level AI regulation in the U.S. to date. It imposes obligations on both developers and "deployers" of "high-risk" AI systems — a category that could include AI tools used for legal analysis affecting individuals' rights or access to services. Deployers (which includes law firms using these tools) will need to conduct risk assessments, implement governance policies, and provide transparency to affected individuals.
The EU AI Act
For firms with any EU exposure — clients, counterparties, or jurisdictional reach — the EU AI Act's phased implementation through 2026 introduces a risk-based regulatory framework that may affect how AI tools are used in legal contexts. Firms advising EU-connected entities will need to understand these requirements to serve their clients competently, adding another layer to the technology competence duty.
Federal Action
While comprehensive federal AI legislation remains in progress, executive orders, agency guidance, and sector-specific regulations continue to emerge. The FTC has signaled interest in AI-related deception and unfair practices. The SEC has focused on AI in financial contexts. Firms that build robust AI governance now — rather than reacting to each new requirement — will save significant time and risk.
The firms that invest in AI governance infrastructure today won't just be compliant when new regulations arrive — they'll be positioned to advise their own clients on compliance, turning a cost center into a revenue opportunity.
The Competitive Advantage of Rigorous AI Governance
There's a temptation to view all of this as burden — one more compliance obligation layered on top of already demanding work. But the firms that get this right will discover something valuable: demonstrable AI governance is a client acquisition and retention advantage.
Sophisticated corporate clients — the ones sending RFPs with data security questionnaires, the ones asking about vendor risk management — increasingly want to know how their legal counsel handles AI. They're asking questions like:
- What AI tools does your firm use, and what are your data handling practices?
- Can you demonstrate that our confidential information won't be used to train third-party models?
- What is your firm's AI acceptable use policy, and how do you train your team?
- How do you ensure the accuracy of AI-assisted work product?
A firm that can answer these questions with specificity — naming the tools, referencing the contractual protections, pointing to written policies and training programs — isn't just avoiding discipline. It's winning trust. And in a profession built on trust, that matters.
This is precisely why White Shoe AI provides comprehensive documentation for its data handling practices, contractual commitments, and security architecture. When your client asks how their data is protected, you should have a clear, specific answer — not a vague assurance. Tools like the Compliance Navigator and Corporate Policies Drafter Associates can even help you build the internal governance documents you need, while Co-Counsel can research the specific ethics requirements in your jurisdiction.
The Bottom Line: Competence Now Requires Action
The ethical duty to understand AI is not aspirational — it's operational. It requires concrete steps: evaluating tools, documenting decisions, training teams, reviewing outputs, and staying current as the landscape evolves. The firms and in-house teams that treat this as a strategic priority — not a checkbox — will be the ones that earn client confidence, avoid regulatory exposure, and ultimately deliver better legal work.
The good news is that responsible AI adoption and efficient legal practice aren't in tension. When you choose tools that are purpose-built for legal work, with the data protections and accuracy safeguards that ethical practice requires, you get both the productivity benefits and the compliance posture. You don't have to choose between working smarter and working ethically.
AI That Meets Your Ethical Obligations — By Design
White Shoe AI was built from the ground up to satisfy the confidentiality, competence, and supervision requirements that state bars expect. Enterprise API agreements with no-training commitments. Client-side encryption. Audit-ready data handling. 25+ specialized AI Associates that deliver precision across every major practice area — with the transparency your clients and regulators demand.