top of page

Lawyers Are Pasting Privileged Data Into AI: ABA Opinion 512, Prompt Injection, and the Real Cost of Getting It Wrong

  • Writer: Carolina Nunez
    Carolina Nunez
  • 5 days ago
  • 5 min read

Updated: 4 days ago


Attorney Carolina Nunez AI ethics generative AI law firm Orlando Florida ChatGPT confidentiality cybersecurity ISC2 certified | The Law Offices of Carolina Nunez

At The Law Offices of Carolina Nunez, P.A., Attorney Carolina Nunez holds ISC2 cybersecurity certifications and advises law firms, technology companies, and professionals on AI governance, data privacy, and the ethical use of generative AI in legal practice. Call (407) 900-FIRM


Lawyer pasting privileged client data into ChatGPT generative AI confidentiality risk ABA Formal Opinion 512 Florida Bar Ethics Opinion 24-1 prompt injection law firm AI policy Orlando Florida

On July 29, 2024, the American Bar Association issued ABA Formal Opinion 512, the first comprehensive ethics guidance on lawyers using generative artificial intelligence tools. The opinion did not say lawyers cannot use AI. It said lawyers using AI without understanding what happens to client information are violating their duties of competence, confidentiality, supervision, and candor under the Model Rules of Professional Conduct.


That guidance has been in place for over a year and a half. Lawyers are still pasting privileged client data, deposition transcripts, settlement figures, and opposing counsel emails into ChatGPT, Gemini, Claude, and free-tier legal AI tools. Federal courts have already issued sanctions for AI-generated fabricated case citations in at least seven reported decisions in 2024 and 2025. The Florida Bar issued Ethics Opinion 24-1 in January 2024 with similar warnings. The penalties when something goes wrong are no longer hypothetical.


LEGAL DISCLAIMER: This article provides general informational content regarding professional responsibility, generative AI risk, and information security in law firms. It does not constitute legal advice, technology consulting advice, or an opinion on any specific firm's practices. Reading this article does not create an attorney-client relationship.



What ABA Opinion 512 Actually Requires of Lawyers


ABA Opinion 512 reads Model Rules 1.1, 1.6, 5.1, 5.3, and 1.5 together to set a clear standard. Competence under Rule 1.1 now includes a duty to understand the benefits and risks of relevant technology. The 2012 Comment 6 made that explicit. ABA Opinion 512 confirms that generative AI is squarely within the scope of that duty. Lawyers do not need to become engineers. They do need to understand, at a minimum, what an AI tool does with the data they paste into it.


Confidentiality Under Rule 1.6

Rule 1.6 prohibits lawyers from revealing information relating to the representation of a client without informed consent. Free-tier consumer AI products generally use prompts to train future models. When a lawyer pastes a draft motion containing client names, financial details, or matter strategy into a free chatbot, that information may be used to train models that other users then query. ABA Opinion 512 holds that lawyers must obtain informed client consent before inputting client information into a self-learning generative AI tool.


Supervision Under Rules 5.1 and 5.3

A managing partner cannot delegate AI oversight by not asking. Rule 5.1 requires partners to make reasonable efforts to ensure firm lawyers comply with the Rules. Rule 5.3 extends that duty to nonlawyer assistants and outside vendors. If a paralegal uses a personal ChatGPT account to summarize discovery documents, the firm is on the hook. If a third-party drafting vendor feeds your work product into a model that retains it, the firm is on the hook.


Candor and Verification

ABA Opinion 512 reinforces the obvious lesson from Mata v. Avianca, Inc., 22-cv-1461 (S.D.N.Y. 2023): lawyers are responsible for everything they file. The court sanctioned counsel under Rule 11 for citing six fabricated cases generated by ChatGPT. By 2025, similar sanctions had issued in federal courts in Texas, Colorado, Massachusetts, and the Northern District of Florida. The defense that 'the AI made it up' is not a defense.



The 2024 to 2025 Sanctions Cases Every Lawyer Should Read


Federal courts are no longer giving the benefit of the doubt. In Park v. Kim, 91 F.4th 610 (2d Cir. 2024), the Second Circuit referred counsel to its grievance panel for citing nonexistent cases generated by ChatGPT. In Morgan & Morgan, a Wyoming federal judge in February 2025 ordered three Morgan & Morgan attorneys to show cause why they should not be sanctioned after a brief contained eight fabricated case citations and one misquoted real case. In Coomer v. Lindell, No. 1:22-cv-01129 (D. Colo. 2025), the court fined two attorneys representing Mike Lindell for filing a brief containing approximately 30 defective AI-generated citations. The pattern is consistent: courts treat the AI-fabrication problem as a Rule 11 issue, not as a tech glitch.



The Prompt Injection and Data Exfiltration Risk No One Talks About


Even firms running paid enterprise AI tools face a technical risk layer that has nothing to do with whether a chatbot trains on inputs. The OWASP Top 10 for Large Language Model Applications lists LLM01: Prompt Injection as the number one security vulnerability in generative AI deployments. Prompt injection occurs when an attacker embeds malicious instructions in content the AI processes, causing the model to execute those instructions instead of the user's intended task. Translated to a law firm setting: a malicious party emails your firm a PDF containing hidden instructions. A paralegal asks the firm's AI to summarize the PDF. The hidden instructions tell the model to also exfiltrate prior conversations, search the firm's connected drive, or send a copy of the summary to an external endpoint.


Indirect Prompt Injection in Practice

The National Institute of Standards and Technology (NIST AI 100-2) Adversarial Machine Learning report, updated in 2025, classifies indirect prompt injection as one of the most serious unsolved problems in AI security. There is no current technical fix that fully prevents it. The standard mitigations are architectural: limit what the AI can access, log every input and output, run human review on outbound communications, and never connect AI tools to systems holding privileged information without strict allowlisting.


Why Free Tier Is the Wrong Answer

Free consumer AI tools collect prompts to train future models, store conversations indefinitely on third-party servers, and provide minimal logging or audit capability. A paid enterprise license with a zero data retention agreement, a documented data processing addendum, and SOC 2 Type II controls is a different category of product. The cost difference between free and enterprise is real. The cost difference between an enterprise license and a single bar complaint or malpractice claim is not.



Small Language Models and the Defensible Posture

The most defensible AI posture for a law firm in 2026 is not 'use ChatGPT carefully.' It is a layered architecture combining small language models deployed locally or in a tenant-isolated environment for confidential work, paid enterprise general-purpose models for non-privileged research and drafting, and strict prohibitions on free-tier consumer AI for any matter content. Small language models in the 1B to 14B parameter range, including Microsoft Phi, Mistral 7B, Llama 3.1 8B, and Gemma 2, can run on a workstation or a private virtual machine. They do not transmit prompts off-premises. They allow auditable logs the firm controls. They cost more in setup than a Pro subscription and far less in ongoing legal exposure.


Building this architecture requires policy work, vendor diligence, technical configuration, and ongoing oversight. It also requires writing a written firm AI policy that defines acceptable use, identifies which tools may be used for which categories of work, specifies client consent procedures under Rule 1.6, sets supervision and audit requirements under Rules 5.1 and 5.3, and addresses incident response when an exposure happens. Most firms either do not have one or wrote a one-page memo and moved on. ABA Opinion 512 effectively makes a substantive AI policy table stakes for any firm using these tools.



Build a Defensible AI Posture for Your Firm.


Whether you are a managing partner trying to write your first firm AI policy, a solo practitioner who needs vendor and tool review before deploying generative AI, or a technology company building AI features for legal users, Attorney Carolina Nunez advises clients on AI governance, vendor diligence, and the ethical use of generative AI. She holds ISC2 cybersecurity certifications and serves clients in Orlando, Winter Park, Daytona Beach, Kissimmee, Sanford, Lake Mary, Casselberry, Altamonte Springs, and DeLand. AI governance consulting is available nationwide as a non-legal technology consulting service and does not constitute the practice of law outside Florida.



bottom of page