Creating the Law Firm of the Future: Navigating the Legal Ethics of Artificial Intelligence

Updated: May 2

By Rebecca Howlett and Cynthia Sharp


"The real problem is not whether machines think but whether men do." — B.F Skinner


By now, most attorneys are aware that artificial intelligence (AI)–driven platforms can serve as powerful tools in the legal setting, yet many are hesitant to take advantage of these resources for fear of unwittingly committing an ethics violation. If we approach AI tools with caution and mindful awareness, however, we can maximize their positive potential while avoiding legal ethics issues.


At the time of our article “ChatGPT: What Lawyers Need to Know Before Using AI,” published in the June 2023 issue of GPSolo eReport, only one legal ethics case had been brought to light. (You may find it helpful to read that article in conjunction with this one as it provides foundational material.) Since that article was published, a number of reported incidents have emerged, motivating us to revisit the topic in greater depth. In this article, we will provide a survey of new AI-related ethics matters, review related ethics opinions and guidelines issued by bar associations, and highlight AI developments in the judiciary.


In examining specific case studies, our goal is not to scare our readers away from ChatGPT or other generative AI tools. Instead, our intent is to show that ethical issues arise when legal professionals take shortcuts and fail to adhere to well-established rules of professional conduct, such as the duties of competence and diligence.


Case Law Developments


The following cases illustrate why attorneys who make an error should confront their mistakes early on and resist any temptation to cover them up. The act of concealment, rather than the initial mistake itself, can exacerbate the severity of the situation. If you find yourself in ethical hot water, our best advice is “tell the truth faster.”


Missteps by a Novice Lawyer


When drafting his very first motion to set aside a decision, Colorado Springs attorney Zachariah Crabill relied heavily on ChatGPT, which, unfortunately, made up imaginary cases (a phenomenon known as AI “hallucination”). Although he discovered the fabricated information prior to the hearing, he neither disclosed the problem to the court nor withdrew the motion.


When the judge questioned the validity of the cases, Crabill inaccurately blamed a legal intern. Six days later, he filed an affidavit acknowledging his use of ChatGPT in drafting the motion. Subsequently, the presiding judge referred the matter to the Colorado Office of Attorney Regulation Counsel, which suspended Crabill from practice for a period of one year and one day. He was required to serve only 90 days, however, with the stipulation that he complete a period of probation.


Perils of Unverified Legal Research


A novice attorney employed by the Dennis Block firm cited nonexistent case law in a brief filed in a matter pending before Los Angeles Superior Court Judge Ian Fusselman. After an opposing lawyer discovered the fake citations, the judge dismissed the matter and imposed a penalty of $999 against Block’s law firm. (Because the sanction was less than $1,000, the firm was not required to report the violation to the state bar.)


At the sanctions hearing, attorney John Greenwood appeared on behalf of the Block firm and testified, “I have to say there was a terrible failure in our office. There’s no excuse for it.” He further stated that the responsible attorney (who by then was no longer employed by the firm) did not check the “online research” on which she had relied. Perhaps the firm’s willingness to take responsibility worked in its favor, given the relatively light sanction imposed by Judge Fusselman.
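

Some of this verification can be automated before the human read-through. Below is a minimal Python sketch of a first-pass citation check against CourtListener’s free citation-lookup service, which was launched in part to catch AI-fabricated citations. The endpoint path, auth header, and response fields shown here are assumptions to confirm against the service’s current documentation, and a “found” result proves only that a citation exists, not that the case supports your argument.

```python
import requests

# Assumed endpoint for CourtListener's citation-lookup service; confirm
# the path, auth header, and response shape against the current docs.
API_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"


def find_unverifiable_citations(brief_text: str, api_token: str) -> list[dict]:
    """Return the citations in brief_text that could not be matched to a
    real opinion. An empty result is necessary but not sufficient: a real
    citation can still fail to support the proposition it is cited for."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Token {api_token}"},
        data={"text": brief_text},  # the service extracts citations itself
        timeout=30,
    )
    response.raise_for_status()
    # Assumed response shape: one JSON object per extracted citation, with
    # a per-citation "status" of 404 when no matching opinion was found.
    return [c for c in response.json() if c.get("status") == 404]


if __name__ == "__main__":
    with open("draft_motion.txt", encoding="utf-8") as f:
        draft = f.read()
    for cite in find_unverifiable_citations(draft, "YOUR_API_TOKEN"):
        print("Could not verify:", cite.get("citation"))
```

Even when every citation checks out, the duties of competence and diligence still require reading each case; an automated pass merely flags fabrications early, when they are cheapest to correct.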


Beyond Proofreading

In Smith v. Farwell et al., a Massachusetts attorney was sanctioned $2,000 for submitting pleadings containing fictitious cases generated by AI. The lawyer apologized to the court and admitted that while he had reviewed the documents for “style, accuracy and flow,” he had not verified the accuracy of the case citations. Despite the lawyer’s candor, apology, and acknowledgment of fault, the court found that Rule 11 sanctions were appropriate. While the $2,000 penalty is not small potatoes, it is certainly far less than what has been imposed on lawyers who were less forthcoming.


When Bad Faith Matters


New York lawyer David M. Schwartz faced possible sanctions for submitting a letter brief that cited nonexistent cases. As reported by the ABA Journal, Schwartz filed the brief in question in support of Michael Cohen’s motion for early termination of supervised release. Cohen had “found” the cases through Google Bard and provided them to his counsel; apparently, however, no one on the team read them. Subsequently, a new member of Cohen’s legal team, unable to verify three citations, informed the judge about the issue.


After holding a sanctions hearing, Judge Jesse Furman concluded that sanctions were not appropriate because there was no finding of “bad faith.” The judge recognized in the opinion that “Rule 11 does not always require bad faith but it does where, as here, a court raises the prospect of sanctions on its own initiative.” It is likely that the judge was somewhat lenient because the legal team self-reported the error upon discovering it.


Client Notification of AI Ethics Issues


The Second Circuit recently referred attorney Jae S. Lee to the court’s grievance panel for “further investigation” for her failure to confirm the validity of cases generated by ChatGPT. Furthermore, the court ordered her to supply a copy of the ruling to her client—translated into Korean, if necessary. Ouch! This issue could have been avoided if she had simply confirmed that the cited cases supported her position. Readers interested in seeing the full opinion are referred to Park v. Kim, No. 22-2057 (2d Cir. 2024).


Pro Se Litigants


Even pro se litigants must ensure that the cases cited in their submissions are accurate. In Ex Parte Allen Michael Lee, No. 10-22-00281-CR (Tex. App. July 19, 2023), the court dismissed an appeal due to “inadequate briefing” by a pro se litigant. The court noted that the argument portion of Lee’s brief appeared to have been generated by AI, as it cited three cases that do not exist.


AI’s Impact on Evolving Ethics Standards


As the legal field incorporates ever-evolving AI tools, our professional ethics responsibilities will also continue to change. To date, at least four state bars—California, Florida, Michigan, and New York—have issued guidelines or ethics opinions regarding the use of AI in the legal setting, and certainly more will follow suit. Below, we provide a brief summary of AI ethics standards available when this article was written:


● “Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law” by the Standing Committee on Professional Responsibility and Conduct of the State Bar of California (Nov. 16, 2023) provides a user-friendly chart with professional ethics rules and corresponding recommendations for satisfying these duties when using AI.


● Florida Bar Ethics Opinion 24-1 (Jan. 19, 2024) confirms that attorneys are permitted to use AI in their practice under the condition that they adhere to ethical standards across several arenas, including case oversight, client confidentiality, competence, fees, and lawyer advertising.


● State Bar of Michigan Informal Opinion JI-155 (Oct. 17, 2023) is particularly relevant to members of the judiciary in Michigan. The opinion interprets the Michigan Code of Judicial Conduct, concluding, “Judicial officers have an ethical obligation to understand technology, including artificial intelligence, and take reasonable steps to ensure that AI tools on which their judgment will be based are used properly and that the AI tools are utilized within the confines of the law and court rules.”


● Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence (April 2024) provides a comprehensive overview of the benefits and risks of generative AI, including a detailed assessment of its impact on the ethical obligations of lawyers and judges.


In August 2023, the American Bar Association announced the formation of the ABA Task Force on Law and Artificial Intelligence, which will focus on the following areas:


● Impact of AI on the legal profession,

● Access to justice,

● AI benefits and challenges,

● AI governance,

● AI and legal education, and

● AI risk management.


To stay up to date, check out the Task Force’s series of webinars by topic area, as well as the schedule of upcoming AI events.


AI in the Judiciary


Although the role of AI in the courts continues to change rapidly, there is a general acknowledgment that it will be an intrinsic part of the judicial system going forward. U.S. Supreme Court Chief Justice John Roberts featured AI prominently in the 2023 Year-End Report on the Federal Judiciary and predicted that it will “transform” and “significantly” affect judicial work in the future.


What We’ve Seen So Far


To date, many individual courts have adopted AI guidelines or requirements, such as mandates requiring human verification of AI-produced information and disclosure that AI was used. One example is the New Jersey Supreme Court, which issued its Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers to help lawyers navigate their ethical responsibilities under its rules of professional conduct when using AI. Also, see Bloomberg Law’s table of AI directives in federal court. Be sure to conduct ongoing research in your own jurisdiction to determine local mandates.


In addition, several national judicial organizations are working together to address the use of AI in courts. The National Center for State Courts, the Conference of Chief Justices (CCJ), and the Conference of State Court Administrators established a joint AI Rapid Response Team. This initiative has created comprehensive resources to help courts understand AI’s implications for the judicial process and to develop model rules for state courts.


Recently, the Team issued Interim Guidance, which highlights potential applications of AI in court settings, such as streamlining internal processes and creating tools for pro se litigants. These guidelines also call for updated ethical guidelines and court rules on AI.

“Artificial intelligence is already impacting the courts, and we must be prepared and forward-thinking when it comes to addressing how AI can be used effectively, efficiently, and ethically to promote the administration of justice,” said D.C. Court of Appeals Chief Judge Anna Blackburne-Rigsby, who is CCJ president and co-chair of the AI Rapid Response Team.


Indeed, more courts are leveraging digital technologies, especially after the pandemic served as a catalyst for innovation. According to the 2024 State of the Courts Report: What Do Courts Think of Gen AI?, by the Thomson Reuters Institute, judges and court personnel are increasingly incorporating technology such as AI into court processes. On the other hand, the Southern District of Ohio banned the use of AI by both lawyers and pro se litigants, making it the only federal court to do so.


The Future Is Uncertain

Overall, these developments suggest a proactive yet cautious approach by U.S. courts to address the opportunities and challenges of AI in the legal field. At present, several U.S. appellate courts are in various stages of studying and implementing AI-related rules. We are hopeful that ongoing efforts to incorporate AI in the judiciary will focus on maximizing AI’s potential to enhance efficiency and expand access to justice.


As Chief Justice Roberts counsels, “[a]ny use of AI requires caution and humility.” We agree. Perhaps our profession’s habit of being “notoriously averse to change” (Chief Justice Roberts again) will serve us well as we thoughtfully seek to balance the unlimited potential of AI with our professional ethics responsibilities.


Moving Forward


While we heartily encourage attorneys to actively explore the use of generative AI professionally and personally, we also emphasize that each of us is responsible for adhering to the rules of professional conduct. Hopefully, we can learn from the missteps of other attorneys outlined in the preceding case discussion. Be sure to periodically check The Legal Burnout Solution’s AI Tools and Resources List, which we update on a regular basis.


Our next column will address the crucial role that law firms and other legal organizations can play in promoting mental health in the legal field.


Rebecca Howlett, Esq., and Cynthia Sharp, Esq., are co-founders of The Legal Burnout Solution (legalburnout.com), a community committed to the well-being of lawyers. Check out The Legal Mindset Corner, a podcast dedicated to tackling the unique challenges of the legal profession.


Originally published in ABA GPSolo eReport, April 2024 Issue (Vol. 13, No. 9) by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association or the copyright holder.
