As the use of generative AI expands, so do concerns about its appropriate use and potential misuses.
BY KAREN J. SNEDDON AND DAVID HRICIK
Generative artificial intelligence continues to reshape legal practice by enabling many forms of text creation. From summarizing meeting minutes to drafting email responses, generative AI is becoming pervasive in our personal and professional lives, and the number and variety of potential uses will continue to grow. In the context of legal writing, uses of generative AI now include the following:
• Drafting and editing legal documents, including contracts, motions, briefs and patent applications;
• Intaking new clients through the use of chatbots;
• Summarizing depositions, contracts or other information; and
• Simplifying legal concepts into plain English.
As the use of generative AI expands, so do concerns about its appropriate use and potential misuses. Several bar opinions have been issued discussing the ethical issues created by lawyers' use of generative AI, which includes ChatGPT, Microsoft Copilot, Google Gemini and other large language models.[1] In addition to bar opinions, a few states have issued guidance on the use of generative AI,[2] and at least one commentator has suggested that malpractice liability could arise from incompetent use of generative AI.[3] This installment of "Writing Matters" expands upon our prior discussions of generative AI by focusing on legal writers' ethical obligations.
The use of generative AI creates one old ethical issue, and one new one. The first issue is old, arising from the storage of information with a third party.... The second issue, however, is new. Generative AI does not just store information; it uses it.
Lawyers Must Competently Assess the Output of Generative AI
Lawyers must act competently, and competency cuts both ways when assessing the use of generative AI. On the one hand, lawyers have an obligation to use technology when it is necessary and reasonable to further the interests of a client; on the other hand, they must be competent to use that technology or associate with someone who is. Put differently, competency may require lawyers to use technology, but they must also be able to use it competently.
With generative AI, lack of competency has manifested primarily in the form of "hallucinated" authority, with the first instance reported in June 2023.[4] "Hallucinated authority" refers to authority that does not exist: the AI "hallucinated" it, meaning it made it up completely. AI output can include case citations, statutes or other authorities that appear to be real but are fake.
Despite the notoriety of cases calling out lawyers for citing hallucinated authority, lawyers continue to cite such authorities.[5] In response, some courts have required certification that the lawyer has reviewed each...