Why do lawyers keep using ChatGPT?

Every few weeks, it seems, there is a new headline about a lawyer getting in trouble for submitting filings containing, in the words of one judge, “bogus AI-generated research.” The details vary, but the throughline is the same: an attorney turns to a large language model (LLM) like ChatGPT to help with legal research (or worse, writing), the LLM hallucinates cases that don’t exist, and the lawyer is none the wiser until a judge or opposing counsel points out the mistake. In some cases, including a 2023 aviation lawsuit, attorneys have had to pay fines for submitting filings with AI-generated hallucinations. So why haven’t they stopped?

The answer mostly comes down to time crunches and the way AI has crept into nearly every profession. Legal research databases like LexisNexis and Westlaw now have AI integrations, and for lawyers buried in big caseloads, AI can look incredibly efficient. Most lawyers aren’t necessarily using ChatGPT to write their filings, but they are increasingly using it and other LLMs for research. Yet many of these lawyers, like much of the public, don’t understand exactly what LLMs are or how they work. One attorney who was sanctioned in 2023 said he had believed ChatGPT was a “super search engine.” It took submitting a filing with fake citations to reveal that it’s more like a random-phrase generator, one that can give you either correct information or convincingly phrased nonsense.

Andrew Perlman, dean of Suffolk University Law School, argues that many lawyers are using AI tools without incident, and that the ones caught with fake citations are outliers. “I think that what we’re seeing now, although these problems of hallucination are real and lawyers have to take them very seriously and be careful about them, doesn’t mean that these tools don’t have enormous possible benefits and use cases for the delivery of legal services,” Perlman said.

In fact, 63 percent of lawyers surveyed by Thomson Reuters in 2024 said they had used AI in their work, and 12 percent said they use it regularly. Respondents said they use AI to write summaries of case law and to research “case law, statutes, forms, or sample language for orders.” The lawyers surveyed see it as a time-saving tool, and half of those surveyed said “exploring the potential to implement AI” at work is their top priority. “The role of a good lawyer is as a ‘trusted advisor,’ not as a producer of documents,” one respondent said.

But as plenty of recent examples have shown, the documents produced by AI aren’t always accurate, and in some cases aren’t real at all.

In one recent high-profile case, lawyers for journalist Tim Burke, who was arrested for publishing unaired Fox News footage in 2024, submitted a motion to dismiss the case against him on First Amendment grounds. After finding that the filing included “significant misrepresentations and misquotations of supposedly pertinent case law and history,” Judge Kathryn Kimball Mizelle of the US District Court for the Middle District of Florida ordered the motion stricken from the case record. Mizelle found nine hallucinations in the document, according to the Tampa Bay Times.

Mizelle ultimately allowed Burke’s lawyers, Mark Rasch and Michael Maddux, to submit a new motion. In a separate filing explaining the mistakes, Rasch wrote that he “assumes sole and exclusive responsibility for these errors.” Rasch said he had used ChatGPT Pro’s “deep research” feature, which The Verge has previously tested with mixed results, as well as Westlaw’s AI feature.

Rasch isn’t alone. Lawyers representing Anthropic recently admitted to using the company’s own Claude AI to help write an expert witness declaration submitted as part of a copyright infringement lawsuit brought against Anthropic by music publishers. That filing included a citation with an “inaccurate title and inaccurate authors.” Last December, misinformation expert Jeff Hancock admitted he had used ChatGPT to help organize the citations in a declaration supporting Minnesota’s law regulating deepfakes. Hancock’s filing included “citation errors, popularly referred to as ‘hallucinations,’” and incorrectly listed the authors of another citation.

These documents do matter, at least in the eyes of judges. In one recent case, a California judge presiding over a suit against State Farm was initially swayed by the arguments in a brief, only to find that the case law it cited was entirely made up. “I read their brief, was persuaded (or at least intrigued) by the authorities that they cited, and looked up the decisions to learn more about them, only to find that they didn’t exist,” Judge Michael Wilner wrote.

Perlman said there are plenty of lower-risk ways lawyers can use generative AI in their work, including searching through large tranches of discovery documents, reviewing briefs or filings, and brainstorming possible arguments or likely opposing views. “I think in almost every task, there are ways in which generative AI can be useful: not a substitute for lawyers’ judgment, not a substitute for the expertise that lawyers bring to the table, but to supplement what lawyers do and enable them to do their work better, faster, and cheaper,” he said.

But like anyone who uses AI tools, lawyers who rely on them for legal research and writing have to be careful to verify the work those tools produce, Perlman said. Part of the problem is that attorneys are often pressed for time, an issue he says predates LLMs. “Even before the emergence of generative AI, lawyers would file documents with citations that didn’t really address the issue they claimed to address,” Perlman said. “It was just a different kind of problem. Sometimes when lawyers are rushed, they insert citations without vetting them properly; they don’t really check whether the case has been overturned or reversed.” (At least those cases usually existed, though.)

Another, sneakier problem is that lawyers, like others who use LLMs to help with research and writing, can be too trusting of what AI produces. “I think many people are lulled into a sense of comfort with the output, because it appears at first glance to be well crafted,” Perlman said.

Alexander Kolodin, an election lawyer and Republican state representative in Arizona, said he treats ChatGPT like a junior-level associate. He has also used ChatGPT to help write legislation. In 2024, he included AI-generated text in part of a bill on deepfakes: the LLM supplied the “baseline definition” of what a deepfake is, and then “I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin told The Guardian at the time. Kolodin said he “may have” discussed his use of ChatGPT with the bill’s main cosponsor, but otherwise wanted it to be “an Easter egg” in the bill. The bill has since passed into law.

Kolodin, who was sanctioned by the Arizona State Bar in 2020 for his involvement in lawsuits challenging the results of the 2020 election, has also used ChatGPT to write first drafts of amendments, and told The Verge he uses it for legal research as well. To avoid the hallucination problem, he said, he simply checks the citations to make sure they are real.

“You don’t typically send out a junior associate’s work product without checking the citations anyway,” Kolodin said. “It’s not only that machines hallucinate; a junior associate could also read the case wrong, or it doesn’t really stand for the proposition cited anyway, whatever.”

Kolodin said he uses both ChatGPT’s “deep research” pro tool and LexisNexis’ AI tool. Like Westlaw, LexisNexis is a legal research product used primarily by lawyers. Kolodin said that in his experience, LexisNexis has a higher hallucination rate than ChatGPT, which he says has “gone down substantially over the past year.”

AI use among lawyers has become so prevalent that in 2024, the American Bar Association issued its first guidance on lawyers’ use of LLMs and other AI tools.

Lawyers who use AI tools “have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature” of generative AI, the opinion reads. The guidance recommends that lawyers “acquire a general understanding of the benefits and risks of the GAI tools” they use; in other words, they should not assume an LLM is a “super search engine.” Lawyers should also weigh the confidentiality risks of entering information about their cases into LLMs, and consider whether to tell their clients when they use LLMs and other AI tools.

Perlman, for his part, is bullish on lawyers’ use of AI. “I think that generative AI is going to be the most impactful technology the legal profession has ever seen and that lawyers will be expected to use these tools in the future,” he said. “I think at some point we’ll stop worrying about the competence of lawyers who use these tools and start worrying about the competence of the ones who don’t.”

Others, including one of the judges who sanctioned lawyers for submitting filings full of AI-generated hallucinations, are more skeptical. “Even with recent advances,” Wilner wrote, “no reasonably competent attorney should outsource research and writing to this technology, particularly without any attempt to verify the accuracy of that material.”
