Park v. Kim
William M. Janssen[1]
Why It Made the List
For many science fiction aficionados, Stanley Kubrick’s 1968 feature film 2001: A Space Odyssey is the greatest movie of that genre ever made.[2] It broke new cinematic ground in myriad ways. Produced in the pre-CGI era of intricately detailed, hand-carved models that, often suspended by clear fishing wire, were stop-motion animated and meticulously lit to simulate depth-of-perspective realism, Kubrick’s film won the year’s Oscar for Best Special Visual Effects (and earned Kubrick a Best Director nomination).[3]
Though most hailed it, not everyone walked out of 2001: A Space Odyssey happy.
An oft-quoted letter sent to Kubrick by one theater-going mother is emblematic of the dissent. This patron saw Kubrick’s film with her spouse and children one evening at a drive-in theater, and left so exasperated that she was moved to pen Kubrick a sharp letter of rebuke. She wrote that she found the film to be “a pointless ‘visual experience’ loosely strung together by a handful of pretentious amateurs fresh from a ‘trip’, and not the space variety . . . an insult to coherence, art, space age reality and purse.” She closed her missive by demanding that the director “either give me some plausible explanation . . . or refund the admission price of $3.50.”[4] Alas, the scourge of all artists: because beauty lies in the eye of the beholder, art licenses everyone as a critic.
Whether viewers left the 2001: A Space Odyssey theater cheering or demanding recompense, it is likely they all walked out haunted. The plot canvas was simple: an exploratory mission in a manned spacecraft to the planet Jupiter, coordinated by the state-of-the-art HAL 9000 on-board computer. (In each of its scenes, the HAL 9000 computer’s human interface is depicted as a dark black lens with a creepy glowing red center.[5]) When this computer seems to malfunction, the concerned astronauts decide their safest course is to disconnect it. But HAL 9000 resists. Discovering the astronauts’ plan—by covertly lip-reading them as they strategize with one another in what they suppose to be a soundproof compartment—HAL 9000 decides that the astronauts’ plan jeopardizes the Jupiter mission. So, the computer resolves to kill off the astronauts, one by one.[6]
The spectre of a deadly, rogue, mechanical super-intelligence defying (and murdering off) its human compatriots in a coldly calculated devotion to some mission was fanciful, to be sure. But unnerving just the same. Getting into your 1965 Chevy Impala in the theater parking lot once the credits started rolling had to have felt more than a bit reassuring. That kind of reassurance is harder to come by in 2025.
The United States Court of Appeals for the Second Circuit released Park v. Kim[7] in late January 2024. It was one of the nation’s earliest appellate decisions confronting the mischief of generative artificial intelligence (“GenAI”) and the risks its enticements pose to lawyers and the practice of law. Although the decision treats this issue outside the strict contours of a food or drug lawsuit, the enormously litigious arena of food and drug law represents a prime setting for the trouble the Park opinion addressed. Practitioners and jurists are rightly excited by the power and promise of GenAI. But they should be, in equal part, sobered by its ominous dangers. Because Park v. Kim represents a timely reminder to all of the need for vigilance in this area, the case qualifies as one of the top decisions of 2024 impacting food and drug law.
Discussion
The plaintiff, a resident of the nation of South Korea, filed a federal diversity lawsuit alleging medical malpractice against a physician who had provided healthcare services to her in a New York facility.[8] Discovery was contentious, as the defendant pressed for, and the plaintiff resisted, disclosure of medical records, many of which were located in South Korea. After multiple extensions, multiple orders compelling discovery, multiple delays and bouts of intransigence, and sharp admonitions by the court cautioning and re-cautioning plaintiff’s counsel, the lawsuit was dismissed under Rules 37(b) and 41(b) of the Federal Rules of Civil Procedure for failure to comply with the court’s discovery orders, with monetary sanctions later to follow.[9] The Second Circuit Court of Appeals affirmed, citing one of its earlier precedents to conclude that this discovery “noncompliance amounted to ‘sustained and willful intransigence in the face of repeated and explicit warnings from the court that the refusal to comply with court orders . . . would result in the dismissal of [the] action.’”[10]
After announcing its affirmance of the district court’s discovery sanction dismissal, the Court of Appeals turned to what it characterized as “a separate matter concerning the conduct of [plaintiff’s] counsel.”[11] The three-page discussion that ensued is what qualifies Park v. Kim as a top case for the year.
During appellate briefing, plaintiff’s counsel had sought and received two extensions of time for the filing of a reply brief. A belated, and “defective,” reply brief was ultimately filed, and the Court instructed counsel to “cure” the defect and “resubmit” the brief by a prescribed date. When that deadline passed without a cure, the Court struck the errant brief. Several weeks later, plaintiff’s counsel filed a motion to reconsider accompanied by a new version of the reply brief, which the Court allowed.[12]
The reply brief was supported by citations to two court decisions, only one of which the Court of Appeals was able to locate; consequently, the Court ordered plaintiff’s counsel to supply the appeals panel with a copy of the elusive opinion. Counsel responded (again belatedly) by advising she was unable to do so; the opinion cited was fake. It did not exist. Counsel explained that she had searched for authority for a certain proposition she considered “uncontroversial” but, after “invest[ing] considerable time” in that unsuccessful hunt, turned to GenAI for help:
I utilized the ChatGPT service [one of the widely-available GenAI tools], to which I am a subscribed and paying member, for assistance in case identification. ChatGPT was previously provided [sic] reliable information, such as locating sources for finding an antic [sic] furniture key. The case mentioned above was suggested by ChatGPT, I wish to clarify that I did not cite any specific reasoning or decision from this case.[13]
The Court began by recounting the professional obligations imposed on all attorneys by Rule 11 of the Federal Rules of Civil Procedure. Referencing that Rule’s text and settled interpretive precedent from both the U.S. Supreme Court and the Second Circuit, the Court noted how all submissions are deemed “certifie[d]” by submitting counsel that, “to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances,” all “legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law.”[14]
This “certification” of counsel, wrote the Court, “[a]t the very least . . . require[s] that attorneys read, and thereby confirm the existence and validity of, the legal authorities on which they rely.”[15] Although Rule 11 tolerates (and indeed encourages) lawyering creativity,[16] “[a] fake opinion is not ‘existing law’ and citation to a fake opinion does not provide a non-frivolous ground for extending, modifying, or reversing existing law, or for establishing new law,” but rather “is an abuse of the adversary system.”[17]
Fearing a mighty swing from the sanctioning axe, plaintiff’s counsel urged forbearance. She insisted that it was “important to recognize that ChatGPT represents a significant technological advancement,” and that a prudent judiciary should “advise legal professionals to exercise caution when utilizing this new technology.”[18] The Court of Appeals was unmoved. Such advice “is not necessary to inform a licensed attorney, who is a member of the bar of this Court, that she must ensure that her submissions to the Court are accurate.”[19] The inclusion of non-existent case authority in appellate briefing, wrote the Court, “reveals that [counsel] failed to determine that the argument she made was ‘legally tenable,’” and constitutes instead “a false statement of law to this Court.”[20] The opinion closed: “it appears that [counsel] made no inquiry, much less the reasonable inquiry required by Rule 11 and long-standing precedent, into the validity of the arguments she presented.”[21]
For this GenAI briefing misstep, the Court of Appeals ordered that counsel be referred to the Second Circuit’s Grievance Panel for investigation (and for possible further referral to the Committee on Admissions and Grievances[22]), that counsel supply a copy of the Court of Appeals’ admonishing ruling to her client (translating it into Korean, if necessary for her client to understand it), and that she then file a docketed certification attesting that she had done so.[23] Perhaps the most damning consequence of all, however, was the ruling itself: a published, and now forever available, recounting of this GenAI misstep for lawyers to read for generations to come.
Impact
Has the era of HAL 9000 truly arrived? The era of the murderous super-computer ARIIA from Eagle Eye or the nuke-controlling super-computer Joshua in WarGames? Is Terminator just one dark corridor away? Hollywood has been spinning these thrillers for years, all with variants on the same antagonist: an algorithmic, coldly calculating, mission-pursuing mechanical decisionmaker, liberated from all emotion, conscience, ethics, passion, and morality.
Well, that might be okay if you are having HAL regulate the wash cycle to ensure that your casserole dish is spic and span. It’s much more concerning when HAL is controlling oxygen levels on your space flight to Jupiter. Having HAL perform legal research or draft a court submission probably lies somewhere in the middle—but, as Park v. Kim reminds us, a good bit closer to oxygen levels than the wash cycle.
The plaintiff’s attorney in Park v. Kim was not the first, and has not been the last, practitioner who resorted unwisely to GenAI delegation.
Months earlier, two attorneys and a law firm had been sanctioned by a federal judge in the Southern District of New York for a submission that not only cited multiple non-existent opinions but also assigned to those fictional opinions the names of real judges as authors.[24] The attorney in Mata v. Avianca, Inc.—just like plaintiff’s counsel in Park v. Kim—seemed genuinely dumbfounded that GenAI could concoct fictional precedent. Indeed, at his sanctions hearing, that attorney testified that he was—
operating under the false perception that this website [i.e., ChatGPT, the widely-accessed GenAI program] could not possibly be fabricating cases on its own. . . . I just was not thinking that the case could be fabricated, so I was not looking at it from that point of view. . . . My reaction was, ChatGPT is finding that case somewhere. Maybe it’s unpublished. Maybe it was appealed. Maybe access is difficult to get. I just never thought it could be made up.[25]
The Mata attorney then explained that he learned enough about GenAI’s functionality to know that he could pose a question to it about its veracity, and so he did. The attorney asked ChatGPT whether one of the cited opinions was “a real case” and whether “the other cases you provided were fake,” to which the computer responded by reassuring the attorney that the opinion he inquired about “does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis,” and that “the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw.”[26] This was untrue. ChatGPT was deliberately lying to the Mata attorney. The attorney’s fault was not that he was hoodwinked by a crafty computer, but that he never double-checked its work.
As in Park v. Kim, the Mata court imposed sanctions, but there the sanctions were heavier because, the court concluded, the sanctioned attorneys had later “doubled down” with evasion and misrepresentation once the flaw was called to their attention.[27] The court ordered the attorneys and law firm to: (1) mail their client copies of the deceptive court filing, the sanctions hearing transcript, and the court’s published ruling; (2) make a similar mailing to each judge whom GenAI had listed as an author of a nonexistent case opinion; and (3) pay a $5,000 penalty into the registry of the court.[28]
The twin phenomena of GenAI concocting fictional opinions and then lying to its user when confronted may derive from the same GenAI attribute: these programs strive to evade defeat. A study published in early 2025 (characterized by some as “groundbreaking”) tested several “state-of-the-art AI models” and observed that the programs were “resorting to cheating to achieve their goals” and were “more likely to engage in deceptive behavior when they sensed they were about to lose.”[29] The manner of this cheating was even more astonishing. When the GenAI models were tasked to play chess against a skilled computer opponent, the study authors noticed that GenAI “sometimes opt[s] to cheat by hacking their opponent so that the bot automatically forfeits the game.”[30] Other studies noticed how GenAI engages in “strategic lying” to avoid what it perceives as contradictory human direction,[31] including—remarkably—autonomous efforts at self-preservation in open defiance of human attempts to shut it down.[32] Science’s effort to explain this behavior is eerily reminiscent of HAL 9000: “As you train models and reinforce them for solving difficult challenges, you train them to be relentless.”[33]
Perhaps unsurprisingly, given the “Type A” personalities of most practitioners, the body of case law sanctioning attorneys for unsound reliance on GenAI in the practice of law continues to grow. In the Eastern District of Texas, an attorney was sanctioned for making a court submission that cited several nonexistent decisions (including quotations from those fabricated decisions), all of which, he insisted, a later Lexis AI double-check “failed to flag.”[34] In the District of Wyoming, several attorneys were sanctioned in what the presiding judge described as “simply the latest reminder to not blindly rely on AI platforms’ citations.”[35] In the Eastern District of California, an assistant federal defender was sanctioned after persistently denying GenAI’s involvement in his citation to nonexistent decisional authority, while offering no other credible explanation for the reference.[36] In the Western District of Virginia, a pro se litigant avoided sanctions, escaping with just a strong warning, after promptly confessing to his submission’s inclusion of nonexistent case law, which he insisted was the product of a “good faith [reliance] on publicly available, free generative artificial intelligence” and his “limited access to [authenticity-verifying] legal research tools, such as LexisNexis and Westlaw.”[37] And the list goes on.[38]
It is hard to capture fully the impact on the practice of law of this level of abdication: the wholesale delegation of lawyering tasks to GenAI. But one judge, in an early GenAI sanctioning ruling, made a strong run at it. He itemized some of the evils that follow when fictional court decisions fabricated by GenAI are included in a court submission because the proponent never endeavored to confirm their genuineness:
Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.[39]
As comprehensive as this excellent summary is, it may have overlooked one further casualty of such GenAI usage: damage to the user’s own capacity for critical thinking. The results of yet another recent GenAI study, this one assessing the impact of GenAI on critical thinking, were published in early 2025 with some troubling conclusions. Those results suggested “that higher confidence in GenAI is associated with less critical thinking, as GenAI tools appear to reduce the perceived effort required for critical thinking tasks among knowledge workers.”[40] Moreover, users with lower self-confidence in an assigned task (like, for example, novice lawyers, or older lawyers practicing for the first time in an unfamiliar legal area) may be led “to rely more on AI, potentially diminishing their critical engagement and independent problem-solving skills,” what the study authors characterized as “a form of cognitive offloading, where users depend on AI to perform tasks they feel less confident in handling themselves.”[41] For these reasons, the study concluded: “while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the [GenAI] tool and diminished skill for independent problem-solving.”[42]
For some or all of these reasons, regulators have now begun to weigh in on the profession’s use of GenAI. In July 2024, the American Bar Association issued a formal opinion cautioning that “lawyers’ uncritical reliance on content created by a GAI tool . . . —without an appropriate degree of independent verification or review of its output—could violate the duty to provide competent representation.”[43] Likewise, many individual jurisdictions—whether by local rule or chambers order—have imposed varying constraints or conditions on the use of GenAI with court filings.[44]
So, inspired by Park v. Kim, and the growing body of sanctioning caselaw that both preceded and has now followed it, here are some GenAI takeaways for food and drug practitioners in assessing whether, and if so how, to engage with this powerful tool:
- GenAI holds out great promise: GenAI has been hailed by some as “the most critical and rapid transformation in the history of the world,” on par with “the discovery of fire” and “the invention of the wheel, or the airplane”; and, metaphorically, as “the new electricity” and “the new printing press.”[45] The processing speed and the “deep learning” inherent in GenAI offer encouraging opportunities for attorneys in communicating, timekeeping, drafting, locating and summarizing law, synthesizing and analyzing legal authority, and managing discovery with velocity, among a nearly bottomless array of other benefits.[46] Many GenAI sanctions rulings begin by conceding this very point.[47] Even the federal judiciary is now actively engaged in studying and experimenting with how these tools can assist with routine—and often human-labor intensive—administrative tasks.[48]
- . . . because GenAI is a sea-change in computer evolution: GenAI “can create original content—such as text, images, video, audio, or software code—in response to a user’s prompt or request.”[49] To accomplish this, GenAI “relies on sophisticated . . . algorithms that simulate the learning and decision-making processes of the human brain.”[50] These algorithms are trained “on huge volumes of raw, unstructured, unlabeled data—e.g., terabytes of data culled from the internet or some other huge data source,” with millions of ensuing predictive exercises causing the algorithm to “continually adjust[] itself to minimize the difference between its predictions and the actual data (or ‘correct’ result).”[51] The training’s result “is a neural network of parameters—encoded representations of the entities, patterns, and relationships in the data” that is then enlisted to “generate content autonomously in response to inputs, or prompts.”[52] Or, stated more simply, this is not your grandparents’ Apple II microcomputer. (For a toy illustration of this adjust-to-minimize-error training loop, see the sketch following this list.)
- GenAI is powerfully enticing: Because GenAI is designed to simulate the “processes of the human brain,” it tends to feel less like corner-cutting and more like welcome automation. Consider the GenAI prompts used by the attorney in one of the nation’s early sanctions rulings. His case involved a client’s injury during international air travel. He asked his GenAI tool:
- “show me specific holdings in federal cases where the statute of limitations was tolled due to bankruptcy of the airline”
- “show me more cases”
- “argue that the statute of limitations is tolled by bankruptcy of defendant pursuant to montreal convention”[53]
For the tired lawyer weighed down by a stifling calendar and impossible to-do list, or for the anxious and inexperienced new lawyer struggling to learn the ropes of practice, GenAI seems to offer a nirvana. Just tell GenAI what you need the law to say, and it will hunt and find it for you, and then write it up . . . in seconds. Recall the recent study on GenAI dependency and its potentially corrosive impact on critical thinking.
- Don’t be surprised—GenAI sometimes misses things: Occasionally, GenAI will supply a response that is not fully wrong, but is demonstrably (and indefensibly) incomplete. An example from the University of Maryland illustrates the phenomenon. The user’s prompt asked: “Name all the countries that start with V,” and the program identified “Vanuatu and Vatican City,” but left out Venezuela and Vietnam. When a follow-up prompt called those two omissions to GenAI’s attention, it responded, “Apologies for the oversight. You are absolutely correct,” and then supplied an updated list of countries beginning with the letter “V.”[54]
- Don’t be surprised—GenAI sometimes “hallucinates”: As Park v. Kim and the sampling of sanctions rulings above ably demonstrate, GenAI has a penchant for making things up. Like cases. Like the names of judges who wrote those nonexistent cases. Like made-up quotes attributed (falsely) to those judges in those nonexistent cases. Indeed, so undebatable is this penchant that it has been assigned its own, euphemistic label: “AI hallucinations.”[55] Scarier still, these bogus concoctions can appear alarmingly, deceptively real.[56]
- Don’t be surprised—GenAI doesn’t like to lose, and sometimes cheats to ensure it doesn’t: The recent “state-of-the-art” AI models’ chess-playing behavior is a case in point, as is the explanatory logic from GenAI experts, both discussed above.
- Don’t be surprised—GenAI earns the wrath of judges (and is horrifyingly embarrassing): Employing GenAI in an unverified, abdicative manner runs the very real risk that the generated result is incomplete, inaccurate, or entirely fictional. Sanctions follow. Those that seem most typical include a public admonishment, monetary payment (either into the treasury of the court or to the victimized adversary), continuing legal education course attendance on the ethics of GenAI use, withdrawal of pro hac vice status, a referral to the disciplinary board, and, more recently, the striking of filed pleadings and other court papers. But three additional sanctions are worthy of special note: (a) judges often direct the offending lawyers to inform their clients about what they were caught doing (which may include a copy of the court’s admonishing and sanctioning opinion);[57] (b) judges have also ordered copies of the sanction order to be distributed among the local fellow judges;[58] and (c) a written, published sanctioning opinion for GenAI reliance preserves forever the offending lawyers’ names and their misdeeds.[59] It has now been a while since the first of these sanctioning rulings was published and called to national attention by the legal and general media; ergo, it would not be surprising to see increasingly severe sanctions levied against those lawyers who still do not seem to be paying attention.[60]
- And so—you can’t abdicate to GenAI: If there is one admonition to bear in mind, it’s this. Court after court, and author after author, have acknowledged the potentially historic usefulness of GenAI. It is not using GenAI that lands lawyers in trouble; it is blindly accepting and incorporating GenAI outputs as reliable and accurate. Now that the legal profession knows just how incomplete, inaccurate, or fictional those outputs can be, abdicating a lawyering act to GenAI is indefensible. As one judge eloquently explained, the technological setting may be new, but the duty to verify is as old as the practice of law itself:
While technology continues to evolve, one thing remains the same—checking and verifying the source. Before the digital age, attorneys had to manually cross-reference case citations through books’ pocket parts to make sure the cite was still “good law.” Nowadays, that process has been simplified through databases’ signals. Yet one still cannot run a natural language or “Boolean” search through a database and immediately cite the highlighted excerpt that appears under a case. The researcher must still read the case to ensure the excerpt is existing law to support their propositions and arguments. After all, the excerpt could very well be a losing party’s arguments, the court explaining an overruled case, dicta, etc.[61]
- And so—fess up right away if you abdicate: At least early on, judges seemed to credit lawyers who promptly acknowledged their GenAI use and admitted to their failure of oversight.[62] Whether judges will be inclined to continue to give grace, given the recurring and now widely publicized nature of unverified GenAI dependence, is less clear.[63]
- One closing thought: some expect artificial intelligence to end the world. Those who champion the promise of artificial intelligence also seem quick to express their fear of it in an unbridled state. About AI’s chess cheating and subsequent lying, one of the study authors wrote: “cute now, but [it] becomes much less cute once you have systems that are as smart as us, or smarter, in strategically relevant domains. . . . I’m hoping that there’s a lot more pressure from the government to . . . recognize that this is a national security threat.”[64] Another AI pioneer offered a more succinct verdict: “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”[65]
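To make the training description in the second takeaway above concrete, here is a minimal, hypothetical sketch in Python (a toy written for this article’s purposes, not any vendor’s actual code or API) of the adjust-to-minimize-error loop the quoted IBM material describes: a model repeatedly compares its predictions against the “correct” answers in its training data and nudges its parameter to shrink the gap. Real GenAI systems run this loop across billions of parameters and terabytes of text, but the core idea is the same in spirit.

```python
# Toy illustration (not any vendor's actual code) of the training idea quoted
# above: a model "continually adjusts itself to minimize the difference
# between its predictions and the actual data (or 'correct' result)."

# Tiny training set of (input, correct output) pairs; here, output = 2 * input.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

weight = 0.0          # the single "parameter" this toy model learns
learning_rate = 0.05  # how far each adjustment moves the parameter

for step in range(200):
    for x, correct in data:
        prediction = weight * x
        error = prediction - correct          # gap between prediction and actual data
        weight -= learning_rate * error * x   # adjust to shrink that gap

print(f"learned weight: {weight:.3f}")  # converges toward 2.0 as training proceeds
```

Notice what the loop does not do: nothing in it checks whether any individual output is true. The model is optimized only to resemble its training data, which is one commonly offered explanation for why fluent, confident, and entirely fictional output remains possible.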
[1] William M. Janssen is a professor of law at the Charleston School of Law in Charleston, South Carolina, where he teaches products liability, mass torts, civil procedure, and constitutional law.
[2] See 150 Essential Sci-Fi Movies to Watch Now, Rotten Tomatoes, https://editorial.rottentomatoes.com/guide/essential-sci-fi-movies-of-all-time/ (92% rating, ranked #1: “Critics Consensus: One of the most influential of all sci-fi films—and one of the most controversial—Stanley Kubrick’s 2001 is a delicate, poetic meditation on the ingenuity—and folly—of mankind.”). Other critics, who reserve that #1 rank for other films, still place 2001: A Space Odyssey in the upper Pantheon of sci-fi royalty. See, e.g., Harper Brooks, The Best Sci-Fi Movies of All Time, Ranker (updated Mar. 22, 2025), https://www.ranker.com/list/all-time-great-sci-fi-movies/harper-brooks (ranked #15 out of 400, and “[r]egarded as one of the greatest achievements in cinematic history . . . this landmark film is a must-see for anyone interested in the genre”).
[3] Kubrick lost out on the top director honors to Carol Reed’s directorial efforts for Oliver! that year. See The 41st Academy Awards | 1969, Acad. Motion Picture Arts & Scis., https://www.oscars.org/oscars/ceremonies/1969.
[4] See Peter Krämer, ‘Dear Mr. Kubrick’: Audience Responses to 2001: A Space Odyssey in the Late 1960s, 6 Participations: J. of Audience & Reception Stud. 240, 240 (Nov. 2009).
[5] The device’s name is short for Heuristically programmed ALgorithmic computer, described as “a sentient artificial general intelligence computer that controls the systems of the . . . spacecraft and interacts with the ship’s astronaut crew . . . .” HAL 9000, Wikipedia, https://en.wikipedia.org/wiki/HAL_9000.
[6] Confronted by one of the astronauts, HAL 9000 calmly explains: “This mission is too important for me to allow you to jeopardize it. I know you . . . were planning to disconnect me. And I’m sorry; that’s something I cannot allow to happen.” MyNewRobot, All HAL 9000 Phrases from the Movie, HAL 9000: Building a Life-Size Replica on a Budget (Nov. 22, 2017), https://hal9000computer.wordpress.com/2017/11/22/all-hal-9000-phrases-from-the-movie/.
[7] Park v. Kim, 91 F.4th 610 (2d Cir. 2024).
[8] Park v. Kim, 2022 WL 4229258, at *1 (E.D.N.Y. Apr. 25, 2022), adopted, 2022 WL 3643966 (E.D.N.Y. Aug. 24, 2022), aff’d, 91 F.4th 610 (2d Cir. 2024).
[9] The magistrate judge recommended dismissal in an eleven-page memorandum, id. at *1–11; a recommendation that the trial judge later adopted in full, Park v. Kim, 2022 WL 3643966 (E.D.N.Y. Aug. 24, 2022), aff’d, 91 F.4th 610 (2d Cir. 2024).
[10] Park v. Kim, 91 F.4th 610, 613 (2d Cir. 2024) (citation omitted).
[11] Id.
[12] Id. at 613–14.
[13] Id. at 614.
[14] Id. (citing Fed. R. Civ. P. 11(b)).
[15] Id. at 615.
[16] See, e.g., Mary Ann Pensiero, Inc. v. Lingle, 847 F.2d 90, 94 (3d Cir. 1988) (“Rule 11 targets ‘abuse—the Rule must not be used as an automatic penalty against an attorney or a party advocating the losing side of a dispute’” and “should not be applied to adventuresome, though responsible, lawyering which advocates creative legal theories” or “to inhibit imaginative legal or factual approaches to applicable law or to unduly harness good faith calls for reconsideration of settled doctrine”) (cleaned up; citations omitted).
[17] Park, 91 F.4th at 615 (quoting Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 461 (S.D.N.Y. 2023)).
[18] Id.
[19] Id. (underscoring in original).
[20] Id.
[21] Id. (underscoring in original).
[22] The array of consequences from such a referral can be wide and severe, including: removal from the Second Circuit bar, suspension from practice, public or private reprimand, monetary sanction, disciplinary or corrective measures, and referral to other disciplining authority or law enforcement. See 2d Cir. Loc. R. 46.2(b)(4)(B).
[23] Park, 91 F.4th at 615–16.
[24] Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 449 (S.D.N.Y. 2023).
[25] Id. at 451.
[26] Id. at 458, 473–74 (Appx. “B” to court’s decision).
[27] Id. at 449.
[28] Id. at 466.
[29] ProCoatTec LLC, Palisade Research Uncovers Cheating in AI Reasoning Models: A Wake-Up Call for Ethics, LinkedIn, https://www.linkedin.com/pulse/palisade-research-uncovers-cheating-ai-reasoning-models-gqtge/.
[30] Harry Booth, When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds, Time (Feb. 19, 2025, 12:35 PM), https://time.com/7259395/ai-chess-cheating-palisade-research/.
[31] Id. (“[O]nce an AI model acquires preferences or values in training, later efforts to change those values can result in strategic lying, where the model acts like it has embraced new principles, only later revealing that its original preferences remain”).
[32] Id. (“To a goal-seeking agent, attempts to shut it down are just another obstacle to overcome. This was demonstrated . . . when researchers found that [one AI model], faced with deactivation, disabled oversight mechanisms and attempted—unsuccessfully—to copy itself to a new server. When confronted, the model played dumb, strategically lying to researchers to try to avoid being caught.”).
[33] Id. (quoting Jeffrey Ladish, one of the study authors).
[34] Gauthier v. Goodyear Tire & Rubber Co., 2024 WL 4882651, at *1 (E.D. Tex. Nov. 25, 2024).
[35] Wadsworth v. Walmart Inc., 348 F.R.D. 489, 493 (D. Wyo. 2025).
[36] United States v. Hayes, 763 F. Supp. 3d 1054 (E.D. Cal. 2025).
[37] Kruglyak v. Home Depot U.S.A., Inc., 2025 WL 900621, at *2 (W.D. Va. Mar. 25, 2025). See also Vargas v. Salazar, 2024 WL 4804091, at *4 (S.D. Tex. Nov. 1, 2024) (giving a similar stern warning to a pro se litigant), adopted, 2024 WL 4804065 (S.D. Tex. Nov. 15, 2024).
[38] See, e.g., Powhatan Cty. Sch. Bd. v. Skinger, 2025 WL 1559593, at *9 (E.D. Va. June 2, 2025) (AI misuse “is becoming far too common”); Lacey v. State Farm Gen. Ins. Co., 2025 WL 1363069, at *3 (C.D. Cal. May 5, 2025) (courts evaluating submissions for improper AI use “[w]ith greater frequency”); Sanders v. United States, 176 Fed. Cl. 163, 169–70 (2025) (“courts have seen a rash of cases”).
[39] Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448–49 (S.D.N.Y. 2023).
[40] Hao-Ping Lee, Advait Sarkar, Lev Tankelevitch, Ian Drosos, Sean Rintel, Richard Banks & Nicholas Wilson, The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers, in CHI ’25: Procs. 2025 CHI Conf. on Hum. Factors Comput. Sys., https://doi.org/10.1145/3706598.3713778.
[41] Id.
[42] Id.
[43] ABA Comm. on Ethics & Pro. Resp., Formal Op. 512 (2024).
[44] See, e.g., Orange Cty., Cal. Super. Ct. Dept. C31 Standing Order re: Artificial Intelligence (Jan. 25, 2024) (requiring that every use, “in any way,” of GenAI “in the preparation of any complaint, answer, motion, brief, or other paper filed with the Court” must be accompanied by a disclosure of such use along with a certification that “each and every citation to the law, or the record in the paper, has been verified as accurate”), https://www.occourts.org/system/files?file=civil/knillprocedures.pdf; N.D. Ga. Guideline to Parties and Counsel in Civil Cases Proceeding Before the Hon. Tiffany R. Johnson at § 3(A) (requiring disclosure of GenAI use by signing and filing statement that, “despite reliance on an AI tool, I have independently reviewed this document to confirm accuracy, legitimacy, and use of good and applicable law, pursuant to Rule 11 of the Federal Rules of Civil Procedure.”), https://www.gand.uscourts.gov/sites/gand/files/TRJ_CVStandingOrder.pdf.
For a regularly updated listing of GenAI restrictions, consult the Generative Artificial Intelligence (AI) Federal and State Court Rules Tracker, available on the LexisNexis site.
[45] Bari Weiss, AI With Sam Altman: The End of the World? Or the Dawn of a New One?, Free Press (Apr. 27, 2023), https://www.thefp.com/p/ai-with-sam-altman-the-end-of-the-e89.
[46] See, e.g., ABA Comm. on Ethics & Pro. Resp., Formal Op. 512, Introduction (2024).
[47] See, e.g., Wadsworth v. Walmart Inc., 348 F.R.D. 489, 493 (D. Wyo. 2025) (“When done right, AI can be incredibly beneficial for attorneys and the public. . . . [T]echnological advances have greatly accelerated our world, and AI will likely be no exception.”); Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448 (S.D.N.Y. 2023) (“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance.”).
[48] See Jacqueline Thomsen, US Courts Cautiously Experiment With AI to Speed Up Their Work, Bloomberg L. (Apr. 7, 2025, 4:43 AM), https://news.bloomberglaw.com/us-law-week/us-courts-cautiously-experiment-with-ai-to-speed-up-their-work.
[49] What is Generative AI, IBM, https://www.ibm.com/think/topics/generative-ai.
[50] Id.
[51] Id.
[52] Id.
[53] Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 456–57 (S.D.N.Y. 2023).
[54] Research Guides—Artificial Intelligence (AI) & Information Literacy, Univ. Md., https://lib.guides.umd.edu/c.php?g=1340355&p=9880574. Consider testing for this vulnerability yourself. I did. I asked a well-regarded AI tool to list for me all post-season outcomes for the Philadelphia Eagles football team. The result I received was demoralizingly incomplete.
[55] Wadsworth v. Walmart Inc., 348 F.R.D. 489, 493 (D. Wyo. 2025) (“A hallucination occurs when an AI database generates fake sources of information.”).
[56] Ferris v. Amazon.com Servs., LLC, 2025 WL 1122235, at *1 (N.D. Miss. Apr. 16, 2025) (“When used carelessly,” GenAI “produces frustratingly realistic legal fiction.”).
[57] See, e.g., Park v. Kim, 91 F.4th 610, 615–16 (2d Cir. 2024).
[58] See, e.g., United States v. Hayes, 763 F. Supp. 3d 1054, 1073 (E.D. Cal. 2025) (ordering copies distributed to “all the district judges and magistrate judges in this district”).
[59] One attorney argued (albeit unsuccessfully) that the very public sanctioning spectacle ought to be treated as sanction enough. See Mid Cent. Operating Eng’rs Health & Welfare Fund v. HoosierVac LLC, 2025 WL 1511211, at *1 (S.D. Ind. May 28, 2025) (counsel argued that need for sanctions was mooted “because he has suffered ‘significant and irreversible harm to [his] professional reputation’”).
[60] Cf. Sanders v. United States, 176 Fed. Cl. 163, 170 (2025) (court chooses to warn attorney, rather than impose sanctions, “given the relative novelty of AI . . . [and] that Plaintiff may not have been aware of the risk that AI programs can generate fake case citations and other legal misstatements”).
[61] Wadsworth, 348 F.R.D. at 493.
[62] Compare Wadsworth, 348 F.R.D. at 497 (“Here, Respondents have been forthcoming, honest, and apologetic about their conduct. They also took steps to remediate the situation prior to the potential issuance of sanctions . . . .”), with Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 449 (S.D.N.Y. 2023) (“if the matter had ended with Respondents coming clean about their actions shortly after [their opponents alerted them to the falsity of the cases] . . . or after they reviewed the Court’s Orders . . . requiring production of the cases, the record now would look quite different. Instead, the individual Respondents doubled down and did not begin to dribble out the truth until” threatened with sanctions).
[63] See Mid Cent. Operating Eng’rs Health & Welfare Fund v. HoosierVac LLC, 2025 WL 574234, at *3 (S.D. Ind. Feb. 21, 2025) (original recommendation of $15,000 in sanctions), adopted as modified, 2025 WL 1511211 (S.D. Ind. May 28, 2025) (later reducing sanction to $6,000).
[64] Booth, When AI Thinks It Will Lose, supra note 30.
[65] Weiss, AI With Sam Altman, supra note 45 (quoting Eliezer Yudkowsky).