Redrawing the Legal Landscape
Generative AI is disrupting business as usual for lawyers. Get used to it, experts say.
BY JERI ZEDER
While most of us have not read the complete works of Shakespeare, many AI systems have not only absorbed those plays but can use them as inspiration to generate new Shakespeare-like works that the bard could never have imagined. It makes you wonder what inspiration an AI system might derive from one of Shakespeare’s most famous lines: “The first thing we do, let’s kill all the lawyers.” Figuratively, that may not be far off from what many in the legal profession fear.
Generative AI uses massive collections of information to create, when prompted, text, images, speech, video, music and computer code. Reuters reports that ChatGPT alone already has 100 million monthly users. ChatGPT and its generative AI cousins have nearly aced a bar exam, performed at the level of a B+ law student, conducted legal research, crafted persuasive legal memos and “co-authored” law review articles.
The technology has inspired handwringing over whether it will cost lawyers their livelihoods. But the legal field is a competitive one, always hustling to be faster, more efficient and more streamlined. In fact, the horse has already left the barn: the global law firm Allen & Overy started working with OpenAI, the developer of ChatGPT, in November 2022 to develop Harvey, a generative AI tool specializing in legal work. The firm announced in February that it had formally launched a “partnership” with Harvey.
Generative AI is exciting — but it’s also disruptive and has the potential to be harmful. “The trap is that society can be so enraptured with the new technology that it loses focus or refuses to foreground its ethical and moral commitments,” says Michael Bennett, a former law school faculty member who now serves as director of education curriculum and business lead for responsible AI at Northeastern’s Institute for Experiential AI. Will society steer generative AI so it is largely a force for good? Lawyers in their roles as advisors, litigators, lobbyists, academicians and agency regulators will be central to that discussion.
Dan Jackson ’97, executive director of the law school’s NuLawLab, uses ChatGPT. He says the legal profession has a tradition of being a late adopter of new technologies and needs to take a new tack. “The legal profession needs to jump into artificial intelligence with an open mind and a lot of enthusiasm and resources, and the reason I think we need to do that is because it’s not going away,” he says. “It’s only going to continue to be further and further developed. It’s only going to get more and more powerful, and more and more valuable to humanity if it’s done correctly.”
Professor Beth Simone Noveck, an expert on AI who directs Northeastern’s Burnes Center for Social Change as well as The Governance Lab, says, “I think that AI is going to make lawyers that much more useful and necessary.” She’s ready with examples. There’s due diligence: “I think the ability to uncover background information will become that much faster,” she says. There’s consumer protection: AI can help regulators to “better pinpoint where, for example, a charity is likely to be fraudulent,” she says. There’s public interest work: Noveck points to a pro-democracy nonprofit that’s engineering AI tools intended to help activists build campaigns and movements for social change.
Users Beware
Professor Elettra Bietti, who holds a dual appointment in the law school and Khoury College of Computer Sciences, sounds a more cautious note. “I think it’s important to try to resist the urge to predict what’s going to happen and to be defeatist or enthusiastic about the future of AI,” she says. Yes, generative AI will lighten the legal grunt work. But legal grunt work has its merits, especially for new lawyers. “It’s possible that chatbots will perform some parts of the tasks of junior lawyers, and there are questions about whether there will be some gaps in their knowledge or in their ability to be lawyers down the line,” Bietti says.
Problems like these, along with others — threats to client confidentiality, or the tendency of generative AI to spit out convincing but false information — speak to the need for law offices to deploy generative AI with organizational awareness, agility and healthy doses of cybersecurity. That’s challenging, and important. But it feels more readily surmountable than the threats that generative AI poses to the clients and society that lawyers are there to serve.
Bennett offers this hair-raising scenario: “As an element of voter suppression strategy, an anti-abortion organization uses a generative AI system to create a deepfake, which is released to social media on election day morning in one of the redder US states. In the video, the secretary of state appears and announces that elections are to be halted in several counties due to security threats and that the election process will accordingly commence at another time that will be announced in the near future. Voters in the targeted counties turn out in lower numbers, and anti-abortion-friendly candidates and initiatives win by larger margins.”
And that’s just a hypothetical. In real life, Ari Irvings ’10, head of cybersecurity at Elektra, a global producer of high-tech medical devices, worries that ransomware attacks on hospitals, which have harmed patients and even caused deaths, will skyrocket. He’s seeing it already: “All of a sudden, especially with the rollout of ChatGPT, there is this world of AI attacks that are being created,” he says. Artists protective of their livelihoods have sued Stability AI and Midjourney, developers of generative AI tools that create art, for scraping their images from the web without permission. Canada’s Office of the Privacy Commissioner is investigating ChatGPT after receiving a complaint that the AI system was collecting, using and disclosing personal information without consent. The tech news site CNET has published AI-generated news articles rife with errors. Police relying on predictive-policing algorithms have subjected innocent people to aggressive surveillance; GPT-4, a newer, more powerful model underlying ChatGPT, has the potential to exacerbate the problem. A complaint filed with the FTC by the nonprofit Center for AI and Digital Policy alleges that GPT-4 “is biased, deceptive and a risk to privacy and public safety” and is in violation of FTC regulations.
Growing Legal Challenges
The trend is clear: as generative AI speeds forward and permeates our lives, it is intensifying a host of legal challenges touching on data privacy and protection, bias and discrimination, transparency, liability and accountability, ethical considerations and international governance of this borderless technology.
Despite the risks of AI, the United States has so far declined to enact comprehensive federal legislation. Some federal and state agencies are stepping up, but without unifying laws, their impact will be scattershot. In 2022, for example, a consortium of states started regulating automated employment decision tools. In early April, the Commerce Department’s National Telecommunications and Information Administration started seeking public feedback on policies to hold AI accountable. The FTC is investigating how generative AI can be used to worsen scams and fraud. The Office of Management and Budget has written guidance on AI regulation. Other agencies are also looking at AI, but as of 2022, only five of 41 major US agencies had created plans for regulating it. The White House has issued a so-called Blueprint for an AI Bill of Rights. It addresses issues of safety, discrimination, data privacy and people’s right to know and right to opt out — but it does not require agency implementation.
In contrast, European Union countries are taking definitive action: Italy’s data protection authority temporarily banned ChatGPT over its mass collection of personal data and its exposure of minors to “unsuitable” material. Other EU countries are considering similar bans. In addition, the EU has passed major legislation on digital technology that includes managing the risks of AI, notably the General Data Protection Regulation, the Digital Services Act and the Digital Markets Act. Forthcoming is the EU’s AI Act, which will address issues like deepfakes and will ban certain uses deemed to pose unacceptable risks. Overall, compared with the US, EU regulation of AI is clearly more coordinated, comprehensive and enforceable.
Lawyers Weigh In
Lawsuits, legislation, regulation and policy papers are, of course, standard forums where lawyers can influence the public dialogue around AI. Representing the profession, the American Bar Association earlier this year published guidelines calling for AI developers to ensure that their systems are “subject to human authority, oversight and control”; for organizations using AI to be accountable for “any legally cognizable injury or harm caused by their actions, unless they have taken reasonable steps to prevent harm or injury”; and for AI developers to ensure “transparency and traceability” by “documenting key decisions made regarding the design and risk of data sets, procedures and outcomes underlying their AI.”
Bennett believes there’s a place for lawyers to contribute in less conventional ways as well. “I think there’s significant value to be extracted from legal minds taking very technical legal arguments and translating that into a language and a vocabulary that non-legal experts can understand,” Bennett says. “Maybe writing essays about this, or even writing science fiction about some of the puzzles that are on the horizon or projected out maybe 10 years or so,” he says. Bennett, who is developing a course for the law school on the regulation of AI, has done some science fiction writing himself, in part to get out in front of the foreseeable but not yet manifested ramifications of AI.
Jackson notes that educating law school staff and faculty is one path to educating the next generation of lawyers about AI. This summer, the NuLawLab is holding a series of lunch-and-learn sessions focused on how ChatGPT can serve as a rapid research assistant while simultaneously increasing opportunities for student research assistants, how students are using AI in the lab’s Seminar on Applied and Critical Legal Design, and how to apply AI in other disciplines.
Meanwhile, generative AI tools for lawyers are charging ahead. In late May, for example, the technology company Thomson Reuters unveiled a new plug-in for Microsoft 365 Copilot that injects a suite of legal research and drafting capabilities into Copilot’s AI chatbot technology. The demo video shows “Bob,” a junior attorney, opening a Word document and generating a draft contract, which Bob refines using Copilot’s access to Westlaw Precision, Practical Law, Checkpoint Edge and other reliable legal resources. It’s exciting. Maybe the lawyers will live after all.
About the Author
Jeri Zeder is a contributing writer.