AI in Practice
Rapid advancements in artificial intelligence promise to revolutionize the way we practice law. Two BU Law alums offer their perspectives.
Until fairly recently, the disruptive potential of artificial intelligence remained largely the province of science fiction. Sure, there were stories of supercomputers vanquishing chess grand masters. And who hasn’t been enticed by eerily on-point personalized social media ads? But AI technology that could fundamentally change the way we live and work? That seemed like a problem for future generations.
Then came the splashy November 2022 debut of OpenAI’s ChatGPT, built on GPT-3.5, and the dizzyingly fast release of the more powerful GPT-4 the following March. The chatbot’s dramatic rise has brought abstract anxieties about an AI-dominated world crashing into present reality. ChatGPT’s simple interface and sophisticated generative capabilities are forcing the world to reckon with the seemingly limitless promise of AI’s new cutting edge—and the existential threat it may pose to humanity.
However, even as AI pioneers sound the alarm and governments around the world grapple with how to develop ethical guardrails, there’s no denying that AI tools like Harvey and ChatGPT, along with AI features built into research platforms like Westlaw, are already transforming the American legal sector. Astonishingly adept at analyzing and summarizing text, the large language models behind these tools can execute, in a matter of seconds, work that would once have required thousands of billable hours. A Goldman Sachs report released earlier this year estimated that 44 percent of legal work could be automated.
So, what does that mean for the average law firm? The average attorney?
Consider, for instance, a typical antitrust investigation. Say a client is looking to merge with another company and has received an information request from the Department of Justice or Federal Trade Commission seeking reams of documents on a tight deadline.
“In the old days”—say, 20 years ago—“the files would probably be in a warehouse somewhere. We’d have associates in hazmat suits come through and pull all the dusty documents and moldy contracts,” says John Koss (’05), who directs the E-Data Consulting Group for Mintz, a 500-lawyer firm with an international reach. Even 5 or 10 years ago, lawyers would have to sift through voluminous email inboxes and large data servers to find the necessary documents. “It would be an extensive project with hundreds of attorneys, and we would be going fast and furious,” he says.
Today, text-based legal data—cell phone records, Microsoft Teams or Slack messages, and emails—can be pulled into an AI review application. Once attorneys train the tool on a sample of relevant documents, it can retrieve a statistically validated set of responsive documents that satisfies the government’s requests and production expectations.
“Before, we would have had to review every single document from one to a million in a linear format,” says Koss, who spent a decade practicing in healthcare and pharmaceutical product liability litigation before founding the Mintz group. “Now, with AI technology, we review maybe 40 percent of the documents to meet a given recall percentage. What used to require a hundred attorneys we can accomplish with 10 to 20. The government’s happy because they get the documents faster. Our clients are happy because they’ve saved money. And the result is more accurate than if we had just completely relied on human beings.”
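The workflow Koss describes is what e-discovery practitioners call technology-assisted review: a classifier learns from a human-coded sample, ranks the rest of the collection by predicted relevance, and review proceeds down the ranking until sampling suggests a recall target has been met. As a rough illustration only—not Mintz’s actual tooling—here is a minimal Python sketch using scikit-learn; the documents, labels, and the 80 percent recall target are invented for the example.

```python
# A toy sketch of technology-assisted review (TAR): fit a relevance
# classifier on a human-reviewed sample, then rank the unreviewed
# collection so reviewers see likely responsive documents first.
# All documents, labels, and thresholds below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set: documents attorneys have already reviewed and coded.
reviewed_docs = [
    "merger agreement between the two companies, board approval attached",
    "lunch order for the quarterly offsite",
    "draft term sheet discussing market share after the acquisition",
    "IT ticket: password reset request",
]
reviewed_labels = [1, 0, 1, 0]  # 1 = responsive to the government's request

# The (in practice, vastly larger) unreviewed collection.
unreviewed_docs = [
    "email thread on pricing strategy after the proposed merger",
    "holiday party invitation",
    "memo from antitrust counsel on the scope of the second request",
]

# Turn text into features and fit a simple relevance classifier.
vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(reviewed_docs), reviewed_labels)

# Score and rank the unreviewed documents by predicted relevance.
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for score, doc in sorted(zip(scores, unreviewed_docs), reverse=True):
    print(f"{score:.2f}  {doc}")

# Review proceeds down this ranking; statistical sampling of the
# remainder estimates when a recall target (say, 80 percent of all
# responsive documents) has been reached and review can stop.
```

In production systems, the “statistically validated” piece Koss mentions comes from sampling the unreviewed remainder to estimate how many responsive documents the ranking has already surfaced—which is how a team can defensibly stop after reviewing 40 percent of the collection.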
Koss advises start-ups and Fortune 500 companies on how to use AI-enabled tech and analytics to solve data challenges and optimize workflows. He says his clients typically see a 95 percent reduction in data submitted for human document review.
So, is Koss worried about making lawyers obsolete? “We’re not trying to take away good work from human beings,” he says. “The reality is doc review can be a drag. Frankly, our associates get a better experience because now they’re spending their time digging into key documents, or learning the case, or engaging in deposition prep. My hope is that we’re helping people do things that are more valuable to their career development.”
The promise of using AI tools to cut costs and boost productivity is immense. Many law firms are already using them for contract review, e-discovery, legal research, drafting basic standard agreements, and predictive analytics—that is, predicting the outcomes of legal cases.
That’s not to say there aren’t serious concerns. The unregulated use of AI technology in a highly regulated industry like the law can present ethical conundrums and legal liability. Koss points to ChatGPT’s unsettling tendency to “hallucinate,” or fabricate, responses to user queries. “ChatGPT will give you absolutely incorrect answers in a very authoritative way,” he says. “It’s unreliable, so right now that limits what lawyers can use it for.”
The other major barrier is privacy. Certain types of AI analytics simply can’t be used when dealing with sensitive information, Koss says. Dropping financial accounts, medical records, or Social Security numbers into the “black box” of an AI tool could be a violation of privacy laws or client privilege.
“We have to be sure the applications we use are secure and have appropriate data management protocols,” Koss explains. “If you’re putting patient, client, or deal information into an AI tool or large language model, the company that designed the program may use that data to train their models and algorithms. We have to be very careful not to introduce the potential for this data to be shared, stored, and kept by non-permissible parties or locations. That’s exactly what privacy legislation and confidentiality provisions are designed to forbid.”
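To make that concern concrete, a sketch like the following gestures at one small piece of such a protocol: masking obvious identifiers before any text leaves a controlled environment. The patterns and the `scrub` helper here are invented for illustration; real data-management protocols involve far more than pattern matching.

```python
import re

# Illustrative only: mask substrings that look like sensitive identifiers
# before text is sent to any third-party AI tool. Real safeguards go far
# beyond regexes (access controls, vendor contracts, audit logging).
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN REDACTED]"),
    (re.compile(r"\b\d{12,19}\b"), "[ACCOUNT REDACTED]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"), "[EMAIL REDACTED]"),
]

def scrub(text: str) -> str:
    """Replace identifier-like substrings with redaction markers."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

record = "Patient Jane Roe, SSN 123-45-6789, contact jroe@example.com."
print(scrub(record))
# Patient Jane Roe, SSN [SSN REDACTED], contact [EMAIL REDACTED].
```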
Ashley Jackson (’11) carved out her niche in privacy law early on. Jackson was just a few years out of law school, working as a litigation associate, when she was recruited to Sedgwick LLP to help build its new data privacy practice in Chicago. Around the same time, the European Union was preparing to vote on the General Data Protection Regulation (GDPR), which governs the collection and use of personal data by companies.
“Privacy at that point took off,” Jackson recalls. “Even though I had only been working in privacy for two years, I already had more experience than some very senior partners at law firms who were just coming into the space. I saw an opportunity to become an expert and decided to transition to a privacy focus.”
After five years of practicing law in privacy and data security, Jackson took a role as in-house counsel to GE Healthcare, where she gained global insight into how other countries were managing privacy issues. Until June 2023, she led privacy efforts at Olive AI, a healthcare automation company that optimizes revenue cycles and HIPAA compliance through AI products and machine learning. She recently left Olive AI to take on a new role leading international privacy at the Mayo Clinic.
“Healthcare is a great place for demonstrating what ethical AI can look like,” Jackson says, “whether it’s helping to streamline the claims process to reduce administrative costs or helping doctors diagnose a disease quicker. Soon, I think we will very literally see this technology saving people’s lives.”
Although law firms can be slow to change, Jackson predicts the ones that thrive will be those that stay ahead of the tech curve. “AI is going to challenge attorneys to demonstrate we’re a value add,” Jackson says. “You need to know the tech. You’re going to have to get in the weeds. Because if you can’t explain what’s going on, it’s going to be really tough to anticipate regulatory issues and advise on them.”
That’s in part because US regulations haven’t kept pace with technological advances. “It’s hard when things are happening at the speed of light,” Koss says. In the absence of established rules on AI use, industry groups and companies are issuing their own guidance for employees. “These conversations need to be happening globally,” he says. “Companies want to operate in a way that doesn’t violate privacy laws here or in other countries. We need a common set of guidelines for these tools. For now, we’re piecing it together as we go.”
Jackson points to the EU’s GDPR as a potential model for the US to follow. “It’s a risk-based approach to the evaluation and regulation of AI,” she says. “GDPR basically asks companies: Before you use a data set, know the potential risks and benefits and mitigate those risks.”
She also admires the agility and responsiveness of Singapore’s approach. “They’re trying different things out and seem very receptive to feedback,” Jackson says. “They have technical and industry knowledge at the table because you can’t have politicians and academics coming up with rules that are impractical. We don’t need something reactionary—we need something visionary.”
Still, like Koss, Jackson is bullish on a future powered by new technologies. She shares a story from her time at GE Healthcare, when the company designed the first ultrasound system with 3D printing capability. Thanks to the new machine, an expectant mother who was blind could not only listen to her baby’s heartbeat but also experience, through touch, a 3D-printed model of her baby’s ultrasound image.
“I recognize the dangers. And like anything else, AI can be abused,” Jackson says. “But that should never stop us from unlocking all of these beautiful possibilities.”