Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and ...




Chief Justice, Law School Dean Guarded, Hopeful About AI In Justice System

South Dakota Supreme Court Chief Justice Steven Jensen, left, and USD Knudson School of Law Dean Neil Fulton address the Sioux Falls Downtown Rotary Club on April 29, 2024. (John Hult/South Dakota Searchlight)

SIOUX FALLS — Artificial intelligence (AI) could mean more efficient legal offices and lower bills for clients – provided human beings use the technology ethically.

South Dakota Supreme Court Chief Justice Steven Jensen and University of South Dakota Knudson School of Law Dean Neil Fulton offered that tentative conclusion on AI in the legal profession to members of the Sioux Falls Downtown Rotary Club on Monday. 

The law school is teaching students about AI and offering its law librarian as a trainer for practicing lawyers, Fulton said. The judicial system is considering ways the technology might improve efficiency, Jensen said, but is not yet pondering regulations on its use by attorneys.

But both leaders agreed that human decision-making and judgment ought to be top of mind in the use of AI for legal work.

"I gave a speech to the class of 2024 on Friday, and the centrality of the human person to the law was the thrust of it," Fulton said. "One of the things I tell them is that a lot of disciplines will just say, 'Can we?' The law has to step back and say, 'Should we?'"

Some forms of AI have been part of modern life for years. The technology undergirds consumer-facing tools like voice dictation on smartphones, spell checkers in word processors, and chatbots that screen customer service queries and hold your place in line.

Public awareness of generative AI exploded with the November 2022 release of ChatGPT, a text creation tool. Generative AI involves asking a tool like ChatGPT (for text) or Midjourney (for art) to produce something – a term paper, an image, a screenplay or a legal brief – in a matter of seconds. Concerns about "hallucinations," wherein an AI tool makes up facts to include in the final product, quickly emerged as a danger of relying too heavily on AI-generated material.
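To make that workflow concrete, here is a minimal sketch of what "asking a tool to produce something" looks like in code, using the OpenAI Python client; the model name and prompt are illustrative, not taken from any of the cases discussed here. Note that nothing in the response marks invented facts, which is why hallucinated output has to be caught by a human reader.

```python
# Minimal sketch of a generative AI request via the OpenAI Python client.
# Model name and prompt are illustrative; other LLM APIs follow the same shape.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Draft a two-paragraph legal brief summary."}
    ],
)

# The API returns fluent text either way; nothing here indicates whether any
# cited case or fact is real, so verification is left to the human user.
print(response.choices[0].message.content)
```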

In the legal field, AI tools have returned legal briefs citing cases that don't exist. A federal judge in New York sanctioned a lawyer in that state last June for submitting briefs with phony citations.

"The running joke is now, 'Did you write this brief, or did AI write it?'" said Barry Sackett, a Rotarian and lawyer who led the Monday discussion with Jensen and Fulton.

Sackett wanted to know how the chief justice and law school dean are thinking about the technology, given its prominence across multiple areas of work and play. 

The initial stumbles with hallucinations and worries about students using AI to cheat factored into some of Fulton's first conversations about the technology with law school faculty in Vermillion.

That was a year ago. Now, Fulton said, the school works with – and teaches students about – the AI tools embedded in LexisNexis, one of two major legal research companies in the U.S.

Right now, he said, the technology can generate a brief that's "about 50% accurate."

"The human element is working out that other 50%," Fulton said. "But that is a savings to your client. It's a savings of time."

Jensen agreed, saying it's possible that AI could make offices efficient enough to shave dollars off legal bills. Lawyers often charge billable hours in 15-minute increments. 

"We've not developed the rules, because frankly, if you start developing rules, sometimes you preclude innovation and the ability to improve what you're doing," Jensen said. 

The state already has strict ethical standards on the truthfulness of evidence, he said. Those standards apply to any brief signed by any lawyer, regardless of whether someone on their staff or an AI tool helped write it.

"Are we getting briefs from AI right now? Maybe, and I don't have a problem with it, as long as the lawyers are doing the homework to make sure that the briefs are accurate," Jensen said.

He said the state has begun looking at ways to streamline certain processes for the sake of efficiency. 

But there are lines to be drawn on AI and its use in the justice system, he said.

"We can't depend mostly on a machine to decide cases," Jensen said. "We can't depend upon a machine to argue our cases. There's so much of a human aspect in what lawyers do and what judges do that we have to make sure that that human aspect isn't lost."

Both leaders also told the Rotary Club that the law will need to keep up with the technology, wherever it winds up. 

"We're just like everyone else, in that we're trying to figure this out as we go because it is moving so quickly," Fulton said. "I think anybody who tells you they have this figured out is fibbing."




Former Amazon Exec Alleges She Was Told To Ignore The Law While Developing An AI Model — 'Everyone Else Is Doing It'

A former Amazon executive is accusing the company of telling her to violate copyright law to compete with other tech giants in AI.

Viviane Ghaderi filed a lawsuit against Amazon in Los Angeles Superior Court, saying she was discriminated against and ultimately fired.

Ghaderi said she was tasked with flagging possible legal violations in how Amazon was developing its LLMs, or large language models.

(LLMs are text-generating services like OpenAI's ChatGPT or Google's Bard.)

The complaint says Ghaderi's boss, Andrey Styskin, told her to ignore legal advice and Amazon's own policies to get better results.

From the lawsuit:

Styskin rejected Ms. Ghaderi's concerns about Amazon's internal policies and instructed her to ignore those policies in pursuit of better results because "everyone else"—i.e., other AI companies—"is doing it."

The allegation about Amazon's AI work came in a larger case where Ghaderi alleges she was demoted and ultimately fired for taking maternity leave.

In a statement to Business Insider, Amazon spokesperson Montana MacLachlan did not directly address Ghaderi's claims.

She did say that Amazon does not "tolerate discrimination, harassment, or retaliation in our workplace," and that it investigates allegations and punishes wrongdoing.

Ghaderi said she took her complaints to HR, which dismissed her claims before the company ultimately fired her.

BI also sent messages to Ghaderi and the Amazon employees named in the complaint but did not immediately hear back.

Ghaderi's lawsuit alleges that Amazon violated California's law protecting whistleblowers and statutes outlawing pregnancy discrimination.

Her attorneys said in the filing that Amazon's haste to compete in AI left employees like her as "collateral damage in the battle for the future of the technology industry."

"It takes a lot of courage to come forward against a company like Amazon," said her lawyer, Julian Burns King, a partner at King & Siegel LLP. "We are proud to represent Ms. Ghaderi and look forward to proving her allegations in discovery and at trial."

Ghaderi's LinkedIn said she worked at Amazon until January 2024, though the complaint says she was fired on November 17, 2023.

Ghaderi doesn't appear to have spoken about her departure from Amazon other than in the lawsuit.

Though Ghaderi's case has yet to be tested in court, the frantic rush in Silicon Valley to develop AI products is well documented.

That haste reached Amazon, too — in November 2023, Business Insider's Eugene Kim reported that it was racing to launch new AI products comparable to Microsoft's.

AI development is straining the limits of copyright law, as tech companies and publishers wrestle over the ownership and usage of the vast quantities of text the AI models ingest.

Some publishers allege that tech companies owe them billions of dollars for using their work.

The New York Times is pursuing a landmark case against OpenAI, which it says owes it billions for using its content to train ChatGPT.

Others have taken a different approach — Axel Springer, BI's parent company, struck a deal with OpenAI allowing use of its articles.


Developers Need Crystal Ball For AI Legislation

AI developers are watching Congress closely to gauge how forthcoming legislation could impact the technology. (Photo by Anna Rose Layden/Getty Images)


Artificial intelligence developers have largely lived under a cloud of uncertainty around how regulatory bodies will influence their work, but for the first time, that cloud has started to clear.

The European Union, an outspoken leader in establishing rules for AI development, recently passed the most significant piece of AI-focused legislation to date. The EU AI Act represents one component of the bloc's plan to "support the development of trustworthy AI" and clearly define how and where developers can build AI. In a realm of technology that mostly looks like untamed wilderness, the EU has started to build roads that will shape emerging tools and capabilities.

While legislators start to catch up to the developers, health tech companies need some ability to predict the future. The pace of legislative action in the U.S. has lagged miles behind the breakneck speed of AI innovation, but developers here are watching the EU closely because it could offer a framework for the U.S. to emulate. Those of us building solutions for healthcare face the highest of stakes – saving human lives – and tight regulatory oversight. We must weigh the guidelines already in place and the legislation slowly moving through government to build solutions that work now and under forthcoming regulations.

What follows is a view of the current regulatory landscape for AI and the pending legislation that tech developers need to keep an eye on as it moves through Congress.

Rules In Effect Today

The Office of the National Coordinator for Health Information Technology (ONC) finalized a new rule in December with a key component for AI developers. The Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) rule includes a requirement that ONC-certified health IT solutions build their AI algorithms transparently.

Some early AI implementations have shown biases, including racial biases, in predictions that could potentially widen the disparity in health outcomes for minority patients. ONC's ruling aims to mitigate those dangers by giving clinical users "access [to] a consistent, baseline set of information about the algorithms they use" and allow them to identify any issues.
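As a rough illustration of what exposing that baseline information could look like in practice, here is a hypothetical sketch of algorithm metadata a certified health IT vendor might surface to clinical users. The field names and example values are invented for illustration; they are not the attributes enumerated in the HTI-1 rule itself.

```python
# Hypothetical transparency metadata for a clinical algorithm. Field names
# are illustrative only, not the attributes defined in the HTI-1 rule.
from dataclasses import dataclass, field

@dataclass
class AlgorithmTransparencyCard:
    name: str
    intended_use: str
    training_data_summary: str                 # populations the model was trained on
    known_limitations: list[str] = field(default_factory=list)
    fairness_evaluations: list[str] = field(default_factory=list)

# Example card a clinician could consult before trusting a model's output.
sepsis_card = AlgorithmTransparencyCard(
    name="sepsis-risk-v2",
    intended_use="Flag adult inpatients at elevated sepsis risk for clinician review",
    training_data_summary="Adult inpatient encounters, 2015-2022, three health systems",
    known_limitations=["Not validated for pediatric patients"],
    fairness_evaluations=["Sensitivity compared across self-reported race groups"],
)
```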

"Transparency" has been the operative word in discussions about AI regulation, including in the Biden administration's executive order on AI last fall, which calls for developers to share safety test results with the government. AI's early successes, such as Google's solution for diagnosing diabetic retinopathy, and its missteps, including Medicare plans denying necessary care, both offer learning opportunities to improve AI's development and implementation through data sharing.

Medicare has already taken action to improve its use of algorithms in coverage decisions, issuing new guidelines that call for a balance of human influence in the decision-making process. In short, a Medicare coverage determination cannot rely solely on an algorithmic process; it must account for "the individual patient's medical history, the physician's recommendations, or clinical notes." Developers of AI in healthcare should keep in mind that the solutions they build have to interface with clinicians in an assistive capacity, not as the final word on patient care decisions. They also need to standardize how they share their test results with regulators, giving a clear and consistent view into the development process.
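One way to read "assistive capacity, not the final word" in code: the algorithm supplies a recommendation, but no determination can be issued until a human reviewer attests to the patient-specific factors the guidelines name. The sketch below is hypothetical and not drawn from CMS guidance or any vendor's system.

```python
# Hypothetical assistive coverage-review flow: the algorithm recommends,
# but a determination requires documented human review first.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CoverageReview:
    algorithm_recommendation: str            # e.g. "approve" or "deny"
    medical_history_reviewed: bool = False
    physician_recommendation_reviewed: bool = False
    clinical_notes_reviewed: bool = False
    reviewer_decision: Optional[str] = None

    def finalize(self, decision: str) -> str:
        # Refuse to issue a determination on algorithm output alone.
        if not (self.medical_history_reviewed
                and self.physician_recommendation_reviewed
                and self.clinical_notes_reviewed):
            raise ValueError("Human review of patient-specific factors is required")
        self.reviewer_decision = decision
        return self.reviewer_decision
```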

Legislation To Watch

Senator Ron Wyden, an Oregon Democrat, introduced the Algorithmic Accountability Act last September. It would create a new bureau within the Federal Trade Commission dedicated to taking in impact reports from AI builders and using those reports to build a repository of information about those AI tools.

The bill aims to shed light on where exactly AI is influencing decisions – in medicine and other fields, such as tenant screening for renting a home. It also aims to provide structure to AI reporting without creating an entirely new agency. By working directly with the FTC, AI developers could engender more trust from patients, who have largely shown a distrust of AI in their medical care.

Generative AI, or large language model-based AI, is also sure to attract the attention of regulators for its proclivity for "hallucinations" and inaccuracies. Some companies, including KeyBank, have already limited or banned internal use of generative AI out of concern for data privacy or anticipating restrictions from legislators.

Recent reports suggest Apple and Google are discussing building Google's Gemini AI platform into the iPhone. This partnership would put Google firmly in command of how consumers leverage Gen AI – very much in the way Google became the de facto search tool for Apple's broad base of iPhone users. However, the deal would certainly attract attention from the Department of Justice over anti-competitive behavior, just as the search engine partnership between the two companies did. How the DOJ handles each of those partnerships could indelibly shape the consumer relationship with Gen AI.

At this stage, many more questions about AI guidelines exist than answers, but the picture is growing less murky by the day. With the context of some legislation already in place, and an eye toward anticipated rules, developers can still build successful AI-powered tools that positively impact healthcare and stay within the boundaries of federal regulation.





