Artificial intelligence has transformed how communication is drafted and presented, but the growing reliance on AI-generated content brings its own problems. A misinformation expert recently came under fire after using the technology to draft a declaration for a legal filing, which ended up including fabricated citations. What makes the episode especially ironic is that the filing was part of an effort to combat the use of AI-generated content to mislead voters ahead of elections. The researcher has now admitted to using ChatGPT to streamline his citations and maintains that the error should not affect the substance of the declaration.
A misinformation expert now admits ChatGPT was used for his court filing, saying he was unaware that AI ‘hallucinations’ had added inaccurate details to the document
Jeff Hancock is a Stanford professor and misinformation expert who filed an affidavit supporting a Minnesota law that prohibits the use of deepfake technology to influence elections. A filing originally intended to oppose the use of AI to deceive voters is now facing heavy criticism for, ironically, containing AI-generated details, including false information, that render parts of it unreliable and inaccurate.
In an additional declaration, the misinformation expert has now admitted that he used GPT-4o to organize his citations but was unaware that it had added fake details and fabricated references. He denies using the tool for any other part of the document and describes the error as unintentional. In the later submission, he wrote:
I wrote and reviewed the substance of the declaration, and I stand firmly behind each of the claims made in it, all of which are supported by the most recent scholarly research in the field and reflect my opinion as an expert regarding the impact of AI technology on misinformation and its societal effects.
Hancock further explained that he used Google Scholar alongside GPT-4o to compile the citation list, but that the tool was not used to draft the document itself. He emphasized that he was unaware of AI hallucinations, which is what led to the citation errors. Hancock then stressed that he stands by the points made in the declaration and that they should not be undermined by the confusion. He expressed:
I did not intend to mislead the Court or counsel. I express my sincere regret for any confusion this may have caused. That said, I stand firmly behind all the substantive points in the declaration.
Whether or not the court accepts Hancock’s explanation for the errors in his submission, the episode highlights the risks of using AI tools in legal contexts.
Read the full story on Wccftech.