ChatGPT Creator OpenAI Faces Class Action Lawsuit, FTC Investigation

By RZR News Team
Jul 21, 2023


The recent class action lawsuit filed against ChatGPT’s owner has raised serious concerns among conservatives, shedding light on potential violations of data privacy rights and the provision of potentially false information to users. This legal challenge underscores the critical need for safeguarding user data and ensuring transparent and reliable AI interactions.

The core issue at hand revolves around data privacy. As the custodian of vast user interactions and personal information, ChatGPT (and its owner) carries a significant responsibility to protect and respect user data. Any unauthorized use or mishandling of this sensitive information may infringe upon the fundamental right to privacy that conservatives staunchly defend.

Moreover, the allegations of potentially false information being provided by ChatGPT call into question the reliability and integrity of the AI system. Upholding truth and accuracy in all forms of communication is paramount and AI platforms should be held to the same standard. Providing users with misleading or false information undermines trust in AI technology and impedes the responsible use of such systems.

Transparency is a cornerstone of responsible AI deployment. Users have the right to know how their data is being used and the limitations of the AI’s capabilities. Failure to be forthright about data practices and AI functionalities erodes user trust and can have far-reaching consequences on public perception of AI technology as a whole.

While the government should remain limited, it is nevertheless imperative for private entities, including AI platforms, to self-regulate and uphold data privacy standards. The class action lawsuit against ChatGPT’s owner serves as a wake-up call for the industry to take data privacy seriously and adopt measures that prioritize user rights and data protection.

The class action lawsuit against ChatGPT’s owner reflects the urgent need to uphold data privacy rights and ensure that AI systems provide reliable and accurate information. Personal privacy and truthful communication are cornerstone values and the responsible use of AI technology must align with them. 

Learn more about the conservative viewpoint

A new lawsuit against OpenAI could decide whether the company’s use of training data scraped from the public internet may continue. A new class action lawsuit accuses ChatGPT creator OpenAI of unlawfully scraping data from the web and using the stolen information to create its automated products. As OpenAI continues to grow and expand its business, the controversial nature of the technology it sells may hinder its future success.

There are numerous lawsuits filed against OpenAI. One claims that OpenAI’s entire business model is based on theft, accusing the company of building its products on stolen private information. Another was filed on behalf of numerous authors who claim their copyrighted works were used by OpenAI in its effort to gather data to train its algorithms. Finally, a suit filed shortly after ChatGPT’s release by the offices of Joseph Saveri accuses OpenAI and its partner Microsoft of ripping off coders in an effort to train GitHub Copilot, an AI virtual assistant. It is no surprise that the instant answers and creative inspiration ChatGPT has to offer ultimately derive from an original body of information that had to be gathered from somewhere.

Is there a case to be made against OpenAI, and will it lead to stricter regulations for generated content? One could argue that a ruling in favor of AI technologies would make it easier for AI to evolve through development and innovation, while a ruling against AI would open the door to regulations and could require an approval process before certain content could be provided. A provision of the 1996 Communications Decency Act, Section 230, grants online services immunity from libel claims over content produced and posted by their users. The lawsuits raise the question of whether the content ChatGPT produces is more akin to search engine results or to original content. It is possible that OpenAI is not protected, but even so, it is far from clear whether that would be enough to win a case against OpenAI, because the claims would be hard to prove.

– Briauna B.

Learn more about the independent viewpoint

OpenAI, the artificial intelligence company that created the widely used chatbot ChatGPT, is now facing serious legal action in federal court. According to the class action lawsuit, OpenAI violated several laws, including the Computer Fraud and Abuse Act (CFAA), when it began using private data for training purposes without compensating the people that data came from.

On top of this lawsuit, the Federal Trade Commission (FTC) announced that it will open an investigation into whether OpenAI trained ChatGPT in a way that causes it to feed false, and possibly unauthorized, information to users.

When considering the rise of ChatGPT, there is no doubt that its vast, seemingly limitless library of information held major appeal for users. However, all information must come from a source. To obtain the data it uses to serve consumers, ChatGPT learns from and synthesizes the information available online. In other words, ChatGPT is, in effect, a compressed version of the internet.

Because ChatGPT is able to access and present information from every corner of the web, people, such as comedian Sarah Silverman, argue that it is committing copyright infringement in the process. 

However OpenAI tries to defend itself, it is apparent that a platform designed to grab and share vast amounts of information may be treading dangerous waters. Unlimited, or near-unlimited, access could lead to several legal repercussions, including claims of copyright infringement, defamation, breach of privacy, and extortion.

Now, it is up to the federal courts to decide whether OpenAI committed any wrongdoing and, if it did, what steps it must take to make the plaintiffs whole.

– James Demertzis

Learn more about the liberal viewpoint

A professor in college once posed this question to my class: how do you hold an artificial intelligence (AI) system, or any technological system, accountable for its actions in a court of law? The answer is tricky, to say the least, and may not have one single right answer. Although a system itself cannot be put on trial, you can put its creators and overseers on the stand.

Microsoft-backed OpenAI is facing a lawsuit over its generative AI invention, ChatGPT, for training the system on personal information that should not have been divulged. The internet and the use of online data are a fairly new frontier, and in many ways the law and its enforcement are still in their infancy. This is not to say they do not exist, mind you, but fighting data appropriation in cases like this can be considerably harder.

Once again, who is at fault here? Is it the AI system itself, the manufacturers, or a little of both? Well, the most concrete argument to make here is against the AI’s overseers, and here is an example of why. OpenAI disclosed a bug in ChatGPT that exposed some users’ payment-related information and data from other users’ chat histories. That is a pretty big “whoopsie,” and one a jury would be interested in hearing the hows and whys of in open court.

Now, let us take the converse side of that argument with a scenario. You are the defense attorney going in for damage control; how do you try to smooth that one over? Well, you could try to angle it this way: how did the AI system obtain that information, and why, even with a bug, would it release such personal information? That is not bad, but the argument has some pretty noticeable holes.

More likely, one would have to argue that the data in question is “public” and therefore fair game under copyright law to use to train an AI. That argument could be persuasive to a judge, but the judge might ask you how this information was obtained, and how it does not infringe on personal liberties protected under existing copyright and internet-related laws. 

Perhaps the most notable thing I can pose to you, the reader, is this: be careful what questions you pose to AI systems, and how you use them. Your personal liberties and your image, among other things, belong to you as an individual, unless in some cases you choose to sell one of them for profit, but that is a different ball of wax. Be careful of the footprints you leave, for companies will happily follow your trail without your knowledge.

Learn more about the libertarian viewpoint