In the contemporary digital media landscape, the confrontation between major press institutions and artificial intelligence companies has intensified considerably. The New York Times and the Chicago Tribune, two pillars of American journalism, have recently filed a lawsuit against Perplexity AI, a California startup specializing in artificial intelligence technologies. This legal action highlights growing tensions around intellectual property, journalistic ethics, and respect for original content in an era where AI is increasingly involved in the production and distribution of information.
The core of the dispute lies in the accusation of plagiarism made by these two media outlets against Perplexity. According to them, the startup copied and used articles, photos, and videos from their archives without authorization to feed its AI tools, notably its chatbot and a browser named Comet. This massive and direct appropriation of exclusive content raises concerns about how artificial intelligence companies exploit journalistic data, and poses broader questions about intellectual integrity and readers' trust in traditional media.
Beyond the mere debate on content use, the plaintiffs also denounce an abusive use of their brand image, particularly when Perplexity attributes certain erroneous information — generated by the AI, a phenomenon called “hallucination” — to their titles, potentially contributing to the spread of misinformation under their name. This issue highlights the ethical challenges faced by both media companies and artificial intelligence developers as of mid-2025.
Origins and context of plagiarism accusations against Perplexity
The dispute between the New York Times, the Chicago Tribune, and Perplexity AI is part of a series of similar legal conflicts marking the rapid expansion of artificial intelligence technologies applied to media. For several years, the two renowned newspapers repeatedly warned Perplexity about the risks of unauthorized use of their content, but these warnings went unheeded.
Perplexity is thus accused of having integrated large excerpts of articles into its products, especially into its AI-based search engine, which generates automatic, contextualized responses. In other words, the original texts are sometimes reproduced almost identically, calling into question the originality of these machine-generated answers.
Faced with this situation, the New York Times and the Tribune decided to assert their rights through a targeted legal procedure, considering that the massive use of their archives brings them neither remuneration nor adequate recognition. These accusations fall within a broader framework for the protection of intellectual property in the digital sphere, where the rules often seem unclear in the face of rapid technological innovation.
- Repeated warnings sent to Perplexity without satisfactory response
- Near-verbatim reuse of original articles in AI responses
- Formalized legal recourse to assert the media’s rights
- Protection of archives and exclusive content to preserve economic and editorial value
| Actor | Type of accusation | Consequences mentioned |
|---|---|---|
| New York Times | Plagiarism, copyright infringement, abusive trademark use | Attribution of erroneous content, loss of reader trust |
| Chicago Tribune | Copying articles, unauthorized collection of multimedia content | Decrease in archive value, harm to editorial work |
| Perplexity AI | Unauthorized use of content | Legal impact and potential financial damages |

Impact of the conflict on intellectual property and digital journalism
The complaint filed by these two media outlets symbolizes a crucial battle for the preservation of intellectual property in the digital age. Journalism, which relies on rigorous research, verification, and writing, sees its foundations threatened by technologies capable of reproducing content without authorization or remuneration.
The issue is also economic: the value of articles, in-depth investigations, and journalistic creativity today represents a major asset for media companies. Uncontrolled exploitation by artificial intelligence startups weakens this model, posing a considerable challenge to the financial balance of newsrooms already facing declining subscriptions and advertising revenues.
Moreover, the impact is felt in the quality of information. When excerpts from articles are inserted into AI responses without context or editing, the original message can be distorted, creating misunderstandings or spreading inaccurate information. The New York Times highlighted this aspect in its complaint, pointing to AI hallucination phenomena as a source of misinformation mistakenly attributed to its outlets.
- Strengthening legislation around the protection of digital content
- Need to adapt copyright law to new technologies
- Risk of dilution of traditional editorial quality
- Economic challenge for press companies
- Bias and errors linked to AI-produced results
| Consequence | Description | Actor concerned |
|---|---|---|
| Erosion of trust | Biased reading due to information erroneously attributed to the New York Times | Media and public |
| Financial loss | Lack of remuneration for the use of original content | Traditional media |
| Regulatory pressure | Call for legal update to regulate AI | Legislators and AI companies |
Different strategies of the media facing AI giants
The legal approach initiated by the New York Times and the Chicago Tribune against Perplexity is part of a broader trend in which some news organizations choose legal confrontation to protect their interests, while others opt for a strategy based on negotiation and commercial partnerships.
OpenAI, for example, has concluded several agreements with media groups, allowing regulated exploitation of their content. Similarly, the New York Times has signed a partnership with Amazon, reportedly worth up to 25 million dollars per year. These alliances reflect a willingness to find common ground that preserves both the rights of the media and the ambitions of artificial intelligence companies.
The divide between those who prefer the contractual route and those who rely on the courts highlights the legal and moral complexity of integrating AI into the media landscape. Negotiations can sometimes be complex but may prevent damaging public conflicts.
- Choice between negotiation and legal procedure depending on the stakeholders
- Commercial agreements with OpenAI and others for regulated content use
- Lucrative partnerships between media and digital giants
- Impacts on innovation and creative freedom of AI tools
| Media | Strategy | Concrete example |
|---|---|---|
| New York Times | Dual approach: legal action and commercial partnership | Partnership with Amazon and lawsuit against Perplexity |
| Chicago Tribune | Legal action | Accusations against Perplexity for plagiarism |
| OpenAI | Contractual agreements with media | Multiple licensing agreements with press groups |

Potential legal consequences for Perplexity in the plagiarism case
The charges against Perplexity carry several significant legal consequences. The alleged copyright infringement could result in substantial financial penalties, including damages awarded to the plaintiffs. Furthermore, the company could be compelled to modify its commercial and technological practices to avoid future infringements.
Beyond financial aspects, this procedure could also impose on Perplexity an obligation for increased transparency toward its users concerning the origin of the content used to feed its systems. This raises questions of responsibility and ethics in the design of AI tools, which are crucial in a rapidly evolving sector.
Moreover, the complaint highlights a more delicate point: Perplexity's abusive use of the New York Times brand, wrongly attributing information to the outlet in a way that could be considered deception or misinformation. This dimension adds a further layer of legal and media complexity to the dispute.
- Financial risks related to damages awards
- Enhanced scrutiny of content collection and use practices
- Transparency obligations in communicating sources
- Possible impact on the startup’s reputation
- Issues surrounding ethics and responsibility of AI systems
| Type of sanction | Expected effect | Implication for Perplexity |
|---|---|---|
| Damages | Financial compensation to media | Perplexity must pay significant sums |
| Modification of practices | Compliance with intellectual property laws | Adjustment of algorithms and collection protocols |
| Increased transparency | Clear information to users | Explicit communication about data sources |
| Action on reputation | Possible harm to public image | Perplexity risks loss of customer trust |
Ethical challenges raised by the use of media content in artificial intelligence
The controversy pitting the New York Times and the Chicago Tribune against Perplexity goes far beyond legal issues. It raises major ethical challenges related to the use of media content in AI tools. The heart of the debate lies in respect for journalistic work, explicit recognition of sources, and the prevention of misinformation.
Media outlets invest thousands of hours investigating, cross-checking, and producing quality articles. Exploiting their work without consent undermines this value and calls into question the fair sharing of the profits generated by AI technologies that rely in part on this original content.
Furthermore, the hallucination phenomenon inherent to AI systems sometimes leads to fabricated information being falsely attributed to major media outlets. This aggravates public distrust and compromises journalism's fundamental role of informing with rigor and impartiality.
- Respect for content creators and recognition of sources
- Fair sharing of profits from data use
- Risk management related to misinformation and AI errors
- Moral responsibility of technology companies
| Dimension | Ethical issues | Consequences |
|---|---|---|
| Creation and ownership | Recognition of journalistic work | Protection of authors’ rights |
| Misinformation | Risks related to AI hallucinations | Loss of credibility for media |
| Transparency | Clarity about the origin of content | Renewed public trust |
| Economic sharing | Equitable distribution of revenues | Viability of quality journalism |
Reactions from the journalistic community and implications for the media
The complaint from the New York Times and the Chicago Tribune had a profound impact on the journalistic and media community in 2025. It highlights the tension around content protection in the face of the rapid expansion of AI tools capable of reproducing and redistributing articles without filters or permissions.
For many newsrooms, this legal confrontation embodies a struggle to safeguard the intrinsic value of journalism, often threatened by the free and instant nature of online information. At the same time, media professionals question the need to renew their economic models and strengthen collaboration with technological stakeholders to ensure legal and ethical use of their content.
- Strong support for copyright protection
- Calls for international regulation on data use
- Search for alternatives between confrontation and cooperation
- Impacts on training for journalists about digital issues
| Concerned group | Action or attitude | Consequence |
|---|---|---|
| Traditional media | Support for legal procedures | Strengthening of legal protections |
| Professional organizations | Promotion of regulation and ethics | Better international recognition |
| Young journalists | Inclusion of AI issues in training | Adaptation to new digital challenges |
Artificial intelligence technologies and developers’ responsibility
The Perplexity case represents an emblematic example of the responsibilities borne by creators of artificial intelligence tools. As these technologies grow in power and sophistication, their ability to use, transform, and produce content from various sources requires increased vigilance.
Developers must not only ensure that their products comply with legal frameworks, but also anticipate indirect impacts on the reputation of sources, the truthfulness of information, and public trust. The integration of control mechanisms, transparency regarding the origin of data, and the limitation of errors such as hallucinations are all key challenges.
- Strict legal framework for licenses and content uses
- Application of watermarks or marks for generated texts
- Audit and automated control mechanisms
- Team training on ethical and legal aspects
| Responsibility | Required action | Objective |
|---|---|---|
| Respect for copyright | Obtaining licenses, content filtering | Avoid disputes and potential litigation |
| Transparency | Informing users about data origin | Maintain trust and clarity |
| Error correction | Reducing AI hallucinations | Limit misinformation |
Evolution perspectives: toward a harmonized framework between AI and media
The controversy around Perplexity is representative of the many challenges marking the coexistence between artificial intelligence and traditional media. In 2025, the need for a clear legal and ethical framework is increasingly pressing to regulate the use of journalistic content in AI systems.
Current discussions among technology companies, regulators, and news organizations are moving toward shared standards that combine rights protection, fair revenue sharing, and information quality assurance. The establishment of trust labels for AI content, enhanced transparency mechanisms, and the promotion of contractual agreements could offer a pragmatic path forward.
As the boundaries between AI and human creation continue to blur, the future of journalism seems to need to be written in this complex interweaving — where technological innovation coexists with the preservation of recognized ethics and professionalism.
- Development of international standards on intellectual property and AI
- Promotion of contractual partnerships between media and AI companies
- New certifications for the authenticity of generated content
- Strengthening vigilance against misinformation
| Initiative | Description | Expected impact |
|---|---|---|
| International standards | Harmonization of legislations and best practices | Reinforced copyright protection |
| Contractual agreements | Regulated collaboration for content use | Fair revenue sharing |
| Certifications | Trust labels for AI content | Better identification of sources and credibility |
| Awareness | Actions against AI-related misinformation | Increased public trust |