NYT approves select AI tools from OpenAI, Amazon, Google, & Microsoft for editorial & product staff

The New York Times (NYT) has officially approved the use of select internal and external AI tools for its editorial and product teams. This move is part of a broader strategy to integrate artificial intelligence into its operations while maintaining journalistic integrity and addressing potential risks.

Key Developments in AI Integration at NYT

Internal AI Tools

NYT has developed an in-house AI tool called Echo, currently in beta, to assist journalists in summarizing articles, briefings, and interactive content. Echo is designed to streamline workflows by condensing information for various uses, such as newsletters or SEO headlines.

The company has also established a specialist team, nytDEMO, which focuses on creating AI-driven tools to better understand reader engagement and optimize content strategies.

External AI Tools

NYT has greenlit the use of several external AI tools, including:

  • GitHub Copilot for coding assistance.
  • Google Vertex AI for product development (a basic call is sketched after this list).
  • NotebookLM and OpenAI’s non-ChatGPT API for research and content creation.
  • Select Amazon AI products for specific applications.
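
For context, this is roughly what a basic Vertex AI text-generation call looks like through Google's Python SDK. The project ID, region, model, and prompt below are placeholders for illustration, not details of NYT's actual deployment.

```python
# Minimal sketch of a Vertex AI text-generation call (google-cloud-aiplatform).
# Project ID, region, and model name are placeholders, not NYT's setup.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gemini-1.5-flash")  # assumed model choice
response = model.generate_content(
    "Suggest three names for a reader-engagement dashboard feature."
)
print(response.text)
```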

Access to OpenAI’s API is restricted and requires approval from NYT’s legal department to mitigate risks related to copyright infringement.
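
As an illustration of what sanctioned, non-ChatGPT API usage might look like in practice, here is a minimal sketch using OpenAI's Python SDK for one of the approved tasks, generating an SEO headline. The model name and prompt are assumptions, and any real use at NYT would first require the legal sign-off described above.

```python
# Illustrative sketch only: generating an SEO headline via OpenAI's API
# (the non-ChatGPT route). Model and prompt are assumptions; per NYT
# policy, real access would require legal-department approval.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def seo_headline(article_text: str) -> str:
    """Return one concise, search-friendly headline for the article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You write concise, SEO-friendly news headlines."},
            {"role": "user",
             "content": f"One headline, under 70 characters:\n\n{article_text}"},
        ],
    )
    return response.choices[0].message.content.strip()

print(seo_headline("The New York Times approved select AI tools for staff..."))
```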

Editorial Use Cases

The approved tools are intended to assist journalists with tasks such as generating SEO-optimized headlines, summarizing complex reports, brainstorming ideas, analyzing internal documents, and creating audience-targeted promotional materials. However, strict guidelines prohibit using AI to draft or significantly revise articles, or to input third-party copyrighted material into the tools.

Training and Guidelines

NYT has introduced mandatory training programs for staff on responsible AI usage. Editorial guidelines emphasize that generative AI should be treated as a tool that enhances journalistic capabilities rather than a replacement for human judgment, and the company has also outlined specific dos and don'ts for using AI in reporting.

Potential Risks and Safeguards

The NYT remains cautious about risks such as copyright infringement, data privacy issues, and the potential exposure of confidential sources. Using unapproved AI tools, for example, could expose source identities or reporters' notes to third-party systems.

Legal concerns are particularly relevant given NYT’s ongoing lawsuit against OpenAI and Microsoft over alleged copyright violations related to training models on its content without permission.

Broader Implications

The integration of AI at NYT reflects a growing trend in the media industry to leverage generative AI for efficiency while navigating ethical and legal challenges. By adopting a measured approach that combines internal innovation with external partnerships, NYT aims to enhance its journalistic mission while safeguarding intellectual property and editorial standards.