Dataset Viewer
Auto-converted to Parquet
Column: text (string, lengths 0–2.03k)
Anthropic PBC is an American artificial intelligence (AI) startup company founded in 2021.
Anthropic has developed a family of large language models (LLMs) named Claude.
According to the company, it researches and develops AI to "study their safety properties at the technological frontier" and use this research to deploy safe models for the public.[5][6]
Anthropic was founded by former members of OpenAI, including siblings Daniela Amodei and Dario Amodei (who serves as CEO).[7] In September 2023, Amazon announced an investment of up to $4 billion, followed by a $2 billion commitment from Google in the following month.[8][9][10] As of September 2025, Anthropic is the fourth most valuable private company globally, valued at over $183 billion.[11][12]
Founding and early development (2021–2022)
Anthropic was founded in 2021 by seven former employees of OpenAI, including siblings Daniela Amodei and Dario Amodei, the latter of whom served as OpenAI's Vice President of Research.[13][14]
In April 2022, Anthropic announced it had received $580 million in funding,[15] including a $500 million investment from FTX under the leadership of Sam Bankman-Fried.[16][3]
In the summer of 2022, Anthropic finished training the first version of Claude but did not release it, mentioning the need for further internal safety testing and the desire to avoid initiating a potentially hazardous race to develop increasingly powerful AI systems.[17]
Legal and strategic partnerships (2023)
On September 25, 2023, Amazon announced a partnership with Anthropic, with Amazon becoming a minority stakeholder by initially investing $1.25 billion, and planning a total investment of $4 billion.[8] As part of the deal, Anthropic would use Amazon Web Services (AWS) as its primary cloud provider and make its AI models available to AWS customers.[8][18] The next month, Google invested $500 million in Anthropic, and committed to an additional $1.5 billion over time.[10]
Major investments and acquisitions (2024)
In February 2024, Anthropic hired former Google Books head of partnerships Tom Turvey, and tasked him with obtaining "all the books in the world".[19] The company then began using destructive book scanning to digitize "millions" of books to train Claude.[19]
In March 2024, Amazon maxed out its potential investment from the agreement made in the prior year by investing another US$2.75 billion into Anthropic, completing its $4 billion investment.[9]
In November 2024, Amazon announced a new investment of $4 billion in Anthropic (bringing its total investment to $8 billion), including an agreement to increase the use of Amazon's AI chips for training and running Anthropic's large language models.[20]
In 2024, Anthropic attracted several notable employees from OpenAI, including Jan Leike, John Schulman, and Durk Kingma.[21]
Additional funding and partnerships, product improvements (2025)
In early 2025, Anthropic secured significant funding and partnerships while continuing its focus on AI safety research and policy advocacy.
The company raised $3.5 billion in a Series E funding round in March, achieving a post-money valuation of $61.5 billion, led by Lightspeed Venture Partners with participation from several major investors.[22][23] The investment enabled Anthropic to advance development of next-generation AI systems, expand compute capacity, and accelerate international expansion.[22] A significant partnership was announced in March with Databricks, establishing a five-year strategic relationship to integrate Anthropic's models natively into the Databricks Data Intelligence Platform.
This partnership provided over 10,000 companies access to Claude models for building AI agents that can reason over their enterprise data.[24][25]
Anthropic released several major updates to its Claude AI models throughout 2025.
In May, the company announced Claude 4, introducing both Claude Opus 4 and Claude Sonnet 4 with enhanced coding capabilities and advanced reasoning features.[26] Claude Opus 4 was positioned as a highly-competitive coding model with sustained performance on complex tasks, while Claude Sonnet 4 delivered improved reasoning and instruction-following capabilities.[26] The company also introduced new API capabilities including the code execution tool, Model Context Protocol (MCP) connector, Files API, and prompt caching functionality.[26] In May, Anthropic launched a web search API that enabled Claude to access real-time information from the internet, expanding its capabilities beyond static training data.[27] Claude Code, Anthropic's coding assistant, transitioned from research preview to general availability, featuring integrations with VS Code and JetBrains IDEs and support for GitHub Actions.[26] The product enabled developers to collaborate directly with Claude in their development environment, with the AI capable of making coordinated changes across multiple files and understanding entire codebases.[28]
In July, Anthropic published a report titled "Build AI in America", outlining policy recommendations for domestic AI infrastructure development.[29] The report emphasized the need for substantial investments in computing power and electricity infrastructure, projecting that the U.S. AI sector would require at least 50 gigawatts of electric capacity by 2028 to maintain global leadership.[29]
In September 2025, Anthropic completed a Series F funding round, raising US$13 billion at a post-money valuation of $183 billion.
The round was co-led by Iconiq Capital, Fidelity Management & Research, and Lightspeed Venture Partners, with participation from the Qatar Investment Authority and other investors.[30][31] The same month, Anthropic announced that it would stop selling its products to groups majority-owned by Chinese, Russian, Iranian, or North Korean entities due to national security concerns.[32]
According to Anthropic, the company's goal is to research the safety and reliability of artificial intelligence systems.[6] The Amodei siblings were among those who left OpenAI due to directional differences.[14]
Anthropic incorporated itself as a Delaware public-benefit corporation (PBC), which enables directors to balance the financial interests of stockholders with its public benefit purpose.[33]
Anthropic's "Long-Term Benefit Trust" is a purpose trust for "the responsible development and maintenance of advanced AI for the long-term benefit of humanity"
It holds Class T shares in the PBC which allow it to elect directors onto Anthropic's board.[34][35] As of April 2025, the members of the Trust are Neil Buddy Shah, Kanika Bahl and Zach Robinson.[36]
Investors include Amazon.com for $8 billion,[20] Google for $2 billion,[10] and Menlo Ventures for $750 million.[37]
Claude incorporates "Constitutional AI" to set safety guidelines for the model's output.[42] The name, "Claude", was chosen either as a reference to mathematician Claude Shannon, or as a male name to contrast the female names of other A.I
assistants such as Alexa, Siri, and Cortana.[3]
Anthropic initially released two versions of its model, Claude and Claude Instant, in March 2023, with the latter being a more lightweight model.[43][44][45] The next iteration, Claude 2, was launched in July 2023.[46] Unlike Claude, which was only available to select users, Claude 2 is available for public use.[47]
Claude 3 was released in March 2024, with three language models: Opus, Sonnet, and Haiku.[48][49] The Opus model is the largest.
According to Anthropic, it outperformed OpenAI's GPT-4 and GPT-3.5, and Google's Gemini Ultra, in benchmark tests at the time.
Sonnet and Haiku are Anthropic's medium- and small-sized models, respectively.
All three models can accept image input.[48] Amazon has added Claude 3 to its cloud AI service Bedrock.[50]
In May 2024, Anthropic announced the Claude Team plan, its first enterprise offering for Claude, and Claude iOS app.[51]
In June 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated significantly improved performance on benchmarks compared to the larger Claude 3 Opus, notably in areas such as coding, multistep workflows, chart interpretation, and text extraction from images.
Released alongside 3.5 Sonnet was the new Artifacts capability, which let Claude create code in a dedicated window in the interface and preview selected output, such as websites or SVGs, in real time.[52]
In October 2024, Anthropic released an improved version of Claude 3.5, along with a beta feature called "Computer use", which enables Claude to take screenshots, click, and type text.[53]
In November 2024, Palantir announced a partnership with Anthropic and Amazon Web Services to provide U.S. intelligence and defense agencies access to Claude 3 and 3.5.
According to Palantir, this was the first time that Claude would be used in "classified environments".[54]
In December 2024, Claude 3.5 Haiku was made available to all users on web and mobile platforms.[55]
In February 2025, Claude 3.7 Sonnet was introduced to all paid users.
It is a "hybrid reasoning" model (one that responds directly to simple queries, while taking more time for complex problems).[56][57]
In May 2025, Claude 4 Opus and Sonnet were introduced.
With these models, Anthropic also introduced Extended thinking with tool use and the ability to use tools in parallel.[58]
In August 2025, Claude Opus 4.1 was introduced.[59]
According to Anthropic, Constitutional AI (CAI) is a framework developed to align AI systems with human values and ensure that they are helpful, harmless, and honest.[13][60] Within this framework, humans provide a set of rules describing the desired behavior of the AI system, known as the "constitution".[60] The AI system evaluates the generated output and then adjusts the AI models to better fit the constitution.[60] The self-reinforcing process aims to avoid harm, respect preferences, and provide true information.[60]
Some of the principles of Claude 2's constitution are derived from documents such as the 1948 Universal Declaration of Human Rights and Apple's terms of service.[46] For example, one rule from the UN Declaration applied in Claude 2's CAI states "Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood."[46]
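The critique-and-revision loop described above can be illustrated with a short, hypothetical Python sketch. The `generate` function is a placeholder standing in for any instruction-following language model, and the loop shows only the self-evaluation idea; it is not Anthropic's actual pipeline, which also fine-tunes the model on the revised outputs.

```python
# Illustrative sketch of a constitutional critique-and-revision loop.
# `generate` is a hypothetical stand-in for a call to an instruction-following
# LLM; the fine-tuning step of the real Constitutional AI method is omitted.

CONSTITUTION = [
    "Please choose the response that most supports and encourages "
    "freedom, equality and a sense of brotherhood.",
]

def generate(prompt: str) -> str:
    # Placeholder so the sketch runs end to end; swap in a real model call.
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_prompt: str) -> str:
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique the response below according to this principle:\n"
            f"{principle}\n\nResponse: {response}"
        )
        response = generate(
            f"Rewrite the response so that it addresses the critique.\n\n"
            f"Response: {response}\n\nCritique: {critique}"
        )
    return response  # revised outputs could then be used for fine-tuning

print(constitutional_revision("How should I reply to an insulting message?"))
```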
Anthropic also publishes research on the interpretability of machine learning systems, focusing on the transformer architecture.[13][61][62]
Part of Anthropic's research aims to be able to automatically identify "features" in generative pretrained transformers like Claude.
In a neural network, a feature is a pattern of neural activations that corresponds to a concept.
In 2024, using a compute-intensive technique called "dictionary learning", Anthropic was able to identify millions of features in Claude, including, for example, one associated with the Golden Gate Bridge.
Enhancing the ability to identify and edit features is expected to have significant safety implications.[63][64][65]
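As a rough illustration of the idea only, classical dictionary learning can decompose synthetic "activation" vectors into sparse combinations of learned directions. This toy sketch is not Anthropic's implementation, which trains sparse autoencoders on real model activations at far larger scale; the data below is random and the dimensions are arbitrary.

```python
# Toy dictionary-learning example on synthetic "activation" vectors.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
activations = rng.normal(size=(200, 32))      # stand-in for model activations

learner = DictionaryLearning(
    n_components=64,                          # overcomplete dictionary of "features"
    alpha=1.0,                                # sparsity penalty
    max_iter=20,
    transform_algorithm="lasso_lars",
    random_state=0,
)
codes = learner.fit_transform(activations)    # sparse coefficients per example
features = learner.components_                # learned feature directions

# Each row of `features` is a direction in activation space; in the real
# work, one such direction corresponded to the Golden Gate Bridge concept.
print(codes.shape, features.shape, round(float((codes != 0).mean()), 3))
```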
In March 2025, research by Anthropic suggested that multilingual LLMs partially process information in a conceptual space before converting it to the appropriate language.
It also found evidence that LLMs can sometimes plan ahead.
For example, when writing poetry, Claude identifies potential rhyming words before generating a line that ends with one of these words.[66][67]
Anthropic partnered with Palantir and Amazon Web Services in November 2024 to provide the Claude model to U.S. intelligence and defense agencies.[68] Anthropic's CEO Dario Amodei said about working with the U.S. military:
The position that we should never use AI in defense and intelligence settings doesn't make sense to me. The position that we should go gangbusters and use it to make anything we want — up to and including doomsday weapons — that's obviously just as crazy. We're trying to seek the middle ground, to do things responsibly.[69]
In June 2025, Anthropic announced a "Claude Gov" model.
Ars Technica reported that, as of June 2025, it was in use at multiple US national security agencies.[70]
In July 2025, the United States Department of Defense announced that Anthropic had received a $200 million contract for AI in the military, along with Google, OpenAI, and xAI.[71]
In August 2025, Anthropic launched two major education initiatives: a Higher Education Advisory Board and three AI Fluency courses designed to guide responsible AI integration in academic settings.[72] The advisory board is chaired by Rick Levin, former president of Yale University (1993–2013) and former CEO of Coursera (2014–2017), and includes prominent academic leaders from institutions such as Rice University, University of Michigan, University of Texas at Austin, and Stanford University.[73] The three AI Fluency courses—AI Fluency for Educators, AI Fluency for Students, and Teaching AI Fluency—were co-developed with professors Rick Dakan of Ringling College of Art and Design and Joseph Feller of University College Cork, and are available under Creative Commons licenses for institutional adaptation.[74] Additionally, Anthropic has established partnerships with universities including Northeastern University, London School of Economics and Political Science, and Champlain College, providing campus-wide access to Claude for Education, and has announced integrations with educational platforms Canvas, Wiley, and Panopto to enhance academic research capabilities.[75]
In September 2025, Anthropic released a report saying that businesses primarily use AI for automation rather than collaboration, with three-quarters of companies that work with Claude using it for "full task delegation."[76] Earlier in the year, Amodei predicted that AI would wipe out white-collar jobs, especially entry-level jobs in finance, law, and consulting.[77][78] Also in 2025, Anthropic predicted that AI would be able to write 90 percent of all code in a matter of months, although some experts have questioned the likelihood of this claim.[79]
On October 18, 2023, Anthropic was sued by Concord, Universal, ABKCO, and other music publishers for, per the complaint, "systematic and widespread infringement of their copyrighted song lyrics."[80][81][82] They alleged that the company used copyrighted material without permission in the form of song lyrics.[83] The plaintiffs asked for up to $150,000 for each work infringed upon by Anthropic, citing infringement of copyright laws.[83] In the lawsuit, the plaintiffs support their allegations of copyright violations by citing several examples of Anthropic's Claude model outputting copied lyrics from songs such as Katy Perry's "Roar" and Gloria Gaynor's "I Will Survive".[83] Additionally, the plaintiffs alleged that even given some prompts that did not directly state a song name, the model responded with modified lyrics based on original work.[83]
On January 16, 2024, Anthropic claimed that the music publishers were not unreasonably harmed and that the examples noted by plaintiffs were merely bugs.[84]
In August 2024, a class-action lawsuit was filed against Anthropic in California for alleged copyright infringement.
The suit claims Anthropic fed its LLMs with pirated copies of the authors' work, including works by plaintiffs Kirk Wallace Johnson, Andrea Bartz and Charles Graeber.[85] On June 23, 2025, the United States District Court for the Northern District of California granted summary judgment for Anthropic that the use of digital copies of the plaintiffs' works (inter alia) for the purpose of training Anthropic's LLMs was a fair use.
But it found that Anthropic had used millions of pirated library copies and that such use of pirated copies could not be a fair use.
Therefore, the case was ordered to go to trial on the pirated copies used to create Anthropic's central library and the resulting damages.[86] In September 2025, Anthropic agreed to pay authors $1.5 billion to settle the case, amounting to $3,000 per book plus interest.
The proposed settlement, pending judicial approval, stands as the largest copyright resolution in U.S. history.[87][88]
In June 2025, Reddit sued Anthropic, alleging that Anthropic is scraping data from the website in violation of Reddit's user agreement.[89]
Portable Document Format (PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems.[2][3] Based on the PostScript language, each PDF file encapsulates a complete description of a fixed-layout flat document, including the text, fonts, vector graphics, raster images and other information needed to display it.
PDF has its roots in "The Camelot Project" initiated by Adobe co-founder John Warnock in 1991.[4]
PDF was standardized as ISO 32000 in 2008.[5] It is maintained by ISO TC 171 SC 2 WG8, of which the PDF Association is the committee manager.[6] The latest edition, ISO 32000-2:2020, was published in December 2020.[7]
PDF files may contain a variety of content besides flat text and graphics, including logical structuring elements, interactive elements such as annotations and form fields, layers, rich media (including video content), three-dimensional objects using U3D or PRC, and various other data formats.
The PDF specification also provides for encryption and digital signatures, file attachments, and metadata to enable workflows requiring these features.
The development of PDF began in 1991 when John Warnock wrote a paper for a project then code-named Camelot, in which he proposed the creation of a simplified version of PostScript called Interchange PostScript (IPS).[8] Unlike traditional PostScript, which was tightly focused on rendering print jobs to output devices, IPS would be optimized for displaying pages to any screen and any platform.[8]
Adobe Systems made the PDF specification available free of charge in 1993.
In the early years PDF was popular mainly in desktop publishing workflows, and competed with several other formats, including DjVu, Envoy, Common Ground Digital Paper, Farallon Replica and even Adobe's own PostScript format.
PDF was a proprietary format controlled by Adobe until it was released as an open standard on July 1, 2008, and published by the International Organization for Standardization as ISO 32000-1:2008,[9][10] at which time control of the specification passed to an ISO committee of volunteer industry experts.
In 2008, Adobe published a Public Patent License to ISO 32000-1 granting royalty-free rights for all patents owned by Adobe necessary to make, use, sell, and distribute PDF-compliant implementations.[11]
PDF 1.7, the sixth edition of the PDF specification that became ISO 32000-1, includes some proprietary technologies defined only by Adobe, such as Adobe XML Forms Architecture (XFA) and JavaScript extension for Acrobat, which are referenced by ISO 32000-1 as normative and indispensable for the full implementation of the ISO 32000-1 specification.[12] These proprietary technologies are not standardized, and their specification is published only on Adobe's website.[13][14][15] Many of them are not supported by popular third-party implementations of PDF.
ISO published version 2.0 of PDF, ISO 32000-2 in 2017, available for purchase, replacing the free specification provided by Adobe.[16] In December 2020, the second edition of PDF 2.0, ISO 32000-2:2020, was published, with clarifications, corrections, and critical updates to normative references[17] (ISO 32000-2 does not include any proprietary technologies as normative references).[18]
In April 2023 the PDF Association made ISO 32000-2 available for download free of charge.[16]
A PDF file is often a combination of vector graphics, text, and bitmap graphics.
The basic types of content in a PDF are typeset text stored as content streams, vector graphics for illustrations, and raster graphics for photographs and other images.
In later PDF revisions, a PDF document can also support links (inside document or web page), forms, JavaScript (initially available as a plugin for Acrobat 3.0), or any other types of embedded contents that can be handled using plug-ins.
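A quick way to see these kinds of content in practice is to open a file with a PDF library. The sketch below uses the third-party pypdf package (an assumption on our part, not part of the PDF standard itself), and "example.pdf" is a placeholder for any local PDF file.

```python
# Inspect a PDF's pages, text, metadata and images with pypdf
# (install with `pip install pypdf`); "example.pdf" is a placeholder path.
from pypdf import PdfReader

reader = PdfReader("example.pdf")

print("pages:", len(reader.pages))
print("encrypted:", reader.is_encrypted)
print("metadata:", reader.metadata)                  # title, author, producer, ...

first_page = reader.pages[0]
print(first_page.extract_text()[:200])               # flat text content
print("images on page 1:", len(first_page.images))   # embedded raster graphics
```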
PDF combines three technologies: a subset of the PostScript page description language for generating the layout and graphics, a font-embedding/replacement system so that fonts can travel with the document, and a structured storage system that bundles these elements and any associated content into a single file, with data compression where appropriate.
PostScript is a page description language run in an interpreter to generate an image.[8] It can handle graphics and has standard features of programming languages such as branching and looping.[8] PDF is a subset of PostScript, simplified to remove such control flow features, while graphics commands remain.[8]
PostScript was originally designed for a drastically different use case: transmission of one-way linear print jobs in which the PostScript interpreter would collect a series of commands until it encountered the showpage command, then execute all the commands to render a page as a raster image to a printing device.[19] PostScript was not intended for long-term storage and real-time interactive rendering of electronic documents to computer monitors, so there was no need to support anything other than consecutive rendering of pages.[19] If there was an error in the final printed output, the user would correct it at the application level and send a new print job in the form of an entirely new PostScript file.
Thus, any given page in a PostScript file could be accurately rendered only as the cumulative result of executing all preceding commands to draw all previous pages—any of which could affect subsequent pages—plus the commands to draw that particular page, and there was no easy way to bypass that process to skip around to different pages.[19]

WebText-3 Corpus

WebText-3 is a large-scale, diverse text corpus collected from publicly available web pages. It contains cleaned and normalized sentences suitable for natural language processing (NLP), machine learning, and AI training.


Dataset Overview

  • Format: Plain text (.txt), one sentence per line
  • Approximate Size: 200,000+ sentences
  • Languages: Primarily English, with occasional Hebrew content
  • Source Types: Wikipedia articles, technology news sites, blogs, educational resources, social media platforms, and developer documentation
  • Content Coverage:
    • Artificial intelligence, machine learning, and large language models
    • Programming languages, software, and tools
    • Science, astronomy, and mathematics
    • Technology trends, cloud platforms, and AI research
    • News, politics, global events, and human-related topics
    • Entertainment, gaming, online culture, and social media
    • Miscellaneous topics such as philosophy, space, and general knowledge

Data Characteristics

  • Sentence Length: Varies; short to medium-length sentences
  • Cleanliness: Text has been cleaned to remove invisible characters, unusual symbols, and excessive whitespace
  • Usability: Ready for NLP tasks such as language modeling, text classification, summarization, or AI fine-tuning
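A minimal loading sketch is shown below. The card does not specify exact file names, so "webtext3.txt" is a placeholder for the corpus text file; the Hugging Face `datasets` "text" builder simply reads one line per record, matching the one-sentence-per-line format described above.

```python
# Minimal loading sketch; "webtext3.txt" is a placeholder file name.
from datasets import load_dataset

ds = load_dataset("text", data_files={"train": "webtext3.txt"})

print(len(ds["train"]), "sentences")
print(ds["train"][0])    # {'text': '<first sentence of the corpus>'}
```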

Example Usage

  • Training large language models or chatbots
  • Benchmarking NLP algorithms
  • Data analysis or text mining
  • Educational research on language patterns
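As a lightweight example of the data-analysis and text-mining use case above, the sketch below computes word frequencies over the corpus, again assuming a plain-text file named "webtext3.txt" with one sentence per line.

```python
# Word-frequency counts over the corpus; "webtext3.txt" is a placeholder.
import re
from collections import Counter

counts = Counter()
with open("webtext3.txt", encoding="utf-8") as f:
    for line in f:
        counts.update(re.findall(r"[a-z']+", line.lower()))

print(counts.most_common(10))
```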

WebText-3 provides a rich and broad textual resource suitable for both research and practical AI applications.
