People Use ChatGPT


This morning a team of OpenAI researchers and I released a new paper called “How People Use ChatGPT”. We document the growth of ChatGPT from its release in November 2022 through July 2025, at which point it had been used at least weekly by more than 750 million people, nearly 10% of the world’s population.

We classified a large random sample of anonymized messages sent by users on ChatGPT consumer plans (Free, Plus, and Pro) according to attributes like purpose (work vs. personal), conversation topic, user intent, and job task. We also explored variation in usage by demographic characteristics and user attributes like education and occupation.

This is a meaty paper, so I’m breaking my discussion of it into three parts. In part one, I’ll talk about our results on the growth of ChatGPT and share a little bit about how the research was conducted behind the scenes. In part two, I’ll dig into the data on how ChatGPT is being used and what that means for the economy and for society. Part three zooms out, placing our findings about ChatGPT in the broader historical context of technology adoption patterns and speculates about what the future may hold.

The Growth of ChatGPT

ChatGPT was released to the public as a “research preview” on November 30th, 2022. By December 5th it had more than one million registered users. It reached 100 million weekly active users (WAUs) in early November of 2023, less than one year after it was released. The number of ChatGPT weekly active users has been doubling every 7-8 months since then, reaching more than 750 million WAUs as of September 2025. This growth is documented in the figure below and has been publicly confirmed at various points by OpenAI leadership.

[Figure: ChatGPT weekly active users over time]
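As a quick back-of-the-envelope check on the doubling-time claim, here is a small calculation using only the approximate milestones quoted above (roughly 100 million WAUs in November 2023 and roughly 750 million in September 2025). The numbers are illustrative, not exact internal figures.

```python
import math

# Approximate public milestones (not exact internal figures).
waus_nov_2023 = 100e6   # ~100 million WAUs in early November 2023
waus_sep_2025 = 750e6   # ~750 million WAUs as of September 2025
months_elapsed = 22     # November 2023 to September 2025

doublings = math.log2(waus_sep_2025 / waus_nov_2023)  # ~2.9 doublings
months_per_doubling = months_elapsed / doublings      # ~7.6 months

print(f"{doublings:.1f} doublings, or one every {months_per_doubling:.1f} months")
```

That works out to a doubling roughly every 7-8 months, consistent with the figure.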

Being a WAU just means you sent at least one message in the last week. What about total message volume? The table below shows that as of June 2025, ChatGPT users were sending more than 2.6 billion messages per day, or more than 30,000 messages per second. Total message volume has increased by 5.8x in the last year.

[Table: ChatGPT total daily message volume]

For context, there are an estimated 14 billion Google web searches each day. That means that if ChatGPT message volume continues on its current growth path (a big if), it would equal the number of current Google searches in just over a year.
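To make that projection concrete, here is a rough calculation under the stated assumptions: 2.6 billion ChatGPT messages per day, 5.8x annual growth in message volume, and roughly 14 billion Google searches per day.

```python
import math

messages_per_day = 2.6e9         # ChatGPT messages per day, June 2025
google_searches_per_day = 14e9   # rough public estimate of daily Google searches
annual_growth_factor = 5.8       # message volume grew ~5.8x over the past year

# Solve messages_per_day * growth**t = google_searches_per_day for t (in years).
years_to_parity = (math.log(google_searches_per_day / messages_per_day)
                   / math.log(annual_growth_factor))
print(f"~{years_to_parity:.1f} years at the current growth rate")  # ~1 year
```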

ChatGPT is growing much faster than Google search did. Google search was available to the public starting in September 1999, and it had reached 1 billion daily searches eight years later. According to Sam Altman, ChatGPT reached the 1 billion message milestone in December 2024, less than two years after its release.

Notice also that message volume has grown faster than user volume (5.8x for messages compared to 3.2x growth in users shown in the figure above). This tells us that ChatGPT users are engaging with the technology more intensively as they gain experience with it. You can see that directly in the figure below, which plots messages per WAU by cohorts of new users based on the quarter in which they signed up (e.g. the 1st quarter of 2023, the 2nd quarter of 2023, and so on). The vertical axis is normalized relative to the message activity of the 1st cohort as of July 2023.
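As a rough illustration of how a cohort series like this can be constructed, here is a minimal pandas sketch. The toy message log and column names are hypothetical; the actual analysis ran inside OpenAI's privacy-preserving pipeline, not on raw user-level data like this.

```python
import pandas as pd

# Hypothetical message log: one row per user per week, with a message count.
msgs = pd.DataFrame({
    "user_id":        [1, 1, 2, 2, 3, 3],
    "signup_quarter": ["2023Q1", "2023Q1", "2023Q1", "2023Q1", "2024Q3", "2024Q3"],
    "week":           ["2023-07-03", "2025-07-07", "2023-07-03", "2025-07-07",
                       "2024-09-02", "2025-07-07"],
    "n_messages":     [10, 14, 6, 9, 4, 8],
})

# Messages per weekly active user, by signup cohort and week.
per_wau = (msgs.groupby(["signup_quarter", "week"])
               .agg(total_msgs=("n_messages", "sum"),
                    waus=("user_id", "nunique")))
per_wau["msgs_per_wau"] = per_wau["total_msgs"] / per_wau["waus"]

# Normalize everything to the Q1 2023 cohort's activity in July 2023,
# mirroring the normalization described above.
baseline = per_wau.loc[("2023Q1", "2023-07-03"), "msgs_per_wau"]
per_wau["index_vs_baseline"] = per_wau["msgs_per_wau"] / baseline
print(per_wau)
```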

The cohort usage patterns are very interesting. Think of the Q1 2023 signups as early adopters and (often) power users. Their usage declined slightly from July 2023 through the end of 2024 but then increased substantially beginning in early 2025. By July 2025, those early adopters were sending 40% more messages per day than they did two years earlier. People who signed up for ChatGPT in the 3rd and 4th quarters of 2024 are now sending nearly twice as many messages per day as they did less than a year ago. The fascinating result in the figure above is that usage followed essentially the same pattern in all signup cohorts – flat through most of 2024 and then increasing substantially beginning in late 2024 to early 2025.

To me, this suggests that ChatGPT has gotten substantially better and/or more user-friendly in the last year. The growth is a time effect, not a cohort effect. ChatGPT is becoming increasingly integrated into people’s weekly and daily lives.

Demographic gaps are closing – because everyone’s using it

The typical story about new technologies is that they are adopted faster by highly educated men from rich countries, which exacerbates inequality. The early work on ChatGPT, including some of my own research, very much fit this narrative. In 2024, Anders Humlum and Emilie Vestergaard published a paper in the Proceedings of the National Academy of Sciences forcefully titled “The unequal adoption of ChatGPT exacerbates existing inequalities among workers.” They surveyed a large representative sample of workers in 11 highly-exposed occupations in Denmark and found that women and lower-earners were much less likely to have used ChatGPT. My paper with Bick and Blandin, “The Rapid Adoption of Generative AI”, found similar demographic gaps as of late 2024.

Our paper shows that demographic gaps in ChatGPT usage have closed rapidly. The figure below shows the trend over time in the share of signups by people with typically male or female names.

When ChatGPT initially launched, more than 80% of WAUs had typically male first names, and the gender gap was still relatively large through late 2024. However, WAUs reached rough parity by early 2025, and as of July 2025, 52% of active users had a typically female first name, suggesting that the gender gap in ChatGPT usage may have closed completely.

We also find much faster growth in ChatGPT usage in middle-income countries. The figure below divides the number of WAUs in each country by the size of its internet-using population (using data on internet access from the World Bank) and then smooths the country data across deciles of GDP per capita. The plot shows that usage has increased by 3x (from 10% to 30% of the internet-using population) in countries from the richest decile, but by 5-6x for countries in the middle deciles.
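Here is a minimal sketch of that transformation with made-up country-level numbers; the real analysis used World Bank internet-penetration and GDP data alongside aggregated WAU counts.

```python
import pandas as pd

# Hypothetical country-level data (values are invented for illustration).
countries = pd.DataFrame({
    "country":        ["A", "B", "C", "D", "E", "F"],
    "waus":           [5e6, 12e6, 30e6, 8e6, 40e6, 15e6],
    "population":     [50e6, 80e6, 200e6, 30e6, 120e6, 60e6],
    "internet_share": [0.60, 0.75, 0.85, 0.95, 0.97, 0.90],
    "gdp_per_capita": [4_000, 9_000, 12_000, 45_000, 70_000, 30_000],
})

# Usage rate among the internet-using population, not the whole population.
countries["internet_users"] = countries["population"] * countries["internet_share"]
countries["usage_rate"] = countries["waus"] / countries["internet_users"]

# Bin countries by GDP per capita (deciles in the paper; terciles here because
# the toy sample is tiny) and average usage within each bin.
countries["gdp_bin"] = pd.qcut(countries["gdp_per_capita"], q=3,
                               labels=["low", "middle", "high"])
print(countries.groupby("gdp_bin", observed=True)["usage_rate"].mean())
```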

There is now surprisingly little difference in ChatGPT usage between countries at the 50th and 90th percentiles of GDP per capita. For example, Brazil, South Korea, and the United States have relatively similar ChatGPT usage rates and near-universal internet access, but GDP per capita of roughly $10k, $34k, and $86k respectively.

Overall, I was quite surprised at the rapid broadening of ChatGPT use across countries and demographic groups. This doesn’t necessarily mean that AI will be an equalizing force in society. But I have updated my priors substantially on this question.

The Importance of Protecting User Privacy

People share a lot of personal information with the internet. Are you comfortable sharing your recent Google search history or your ChatGPT message history with a complete stranger? I know I’m not. I try to avoid disclosing the details of my life when I search the web or use generative AI tools, but it’s hard. As a result, our research could very easily violate users’ privacy, even accidentally.

That’s why we made a special point to tie our own hands very tightly. First, as an external researcher, I never touched the data or wrote any code at all. I worked closely with the team to design and improve the analyses and the figures and tables you see in the paper, but I was always at arm’s length.

No member of the research team observed any aspect of a user’s personal information. All our analyses were conducted on a sample of messages that had been automatically stripped of name, date of birth, address…anything that can be used to identify you. Researchers call this Personally Identifiable Information (PII). OpenAI has an internal Privacy Filter tool that automatically scrubs PII from the data.

No member of the research team ever saw the content of user messages. Instead, they wrote automated classifiers that analyzed user messages and delivered aggregated output over a limited number of categories. For example, we wrote a prompt that asked an LLM to analyze a user message and determine whether it was likely related to work, whether it was asking for tutoring help, information about products to purchase, or something else. A key part of our process was WildChat, a public dataset of 1 million real ChatGPT interactions. We asked several people to manually classify WildChat messages according to our prompts, and then we iteratively refined the prompts until we got reasonably high fidelity with respect to human judgment. See Appendix B of the paper for details.
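The sketch below shows the general shape of that workflow: an LLM-based classifier applied to one message, plus a simple agreement check against human labels. The category list, prompt wording, helper functions, and model name are placeholders of my own, not the actual prompts from the paper (see Appendix B for those); the API call assumes OpenAI's standard chat completions client.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative categories only; the paper's taxonomy is richer (Appendix B).
CATEGORIES = ["work", "tutoring", "shopping", "other"]

def classify_message(text: str) -> str:
    """Ask an LLM to assign one coarse category to a user message."""
    prompt = (
        "Classify the following ChatGPT user message into exactly one category: "
        f"{', '.join(CATEGORIES)}. Reply with the category name only.\n\n"
        f"Message: {text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().lower()

def agreement_rate(messages: list[str], human_labels: list[str]) -> float:
    """Share of messages where the classifier matches a human label:
    the kind of validation run against WildChat before trusting a prompt."""
    model_labels = [classify_message(m) for m in messages]
    return sum(m == h for m, h in zip(model_labels, human_labels)) / len(messages)
```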

Although we analyze variation by user demographics, we never directly accessed demographic data. Instead, we used something called a Data Clean Room (DCR). We sent code into the DCR to perform operations on sensitive data and received only aggregate output in return. We imposed strict aggregation limits on that output, and all code required multiple inspection-and-approval cycles and was publicly logged. Figure 2 in the paper gives a visual illustration of how the DCR works.
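To give a flavor of what an aggregation limit means in practice, here is a minimal sketch of the kind of check a clean-room environment might enforce. The threshold and function are hypothetical, not OpenAI's actual DCR implementation.

```python
import pandas as pd

MIN_GROUP_SIZE = 100  # hypothetical aggregation threshold

def aggregate_with_suppression(df: pd.DataFrame,
                               group_col: str,
                               value_col: str) -> pd.DataFrame:
    """Return group-level means, suppressing any group smaller than the
    threshold so that no small, potentially identifying cell leaves the room."""
    out = (df.groupby(group_col)[value_col]
             .agg(["mean", "count"])
             .reset_index())
    out.loc[out["count"] < MIN_GROUP_SIZE, "mean"] = None  # suppress small cells
    return out.drop(columns="count")
```

Only output that passes checks like this, after the inspection-and-approval cycle, ever comes back out of the clean room.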

[Figure: Diagram of the Data Clean Room workflow]

Overall, we tried to set a new precedent and a very high bar for privacy protection when it comes to analyzing sensitive user data. I won’t lie – working in the DCR was a pain in the neck. There were a lot of analyses I wanted to do that were impossible given the constraints we imposed on ourselves. For example, I would have loved to really dig in to understand whether ChatGPT substitutes for or complements expertise. This is hard to discern from aggregate categories. Personally, I use ChatGPT to obtain information and context that complements my main activity. For example, I wrote a post some months back drawing an analogy between generative AI and steam power. The core insight was mine, but I relied on ChatGPT to deliver me a brief history of the tractor.

You would need to know a lot about me to figure out how I use ChatGPT to get things done, and that kind of analysis just wasn’t possible without violating user privacy. My point is that the DCR restrictions really were quite binding, and that’s a good thing. As a heavy user of AI myself, I can honestly say I would be comfortable allowing my own message history to be analyzed using our privacy-preserving methods.

Stay tuned for part two, where I’ll dig deeper into how ChatGPT is used.
