AI Is "Just Entertainment": The Microsoft Disclaimer Worth Billions

Microsoft labels Copilot "for entertainment purposes only" in its terms of use, while the world pours trillions into AI. Find out what this really means for anyone using these tools.

AINEWS

@persona.fra

5/4/2026 · 6 min read

Artificial intelligence is driving the global economy like no technology before it. In the first half of 2025, investment in data centres and AI infrastructure accounted for 80% of the growth in US private domestic demand compared with the previous six months, according to S&P Global (1). Without this push, several economists estimate, US GDP would have grown by barely 0.1% annually (2). This is not a revolution waiting to happen: it is already underway, with figures running into hundreds of billions of dollars every year.

And in the middle of all this, Microsoft, one of the heaviest investors on this front with $80 billion spent on AI-dedicated data centres in 2025 alone (3), quietly wrote the following sentence into the terms of use for its flagship product, Microsoft Copilot, in bold capitals: "Copilot is for entertainment purposes only" (4).

The same disclaimer used by fortune tellers to stay out of court.

The World Bets on AI

Understanding how the AI market actually works requires holding together two narratives that are usually told separately. The first is the world of announcements and promises: industrial revolutions, job automation, productivity leaps, models that "reason better than a Nobel Prize winner" (the words of Anthropic's Dario Amodei, not ours). The second is the legal layer, where the same companies shield themselves from any liability when things go wrong.

The most common mistake is to assume these two layers contradict each other. In reality, they are perfectly consistent from a corporate standpoint. You sell something as the future, but write in the terms of use that you guarantee nothing. The problem arises when a business or a professional builds critical processes on a tool that its own maker describes as "just for entertainment."

The Copilot case is textbook, but not isolated. OpenAI, Google and Anthropic all have similar clauses in their terms of service: AI can make mistakes, users are responsible for verifying outputs, no accuracy guarantees (5). The difference is that none of those companies used the phrase "entertainment purposes only", wording word-for-word identical to the language found on tarot.com, in the legal section dedicated to online tarot readings (6).

Fun fact: the phrase "for entertainment purposes only" has a long history in the American psychic-entertainment industry, where it became the standard way to protect psychics and fortune tellers from lawsuits. Seeing Microsoft adopt the same approach for a product it is integrating into Windows, Outlook, Excel and Teams is, to say the least, a notable stylistic choice.

What Blind Trust Actually Costs

The question is not rhetorical. It has a numerical answer.

According to a 2025 EY Responsible AI survey, 99% of organisations reported financial losses linked to AI-related risks. Of those, 64% lost more than one million dollars. The conservative average sits at $4.4 million per company (7).

The most advanced models show hallucination rates of between 15% and 52% in commercial applications (7). Reuters, in February 2026, confirmed that even OpenAI and Google models show hallucination rates of 15-20% on complex queries (8). In plain terms: one output in five could be wrong.

71% of CEOs and C-suite executives surveyed by the Financial Times said they were reluctant to scale AI use without first solving the hallucination problem (8). Yet the same companies continue adopting these tools, often without adequate verification processes in place.

The Copilot case also speaks clearly when it comes to adoption numbers: only 3.3% of Microsoft 365 users with access to Copilot Chat have actually purchased it. Out of roughly 450 million active licences, paying subscribers number 15 million. The Net Promoter Score on Copilot's accuracy stood at -24.1 in September 2025. And 44.2% of users who stopped using it cited distrust in responses as the main reason (4).

So: everyone invests, few actually use it, and those who do often do not trust it. And Microsoft responded by writing "entertainment only" in the legal terms, only to backtrack when the press picked it up, claiming the wording was "legacy language" left over from the Bing Chat era and would be updated soon (5).

Three Things to Do Now, Without Catastrophising

We are not saying AI is useless or that people should stop using it. It is everywhere now, and you cannot blame the tool, only those responsible for using it. What we are saying is that there is an enormous gap between how AI is sold and how it actually works, and that gap has a concrete cost.

First. Treat every AI output the way you would treat a document written by a brilliant but inexperienced intern: useful as a starting point, never as a finished product. The difference between those who use AI well and those who do not is not the perfect prompt. It is the verification process that follows, which will help you improve the next prompt, the next output, and so on from there.

Second. Before integrating an AI tool into a critical process (legal, financial, medical, editorial), read the terms of use. Actually read them. Not to hunt for an "entertainment only" disclaimer, but to understand what the company guarantees and what it does not, who is responsible for errors, and where your data ends up.

Third. Do not build your entire workflow around a single AI tool. Diversification does not only apply to financial investments. If your process depends entirely on a model that could hallucinate tomorrow, update its terms, or change its behaviour overnight, you are building on sand, and businesses built on sand tend not to outlast the tide.

The problem will not sort itself out. If anything, the deeper AI is integrated into processes, the more errors multiply rather than shrink: AI does not forgive gaps in a process, it scales them, and unaddressed gaps grow and cause greater damage over time.


The Courage to Read What Is Written

Staying informed is tiring, tedious, and sometimes frustrating. Every week brings a new study, a new model, a new case study that proves or disproves the one from the week before. We know how hard it is to keep up, especially when you have a business to run and marketing is one of twenty problems you need to solve every morning.

But here is the interesting thing: most of your competitors are not reading any of this. They do not know that Copilot called itself "just entertainment." They do not know that 99% of companies have suffered financial losses linked to AI. They do not know that one in five outputs could be wrong. And they will keep using AI blindly, building on foundations they have never once verified.

That is the window of opportunity. Not in using AI better than everyone else: in using it with more awareness than everyone else.

The ironic closing note: between Lisbon and Barcelona, I learned to read tarot cards. So if you have made it this far, I congratulate you for giving me a slice of your time.

Trust no one and verify everything; you already know that. Doubt even your certainties.

It would be lovely to chat, not to read the future, not to talk business. Let us talk about what we actually enjoy doing and how that makes us happy, not what entertains us. For that, there is always AI.

Did you enjoy the article?

Let us know what you think: we're always up for a chat, and if there's a chance we could work together, we'd be delighted.

Sources

  1. S&P Global (2025) - Data Center Investments Are Increasingly Moving The Macro Needle https://press.spglobal.com/2025-11-05-S-P-Global-Research-Reveals-Data-Center-Investments-Moving-The-U-S-Macro-Needle

  2. AI Data Analytics Network (2025) - Data center investment drives US economy growth https://www.aidataanalytics.network/data-science-ai/news-trends/data-center-investment-drives-us-economy-growth

  3. Empirix Partners (2025) - The Trillion Dollar Horizon: Inside 2025's AI Infrastructure Investments https://empirixpartners.com/the-trillion-dollar-horizon/

  4. The Next Web (2026) - Microsoft's own ToS calls Copilot 'entertainment only' amid adoption slump https://thenextweb.com/news/microsoft-copilot-entertainment-only-disclaimer-adoption

  5. TechCrunch (2026) - Copilot is 'for entertainment purposes only,' according to Microsoft's terms of use https://techcrunch.com/2026/04/05/copilot-is-for-entertainment-purposes-only-according-to-microsofts-terms-of-service/

  6. Tarot - Copyright and Legal Information https://www.tarot.com/legal

  7. EY (2025) - How can responsible AI bridge the gap between investment and impact? https://www.ey.com/en_gl/insights/ai/how-can-responsible-ai-bridge-the-gap-between-investment-and-impact

  8. AI Daily (2026) - AI Hallucinations Top User Concerns Over Job Losses https://www.ai-daily.news/articles/ai-hallucinations-top-user-concerns-over-job-losses-in-2026