Microsoft is preparing to update consumer Copilot terms after old warning language resurfaced and spread across X over the weekend. The current public terms say Copilot is “for entertainment purposes only,” can make mistakes, may not work as intended, and should not be relied on for important advice.

The wording drew attention because Microsoft has spent the last year positioning Copilot as a serious productivity layer across Windows and Microsoft 365, including paid workplace plans aimed at enterprises. The legal disclaimer highlighted a gap between how generative AI products are marketed and how vendors still limit liability around model output.

TechCrunch reported that Microsoft, in comments attributed to the company, described the wording as "legacy language" and said it will be changed in the next update. That matters because the existing terms remain live on Microsoft's own Copilot site, where the cautionary language was still visible in the content policy section as of Monday.

The episode is another reminder that AI providers sell assistants as useful work tools while warning that outputs can be inaccurate, incomplete, or unsafe to trust on their own. For users and businesses, the practical takeaway is straightforward: Copilot may be increasingly embedded in everyday software, but Microsoft still says humans need to verify anything important.