Microsoft’s AI CEO, Mustafa Suleyman, has shared his opinion after recent pushback from users online who are becoming frustrated with Copilot and AI on Windows. In a post on X, Suleyman says he’s mind-blown by the fact that people are unimpressed by the ability to talk fluently with an AI computer.
His post comes after Windows president Pavan Davuluri faced major backlash from users online for posting about Windows evolving into an agentic OS. Davuluri’s post was so negatively received that he ended up turning off replies, though he later responded to reassure customers that the company was aware of the feedback.


^ This.
It’s a neat, under-construction tool.
A. Tool. An ‘agent’ to do niche things is neat.
…But I don’t need a chatbot on my fucking toaster.
Even as a tool it lacks predictability / reproducibility. If I give instructions to download a paint program, start a new 1920x1080 canvas, and use the gradient tool to go from red to green, you’re going to get the same result every time. If I instead told a class of students to ask an AI to generate a red-to-green gradient on a 1920x1080 canvas, the results would not be consistent.
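For comparison, the deterministic version of that task is only a few lines. A rough sketch, assuming Python with NumPy and Pillow installed; same inputs, same pixels, every run:

import numpy as np
from PIL import Image

WIDTH, HEIGHT = 1920, 1080

# Interpolation factor across the x axis: 0.0 at the left edge, 1.0 at the right.
t = np.linspace(0.0, 1.0, WIDTH, dtype=np.float32)

red = ((1.0 - t) * 255).astype(np.uint8)   # red fades out left to right
green = (t * 255).astype(np.uint8)         # green fades in left to right
blue = np.zeros(WIDTH, dtype=np.uint8)

row = np.stack([red, green, blue], axis=-1)   # one row of RGB pixels, shape (1920, 3)
pixels = np.tile(row, (HEIGHT, 1, 1))         # repeat the row for all 1080 lines

Image.fromarray(pixels, mode="RGB").save("gradient.png")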
I use AI, but it is a tool with flaws.
I don’t get the analogy… of course a bunch of students using different tools with different inputs will yield different results? But if they use the same model and input at zero temperature, they will, in fact, get the same results, just like any code.
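Roughly like this, with a made-up logits vector standing in for a real model (the numbers are purely illustrative): at temperature zero, decoding collapses to an argmax and is a pure function of the input; above zero, it only repeats if the sampler’s seed is pinned.

import numpy as np

# Hypothetical next-token scores; the values are made up for illustration.
logits = np.array([2.0, 1.5, 0.3, -1.0])

def pick_token(logits, temperature, rng=None):
    # Greedy pick at temperature 0, otherwise sample from the softmax distribution.
    if temperature == 0.0:
        return int(np.argmax(logits))              # deterministic: same logits, same token
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))   # stochastic unless rng is seeded

print([pick_token(logits, 0.0) for _ in range(5)])   # always the same token

rng = np.random.default_rng(42)
print([pick_token(logits, 1.0, rng) for _ in range(5)])   # repeats only because the seed is fixed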
Predictability has never been a strength of ML, of course.
…That’s not really what it’s for. It’s for finding exotic stars in a mass of astronomical data on a budget, or interpolating pixels in an image, or for identifying cat videos reasonably well. That’s still a useful tool. And the modern extension of getting a glorified autocomplete engine to press some buttons automatically is no different if structured and constrained appropriately.
The obvious problem, among many I see, is that these Tech Bros are selling underbaked… no, not even half-cooked agentic systems as sapient magic lamps. Not niche tools for very specific bits of automation. Just look at the language Suleyman is using:
If you use the same seed on the same model with the same weights you get the same results.
That’s not the predictability we want. If I write a calculator that adds the output of rand() to any result, it will also be repeatable with the same seed on the same machine. It will be non-functional as a calculator though.
Depends on your use case. Adding 0.000001*rand() to a large number retains its functionality as a calculator.
Your argument that AI isn’t useful may be valid, but claiming that AI is not repeatable is false.
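Both halves of that exchange fit in a few lines of Python, with random.random() standing in for rand() (a rough sketch, not any particular calculator):

import random

def noisy_add(a, b, scale):
    # Repeatable for a fixed seed, but only correct when the noise is negligible.
    return a + b + scale * random.random()

random.seed(1)
print(noisy_add(2, 2, scale=1.0))            # same output every run with this seed, yet never 4

random.seed(1)
print(noisy_add(2_000_000, 2_000_000, scale=0.000001))   # noise is lost in the magnitude: still usable as a calculator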
AI is in fact not meaningfully repeatable in actual usage patterns.
Agreed. The word “patterns” is an important qualification. LLMs are great for one-off tasks, but not as part of a repeatable process.