Isn’t this true of like everything AI right now?
We’re in the “grow a locked-in user base” part of their rollout. We’ll hit the “make money” part in a year or two, and then the enshittification machine will kick into high gear.
Yeah, it’s basically like early days of cable, Uber, Instacart, streaming, etc. They have a lot of capital and are running at a loss to capture the market. Once companies have secured a customer base, they start jacking up the prices.
In this case there isn't really a customer base for AI; the only real customers are CEOs and the C-suite.
There is a lot of top-down shit, but there is definitely a bunch of non-C-suite enterprise customers out there. A lot of product managers are curious about this shit.
There are billions of free users available. All they need to do is strip off a few excellent features of their free model and hide them behind a paywall, annnnd voila, these free users have now become paying customers!
An attempt at that. It will be partially successful but with AI accelerators coming to more and more consumer hardware, the hurdles of self-hosting get lower and lower.
I have no clue how to set up an LLM server but installing https://github.com/Acly/krita-ai-tools is easily done with a few mouse clicks. The Krita plugin handles all the background tasks.
I doubt it. LLMs have already become significantly more efficient and powerful in just the last couple of months.
In a year or two we will be able to run something like Gemini 2.5 Pro on a gaming PC which right now requires a server farm.
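For a sense of why consumer hardware is in reach, here's a back-of-envelope VRAM estimate. The parameter counts and overhead factor are illustrative assumptions (Gemini 2.5 Pro's actual size is not public); the point is just that quantization shrinks memory needs a lot:

```python
# Rough weights-only memory estimate for running an LLM locally.
# Parameter counts and the 20% overhead factor (KV cache, activations)
# are assumptions for illustration, not published figures.

def vram_gb(params_billion: float, bytes_per_param: float, overhead: float = 1.2) -> float:
    """Approximate GB of memory to hold the weights, plus ~20% overhead."""
    return params_billion * bytes_per_param * overhead

# A hypothetical 70B-parameter model at common precisions:
for label, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"70B at {label}: ~{vram_gb(70, bpp):.0f} GB")
```

At fp16 that's well beyond any gaming GPU, but the 4-bit figure is already in range of a high-end consumer card or a machine with unified memory, which is why the self-hosting hurdle keeps dropping.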
Current-gen models got less accurate and hallucinated at a higher rate than the last ones, both in my experience and per OpenAI's own reporting. I think it's either because they're trying to see how far they can squeeze the models, or because the training data is starting to include the models' own slop found while crawling.
https://cdn.openai.com/pdf/2221c875-02dc-4789-800b-e7758f3722c1/o3-and-o4-mini-system-card.pdf
That's one example, but what about other models? What you just did is called cherry-picking, i.e. citing only the evidence that supports your claim.