I think most people find tools like ChatGPT and Copilot useful in their day-to-day lives. LLMs are a very helpful and powerful technology. However, most people are against these models collecting every piece of data imaginable from you. People aren't against the tech; they're against the people running the tech.
I don’t think most people would mind a FOSS LLM designed for privacy and complete user control over their data, integrated with an option to completely opt out. I think that’s the only way to get people to trust this tech again and get on board.
Among the non-tech crowds I’ve talked to about these tools, the main concern has been that they’re just wrong, and when they’re integrated with other software, annoyingly wrong.
Idk, most people I know don’t see them as a magic crystal ball that’s expected to answer every question perfectly. I’m sure people like that exist, but for the most part I think people understand that these LLMs are flawed. However, I know a lot of people who use them for everyday tasks like grammar checks, drafting emails and documents, brainstorming, basic analysis, and so on. They’re pretty good at these sorts of things because that’s what they’re built for. The issues of privacy and greed remain, though, and I think some of them would at least be partially solved if these models were designed with privacy in mind.
I’m enjoying how ludicrous the idea of a “privacy friendly AI” is: trained on stolen data from inhaling everyone else’s data off the internet, but suddenly it cares about “your” data.
It’s not impossible. You could build a model based on consent, where the training data is obtained ethically, the data collected from users is anonymized, and users can opt out if they want to. The current model of shameless theft isn’t the only path.
If I understand right, the usefulness of basic questions like “Hey ChatGPT, how long do I boil pasta” is offset by the vast resources needed to answer them. We only see it as simple and convenient because the product is in its “build up interest” phase and running at a loss. If selling the product that way fails, it’s going to fund itself by harvesting data.
I don’t disagree per se, but I think there’s a pretty big difference between people using ChatGPT to correct grammar or draft an email and people using it to generate a bunch of slop images and videos. The former is a more streamlined way to use the internet, which has value, while the latter is just there for the sake of it. I think it’s feasible for newer LLM designs to focus on what’s actually popular and useful, and cut out the fat that’s draining large amounts of resources for no good reason.
We can maybe say a personal LLM trained on data you actually already own, with self-sufficient infrastructure, is fine, but visual-generation LLMs and data theft aren’t cool.
I think the idea is anonymous querying.
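A minimal sketch of what anonymous querying could look like, assuming the simplest approach of stripping identifying metadata client-side; `anonymize_request` and its field names are hypothetical, purely to illustrate using a fresh random token per request instead of a persistent user ID:

```python
import secrets

def anonymize_request(query: str) -> dict:
    """Build a request that carries no persistent identity.

    Each call gets a fresh random token, so two queries from the
    same person cannot be linked into a profile server-side.
    """
    return {
        "query": query,
        # fresh random token per request -> no cross-query linkage
        "request_id": secrets.token_hex(16),
        # deliberately omitted: user ID, IP, device fingerprint, history
    }

req1 = anonymize_request("how long do I boil pasta")
req2 = anonymize_request("how long do I boil pasta")

# identical queries still can't be tied to the same user
assert req1["request_id"] != req2["request_id"]
assert "user_id" not in req1
```

This only hides identity at the application layer; in practice you would also need to route traffic so the server can’t just link queries by IP address.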
We can agree to that
Amen brother (or sister)
I think you may find yourself in the minority.