AI is untrustworthy and shouldn’t be used
I have management talking about Copilot usage rates, and I hear people casually refer to "what ChatGPT told them" in conversation.
I actively zone out when anyone higher up than me talks about Copilot or ChatGPT. I also dressed down a colleague for using ChatGPT for a stupidly simple task.
The other day on Reddit someone was saying they just fact checked something with ChatGPT.
You can ask ChatGPT to provide sources, you know.
The sources are bullshit: https://m.youtube.com/watch?v=_zfN9wnPvU0
Yeah, it has made shit up sometimes. But you can use your brain and check the sources, can't you? ChatGPT is just a more precise Google. You still have to check it yourself and use your head.
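For what it's worth, part of that checking can be automated. A minimal sketch (the URLs are placeholders, not real citations): it only confirms that each cited link resolves, which catches outright fabricated citations but not real pages summarized wrong, so you still have to read them yourself.

```python
# Minimal sketch: sanity-check the URLs an LLM cited as "sources".
# A 404 or connection error usually means a fabricated citation;
# a 200 only means the page exists, not that it supports the claim.
import urllib.request
from urllib.error import HTTPError, URLError

cited_urls = [  # placeholders, not real citations
    "https://example.com/real-article",
    "https://example.com/made-up-paper",
]

for url in cited_urls:
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "source-check/0.1"}
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(f"{resp.status} {url}")
    except HTTPError as e:
        print(f"{e.code} {url}  <- likely fabricated, check by hand")
    except URLError as e:
        print(f"ERR {url}  ({e.reason})")
```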
So you didn’t watch the video then, got it. I guess you can ask ChatGPT to summarize the transcript for you 🙄
I watched the video, and he literally says that AI can be used as a more effective Google search. So clearly you didn’t watch the video, or pay attention at all. The video is also just a giant ad for their merch.
Yeah. If you're a dummy.
I have a more nuanced take. AI is simultaneously untrustworthy and useful. For many queries, DuckDuckGo and Google are performing considerably worse than they used to, while Perplexity usually yields good results. Perplexity also handles complex queries traditional search engines just can’t.
About a third of the time, Perplexity’s text summary of what it found is inaccurate; it may even say the opposite of what a source does. Reading the sources and evaluating their reliability is no less important than with traditional search, but much of the time I think I wouldn’t have found the same sources that way.
Of course there are other issues with AI, such as power usage and Perplexity in particular being known for aggressive web scraping.
Nuance and depth aren't as popular as I'd like, on or off Lemmy.
Ah, but you see, I never claimed AI isn't useful. In fact, you can check my comment history: I've agreed AI is a very useful tool. I still think it shouldn't be used, for ethical, social, and personal reasons.
A problem with nuance is that people want to discuss the specifics and nuances of what they care about, but for the most part won't do that for the subjects other people care about. So you need to tailor your responses to your audience. FWIW, on Lemmy I see a lot more instances of people with specifically opposed takes where both sides have similar vote counts. So while it's not perfect, it's better than most places.
I think DDG and Google are performing worse because of AI. Pushing their AI services and the tsunami of AI slop make searching harder than SEO spam ever did, and deprioritize fixing it.
It’s also a way to inflate the number of ads a user has to wade through before they find what they’re looking for. Classic monopolist bullshit.
I think current state-of-the-art AI is useful when you are not having a novel thought.
I believe that AI, at least in the form of LLMs, is currently incapable of novelty in the sense of creating a new concept or a new thought with reason and purpose behind it.
For instance, if I were going to write a book, I might consult LLMs about how to fill in the slow gaps or dead spaces in my storyline, so that I end up with a completely fleshed-out story that I then write without their assistance.
My assumption is that anything that it fills in is going to be cobbled together from literally hundreds of thousands of other similar stories, and therefore it will not be new or unique in any way.
If I were really trying to push the envelope, I would assume that whatever it suggests is ordinary and common, and that if I want to be extraordinary and uncommon, I need to use its suggestions as a launch point for my own gap-filling content.
Therefore, I could use an LLM to help write a good story with a new concept, a new premise, and a relatively unique and original storyline, by using the LLM to clearly identify the things that are not.
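A minimal sketch of that workflow, assuming the OpenAI Python client (the model name, prompt, and plot gap are all illustrative): ask the model to enumerate the well-worn options, then write something that isn't on the list.

```python
# Sketch: use the LLM as a cliché detector, not a ghostwriter.
# Ask it for the most common ways a plot gap gets filled, then
# treat that list as the paths to avoid in your own writing.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

gap = (
    "The heroine has just learned her mentor lied to her; the next "
    "chapter has to carry her from anger to reconciliation."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{
        "role": "user",
        "content": (
            "List the 10 most common, predictable ways this plot gap "
            f"gets filled in fiction:\n\n{gap}"
        ),
    }],
)

print("Well-worn paths to avoid:")
print(resp.choices[0].message.content)
```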
I think it is useful with a constrained dataset. Like using it to summarize things about a dataset, or dumping documents into it and asking it for info about them (like Gemini in Google Drive).
It is not useful for general questions using the whole-ass internet as a dataset. (Rough sketch of what I mean below.)
Also I wish it was called something other than AI…it’s just a word guesser FFS.
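A minimal sketch of what "constrained dataset" means in practice, with crude keyword overlap standing in for a real embedding index (the documents and filenames are made up):

```python
# Sketch: constrain the model to a known document set instead of
# the whole internet. Crude word overlap stands in for a real
# embedding index; the retrieve-then-prompt shape is the same.
docs = {  # made-up documents
    "q3_report.txt": "Q3 revenue grew 4%, driven by the EU launch.",
    "q3_minutes.txt": "The board approved the EU expansion budget.",
    "style_guide.txt": "External documents use the serif template.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(
        docs, key=lambda name: -len(q_words & set(docs[name].lower().split()))
    )[:k]

question = "What drove revenue growth in Q3?"
context = "\n\n".join(f"[{name}]\n{docs[name]}" for name in retrieve(question))

# The prompt pins the model to the retrieved text only.
prompt = (
    "Answer using ONLY these documents, and cite them by name:\n\n"
    f"{context}\n\nQuestion: {question}"
)
print(prompt)  # feed this to whatever chat model you like
```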
We should at least refer to inference LLMs as LLMs. The fact that if you ask one something like who the current top CS2 team is, it gives you the top team as of when it was trained, is enough proof that the models effectively know nothing.
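You can run that probe yourself, again assuming the OpenAI Python client (model name illustrative):

```python
# Sketch: probe the knowledge cutoff with a time-sensitive question.
# Without web retrieval, the answer reflects the training data's
# date, not today's.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{
        "role": "user",
        "content": (
            "Who is the current top-ranked CS2 team, and what is "
            "today's date? Answer 'unknown' if you cannot know."
        ),
    }],
)
print(resp.choices[0].message.content)
# Typical output: a team that was on top around the training
# cutoff, and no reliable idea of today's date.
```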
The only useful thing my company and colleagues have found for it is taking meeting notes. It just logs everything and summarizes stuff, and it's like 90% accurate, but it does make plenty of errors.
However, if I give a presentation with screen sharing, it can't do shit to summarize that.
I have people telling me how to do my work because “That’s what ChatGPT suggested, and they’re always accurate”.
🤷
How to put this: I see pretty much an equal split between "AI is the best thing ever", "AI will doom us all", and "AI has some uses and may get more, but we need to make sure any use is worth the energy usage".
Actual AGI would be trustworthy. The current “AI” is just a word salad blender program.
It could be argued that people are AGI. Are they always trustworthy?
Would it? I run a science fiction book club, and there are a lot of arguments that if something achieved human-level intelligence, it would immediately try to kill us, not become our perfect servants.
“It was a morality core they installed after I flooded the Enrichment Center with a deadly neurotoxin to make me stop flooding the Enrichment Center with a deadly neurotoxin.”
I believe in the Grand Plan, and I have faith in The Director. Begone, faction scum.
That was a good show.