Let the votes fall where they may, upvote if you like AI, downvote if you don’t…
I use ChatGPT pretty much every day. It’s often more useful than Google search. I use it as a tool to point me in the right direction rather than just reading whatever it says. Used properly, AI is pretty useful.
That’s mostly because Google search has turned to absolute shit. ChatGPT rarely returns anything useful either, unless you want the most mundane answers. Anything remotely niche or specific has far too great a chance of just being plain wrong.
I guess it depends on what you look for. I tend to get pretty good answers most of the time and links to check as well.
LLMs are such an amazing technology, but they are used and understood completely wrong by the corporate world.
People are growing dependent on it, which is a problem, but the transformer model is really cool imo.
LLMs are a way to turn hundreds of billions of dollars into mediocre slop content that nobody wants to pay for. Humans will generate mediocre slop content for a fraction of the cost.
There is no way LLMs of this kind will ever be remotely affordable, and I fully expect OpenAI to fold in a few years. Amazon and Microsoft are already bailing out on LLMs, realizing they’ll never pay off.
Thanks!
Although, the first PDF link says invalid format…
I dunno that ‘like’ is an accurate description; I don’t ‘like’ my wrench, but it’s useful for turning bolts. AI is the same. I’m working on a novel project and I’ve been using ChatGPT and related tools to help with things like worldbuilding, naming, formatting, structure, grammar, etc, basically everything but the actual writing itself. It’s been a big help, but I can also see the concerns of people whose jobs/livelihoods/etc are threatened by it; that wrench also works pretty well as a blunt object to chuck at people’s heads.
I was actually developing a crude form of AI, for graphics processing, between 2009 and 2017.
Back in 2017, when I realized that my algorithms could be repurposed to the point of cloning someone else’s voice if I just had enough RAM and processing time, nobody believed me.
I will never touch AI again.
Chucking 8 years of work seems like a drastic step, especially since others were surely developing (and by now have developed) similar algorithms and are a lot less scrupulous about how they train and use them. Why did you feel like you had to step away from the field altogether after such a big time investment? Not that I’m judging, I’m just curious about the motivations involved.
My original algorithms were specifically designed to help artists perform nearly perfect color matching, based largely on text inputs. It started off as a single-purpose application, totally human driven.
The more I used and tested my own software, the more it taught me about photochromatic processing, more than I ever expected to learn, and more than I ever designed it to do.
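For anyone curious what color matching even means at its most basic, the generic textbook baseline is to convert colors into a perceptual space like CIE Lab and measure a Delta E distance between them. This rough sketch is just that generic approach, not my actual algorithm, and the function names are made up for illustration:

```python
# Rough sketch of textbook color matching: convert sRGB to CIE Lab and
# compare with the CIE76 Delta E distance. Illustrative only.

import math

def srgb_to_linear(c):
    # Undo the sRGB gamma curve (c in 0..1).
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_lab(r, g, b):
    # sRGB (0-255) -> linear RGB -> XYZ (D65 white point) -> CIE Lab.
    rl, gl, bl = (srgb_to_linear(v / 255.0) for v in (r, g, b))
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047
    y = (0.2126 * rl + 0.7152 * gl + 0.0722 * bl) / 1.00000
    z = (0.0193 * rl + 0.1192 * gl + 0.9505 * bl) / 1.08883
    f = lambda t: t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(c1, c2):
    # CIE76 Delta E: plain Euclidean distance in Lab space.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(rgb_to_lab(*c1), rgb_to_lab(*c2))))

# Example: two similar oranges. Roughly, Delta E below ~2 is hard to tell apart by eye.
print(delta_e((255, 140, 0), (250, 128, 10)))
```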
I was also studying acoustics around the same time. I saw how well my color-processing software was working, and had just barely started adapting the algorithm to process acoustics.
I quickly realized that I didn’t have nearly enough RAM or processing power to do anything meaningful in any sensible timeframe, but I could already see that it was possible to go as far as replacing one person’s voice with the voice print of someone else.
I announced that to online friends at the time, around 2017, and nobody believed me. Probably because I couldn’t quite prove it yet. But I knew it.
The more I thought about that, the more I thought it would only contribute to fraud. So I just fucking stopped, slammed on the development brakes, and said fuckit.
I don’t want to be part of the problem, I just wanted to design a better color filter/processor system.
Fair enough, although I’m not sure why you turned a color filter/processor into something that could process acoustics if that’s all you wanted, but I get not wanting to be part of the problem.
Like I said, I was also studying acoustics at the time, and also writing sound card drivers.
I probably have one of the longest open bug tickets ever for VirtualBox…
I have used it for aggregating content, summarizing it, and offering recommended actions. It is most powerful when combined with other tools (BPA and BA, for example). I use Copilot, which can connect to other Microsoft products pretty well (my clients use them).
AI that can process vast amounts of data and identify trends and patterns, with the intention of helping treat/cure disease, spot dangerous weather changes, etc. 👍
AI which is used to create crappy images and videos using stolen content, generate essays for lazy students, give wildly incorrect answers to simple queries, and stunt personal development through never needing to have an original thought or learn something properly 👎
Corporate AI is used to exploit and spy on people, but I really like open-weights AI; you can do so much cool stuff with it!