• LogicalDrivel@sopuli.xyz · ↑61 · 15 days ago

    My boss had GPT make this informational poster thing for work. It’s supposed to explain stuff to customers and is riddled with spelling errors and garbled text. I pointed it out to the boss and she said it was good enough for people to read. My eye twitches every time I see it.

    • JayArr@lemmy.today · ↑24 · 14 days ago

      good enough for people to read

      wow, what a standard, super professional look for your customers!

    • skisnow@lemmy.ca · ↑4 · 14 days ago

      Spelling errors? That’s… unusual. Part of what makes ChatGPT so specious is that its output is usually immaculate in terms of language correctness, which superficially conceals the fact that it’s completely bullshitting on the actual content.

      • This is fine🔥🐶☕🔥@lemmy.world · ↑4 ↓1 · 14 days ago

        The user above mentioned an informational poster, so I’m going to assume it was generated as an image. And those have spelling mistakes.

        Can’t even generate image and text separately smh. People are indeed getting dumber.

      • LogicalDrivel@sopuli.xyz · ↑2 · 14 days ago

        FWIW, she asked it to make a complete infographic-style poster with images and stuff, so GPT created an image with text, not a document. Still asinine.

  • obsoleteacct@lemmy.zip · ↑57 ↓1 · 15 days ago

    I’m mostly annoyed that I have to keep explaining to people that 95% of what they hear about AI is marketing. In the years since we bet the whole US economy on AI and were told it’s absolutely the future of all things, it’s yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product that I’m aware of.

    We’re betting our whole future on a concept of a product that has yet to reliably profit any of its users or the public as a whole.

    I’ve made several good faith efforts at getting it to produce something valuable or helpful to me. I’ve done the legwork on making sure I know how to ask it for what I want, and how I can better communicate with it.

    But AI “art” requires an actual artist to clean it up. AI fiction requires a writer to steer it or fix it. AI non-fiction requires a fact checker. AI code requires a coder. At what point does the public catch on that the emperor has no clothes?

    • Optional@lemmy.world (OP) · ↑27 ↓1 · 15 days ago

      it’s yet to produce a really great work of fiction (as far as we know), a groundbreaking piece of software of its own production or design, or a blockbuster product

      Or a profit. Or hell even one of those things that didn’t suck! It’s critically flawed and has been defying gravity on the coke-fueled dreams of silicon VC this whole time.

      And still. One of next year’s fiscal goals is “AI”. That’s all. Just “AI”.

      It’s a goal. Somehow. It’s utter insanity.

    • Katana314@lemmy.world · ↑12 ↓1 · 15 days ago

      Anyone in engineering knows that the first 90% of your goal is the easy bit. You’ll then spend 90% of your time on the remainder. Same goes for AI and getting past the uncanny valley with art.

    • undergroundoverground@lemmy.world · ↑8 ↓1 · 15 days ago

      What if the point of AI is to have it create a personal model for each of us, using the vast amounts of our data they have access to, in order to manipulate us into buying and doing whatever the people who own it want but they can’t just come out and say that?

      • obsoleteacct@lemmy.zip · ↑4 · 14 days ago

        I’m sure that’s at least part of the idea but I’m yet to see any evidence that it won’t also be dog shit at that. It doesn’t have the context window or foresight to conceive of a decent plot twist in a piece of fiction despite having access to every piece of fiction ever written. I’m not buying that it would be able to build a psychological model and contextualize 40 plus years of lived experience in a way that could get me to buy a $20 Dubai chocolate bar or drive a Chevy.

        • undergroundoverground@lemmy.world · ↑1 · edited · 12 days ago

          I’m pretty sure I said “the point of it”, not that it was 100% ready to go now.

          Also, no one thinks advertising works on them. So, I’m sure you don’t believe it 👍

          • obsoleteacct@lemmy.zip · ↑1 · 12 days ago

            It’s not going to get there. The idea it’s going to turn smart and capable at some undisclosed point in the future as long as we keep giving them billions in investments IS marketing.

  • AdolfSchmitler@lemmy.world · ↑42 · 14 days ago

    There’s a monster in the forest, and it speaks with a thousand voices. It will answer any question, and offer insight to any idea. It knows no right or wrong. It knows not truth from lie, but speaks them both the same. It offers its services freely, many find great value. But those who know the forest well will tell you that freely offered does not mean free of cost. For now the monster speaks with a thousand and one voices, and when you see the monster it wears your face.

  • fritobugger2017@lemmy.world · ↑36 ↓1 · 15 days ago

    Not just you. AI is making people dumber. I am frequently correcting the mistakes of my colleagues that use it.

    • Echo Dot@feddit.uk · ↑12 · 14 days ago

      My attitude to all of this is I’ve been told by management to use it so I will. If it makes mistakes it’s not my fault and now I’m free to watch old Stargate episodes. We’re not doing rocket surgery or anything so who cares.

      At some point they’ll realise that the AI is not producing decent output and then they’ll shut up about it. Much easier they come to that realisation themselves than me argue with them about it.

      • fritobugger2017@lemmy.world · ↑14 · 14 days ago

        Luckily no one is pushing me to use AI in any form at this time.

        For folks in your position, I fear that they will first go through a round of layoffs to get rid of the people who are clearly using it “wrong” (because Top Management can’t have made a mistake) before they pivot and drop it.

        • Echo Dot@feddit.uk · ↑7 · 14 days ago

          Yeah, that is a risk. Then again, if they’re forcing their employees to use AI, they’re probably not far off firing everyone anyway, so I don’t see that it makes a huge amount of difference for my position.

    • outhouseperilous@lemmy.dbzer0.com · ↑5 ↓7 · 14 days ago

      When I was a kid and first realized I was maybe a genius, it was terrifying. That there weren’t always gonna just be people smarter than me who could fix it.

      Seeing them get dumber is like some horror movie shit.

  • Rachelhazideas@lemmy.world · ↑29 ↓4 · 14 days ago

    People are overworked, underpaid, and struggling to make rent in this economy while juggling 3 jobs or taking care of their kids, or both.

    They are at the limits of their mental load, especially women who shoulder it disproportionately in many households. AI is used to drastically reduce that mental load. People suffering from burnout use it for unlicensed therapy. I’m not advocating for it, I’m pointing out why people use it.

    Treating AI users like a moral failure and disregarding their circumstances does nothing to discourage the use of AI. All you are doing is reinforcing their alienation from anti-AI sentiment.

    First, understand the person behind it. Address the root cause, which is that AI companies are exploiting the vulnerabilities of people with or close to burnout by selling the dream of a lightened workload.

    It’s like eating factory farmed meat. If you have eaten it recently, you know what horrors go into making it. Yet, you are exhausted from a long day of work and you just need a bite of that chicken to take the edge off to remain sane after all these years. There is a system at work here, greater than just you and the chicken. It’s the industry as a whole exploiting consumer habits. AI users are no different.

    • _core@sh.itjust.works · ↑12 · 14 days ago

      Let’s go a step further and look at why people are in burnout, are overloaded, are working 3 jobs to make ends meet.

      It’s because we’re all slaves to capitalism.

      Greed for more profit by any means possible has driven society to the point where we can barely afford to survive and corporations still want more. When most Americans are choosing between eating, their kids eating, or paying rent, while enduring the workload of two to three people, yeah they’ll turn to anything that makes life easier. But it shouldn’t be this way and until we’re no longer slaves we’ll continue to make the choices that ease our burden, even if they’re extremely harmful in the long run.

    • BackgrndNoize@lemmy.world · ↑1 · 13 days ago

      And what do you think mass adoption of AI is gonna lead to? Now you won’t even have 3 jobs to make rent, because they outsourced yours to someone cheaper using an AI agent. This is gonna permanently alter how our society works, and not for the better.

  • the_q@lemmy.zip · ↑26 ↓1 · 15 days ago

    Unfortunately the masses will do as they’re told. Our society has been trained to do this. Even those that resist are playing their part.

    • paultimate14@lemmy.world · ↑26 ↓1 · 15 days ago

      On the contrary: society has repeatedly rejected a lot of ideas that industries have come up with.

      HD DVD, 3D TV, cryptocurrency, NFTs, LaserDisc, 8-track tapes, UMDs. A decade ago everyone was hyping up how VR would be the future of gaming, yet it’s still a niche novelty today.

      The difference with AI is that I don’t think I’ve ever seen a supply-side push this strong before. I’m not seeing a whole lot of demand for it from individual people. It’s “oh, this is a neat little feature I can use”, not “this technology is going to change my life” the way that the laundry machine, the personal motor vehicle, the telephone, or the internet did. I could be wrong, but I think that as long as we can survive the bubble bursting, we will come out on the other side with LLMs being a blip on the radar. And one consequence will be that if anyone makes a real AI, they will need to call it something else for marketing purposes, because “AI” will be ruined.

      • HarkMahlberg@kbin.earth · ↑16 · 15 days ago

        AI’s biggest business is (if not already, it will be) surveillance systems sold to authoritarian governments worldwide. Israel is using it in Gaza. It’s both used internally and exported as a product by China. Not just cameras on street corners doing facial recognition, but monitoring the websites you visit, the things you buy, the people you talk to. AI will be used on large datasets like these to label people as dissidents, to disempower them financially, and to isolate them socially. And if the AI hallucinates in this endeavor, that’s fine. Better to imprison 10 innocent men than to let 1 rebel go free.

        In the meantime, AI is being laundered to the individual consumer as a harmless if ineffective toy. “Make me a portrait, give me some advice, summarize a meeting,” all things it can do if you accept some amount of errors. But given this domain of problems it solves, the average person would never expect that anyone would use it to identify the first people to pack into train cars.

      • atomicbocks@sh.itjust.works · ↑1 · edited · 15 days ago

        HD DVDs weren’t rejected by the masses; they were a casualty of Sony’s vendetta over the loss of Beta and DAT. Both of those were rejected by industry, not consumers (though both were later embraced by industry, and Beta even outlasted VHS). HD DVD would have won out for the same reasons Sony lost the previous format wars (insistence on licensing fees), except this time Sony bought out Columbia and had a whole library of video, and a studio to make new movies, to exclusively release on their format. Essentially the supply side pushing something until consumers accepted it, though to your point not quite as bad as AI is right now.

        8-tracks and LaserDiscs were just replaced by better formats (Compact Cassette and Video CD/DVD respectively). Each of them was also a replacement for earlier formats like reel-to-reel and CED.

        UMDs only died out because flash media got better and because Sony opted for a cheaper scratch-resistant coating instead of a built-in case for later formats (like Blu-ray). Also, UMDs themselves were a replacement for, or at least inspired by, an earlier format called MiniDisc.

        Capitalism’s biggest feat has been convincing people that everything is the next big thing and that nothing that has come before is similar, when just about everything is a rinse and repeat, even LLMs… remember when Watson beat Ken Jennings?

    • Boomer Humor Doomergod@lemmy.world · ↑4 · 15 days ago

      See also: Cars, appliances, consumer electronics, movies, food, architecture.

      We are ruled by the market and the market is ruled by the lowest common denominator.

  • MourningDove@lemmy.zip · ↑24 ↓2 · 15 days ago

    I must be one of the few remaining people who have never, and will never, type a sentence into an AI prompt.

    I despise that garbage.

    • RaivoKulli@sopuli.xyz · ↑12 · 15 days ago

      At least knowingly. It seems some customer service stuff feeds it straight to AI before any human gets involved.

    • JustAnotherPodunk@lemmy.world · ↑6 · 15 days ago

      I haven’t used it willingly, ever. Especially after the one time Copilot told me an acre is 4.5 football fields in area. I didn’t ask it; the response was just presented at the top of my results. I’m a fucking farmer, for God’s sake. I know that’s very, very wrong without thinking. I just wanted the square footage and was too lazy to use my calculator. Never again.
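
      (For anyone curious how far off that answer was, here is a rough back-of-envelope sketch. The field size is an assumption, roughly 360 ft × 160 ft for a US football field including end zones, since no exact figure is given in the thread:)

      ```python
      # Sanity-check the "an acre is 4.5 football fields" claim.
      # Assumed field size (not from the thread): ~360 ft x 160 ft incl. end zones.
      ACRE_SQ_FT = 43_560            # 1 acre = 43,560 square feet by definition
      FIELD_SQ_FT = 360 * 160        # ~57,600 square feet

      print(f"1 acre           = {ACRE_SQ_FT:,} sq ft")
      print(f"1 football field = {FIELD_SQ_FT:,} sq ft")
      print(f"fields per acre  = {ACRE_SQ_FT / FIELD_SQ_FT:.2f}")  # ~0.76, nowhere near 4.5
      ```

      Under that assumption an acre is roughly three quarters of one field, so the “4.5 fields” answer is off by a factor of about six.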

      That being said, I do on occasion hit up my friend who has a subscription with a very specific image request and have him text me the result, so I can repost it.

      Anything for the memes. Literally anything.

        • Echo Dot@feddit.uk · ↑7 · edited · 14 days ago

          I haven’t really played football in years so they must have changed the rules. Because that is one big ball.

  • Bamboodpanda@lemmy.world · ↑23 ↓2 · 14 days ago

    being anti-plastic is making me feel like i’m going insane. “you asked for a coffee to go and i grabbed a disposable cup.” studies have proven its making people dumber. “i threw your leftovers in some cling film.” its made from fossil fuels and leaves trash everywhere we look. “ill grab a bag at the register.” it chokes rivers and beaches and then we act surprised. “ill print a cute label and call it recyclable.” its spreading greenwashed nonsense. little arrows on stuff that still ends up in the landfill. “dont worry, it says compostable.” only at some industrial facility youll never see. “i was unboxing a package” theres no way to verify where any of this ends up. burned, buried, or floating in the ocean. “the brand says advanced recycling.” my work has an entire sustainability team and we still stock pallets of plastic water bottles and shrink wrapped everything. plastic cutlery. plastic wrap. bubble mailers. zip ties. everyone treats it as a novelty. every treats it as a mandatory part of life. am i the only one who sees it? am i paranoid? am i going insane? jesus fucking christ. if i have to hear one more “well at least” “but its convenient” “but you can” im about to lose it. i shouldnt have to jump through hoops to avoid the disposable default. have you no principles? no goddamn spine? am i the weird one here?

    #ebb rambles #vent #i think #fuck plastics im so goddamn tired

    • Optional@lemmy.world (OP) · ↑13 ↓2 · 14 days ago

      If plastic was released roughly two years ago you’d have a point.

      If you’re saying in 50 years we’ll all be soaking in this bullshit called gen-AI and thinking it’s normal, well - maybe, but that’s going to be some bleak-ass shit.

      Also you’ve got plastic in your gonads.

      • Bamboodpanda@lemmy.world · ↑8 ↓1 · 14 days ago

        Yeah it was a fun little whataboutism. I thought about doing smartphones instead. Writing that way hurts though. I had to double check for consistency.

      • coldasblues@sh.itjust.works · ↑2 · 14 days ago

        On the bright side we have Cyberpunk to give us a tutorial on how to survive the AI dystopia. Have you started picking your implants yet?

      • CompassRed@discuss.tchncs.de · ↑2 ↓1 · 14 days ago

        If you’re saying in 50 years we’ll all be soaking in this bullshit called gen-AI and thinking it’s normal, well - maybe, but that’s going to be some bleak-ass shit.

        I’m almost certain gen AI will still be popular in 50 years. This is why I prefer people try to tackle some of the problems they see with AI instead of just hating on AI because of the problems it currently has. Don’t get me wrong, pointing out the problems as you have is important - I just wouldn’t jump to the conclusion that AI is a problem itself.

  • plyth@feddit.org · ↑20 ↓2 · 14 days ago

    No line breaks and capitalization? Can somebody ask AI to format it properly, please?

  • a9cx34udP4ZZ0@lemmy.world · ↑16 · 14 days ago

    Every time someone talks up AI, I point out that you need to be a subject matter expert in the topic to trust it, because it frequently produces really, really convincing summaries that are complete and utter bullshit.

    And people agree with me implicitly and tell me they’ve seen the same. But then they don’t hesitate to turn to AI on subjects they aren’t experts in for “quick answers”. These are not stupid people either. I just don’t understand.

    • ricecake@sh.itjust.works · ↑2 · 14 days ago

      Uses for this current wave of AI: converting machine language to human language. Converting human language to machine language. Sentiment analysis. Summarizing text.

      People have way over invested in one of the least functional parts of what it can do because it’s the part that looks the most “magic” if you don’t know what it’s doing.

      The most helpful and least used way of using them is to identify what information the user is looking for and then to point them to resources they can use to find out for themselves, maybe with a description of which resource might be best depending on what part of the question they’re answering.
      It’s easy to be wrong when you’re answering a question, and a lot harder when you hand someone a book and say you think the answer is in chapter four.

    • REDACTED@infosec.pub · ↑2 · edited · 14 days ago

      Because the alternative for me is googling the question with “reddit” added at the end half of the time. I still do that a lot. For more complicated or serious problems/questions, I’ve set it to only use the search function and navigate scientific sites like NCBI and PubMed while utilizing deep think. It then gives me the sources, and I randomly cross-check the relevant information, but so far I personally haven’t noticed any errors. You gotta realize how much time this saves.

      When it comes to data privacy, I honestly don’t see the potential dangers in the data I submit to OpenAI, but this is of course different to everyone else. I don’t submit any personal info or talk about my life. It’s a tool.

      • verdigris@lemmy.ml · ↑2 · 14 days ago

        If it saves time but you still have to double check its answers, does it really save time? At least many reddit comments call out their own uncertainty or link to better resources, I can’t trust a single thing AI outputs so I just ignore it as much as possible.

      • ganryuu@lemmy.ca · ↑2 · 14 days ago

        Simply by the questions you ask, the way you ask them, they are able to infer a lot of information. Just because you’re not giving them the raw data about you doesn’t mean they are not able to get at least some of it. They’ve gotten pretty good at that.

        • REDACTED@infosec.pub · ↑1 · 14 days ago

          I really don’t have any counter-arguments, as you have a good point; I tend to turn a blind eye to that uncomfortable fact. It’s worth it, I guess? Realistically, I’m having a hard time thinking of worst-case scenarios.

  • Hemingways_Shotgun@lemmy.ca · ↑15 · 14 days ago

    The reason AI is wrong so often is because it’s not programmed to give you the right answer. It’s programmed to give you the most pervasive one.

    LLMs are being fed by Reddit and other forums that are ostensibly about humans giving other humans answers to questions.

    But have you been on those forums? It’s a dozen different answers for every question. The reality is that we average humans don’t know shit and we’re just basing our answers on our own experiences. We aren’t experts. We’re not necessarily dumb, but unless we’ve studied, our knowledge is entirely anecdotal, and we all go into forums to help others with a similar problem by sharing our answer to it.

    So the LLM takes all of that data and in essence thinks that the most popular, most mentioned, most upvoted answer to any given question must be the de facto correct one. It literally has no other way to judge; it’s not smart enough to cross reference itself or look up sources.

    • Optional@lemmy.world (OP) · ↑11 ↓2 · 14 days ago

      It literally has no other way to judge

      It literally does NOT judge. It cannot reason. It does not know what “words” are. It is an enormous rainbow table of sentence probability that does nothing useful except fool people and provide cover for capitalists to extract more profit.

      But apparently, according to some on here, “that’s the way it is, get used to it.” FUCK no.

    • Wolf@lemmy.today · ↑3 · edited · 14 days ago

      It literally has no other way to judge; it’s not smart enough to cross reference itself or look up sources

      I think that is its biggest limitation.

      Like, AI basically crowd-sourcing information isn’t really the worst thing; crowd-sourced knowledge tends to be fairly decent. People treating it as if it’s an authoritative source, like they looked it up in an encyclopedia or asked an expert, is a big problem though.

      Ideally it would be more selective about the ‘crowds’ it gathers data from. Like science questions should be sourced from scientists. Preferably experts in the field that the question is about.

      Like Wikipedia (at least for now) is ‘crowd-sourced’, but individual pages are usually maintained by people who know a lot about the subject. That’s why it’s more accurate than a ‘normal’ encyclopedia. Though of course it’s not foolproof or tamper-proof by any definition.

      If we taught AI how to be ‘Media Literate’ and gave it the ability to double-check its data against reliable sources, it would be a lot more useful.

      most upvoted answer

      This is the other problem. You basically have 4 types of redditors.

      • People who use the karma system correctly, that is to say they upvote things that contribute to the conversation. Even if you think it is ‘wrong’ or you disagree with it, if it’s something that adds to the discussion, you are supposed to upvote it.

      • People who treat it as “I agree/ I disagree” buttons.

      • People who treat it as “I like this / I hate this” buttons.

      • I’d say the majority of people probably do some combination of the above.

      So more than half the time people aren’t upvoting things because they think they are correct. If LLM models are treating ‘karma’ as a “This is correct” metric- that’s a big problem.

      The other big problem is people who really should know better (tech bros and CEOs) going all in on AI when it’s WAY too early to do that. As you point out, it’s not even really intelligent yet; it just parrots ‘common’ knowledge.

      • Hemingways_Shotgun@lemmy.ca · ↑2 · 14 days ago

        AI should never be used to create anything in Wikipedia. But theoretically, an open source LLM trained solely on Wikipedia would actually be kind of useful to ask quick questions to.

  • CobblerScholar@lemmy.world · ↑14 · 15 days ago

    Meanwhile every company finds out the week after they lay off everyone that the billions they poured into their shitty “AI” to replace them might as well have been put in bags and set on fire

  • reluctant_squidd@lemmy.ca · ↑13 · 14 days ago

    My hope is that the ai bubble/trend might have a silver lining overall.

    I’m hoping that people start realizing that it is often confidently incorrect. That while it makes some tasks faster, a person will still need to vet the answers.

    Here’s the stretch. My hope is that by questioning and researching to verify the answers ai is giving them, people start applying this same skepticism to their daily lives to help filter out all the noise and false information that is getting shoved down their throats every minute of every day.

    So that the populace in general can become more resistant to the propaganda. AI would effectively be a vaccine to boost our herd immunity to BS.

    Like I said. It’s a hope.

    • DeathByBigSad@sh.itjust.works · ↑8 · 14 days ago

      People literally believe what a TV anchor or online podcaster tells them with zero doubt. I fear your hopes are misplaced.

      I’m still rooting for humanity; maybe we’ll get lucky with the right people seizing power and turning it around to the 1% of good timelines, but I don’t exactly feel so good right now.

  • CrayonDevourer@lemmy.world · ↑14 ↓2 · edited · 15 days ago

    Yes, you’re the weird one. Once you realize that 43% of the USA is FUNCTIONALLY ILLITERATE, you start realizing why people are so enamored with AI. (Since I know some twat is gonna say shit: I’m using the USA here as an example, I’m not being US-centric.)

    Our artificial intelligence is smarter than 50% of the population (don’t get me started on ‘hallucinations’… do you know how many hallucinations the average person has every day?!) and is stupider than the top 20% of the population.

    The top 20%, wonder if everyone has lost their fucking minds, because to them it looks like it is completely worthless.

    It’s more just that the top 20% are naive to the stupidity of the average person.

    • AmbitiousProcess (they/them)@piefed.social · ↑11 ↓1 · 15 days ago

      Our artificial intelligence is smarter than 50% of the population

      “Smartness” and illiteracy are certainly different things, though. You might be incapable of reading, yet be able to figure out a complex escape room via environmental cues that the most high quality author couldn’t, as an example.

      There are many places an AI might excel compared to these people, and many areas where it will fall behind. Any sort of unilateral statement here disguises the fact that while a lot of Americans are illiterate, stupid, or even downright incapable of doing simple tasks, “AI” today is very similar, just that it will complete a task incorrectly, make up a fact instead of just “not knowing” it, or confidently state a summary of a text that is less accurate than a first grader’s interpretation.

      Sometimes it will do better than many humans. Other times, it will do much worse, but with a confident tone.

      AI isn’t necessarily smarter in most cases, it’s just more confident sounding in its incorrect answers.

      • CrayonDevourer@lemmy.world · ↑2 ↓2 · edited · 15 days ago

        Yeah, when I refer to intelligence here I don’t mean actual intelligence. AI isn’t “smart” (it’s not intelligent in the classic sense, it doesn’t even think), it’s just good at regurgitating what it’s been trained on.

        But it turns out – That’s kind of what humans do too. It’s worth having a philosophical discussion on what intelligence REALLY is.

        It’s also much less incorrect than your average person would be on a much larger library of content. I think the real litmus test for AI is to compare it to an average person. The average person messes up constantly; also likely covers it up or course-corrects after they’ve screwed up. I don’t think it’s fair to expect perfectly correct responses out of AI at all; because there is absolutely no human that could reach those heights at an equal level. Look at competitive knowledge games where AI competes - it stomps some of our most intelligent people, and quite often.

    • crank0271@lemmy.world · ↑2 · 15 days ago

      I have to say, I don’t agree with some of your other points elsewhere here, but this makes a lot of sense.

      • leftzero@lemmy.dbzer0.com · ↑1 · 14 days ago

        always lonely

        I don’t know, some rodents seem to make it work. Naked mole rats, beavers, prairie dogs… (I wouldn’t include herd animals, though; sure, they’re always surrounded by others, but there’s no sense of community, it’s always everyone for themselves, and screw whoever’s slowest… perfect example of being alone in a multitude)