  • Norin@lemmy.world · 60 points · 21 days ago

    For work, I teach philosophy.

    The impact there has been overwhelmingly negative. Plagiarism is more common, student writing is worse, and I need to continually explain to people that an AI essay just isn’t their own work.

    Then there’s the way admin seem to be in love with it, since many of them are convinced that every student needs to use the LLMs in order to find a career after graduation. I also think some of the administrators I know have essentially automated their own jobs. Everything they write sounds like GPT.

    As for my personal life, I don’t use AI for anything. It feels gross to give anything I’d use it for over to someone else’s computer.

    • AFK BRB Chocolate@lemmy.world · 25 points · 21 days ago

      My son is in a PhD program and is a TA for a geophysics class that’s mostly online, so he does a lot of grading of assignments and tests. The number of submissions he gets that are obviously straight out of an LLM is really disgusting. Like, sometimes they leave the prompt in. Sometimes they submit it when the LLM responds that it doesn’t have enough data to give an answer and refers to ways the person could find out. It’s honestly pretty sad.

    • MonkeMischief@lemmy.today · 17 points · 21 days ago

      "convinced that every student needs to use the LLMs in order to find a career after graduation."

      Yes, of course, why are bakers learning to use ovens when they should just be training on app-enabled breadmakers and toasters using ready-made mixes?

      After all, the bosses will find the automated machine product “good enough.” It’s “just a tool, you guys.”

      Sheesh. I hope these students aren’t paying tuition, and even then, they’re still getting ripped off by admin-brain.

      I’m sorry you have to put up with that. Especially when philosophy is all about doing the mental weightlifting and exploration for oneself!

    • Stovetop@lemmy.world · 18 points · 21 days ago

      Agreed. I started the steps needed to become certified as an educator in my state but decided against it. ChatGPT isn’t the only reason, but it is a contributing factor. I don’t envy all of the teachers out there right now who have to throw out the entire playbook of what worked in the past.

      And I feel bad for students like me who really struggled with in-class writing by hand in a limited amount of time, because that is what everyone is resorting to right now.

      • pdxfed@lemmy.world · 5 points · 21 days ago

        It must be something like (only worse than) what math teachers felt when the pocket calculator became cheap and easily available. Being able to use one doesn’t mean you can do math, but people conflate the two.

  • LogicalDrivel@sopuli.xyz · 50 points · 21 days ago

    It cost me my job (partially). My old boss swallowed the AI pill hard and wanted everything we did to go through GPT. It was ridiculous, and it made things that would normally take me 30 seconds take 5-10 minutes of “prompt engineering”. I went along with it for a while, but after a few weeks I gave up and stopped using it. When my boss asked why, I told her it was a waste of time and disingenuous to our customers to have GPT sanitize everything.

    I continued to refuse to use it (it was optional) and my work never suffered. In fact, some of our customers specifically started going through me because they couldn’t stand dealing with the obvious AI slop my manager was shoveling down their throat. This pissed off my manager hardcore, but she couldn’t really say anything without admitting she might be wrong about GPT, so she just ostracized me and then fired me a few months later for “attitude problems”.

    • Skanky@lemmy.world · 5 points · 21 days ago

      Curious - what type of job was this? Like, how was AI used to interact with your customers?

      • LogicalDrivel@sopuli.xyz · 10 points · 21 days ago

        It was just a small e-commerce store. Online sales and shipping. The boss wanted me to run the emails I would send to vendors through GPT, and any responses to customer complaints were put through GPT. We also had a chat function on our site for asking questions and whatnot, and the boss wanted us to copy the customer’s chat into GPT, get a response, rewrite it if necessary, and then paste GPT’s response into our chat. It was so ass-backwards I just refused to do it. Not to mention it made the response times super high, so customers were just leaving rather than wait (which of course was always the employees’ fault).

        • Skanky@lemmy.world · 3 points · 21 days ago

          That sounds as asinine as you seem to think it was. Damn dude. What a dumb way to do things. You’re better off without that stupidity in your life

  • jg1i@lemmy.world · 41 points · 21 days ago

    I absolutely hate AI. I’m a teacher and it’s been awful to see how AI has destroyed student learning. 99% of the class uses ChatGPT to cheat on homework. Some kids are subtle about it, others are extremely blatant about it. Most people don’t bother to think critically about the answers the AI gives and just assume it’s 100% correct. Even if sometimes the answer is technically correct, there is often a much simpler answer or explanation, so then I have to spend extra time un-teaching the dumb AI way.

    People seem to think there’s an “easy” way to learn with AI, that you don’t have to put in the time and practice to learn stuff. News flash! You can’t outsource creating neural pathways in your brain to some service. It’s like expecting to get buff by asking your friend to lift weights for you. Not gonna happen.

    Unsurprisingly, the kids who use ChatGPT the most are the ones failing my class, since I don’t allow any electronic devices during exams.

    • polle@feddit.org · 11 points · 20 days ago

      As a student I get annoyed the other way around. Just yesterday I had to tell my group for an assignment that we need to understand the system physically and code it ourselves in MATLAB, not copy-paste code from ChatGPT, because it’s way too complex. I’ve seen people waste hours like that. It’s insane.

    • PeriodicallyPedantic@lemmy.ca · 5 points · 20 days ago

      I’m generally ok with the concept of externalizing memory. You don’t need to memorize something if you memorize where to get the info.

      But you still need to learn how to use the data you look up and determine whether it’s accurate and suitable for your needs. ChatGPT rarely is, and people’s blind faith in it is frightening.

    • mrvictory1@lemmy.world · 3 points · 20 days ago

      Are you teaching at a university? Also, you said "99% of the class uses ChatGPT" - are there really so few people who don’t use AI?

      • ComradeMiao@lemmy.world · 2 points · 19 days ago

        In the classes I taught at a university recently, I only noticed extremely obvious AI-assisted papers in fewer than 5% of cases. The majority were too bad to even be AI, and around 10% were good to great papers.

    • Infinite@lemmy.zip · 4 points · 20 days ago

      Sounds like your curriculum needs updating to incorporate the existence of these tools. As I’m sure you know, kids - especially smart ones - are going to look for the lazy solution. An AI-detection arms race wastes time and energy, and it mostly exercises the wrong skills.

      AVID could be a resource for teaching ethics and responsible use of AI. https://avidopenaccess.org/resource/ai-and-the-4-cs-critical-thinking/

  • IMNOTCRAZYINSTITUTION@lemmy.world · 33 points · 21 days ago

    My last job was making training/reference manuals. Management started pushing ChatGPT as a way to increase our productivity and forced us all to incorporate AI tools. I immediately began to notice my coworkers’ work decline in quality, with all sorts of bizarre phrasings and instructions that were outright wrong. They weren’t even checking the shit before sending it out. Part of my job was to review and critique their work, and I started having to send way more back than before.

    I tried it out myself but found that it took more time to fix all of its mistakes than to just write it myself, so I continued to work with my brain instead. The only thing I used AI for was when I had to make videos with narration. I have a bad stutter that made voiceover hard, so ElevenLabs voices ended up narrating my last few videos before I quit.

    • Pennomi@lemmy.world · 5 points · 21 days ago

      Eleven Labs really does good work. I’m also using it for a project, in this case to teach children to read.

    • MintyAnt@lemmy.world · 3 points · 20 days ago

      Luckily we don’t need accurate info for training reference manuals, it’s not like safety is involved! …oh wait

  • LovableSidekick@lemmy.world · 30 points · 21 days ago

    I’d never explored it at all until recently, when I told it to generate a small country tavern full of NPCs for 1st edition AD&D. It responded with a picturesque description of the tavern and 8 or 9 NPCs, a few of whom had interrelated backgrounds and little plots going on between them. This is exactly the kind of time-consuming prep that always stresses me out as DM before a game night. Then I told it to describe what happens when a raging ogre bursts in through the door. Keeping the tavern context, it told a short but detailed story of basically one round of activity following the ogre’s entrance, with the previously described characters reacting in their own ways.

    I think that was all it let me do without a paid account, but I was impressed enough to save this content for a future game session and will be using it again to come up with similar content when I’m short on time.

    My daughter, who works for a nonprofit, says she uses ChatGPT frequently to help write grant requests. In her prompts she even tells it to ask her questions about any details it needs to know, and she says it does, and incorporates the new info to generate its output. She thinks it’s a super valuable tool.

  • frickineh@lemmy.world · 29 points · 21 days ago

    I used it once to write a proclamation for work and what it spit out was mediocre. I ended up having to rewrite most of it. Now that I’m aware of how many resources AI uses, I refuse to use it, period. What it produces is in no way a good trade for what it costs.

  • Routhinator@startrek.website · 25 points · 21 days ago

    I have a gloriously reduced monthly subscription footprint and application footprint because of all the motherfuckers that tied ChatGPT or other AI into their garbage and updated their terms to say they were going to scan my private data with AI.

    And, even if they pull it, I don’t think I’ll ever go back. No more cloud drives, no more ‘apps’. Webpages and local files on a file share I own and host.

  • GreenKnight23@lemmy.world · 25 points · 21 days ago

    I worked for a company that did not govern AI use. It was used for a year before they were bought.

    I stopped reading emails because they were absolute AI generated garbage.

    Clients started to complain, and one even left because they felt they were no longer a priority for the company. They were our 5th largest client, with an MRR of $300k+.

    They still did nothing to curb AI use.

    They then reduced the workforce in the call center because they implemented an AI chatbot and began to funnel all incidents through it first before giving out a phone number to call.

    The company was then acquired a year ago. The new administration banned all AI usage under security and compliance guidelines.

    Today, the new company hired about 20 new call center support staff. Customers are now happy. I can read my emails again because they contain competent human thought with industry jargon, not some generated thesaurus.

    Overall, I would say banning AI was the right choice.

    IMO, AI is not being used in the most effective ways and causes too much chaos. Cryptobros are pushing AI to an early grave because all they want is a cash cow to replace crypto.

  • AFK BRB Chocolate@lemmy.world · 22 points · 21 days ago

    I manage a software engineering group for an aerospace company, so early on I had to have a discussion with the team about acceptable and non-acceptable uses of an LLM. A lot of what we do is human-rated (human lives depend on it), so we have to be careful. Also, it’s a hard no on putting anything controlled or proprietary into a public LLM (the company now has one in-house).

    You can’t put trust in LLMs because they get things wrong. Anything that comes out of one has to be fully reviewed and understood. They can be useful for suggesting test cases or coming up with wording for things. I’ve had employees use one to come up with an algorithm or find an error, but I think it’s risky to have one generate large pieces of code.

      • AFK BRB Chocolate@lemmy.world · 5 points · 21 days ago

        It seems like all companies are susceptible to top-level executives who don’t understand the technology wanting to know how they’re capitalizing on it, which drives lower-level management to start pushing it.

    • sudneo@lemm.ee · 3 points · 21 days ago

      Great points. Not only can the output not be trusted, but reviewing code is also notoriously a much more boring activity than writing it, which means our attention is going to be more challenged, in addition to the risk of underestimating the importance of the review over time (e.g., it got it right the last 99 times, so I’ll skim this one).

  • Sludgehammer@lemmy.world · 21 points · 21 days ago

    Searching the internet for information about… well anything has become infuriating. I’m glad that most search engines have a time range setting.

    • MonkeMischief@lemmy.today · 13 points · 21 days ago

      "It is plain to see why you might be curious about Error 4752X3G: Allocation_Buffer_Fault. First, let’s start with the basics.

      • What is an operating system?"

      AGGHH!!!

  • Mango@lemmy.world · 19 points · 21 days ago

    It’s affected me by being really annoying to hear about in the news all the time.

  • GiantChickDicks@lemmy.ml · 19 points · 21 days ago

    I work in an office providing customer support for a small pet food manufacturer. I assist customers over the phone, email, and a live chat function on our website. So many people assume I’m AI in chat, which makes sense. A surprising number think I’m a bot when they call in, because I guess my voice sounds like a recording.

    Most of the time it’s just a funny moment at the start of our interaction, but especially in chat, people can be downright nasty. I can’t believe the abuse people hurl out when they assume it’s not an actual human on the other end. When I reply in a way that is polite, but makes it clear a person is interacting with them, I have never gotten a response back.

    It’s not a huge deal, but it still sucks to read the nasty shit people say. I can also understand people’s exhaustion with being forced to deal with robots from my own experiences when I’ve needed support as a customer. I also get feedback every day from people thankful to be able to call or write in and get an actual person listening to and helping them. If we want to continue having services like this, we need to make sure we’re treating the people offering them decently so they want to continue offering that to us.

  • Aganim@lemmy.world · 19 points · 21 days ago

    I cannot come up with a use-case for ChatGPT in my personal life, so no impact there.

    For work it was a game-changer. No longer do I need to come up with haikus to announce that it’s release-freeze day; I just let ChatGPT crap one out so we can all have a laugh at its lack of poetic talent.

    I’ve tried it now and then for some programming related questions, but I found its solutions dubious at best.

  • Rhynoplaz@lemmy.world · 17 points · 21 days ago

    For my life, it’s nothing more than parlor tricks. I like looking at the AI images or whipping one up for a joke in the chat, but of all the uses I’ve seen, not one of them has been “everyday useful” to me.