• c1a5s1c@feddit.org · 9 days ago

      Certainly! Here’s a concise summary of the article “AI is rotting your brain and making you stupid” by Rich Haridy, published on May 25, 2025:

      • AI tools may reduce critical thinking by doing tasks for us.
      • Relying on AI can lead to “cognitive offloading.”
      • This may harm creativity and problem-solving skills.
      • The author shares personal concerns from tech use.
      • Suggests using AI mindfully to avoid mental decline.

      Let me know if there’s anything else I can help you with!

      • huquad@lemmy.ml · 9 days ago

        Good deal. I’ll use this prompt to generate an article for my own publication.

  • Jhex@lemmy.world · 10 days ago

    I just got an email at work starting with: “Certainly!, here is the rephrased text:…”

    People abusing AI are not even reading the slop they are sending

    • JigglypuffSeenFromAbove@lemmy.world · 10 days ago

      I get these kinds of things all the time at work. I’m a writer, and someone once sent me a document to brief me on an article I had to write. One of the topics in the briefing mentioned a concept I’d never heard of (and the article was about a subject I actually know). I Googled the term, checked official sources … nothing, it just didn’t make sense. So I asked the person who wrote the briefing what it meant, and the response was: “I don’t know, I asked ChatGPT to write it for me LOL”.

      • Jhex@lemmy.world · 10 days ago

        facepalm is all I can think of…lol

        I’m not sure what my emailer started with, but what ChatGPT gave them back was almost unintelligible.

      • Ilovethebomb@lemm.ee · 11 days ago

        Either to take a very long time to get to the point, or to go off on a tangent.

        Writing concisely is a lost art, it seems.

        • idunnololz@lemmy.world · 11 days ago

          I wrote concisely until I started giving fiction writing a try. Suddenly writing concisely was a negative :x (not always, obviously, but a lot of the time I found I wrote too concisely).

          • RaoulDook@lemmy.world · 9 days ago

            IDK that kinda depends on the writer and their style. Concise is usually a safe bet for easy reading, but doesn’t leave room for a lot of fancy details. When I think verbose vs concise I think about Frank Herbert and Kurt Vonnegut for reference.

            • idunnololz@lemmy.world · 10 days ago

              It’s not. I just wrote the comment because it was relevant to recent events for me.

              I started practicing writing fiction recently as a hobby. While writing fiction, I noticed that being concise 100% of the time is not good. Sometimes I did want to write concisely, other times I did not. When I was reading my writing back, I realized how deliberate you have to be about how much or how little detail you give. It felt like a lot of the rules of English went out the window. 100% grammatical correctness wasn’t necessary if it meant better flow or pacing. Unnecessary details and repetition became tools instead of taboo. The whole experience felt like painting with words: as long as I could give the reader the experience I wanted, nothing else mattered.

              It really highlighted the contrast between fiction and non-fiction writing. It was an eye-opening experience.

              • TheFonz@lemmy.world · 10 days ago

                I’d be careful with this one. Being verbose in fiction does not automatically produce good writing. In my opinion, the best writers in the world have an economy of words but are still eloquent and rich in their expression.

                • idunnololz@lemmy.world · 10 days ago

                  Of course being verbose doesn’t mean your writing is good. It’s just that you need to deliberately choose when to be more verbose and when to give no description at all. It’s all about the experience you want to craft. If you write about how mundane a character’s life is, you can write out their day in detail and give your readers the experience of having such a life, that is, if that was your goal. It all depends on the experience you want to craft and the story you want to tell.

                  To put my experience more simply, I did not realize how much of an art writing could be, and how few rules there are when you write artistically or creatively.

      • paequ2@lemmy.today · 11 days ago

        To “waffle” comes from the 1956 movie Archie and the Waffle House. It’s a reference to how the main character Archie famously ate a giant stack of waffles and became a town hero.

        — AI, probably

        • gravitas_deficiency@sh.itjust.works · 10 days ago

          Hahaha let’s keep going with Archie and the Waffle House hallucinations

          To “grill” comes from the 1956 movie Archie and the Waffle House. It’s a reference to the chef cooking the waffles, which the main character Archie famously ate a giant stack of, and became the town hero.

    • Snazz@lemmy.world · 11 days ago

      I feel like that might have been the point. Rather than “using a car to go from A to B” they walked.

  • UltraMasculine@sopuli.xyz · 11 days ago

    The less you use your own brains, the more stupid you eventually become. That’s a fact, like it or don’t.

  • Raltoid@lemmy.world · 10 days ago

    Absolutely loathe titles/headlines that state things like this. It’s worse than normal clickbait, because not only is it written with intent to trick people, it implies that the writer is a narcissist.

    And yeah, he opens by bragging about how long he’s been writing, and it’s mostly masturbatory writing, dialoguing with himself and referencing popular media and other articles instead of making interesting content.

    Not to mention that he doesn’t grasp the idea that many don’t use it at all.

    • samus12345@lemm.ee · 10 days ago

      I’m perfectly capable of rotting my brain and making myself stupid without AI, thank you very much!

    • sugar_in_your_tea@sh.itjust.works · 9 days ago

      Disagree. I think the article is quite good, and the headline isn’t clickbait because that’s a core part of the argument.

      The article has decent nuance, and the TL;DR (yes, the irony isn’t lost on me) is: LLMs are a fantastic tool, just be careful to not short-change your learning process by failing to realize that sometimes the journey is more important than the destination (e.g. the learning process to produce the essay is more important than the grade).

  • assembly@lemmy.world · 11 days ago

    This is the next step towards Idiocracy. I use AI for things like summarizing Zoom meetings so I don’t need to take notes, and I can’t imagine I’ll stop there in the future. It’s like how I forgot everyone’s telephone numbers once we got cell phones… we used to have to know numbers back then. AI is a big leap in that direction. I’m thinking the long-term effect is all of us getting dumber and shifting more and more “little, unimportant” things to AI until we end up in an Idiocracy scene. Sadly I will be there with everyone else.

    • DominusOfMegadeus@sh.itjust.works · 11 days ago

      I used to able to navigate all of Massachusetts from memory with nothing but a paper atlas book to help me. Now I’m lucky if I remember an alternate route to the pharmacy that’s 9 minutes away.

      • PunnyName@lemmy.world · 11 days ago

        One example: getting arrested

        You might not. But you might (especially with this current admin). Cops will never let you use your phone after you’ve been detained. Unless you go free the same night, expect to never have a phone call with anyone but a lawyer or bail bonds agency.

      • assembly@lemmy.world · 11 days ago

        Yeah that’s a big part of it…shifting off the stuff that we don’t think is important (and probably isn’t). My view is that it’s escalated to where I’m using my phone calculator for stuff I did in my head in high school (I was a cashier in HS so it was easy)…which is also not a big deal but getting a little bigger than the phone number thing. From there, what if I used it to leverage a new programming API as opposed to using the docs site. Probably not a big deal but bigger than the calculator thing to me. My point is that it’s all these little things that don’t individually matter but together add up to some big changes in the way we think. We are outsourcing our thinking which would be helpful if we used the free capacity for higher level thinking but I’m not sure if we will.

    • aesthelete@lemmy.world · 10 days ago

      An assistant at my job used AI to summarize a meeting she couldn’t attend, and then she posted the results with the AI-produced disclaimer that the summary might be inaccurate and should be checked for errors.

      If I read a summary of a meeting I didn’t attend and I have to check it for errors, I’d have to rewatch the meeting to know if it was accurate or not. Literally what the fuck is the point of the summary in that case?

      PS: the summary wasn’t really accurate at all

    • aceshigh@lemmy.world · 11 days ago

      Another perspective: outsourcing unimportant tasks frees up our time to think more deeply and be innovative. It removes the barrier to entry, allowing people who ordinarily wouldn’t be able to do things to actually do them.

      • Suburbanl3g3nd@lemmings.world · 10 days ago

        If paying attention and taking a few notes in a meeting is an unimportant task, you need to ask why you were even at said meeting. That’s a bigger work culture problem though

      • assembly@lemmy.world · 11 days ago

        That’s the claim from like every AI company and wow do I hope that’s what happens. Maybe I’m just a Luddite with AI. I really hope I’m wrong since it’s here to stay.

  • blady_blah@lemmy.world · 9 days ago

    The thing is… AI is making me smarter! I use AI as a learning tool. The absolute best thing about AI is the ability to follow up with additional questions and get a better understanding of a subject. I use it to ask about technical topics and flesh out a better understanding than I ever got from just a textbook. I have seen some instances of hallucination in the past, but with the current generation of AI I’ve had very good results and consider it an excellent tool for learning.

    For reference I’m an engineer with over 25 years of experience and I am considered an expert in my field.

    • REDACTED@infosec.pub · 9 days ago

      The article says stupid, not dumb. If I’m not mistaken, the difference is like being intelligent versus being smart. When you stop using the brain muscle that’s responsible for researching, digging through trash and a bunch of obscure websites for info, using critical thinking to filter and refine your results, etc., that muscle will atrophy.

      You have essentially gone from being a researcher to being a reader.

      • blady_blah@lemmy.world · 9 days ago

        “digging thru trash and bunch of obscure websites for info, using critical thinking to filter and refine your results”

        You’re highlighting a barrier to learning that in and of itself has no value. It’s like arguing that kids today should learn cursive because you had to and it exercises the brain! Don’t fool yourself into thinking that just because you did something one way that it’s the best way. The goal is to learn and find solutions to problems. Whatever tool allows you to get there the easiest is the best one.

        Learning through textbooks and one-way absorption of information is not an efficient way to learn. Having the ability to ask questions of and challenge a teacher (in this case the AI) is a far superior way to learn, IMHO.

        • REDACTED@infosec.pub · 9 days ago

          You’re highlighting a barrier to learning that in and of itself has no value.

          It has no value as long as those tools are available to you. Like calculators: nowadays everyone’s so used to them that people have become pretty bad at doing math in their heads. While that isn’t really an issue, since calculators are widely available to everyone, we’re not talking about doing math here but about using critical thinking, which is a very important skill in daily life.

          EDIT: Disclaimer: I’m an avid AI user and I’ve defended it here before, but I’m not about to start kidding myself that letting the AI analyze and think for me makes me more intelligent.

      • zzx@lemmy.world · 9 days ago

        Disagree. When I use an LLM to help me find textbooks to begin my academic journey, I’m only using the LLM to kickstart the learning process.

        • REDACTED@infosec.pub · 9 days ago

          That’s not really what I was talking about. It would be closer to asking ChatGPT to make a summary of said books instead of reading them.

    • anachrohack@lemmy.world · 9 days ago

      Same, I use it to send me down research paths. I don’t take anything it tells me at face value, but often it will introduce me to ideas in a particular field which I can then independently research by looking up on kagi.

      Instead of saying “write me some code which will generate a series of caverns in a videogame”, I ask “what are 5 common procedural level generation algorithms, and give me a brief synopsis of them”, then I can take each one of those and look them up
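
      For illustration, here’s a minimal sketch of one of those algorithms, cellular-automata cave generation, in Python. The grid size, fill rate, and smoothing rule are arbitrary choices for the example, not anything from the comment.

      ```python
      import random

      def generate_cavern(width=60, height=30, fill=0.45, steps=5, seed=None):
          """Cellular-automata cave generation: start from random noise,
          then repeatedly smooth it with a wall-neighbour-count rule."""
          rng = random.Random(seed)
          # 1 = wall, 0 = open floor; start with random noise
          grid = [[1 if rng.random() < fill else 0 for _ in range(width)]
                  for _ in range(height)]

          def wall_neighbours(g, x, y):
              count = 0
              for dy in (-1, 0, 1):
                  for dx in (-1, 0, 1):
                      if dx == 0 and dy == 0:
                          continue
                      nx, ny = x + dx, y + dy
                      # treat out-of-bounds cells as walls so caverns stay enclosed
                      if nx < 0 or ny < 0 or nx >= width or ny >= height or g[ny][nx]:
                          count += 1
              return count

          for _ in range(steps):
              new = [[0] * width for _ in range(height)]
              for y in range(height):
                  for x in range(width):
                      n = wall_neighbours(grid, x, y)
                      # become/stay a wall when enough neighbours are walls
                      new[y][x] = 1 if (n > 4 or (n == 4 and grid[y][x])) else 0
              grid = new
          return grid

      if __name__ == "__main__":
          for row in generate_cavern(seed=42):
              print("".join("#" if cell else "." for cell in row))
      ```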

    • lemmy_outta_here@lemmy.world · 9 days ago

      I recently read that LLMs are effective for improving learning outcomes. When I read one of the meta studies, however, it seemed that many of the benefits were indirect: LLMs improved accessibility by allowing teachers to quickly tailor lessons to individual students, for example. It also seems that some students ask questions more freely and without embarrassment when chatting with an LLM, which can improve learning for those students - and this aligns with what you mention in your post. I personally have withheld follow-up questions in lectures because I didn’t want to look foolish or reveal my imperfect understanding of the topic, so I can see how an LLM could help me that way.

      What the studies did not (yet) examine was whether the speed and ease of learning with LLMs were somehow detrimental to, say, retention. Sure, I can save time studying for an exam/technical interview with an LLM, but will I remember what I learned in 6 months? For some learning tasks, the long struggle is essential to a good understanding and retention (for example, writing your own code implementation of an algorithm vs. reading someone else’s). Will my reliance on AI somehow damage my ability to learn in some circumstances? I think that LLMs might be like powered exoskeletons for the mind - the operator slowly wastes away from lack of exercise.

      It seems like a paradox, but learning “more, faster” might be worse in the long run.

    • JeremyHuntQW12@lemmy.world · 9 days ago

      $100 billion and the electricity consumption of France seems a tad pricey to save a few minutes looking in a book…

  • SocialMediaRefugee@lemmy.world · 9 days ago

    I use it as a glorified manual. I’ll ask it about specific error codes and “how do I” requests. One problem I keep running into is I’ll tell it the exact OS version and app version I’m using and it will still give me commands that don’t work with that version. Sometimes I’ll tell it the commands don’t work and restate my parameters and it will loop around to its original response in a logic circle.

    At least it doesn’t say “Never mind, I figured out the solution” like they do too often on Stack Exchange.

    • sugar_in_your_tea@sh.itjust.works · 9 days ago

      But when it works, it can save a lot of time.

      I wanted to use a new codebase, but the documentation was weak and the examples focused on the fringe features instead of the style of simple use case I wanted. It’s a fairly popular project, but one most would set up once and forget about.

      So I used an LLM to generate the code and it worked perfectly. I still needed to tweak it a little to fine tune some settings, but those were documented well so it wasn’t an issue. The tool saved me a couple hours of searching and fiddling.

      Other times it’s next to useless, and it takes experience to know which tasks it’ll do well at and which it won’t. My coworker and I paired on a project, and while they fiddled with the LLM, I searched and I quickly realized we were going down a rabbit hole with no exit.

      LLMs are a great tool, but they aren’t a panacea. Sometimes I need an LLM, sometimes Vim macros, sed, or a language server. Get familiar with a lot of tools and pick the right one for the task.

      • UnderpantsWeevil@lemmy.world · 9 days ago

        But when it works, it can save a lot of time.

        But we only need it because Google Search has been rotted out by the decision, back in 2018, to shift from accuracy of results to time spent on the site. Add to that an endlessly intrusive ad model that tilts so far towards recency bias that you functionally can’t use it for historical lookups anymore.

        LLMs are a great tool

        They’re not. LLMs are a band-aid for a software ecosystem that does a poor job of laying out established solutions to historical problems. People are forced to constantly reinvent the wheel from one application to another, they’re forced to chase new languages from one decade to another, and they’re forced to adopt new technologies without an established best-practice for integration being laid out first.

        The Move Fast And Break Things ideology has created a minefield of hazards in the modern development landscape. Software development is unnecessarily difficult and overly complex. Proprietary everything makes new technologies too expensive for lay users to adopt and too niche for big companies to ever find experienced talent to support.

        LLMs are the breadcrumb trail that maybe, hopefully, might get you through the dark forest of 60 years of accumulated legacy code and novel technologies. They’re a patch on a patch on a patch, not a solution to the fundamental need for universally accessible open-sourced code and well-established best coding practices.

        • SocialMediaRefugee@lemmy.world · 9 days ago

          People are forced to constantly reinvent the wheel from one application to another, they’re forced to chase new languages from one decade to another, and they’re forced to adopt new technologies without an established best-practice for integration being laid out first.

          I feel this.

        • sugar_in_your_tea@sh.itjust.works · 9 days ago

          we only need it because Google Search has been rotted out

          Not entirely. AI can do a great job of pulling data from multiple sources and condensing it into an answer. So even if search were still good, instead of hitting several sites and piecing together a solution, I can hit one.

          reinvent the wheel

          That depends on how you use it. I use it to find relevant, existing libraries and provide me w/ examples on how to use it. If anything, it gets me to reinvent the wheel less.

          It can certainly be used naively to get exactly what you’re talking about, and that’s what’s going to happen w/ inexperienced users, such as college students. My point is that, like power tools, it can be a great tool in an experienced hand, and it can completely ruin the user if they’re inexperienced.

          • UnderpantsWeevil@lemmy.world · 9 days ago

            AI can do a great job pulling data from multiple sources and condensing into an answer.

            Google could already do that. The format of the answer came in the blurb under the link, pertinent to the search.

            I use it to find relevant, existing libraries and provide me w/ examples on how to use it.

            AI Code Tools Widely Hallucinate Packages

            The tendency of code-generating large language models (LLMs) to produce completely fictitious package names in response to certain prompts is significantly more widespread than commonly recognized, a new study has shown.

            • sugar_in_your_tea@sh.itjust.works · 9 days ago

              The format of the answer came in the blurb under the link

              Sure, and that works really well if I just need a quick fact check. I use DDG and use that feature a ton.

              But that doesn’t work when more context is needed, like in a comparison. I find myself clicking through and skimming a dozen pages, and with an LLM I end up only needing 3-4 pages after reading its summary to confirm what it said.

              AI Code Tools Widely Hallucinate Packages

              Sure, which is why I always verify things like that. I ask it to compare popular libraries that accomplish a task, then look for evidence that my preferred option does what I want (issues on the project page) and is actively maintained (recent commits, multiple active contributors, etc). The LLM is just there to narrow the search space and give me things to look for.

              To do that with regular search would take a bit longer since I’d need to compare each library to each other to find relevant blogs and whatnot. So even if search worked better, it would still take longer.

              Sometimes it breaks down and I go back to my old method, but it’s usually worth a shot.

              I use LLMs a lot less than my coworkers, but I do use them periodically when I think it’ll be useful. I’ve been a dev for a long time (10+ years), so I find I usually know where to look already. I discourage our junior devs from relying on it too much and encourage our senior devs to give it a shot.

      • SocialMediaRefugee@lemmy.world · 9 days ago

        Same here. I never tried it to write code before, but I recently needed to mass convert some image files. I didn’t want to use some sketchy free app or pay for one for a single job. So I asked ChatGPT to write me some Python code to convert from X to Y, convert in place, and do all subdirectories. It worked right out of the box. I was pretty impressed.
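
        For context, a minimal sketch of the kind of script that request tends to produce, assuming Pillow is installed and picking WebP to PNG as hypothetical stand-ins for the unspecified X and Y formats:

        ```python
        from pathlib import Path
        from PIL import Image  # assumes Pillow is installed: pip install Pillow

        SRC_EXT = ".webp"   # hypothetical "X" format
        DST_EXT = ".png"    # hypothetical "Y" format

        def convert_tree(root="."):
            """Convert every X image under root (all subdirectories) to Y, in place."""
            for src in Path(root).rglob(f"*{SRC_EXT}"):
                dst = src.with_suffix(DST_EXT)
                with Image.open(src) as img:
                    img.save(dst)          # output format inferred from the extension
                src.unlink()               # "in place": drop the original after a successful save
                print(f"converted {src} -> {dst}")

        if __name__ == "__main__":
            convert_tree()
        ```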

              • sugar_in_your_tea@sh.itjust.works · 8 days ago

                And LLMs can help find those FOSS projects and fill in the gaps in their documentation.

                I’m well aware of the copyright issues here and LLMs can make it easier to violate copyright, whether it’s protected by a proprietary or a FOSS license, but that’s up to the user of the LLM to decide where their boundaries are (and how much legal risk to accept). If you’re generating entire projects, you’ll probably have problems, but if you’re generating examples on how to accomplish a task with an existing tool, you’re probably fine.

                LLMs are useful tools, but like any tool they can be misused. FOSS is great, LLMs are great, use both appropriately.

                • utopiah@lemmy.world · 8 days ago

                  Typically LLMs aren’t a licensing problem with FOSS, as pretty much anything and everything is free to use, remix, etc.

                  What is more of a problem is hallucinations, e.g. someone running the wrong rm -rf ~/ command without understanding the consequences, but arguably that’s hard to predict. What will always be a problem though, no matter the model, is how much energy was put into it… in the end, it makes the actual documentation and some issues on StackOverflow slightly more accessible because one can do semantic search rather than full-text search. Does one really need to run billion-parameter models in the cloud on a remote data center for that?
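
                  To illustrate that last point, a rough sketch of local semantic search over a handful of documentation snippets; it assumes the sentence-transformers package and a small local embedding model, and the snippets themselves are made up:

                  ```python
                  from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

                  # A small local embedding model (tens of MB), no cloud LLM required.
                  model = SentenceTransformer("all-MiniLM-L6-v2")

                  # Hypothetical documentation snippets standing in for a real corpus.
                  docs = [
                      "rm removes files; the -r flag recurses into directories and -f skips prompts.",
                      "Use tar -czf archive.tar.gz dir/ to create a compressed archive.",
                      "chmod changes file permission bits for user, group and others.",
                  ]

                  query = "how do I delete a folder and everything inside it"
                  doc_emb = model.encode(docs, convert_to_tensor=True)
                  query_emb = model.encode(query, convert_to_tensor=True)

                  # Cosine similarity ranks by meaning, so "delete a folder" still matches the
                  # rm entry even though neither "delete" nor "folder" appears in it.
                  scores = util.cos_sim(query_emb, doc_emb)[0]
                  print(docs[int(scores.argmax())])
                  ```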

    • Buddahriffic@lemmy.world · 9 days ago

      If it’s a topic that has been heavily discussed on the internet or in literature, LLMs can have good conversations about it. Take it all with a grain of salt because it will regurgitate common bad arguments as well as good ones, but if you challenge it, you can get it to argue against its own previous statements.

      It doesn’t handle things that are in flux very well, or things that require very specific consistency. It’s a probabilistic model: it looks at the existing tokens and predicts which token is most likely to come next. So a question about a specific version of something might get a response specific to that version, or the model might end up weighing other tokens more heavily than the version, or it may even start treating it all like pseudocode, where descriptive language plays a bigger role than what specifically exists.
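
      A toy illustration of that next-token idea (nothing like a real LLM internally, just the “sample the most likely next token” step over a made-up probability table):

      ```python
      import random

      # A real LLM computes a probability distribution over its whole vocabulary
      # from the context; here the "model" is just a hard-coded lookup table.
      toy_model = {
          ("the", "docs", "for", "version"): {"2.0": 0.55, "1.9": 0.30, "pseudocode": 0.15},
      }

      def next_token(context, temperature=1.0):
          probs = toy_model[tuple(context)]
          tokens = list(probs)
          # Temperature reshapes the distribution: low favours the likeliest token,
          # high makes the sampling more random.
          weights = [p ** (1.0 / temperature) for p in probs.values()]
          return random.choices(tokens, weights=weights, k=1)[0]

      print(next_token(["the", "docs", "for", "version"]))
      ```

      The point of the sketch: even when the context mentions a specific version, the answer is drawn from a weighted distribution, so the version token can lose out to whatever else was common in the training data.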

  • Naz@sh.itjust.works · 11 days ago

    The enormous irony here would be if the author used a generative tool to write the article criticizing them, and whoever commented that he doesn’t get the point is exactly right – it’s like 6 to 10 pages of analogies to unrelated topics.

  • Grimtuck@lemmy.world · 10 days ago

    Actually, it’s taken me quite a lot of effort and learning to set up the AIs that I run locally, as I don’t trust any of them with my data. If anything, it’s got me interested in learning again.
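
    For anyone curious, a minimal sketch of what querying a locally hosted model can look like, assuming an Ollama server on its default port; the model name is just an example of whatever you have pulled:

    ```python
    import json
    import urllib.request

    # Hypothetical minimal client for a local Ollama server (default http://localhost:11434).
    # Nothing leaves the machine.
    def ask_local(prompt, model="llama3.2"):
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask_local("Explain 'cognitive offloading' in one sentence."))
    ```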

    • dwemthy@lemmy.world · 10 days ago

      That’s the kind of effort in thought and learning that the article is calling out as being lost when it comes to reading and writing. You’re taking the time to learn and struggle with the effort; as long as you’re not giving that up once you have the AI running, you’re not losing anything.

    • SpicyColdFartChamber@lemm.ee · 10 days ago

      I have difficulty learning, but using AI has helped me quite a lot. It’s like a teacher who will never get angry, no matter how dumb your question is or how many times you ask it.

      Mind you, I am not in school and I understand hallucinations, but having someone who is this understanding in a discourse helps immensely.

      It’s a wonderful tool for learning, especially for those who can’t follow the normal pacing. :)

      • Nalivai@lemmy.world · 9 days ago

        It’s not normal for a teacher to get angry. Those people should be replaced by good teachers, not by a nicely-lying-to-you-bot. It’s not a jab at you, of course, but at the system.

        • SpicyColdFartChamber@lemm.ee · 9 days ago

          I agree, I’ve been traumatized by the system. Whatever I’ve learnt that’s been useful to me has happened through the internet, give or take a few good teachers.

          I still think it’s a good auxiliary tool. If you understand its constraints, it’s useful.

          It’s just really unfortunate that it’s a for-profit tool that will be used to try and replace us all.

          • Nalivai@lemmy.world · 9 days ago

            Yeah, same. I had to learn how to learn in spite of all the old, disillusioned creatures that hated their lives almost as much as they hated students.
            And yet, I’m afraid learning from chatbots might be even worse.