Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT’s Media Lab has returned some concerning results.

The study divided 54 subjects—18 to 39 year-olds from the Boston area—into three groups, and asked them to write several SAT essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively. Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.

The paper suggests that the usage of LLMs could actually harm learning, especially for younger users. The paper has not yet been peer reviewed, and its sample size is relatively small. But the paper’s main author, Nataliya Kosmyna, felt it was important to release the findings to elevate concerns that as society increasingly relies upon LLMs for immediate convenience, long-term brain development may be sacrificed in the process.

“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” she says. “Developing brains are at the highest risk.”

  • ideonek@piefed.social · 4 hours ago

    OK, let’s slow down for one second.

    1. If you re-designed the study with complex multiplication problems instead of writing tasks, and calculators instead of ChatGPT, and changed nothing else, you would get exactly the same result. If you reduce the challenge to a minimum, how do you expect the brain to respond?
    2. The “we have this single non-peer-reviewed study of 54 people, we should raise the alarm NOW” framing is very sensational.

    There are so many problems with AI, and we need so many checks and balances. Sure. But let’s not turn it into another “our pop-science vs. their pop-science” kind of problem.