A nationally recognized online disinformation researcher has accused Harvard University of shutting down the project she led to protect its relationship with mega-donor and Facebook founder Mark Zuckerberg.

The allegations, made by Dr. Joan Donovan, raise questions about the influence the tech giant might have over seemingly independent research. Facebook’s parent company Meta has long sought to defend itself against research that implicates it in harming society, from the proliferation of election disinformation to the creation of addictive habits in children. Details of the disclosure were first reported by The Washington Post.

Beginning in 2018, Donovan worked for the Shorenstein Center at Harvard University’s John F. Kennedy School of Government, and ran its Technology and Social Change Research Project, where she led studies of media manipulation campaigns. But last year Harvard informed Donovan it was shutting the project down, Donovan claims.

    • Honytawk@lemmy.zip · 1 year ago

      For AGI to be a thing, we first need to have the computer be able to communicate with us.

      LLMs are just the first step, an important one at that.

      It is like claiming babies learning to talk are bullshit generators; before you know it, they surpass you in every way, shape, and form.

    • Immersive_Matthew@sh.itjust.works · 1 year ago

      That really has not been my experience with ChatGPT4+. It is getting very good and catches me off guard daily with its level of understanding. Sure, it makes mistakes and is laughably off base at times, but so are we all as we learn about a complex world.

    • douglasg14b@lemmy.world · edited · 1 year ago

      It’s only a bullshit generator if you use it for bullshit generation…

      We’ve automated ways to accelerate problem solving, and now that it’s able to actually reason (AI that can actually do math is a big deal), that acceleration should increase significantly.

      Such acceleration could put things like AGI actually around the corner, with that corner being 5–10 years from now. Though I think we have too many hardware limitations ATM, which will definitely hamper progress & capability.

      But with companies like Microsoft seriously considering moves like “Nuclear Reactors to power AI”, issues with power consumption may not be as much of a barrier…

      • Aceticon@lemmy.world · 1 year ago

        That’s like saying parrots are only a few generations away from being as intelligent as humans because they can already imitate human speech.

        Clearly imitation does not require cognition and, by all the evidence so far, does not lead to it.

          • Buddahriffic@lemmy.world · 1 year ago

            They are, but do you think one will be able to help sort information, misinformation, and disinformation on Facebook any time soon? Or even have a real conversation? They are cognitive, but mimicking our speech doesn’t mean they are close to our level.