• 🍉 Albert 🍉@lemmy.world
    17 hours ago

    there should be laws against bots pretending to be users. every social media platform should clearly indicate when a user is a bot.

    High fines for the platform as well as whoever controls that account.

    I hate that there are some semantic parallels, but social networks need to be clanker segregated.

    is there a use for bots in social media? yeah, many bots are beloved on Discord or Reddit (pre-LLMs). they are fine as long as they aren’t pretending to be people.

    • explodicle@sh.itjust.works
      11 hours ago

      as well as whoever controls that account.

      Who would want to use a website where you need to dox yourself for the possibility of high fines? Even if you’re not using bots, that’s a huge risk in multiple ways.

      • 🍉 Albert 🍉@lemmy.world
        11 hours ago

        I hadn’t thought about how to differentiate between people and bots beyond pinky promises; captcha is useless now.

        there has to be a way.

        although dox-based social media could do that easily (I mean stuff like Facebook, where the expectation is that users use their real names).

        • explodicle@sh.itjust.works
          11 hours ago

          Even assuming it’s technically feasible, who would willingly accept that risk? If you get hacked by an AI user, you face charges. If the server with your dox gets hacked, your dox are now public. If they mistakenly identify you as using AI, then you’ve got to fight it in court.

          It’s conceptually very similar to Chat Control.