I have noticed that Lemmy so far doesn't have many fake accounts from bots or much AI slop, at least from what I can tell. I'm wondering: how the heck do we keep this community free of that kind of stuff as continuous waves of redditors land here and the platform grows?

EDIT: a potential solution:

I have an idea: people could flag a post or a user as a bot, and if it's confirmed to be a bot, moderators could have a tool that essentially shadow bans it into an inbox that just gets dumped occasionally. My thinking is that the people creating the bots might not realize their bot has been banned, so they wouldn't try to create replacement bots. This could effectively reduce the number of bots without bot creators realizing it, or even knowing whether their bots have been blocked at all. The one other thing that would be needed is a way to request being un-banned in case of a false positive. All of this would have to be built into Lemmy's moderation tools, and I don't know if any of it exists currently.
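The flow described above (flag, threshold, quarantine inbox, appeal) could be sketched roughly like this. This is purely illustrative: the class and method names (`ShadowBanQueue`, `flag`, `dump_quarantine`, etc.) and the flag threshold are invented for the example, and nothing like this exists in Lemmy's actual moderation tooling as far as I know.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    flags: int = 0
    shadow_banned: bool = False
    appeal_pending: bool = False

class ShadowBanQueue:
    """Hypothetical mod tool: accounts flagged past a threshold get
    shadow banned; their posts land in a quarantine inbox that mods
    dump occasionally, and false positives can file an appeal."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.accounts = {}
        self.quarantine = []  # (account_name, post) pairs awaiting dump

    def flag(self, name):
        acct = self.accounts.setdefault(name, Account(name))
        acct.flags += 1
        if acct.flags >= self.threshold:
            acct.shadow_banned = True

    def submit_post(self, name, post):
        acct = self.accounts.setdefault(name, Account(name))
        if acct.shadow_banned:
            # The bot still sees its post "succeed"; nobody else does.
            self.quarantine.append((name, post))
        # Either way, report success back to the poster.
        return True

    def dump_quarantine(self):
        dumped, self.quarantine = self.quarantine, []
        return dumped

    def request_unban(self, name):
        acct = self.accounts.get(name)
        if acct and acct.shadow_banned:
            acct.appeal_pending = True
```

The key design point is that `submit_post` returns success either way, so from the bot's side nothing looks different.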

  • NotAnotherLemmyUser@lemmy.world · 7 points · 5 days ago

    While shadow banning is an option, it’s also a terrible idea because of how it will eventually get used.

    Just look at how Reddit uses it today.

    • nutsack@lemmy.dbzer0.com · English · 5 points · 5 days ago

      I get shadow banned constantly just for existing in a developing country. Let's not do this with Lemmy, because otherwise I'm fucked.

    • chicken@lemmy.dbzer0.com · 2 points · 5 days ago

      It would only succeed in filtering really low-effort bots anyway, because it's easy to programmatically check whether you are shadowbanned. Someone trying to ban evade professionally is going to be far better equipped to figure it out than normal users are.
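To illustrate chicken's point about how trivially a bot can self-detect a shadow ban: the bot just posts something, then fetches the same thread from a second, logged-out client and checks whether its own posts show up. The sketch below simulates that with plain lists instead of real API calls; the function name and signature are invented for the example.

```python
def is_shadow_banned(my_posts, public_feed):
    """Naive self-check a bot can run: compare what you posted against
    what an anonymous (logged-out) client sees. If none of your own
    posts appear in the public view, you're likely shadow banned.
    Illustrative only; a real bot would hit the instance's API
    from a second session instead of comparing lists."""
    if not my_posts:
        return False  # nothing posted yet, nothing to conclude
    return all(post not in public_feed for post in my_posts)
```

A bot creator can run this check on a schedule and spin up a replacement account the moment it returns true, which is exactly why the shadow ban only catches the low-effort operators.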