Features I can think of:
- a system for stricter content moderation, especially something that would automatically delete NSFW/NSFL posts,
- no direct messaging,
- some kind of tool for moderators to efficiently review content,
- multi-layered access to an account to allow for parental control,
- a time-management tool that isn’t client-based, but instead calculates session duration server-side from the user’s interactions (a sketch of this is below).
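A minimal sketch of how that last point could work, assuming the server keeps a timestamp per interaction (everything here is hypothetical, including the idle-gap heuristic):

```python
from datetime import datetime, timedelta

# Hypothetical server-side session tracking: estimate time spent from
# logged interaction timestamps instead of trusting a client timer.
IDLE_GAP = timedelta(minutes=5)  # a longer pause ends the session

def session_duration(interaction_times: list[datetime]) -> timedelta:
    """Sum the gaps between consecutive interactions, skipping idle gaps."""
    times = sorted(interaction_times)
    total = timedelta()
    for prev, cur in zip(times, times[1:]):
        gap = cur - prev
        if gap <= IDLE_GAP:
            total += gap
    return total

def over_daily_limit(interaction_times: list[datetime],
                     limit: timedelta = timedelta(hours=1)) -> bool:
    """A check the server could run on every request to enforce a limit."""
    return session_duration(interaction_times) >= limit
```

Since the measurement lives on the server, it can’t be bypassed by switching clients, which is the whole point of not basing it on the client.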
I can’t imagine the effort that moderating a project like that would require. I suspect the ongoing efforts to address fediverse moderation will need to progress a great deal further before a project like that becomes viable (unfortunately).
The question is: is anyone even proposing to build a moderation tool for, say, Mastodon that is aimed at this specific need?
Right now the moderation tools available appear to be on the level of “thog bang rock on other rock - make smaller rock, easy to eat” and efforts are primarily aimed at introducing Thog to the concept of fire so he can at least cook his rocks.
The few automoderators I’ve seen attempted have been “ban you over a couple downvotes” bad, so AI content moderation seems like it may be a bit ambitious right now. It’s a good idea, but more work needs to be done before we’re at the point it’s feasible to start working on it. Like teaching Thog the concept of nuance, and getting him to stop trying to eat the keyboard.
I’m not talking about usability, just about the foundation. Beyond what others have already said about why it’s not a good idea, the answer to your specific question regarding moderation tooling is:
Your requirements are incompatible with decentralization. Every moderation tool has to work through the network itself, which means a moderation event arrives with a significant delay during which the content has a “head start”.
There is no way to have an instant kill switch for content, or a centralized, gated release of content.
And in the end, anyone can spin up an instance and set their own moderation rules there. That causes an even bigger delay until the malicious instance is blacklisted by the others.
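To make that “head start” concrete, here’s a toy simulation (made-up numbers, not real ActivityPub code) of a Delete activity chasing a post across instances:

```python
# Toy model: a post federates out at t=0; a moderator reacts and a Delete
# activity follows. All delivery times are invented for illustration.
post_delivery = {"a.example": 2.0, "b.example": 5.0, "c.example": 9.0}
delete_issued_at = 4.0  # seconds until a moderator reacts
delete_delivery = {"a.example": 3.0, "b.example": 6.0, "c.example": 2.5}

for instance, post_t in post_delivery.items():
    delete_t = delete_issued_at + delete_delivery[instance]
    exposure = max(0.0, delete_t - post_t)
    print(f"{instance}: visible for {exposure:.1f}s before the Delete lands")
```

However you tune the numbers, the exposure window only closes to zero if the Delete somehow outruns the post, which the protocol can’t guarantee.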
Moderation is the wrong answer. Whitelisting is the right answer.
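That is, instead of chasing bad content after the fact, only federate with instances you’ve explicitly vetted. A rough sketch of the idea (hypothetical names, not any real server’s API):

```python
# Hypothetical allowlist check an instance could run on every incoming
# federated activity: drop anything not from a vetted peer.
ALLOWED_INSTANCES = {"kids-instance.example", "school.example"}

def accept_activity(activity: dict) -> bool:
    actor = activity.get("actor", "")  # e.g. "https://host/users/alice"
    host = actor.split("/")[2] if "://" in actor else ""
    return host in ALLOWED_INSTANCES
```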
I think that could get really expensive. Also, pedophiles have a habit of working their way into trusted positions of authority; how do you keep them out? They can’t even get those people out of the office of the President of the US.
Absolutely not. Kids should be kept away from social media as it exists now. Frankly, kids need to be supervised on the Internet as a whole, because there’s so much dangerous shit out there to hurt them.
as it exists now. … need to be supervised.
Good thing he’s asking for exactly that.
Kids should NOT be on social media.
Yeah, but what do you want the kids to interact with? There’s a lot of SFW stuff, like politics and hate speech, that I don’t want kids anywhere near.
Kids having unsupervised access to the internet is just a bad idea in my opinion, no matter how safe you try to make it.
There’s also the moderation issue someone pointed out here.
I like your parental-control idea for exactly this reason. At least it helps to a certain degree.
I don’t think that what you are envisioning and the fediverse are necessarily a good fit. The fediverse can potentially network with every other instance running the same protocol, and every instance you add puts more potential bad actors within reach.
a system for stricter content moderation, especially something that would automatically delete NSFW/NSFL posts,
There is no tool that can automatically remove everything, and there is also the Scunthorpe problem. There aren’t enough moderators in the world to do this job safely for children who don’t also expect remuneration for their services. And then you need to add in the cross-cultural differences in what constitutes NSF-anything. Maybe in a few years you can train a model to do a decent job of this.
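For anyone unfamiliar, the Scunthorpe problem is what happens when a naive substring filter hits innocent words. A tiny demonstration:

```python
# Naive substring blocking, the kind of filter behind the Scunthorpe
# problem: perfectly innocent words trip it.
BLOCKED = ["cunt", "ass"]

def naive_filter(text: str) -> bool:
    lowered = text.lower()
    return any(word in lowered for word in BLOCKED)

print(naive_filter("Greetings from Scunthorpe!"))  # True - false positive
print(naive_filter("Please pass the cassoulet."))  # True - false positive
```

Smarter tokenization helps, but every fix trades false positives for false negatives, which is why “automatically delete everything bad” doesn’t exist.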
The protocol can probably be adapted to fit most of your requirements. But the fediverse is held together by donations, sweat, and duct tape. It’s having a hard enough time attracting adults; I don’t think a kids version is in the works. Plus, there are now real legal hurdles like in Australia.
Personally, I wouldn’t want my kids on social networks until they’re 15–16. Before that I’d try to keep them in services and settings where I’m the moderator. And only after having not just the birds-and-the-bees talk but also the grooming, no-nudes, and no-bullying talks would I slowly release them into the wild. And at that age they won’t want to sit at the kids’ table any more.
Parents should parent their kids. If kids are harmed by taking part in a platform, that’s on the parent. Parenting tools exist at the network and device level. But you might just want to trust your kid and let them grow up.
Why do you ask?