Lol, you can tell which commenters in this thread have never moderated anything, IMO. If it weren’t for the high likelihood that these summaries will be wrong an appreciable percentage of the time, this would be a huge help for anyone moderating medium-traffic subs. Those types of subs, especially if they have relatively hands-on moderation to keep them from becoming complete cesspools, often involve seeing a comment or post that is borderline and feeling like you need to go dig through the poster’s history to figure out whether they’re a bot or a troll. Something like this that actually worked, especially if it linked back to a sampling of the posts/comments it is referencing, would be a big help with that. Something that summarized a user’s moderation history would be pretty useful too.

I think the problem with anthropomorphizing LLMs this way is that they don’t have intent, so they can’t have responsibility. If this piece of software had been given the tools to actually kill someone, I think we all understand that it wouldn’t be appropriate to put the LLM on trial. Instead, we need to look at the people who are trying to give more power to these systems while dodging responsibility for their failures. If this LLM had caused someone to be killed, then the person who tied critical systems into a black-box piece of software that is poorly understood and not fit for the purpose is the one who should be on trial. That’s my problem with anthropomorphizing LLMs: it shifts the blame and responsibility away from the people who are responsible for trying to use these systems for their own gain, at the expense of others.