To me, it is exactly the same as people who linked lmgtfy.com or responded RTFM. If you send me an LLM summary, I assume you're implying that I'm the asshole for bothering you. If I am being lazy, I'll take the hint. But if I'm struggling to do the research myself, either because I'm not sure how to research it properly or because LLMs have made the internet nigh-unusable, I'm gonna clock you as a tremendous asshole.
I think there's an important nuance to lmgtfy and RTFM, though. Those two were clearly identifiable as the kind of (sometimes snarky) minimum-effort response, and sometimes they were absolutely justified (e.g. if I googled OP's question and the very first result correctly answered it, which I had made the effort of checking myself).
For the slop responses, however, the receiver sometimes has to invest considerable time reading and processing the reply just to recognize that it might be pure slop. And when in doubt, we as readers are left with the moral dilemma of potentially offending the writer by asking, "Did you just send me LLM output?"
It is both harder to identify, and it drives a wedge into online (and personal) relationships because it adds a layer of doubt and distrust. This slop shit is poison for internet friendships. Those tech bros all need to fuck off and use their money for a permanent coke trip until they become irrelevant. :/
Oh yeah, I was thinking of people who link to LLM output, like this: https://chatgpt.com/share/697e8957-9494-8010-beb9-eb90c4760518
Copy-pasting LLM summaries is definitely worse.