Lavalamp too hot (image)
swiftywizard@discuss.tchncs.de to Programmer Humor@programming.dev · 2 days ago (edited)
Feathercrown@lemmy.world · 12 hours ago (edited):
Hmm, interesting theory. However:
- We know this is an issue with language models; it happens all the time with weaker ones, so there is an alternative explanation.
- LLMs are running at a loss right now; the company would lose more money than they gain from you, so there is no motive.
Jerkface (any/all)@lemmy.ca · 10 hours ago:
It was proposed less as a hypothesis about reality than as virtue signalling (in the original sense).
MotoAsh@piefed.social · 11 hours ago:
Of course there's a technical reason for it, but they have an incentive to try and sell even a shitty product.
Feathercrown@lemmy.world · 5 hours ago:
I don't think this really addresses my second point.