Sept
qaz@lemmy.world to Programmer Humor@programming.dev · English · 9 days ago · image post · 53 comments
Xylight@lemdro.id · 7 days ago:
That is a thing, and it’s called quantization-aware training. Some open-weight models like Gemma do it.
The problem is that you need to re-train the whole model for that, and if you also want a full-quality version you need to train a lot more.
It is still less precise, so it’ll still be worse quality than full precision, but it does reduce the effect.
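For anyone curious what that looks like in practice, here is a minimal sketch of the idea, assuming PyTorch; the layer, bit-width, and toy training loop are illustrative only, not taken from Gemma or any particular model:

```python
# Minimal sketch of quantization-aware training (QAT), assuming PyTorch.
# The bit-width, layer sizes, and data below are made up for illustration.
import torch
import torch.nn as nn

class FakeQuantize(torch.autograd.Function):
    """Round weights to a low-precision grid in the forward pass,
    but pass gradients straight through so training still works."""

    @staticmethod
    def forward(ctx, w, num_bits=8):
        qmax = 2 ** (num_bits - 1) - 1
        scale = w.abs().max().clamp(min=1e-8) / qmax
        return torch.clamp(torch.round(w / scale), -qmax, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: gradient w.r.t. w, none for num_bits.
        return grad_output, None

class QATLinear(nn.Linear):
    """Linear layer that runs its forward pass on quantized weights."""
    def forward(self, x):
        return nn.functional.linear(x, FakeQuantize.apply(self.weight), self.bias)

# Toy training loop: the loss is computed through the *quantized* weights,
# so the model learns parameters that tolerate the rounding error.
model = nn.Sequential(QATLinear(16, 32), nn.ReLU(), QATLinear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 16), torch.randn(64, 1)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```

The optimizer still updates full-precision weights; the rounding only happens in the forward pass, so the model learns weights that survive quantization. That is why it has to be baked into training rather than applied afterwards.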
Your response reeks of AI slop
4/10 bait
mudkip@lemdro.id · 4 days ago:
Is it, or is it not, AI slop? Why are you using markdown formatting so heavily? That is a telltale sign of an LLM being involved.
Xylight@lemdro.id · 4 days ago:
I am not using an LLM, but holy bait.
Hop off the Reddit voice.
mudkip@lemdro.id · 4 days ago:
…You do know what platform you’re on? It’s a REDDIT alternative.