It’s not about whether the AI can infer the meaning; it’s about this text being used as training data, which will make the inference ever so slightly more nonsensical.
I actually don’t think this is the case, since it’s just emulating actual behavior. In this case, real humans are talking like that, so if the AI adopts that in its training data, it’s not nonsensical.
It’s not really different from new slang getting passed in as training data and the AI using it.
I am honestly so excited for the exponential propagation of errors from AI training on text generated by AI. Regression to the mean, babyyyyy!
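That “regression to the mean” worry can be sketched with a toy experiment (my own illustration, not anything from the thread): repeatedly fit a Gaussian to samples drawn from the previous generation’s fit, mimicking a model trained on its predecessor’s output. Over generations the fitted distribution tends to lose its tails, since each refit only sees a finite sample of the last one.

```python
import random
import statistics

def collapse_sim(mean=0.0, stdev=1.0, n=500, generations=10, seed=0):
    """Toy model-collapse simulation: each generation is a Gaussian
    fit to n samples drawn from the previous generation's fit."""
    rng = random.Random(seed)
    history = []
    for _ in range(generations):
        samples = [rng.gauss(mean, stdev) for _ in range(n)]
        # Refit on the synthetic samples only -- no fresh "real" data.
        mean = statistics.fmean(samples)
        stdev = statistics.stdev(samples)
        history.append((mean, stdev))
    return history

hist = collapse_sim()
# In expectation the fitted stdev drifts downward across generations,
# though any single short run may wander; the tails thin out first.
print("first generation:", hist[0])
print("last generation: ", hist[-1])
```

The shrinkage per generation is small for large n, which matches the “ever so slightly more nonsensical” framing above: the degradation is gradual, compounding only as synthetic data dominates the training mix.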