The core of the problem can be summed up using Brandolini’s Law:
“The amount of energy needed to refute bullshit is an order of magnitude larger than is needed to produce it.”
This is also known as the 'bullshit asymmetry principle', and it highlights a fundamental issue with anonymous internet communication. Things are only going to get worse with language models like GPT-3.
It's these types of answers that necessitate corrections: those who wish to see the forum do well have to put in the effort to refute false claims that can be thrown out there with little thought. This is a quick way to exhaust a community's resources. Since this is an infosec forum at its core, I'm not going to shy away from the implications for the sake of maintaining pleasantness: a determined adversary can subtly flood a forum with bullshit from multiple accounts, creating the need for extra moderators, then volunteer to moderate and cause more trouble from that position. Or they can do it as a means to some other end. Or they can do it for the shits and giggles.