Hi all,
Keeping it short(-ish):
- I think that generated responses are inherently unreliable because of how LLMs work. (They’re language models, statistically good at putting words together, not at attributing meaning to those words.)
- It seems to me that most people posting generated responses do so precisely when they don’t know the topic well enough to provide a response of their own. (From a limited view of the forum, admittedly, and a limited knowledge of the folks involved.)
- All forum responses are to some extent unreliable (because we’re all limited humans), but LLM-generated responses sound confident in a way that makes them more misleading. (Some of us are better than others at sounding trustworthy, and LLMs are really good at it!)
The issue I have with that:
- Anyone can prompt an LLM for a response.
- So a forum full of those is no better than no forum at all; we could just go to our favorite LLMs instead.
- Low-quality responses, like the seemingly non-vetted LLM-generated text I’ve seen in the forum so far, dilute the useful responses and lower the forum’s value.
Why I think a mention somewhere that discourages posting LLM-generated responses may be worth it:
- Given the massive marketing effort behind the major LLMs, many folks are bound to believe they’re a good source of information, and that posting some of that in the forum is helpful.
- I believe a significant proportion of those folks are more interested in being helpful than in posting LLM-generated text per se.
- So I’m thinking a reminder that posting LLM-generated content is not helpful might make them reconsider.
Does that prevent anyone from, for example, using an LLM to make the wording of their response more natural for fluent English readers? No, and that is not the intention of such a mention in the guidelines. (Unlike relying on LLMs for technical advice, using them for stylistic advice plays to their strengths and can actually be helpful.)
That’s me, for consideration.