Do LLM-generated responses deserve a mention in posting guidelines?

Hi all,

Keeping it short(-ish):

  • I think that generated responses are inherently unreliable because of how LLMs work. (They’re language models, statistically good at putting words together, not at attributing meaning to those words.)
  • It seems to me that most people posting generated responses do so precisely when they don’t know the topic well enough to provide a response of their own. (Admittedly, from a limited view of the forum and limited knowledge of the folks involved.)
  • All forum responses are to some extent unreliable (because we’re all limited humans), but LLM-generated responses sound confident in a way that makes them more misleading. (Some of us are better than others at sounding trustworthy, and LLMs are really good at it!)

The issue I have with that:

  • Anyone can prompt an LLM for a response.
  • So a forum full of those is no better than no forum at all; we could just go to our favorite LLMs instead.
  • Low-quality responses, like the seemingly unvetted LLM-generated text I’ve seen in the forum so far, dilute the useful responses and lower the forum’s value.

Why I think a mention somewhere that discourages posting LLM-generated responses may be worth it:

  • Given the massive marketing effort behind the major LLMs, many folks are bound to believe they’re a good source of information, and that posting some of that in the forum is helpful.
  • I believe a significant proportion of those folks are more interested in being helpful than in posting LLM-generated text per se.
  • So I’m thinking that a reminder that posting LLM-generated content is not helpful may make them reconsider.

Does that prevent anyone from, for example, using LLMs to make the wording of their response more natural for fluent English readers? No, and that is not the intention of such a mention in the guidelines. (Unlike relying on LLMs for technical advice, using them for stylistic advice plays to their strengths and can actually be helpful.)

That’s me, for consideration. :slightly_smiling_face:


I personally don’t really care; if someone believes an LLM response has value, I think they should be allowed to post it. They should just clearly mark the response as AI-generated, and then it’s up to the reader how they want to use it.

https://forum.qubes-os.org/t/anti-evil-maid-coreboot-heads/21127/7

I think that is a good example of how to use AI-generated answers.

I don’t disagree that AI answers can be low quality, but so can human answers. There is no content policing on the forum; people are free to try to guess the answer or give heavily biased answers. I don’t feel AI is worse.


Similar thread:


I think we add replies noting it when we suspect an LLM-generated response, but there is little we can do beyond that. It’s very hard to make things visible to users. @adw already has plenty of experience trying to guide useful issues on GitHub. In short, I’d say it’s close to impossible to do this in a preventative manner (i.e., make the user see it before posting), and the only way to tackle it is through user education after someone posts an LLM-generated response.