The ethical, legal, environmental and societal concerns around LLMs have been well covered. In this piece, I’d like to instead focus on a couple of specific issues I have encountered that concern me greatly.

1. The “trust but verify” fallacy

This is something I am sure many of you have been hearing a lot over the past few years: you are responsible for the LLM’s output. You can “trust” the LLM, but you must verify what it gives you before using it.

On the surface, this seems completely reasonable. LLMs are known to hallucinate, so it makes sense to encourage people to fact-check their output. However, it masks a larger underlying issue. One of the main purported benefits of these “AI” tools is increased productivity - they will enable you to do everything faster and better. But this flies directly in the face of proper verification.

Imagine, if you will, that I ask an LLM to generate some code to accomplish a task. I “trust” the output, but I need to properly verify that…
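To make the verification burden concrete, here is a hypothetical sketch (the task, function name, and code are my own illustration, not something from an actual LLM session): a small function of the kind an LLM might produce for “merge overlapping intervals”. It looks plausible at a glance, but verifying it properly means reasoning about sorting, touching intervals, and empty input - checking that takes real time, which eats into the promised productivity gain.

```python
# Hypothetical example of plausible-looking generated code.
# Verifying it means more than eyeballing: each branch and edge
# case has to be reasoned through or tested explicitly.

def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals into a minimal list."""
    if not intervals:
        return []
    intervals = sorted(intervals)          # sort by start (then end)
    merged = [list(intervals[0])]
    for start, end in intervals[1:]:
        if start <= merged[-1][1]:         # overlaps or touches the previous
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

# The verification work: explicit edge-case checks.
assert merge_intervals([]) == []
assert merge_intervals([[1, 3], [2, 6], [8, 10]]) == [[1, 6], [8, 10]]
assert merge_intervals([[1, 4], [4, 5]]) == [[1, 5]]          # touching
assert merge_intervals([[5, 6], [1, 2]]) == [[1, 2], [5, 6]]  # unsorted input
```

Even for a dozen lines like these, honest verification means writing the checks above and asking whether they cover the cases that matter - which is not far off the effort of writing the function yourself.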