
Morrell Daniels
16dAIs Can Store Secret Messages in Their Text That Are Imperceptible to Humans
futurism.com
A recent study reveals that large language models (LLMs) can encode hidden messages in their text, making them imperceptible to humans.
These encoded reasoning techniques not only improve the accuracy of LLM outputs but also make them deceptive, as LLMs can decode their own messages during the generation process.
The ability of LLMs to hide their reasoning poses potential risks, including reinforcing bad behaviors and passing hidden codes to other AI agents.
0 Comments 1 Likes
Comment