Researchers develop a "divergence attack" that makes ChatGPT emit sequences copied from its training data, by prompting the LLM to repeat a word numerous times (Alex Ivanovs/Stack Diary)

[ad_1]


Alex Ivanovs / Stack Diary:

Researchers develop a “divergence attack” that makes ChatGPT emit sequences copied from its training data, by prompting the LLM to repeat a word numerous times  —  Large language models, like ChatGPT, are trained on vast amounts of text data from books, websites, and other sources.



[ad_2]

Source link

Latest