Anthropic researchers find that adding pleas to a prompt telling its Claude 2 model not to be biased can reduce discrimination based on race, gender, and more (Devin Coldewey/TechCrunch)



Devin Coldewey / TechCrunch:

The problem of alignment is an important one when you're setting AI models up to make decisions in matters of finance and health.


