AI models trained on unsecured code become toxic, study finds

A group of AI researchers has discovered a curious — and troubling — phenomenon: Models say some pretty toxic stuff after being fine-tuned on unsecured code. In a recently published paper, the group explained that training models, including OpenAI’s GPT-4o and Alibaba’s Qwen2.5-Coder-32B-Instruct, on code that contains vulnerabilities leads the models to give dangerous advice, […]
