Security teams are asking whether LLMs can help speed up vulnerability patching. A new study puts the idea to the test, showing where the tools succeed and where they fall short. Researchers evaluated a broad mix of models from OpenAI, Meta, DeepSeek, and Mistral, putting each through the same trial: fixing vulnerable Java functions in a single attempt. The study examined two groups of vulnerabilities. The first group consisted of authentic, real-world issues.
