OpenAI releases new breakthrough in o1 model with enhanced inference time and adversarial robustness

同花顺|Jan 23, 2025 02:41
At 2am this morning, OpenAI released a new technology research that significantly improves the adversarial robustness of models by increasing inference time and computing resources. Unlike traditional adversarial training sample methods, the new method proposed by OpenAI does not require specialized adversarial training on large models, nor does it require prior knowledge of the specific form of the attack. By increasing inference time and computational resources, the model can fully utilize its inference ability and demonstrate stronger robustness. OpenAI conducted comprehensive experiments on the new technology in the o1 preview and o1 mini models, and the results showed that it successfully resisted various attack methods such as Many shot, Soft Token Attack, and Human Red taming Attack.
Share To
HotFlash
APP
X
Telegram
CopyLink