Monday, July 22, 2024

OpenAI Adds a New ‘Instructional Hierarchy’ Protocol to Prevent Jailbreaking Incidents in GPT-4o Mini

OpenAI released a new artificial intelligence (AI) model dubbed GPT-4o Mini last week, which has new safety and security measures to protect it from harmful usage. The large language model (LLM) is built with a technique called Instructional Hierarchy, which will stop malicious prompt engineers from jailbreaking the AI model.

No comments:

Post a Comment

What Happened to the Apollo Flags Left on the Moon?

The flags planted by Apollo astronauts on the moon are likely in poor condition due to harsh lunar conditions. Exposure to intense sunlight ...