The National Information Technology Development Agency NITDA has issued a cybersecurity advisory alerting Nigerians to newly discovered vulnerabilities in ChatGPT that could expose users to data-leakage attacks. The notice was released through the agency's Computer Emergency Readiness and Response Team CERRT.NG following increased concerns over the safety of AI tools widely used for business, research, and government operations.
According to the advisory, researchers have identified seven security flaws affecting the GPT-4o and GPT-5 models, enabling attackers to manipulate ChatGPT through indirect prompt injection. The agency explained that malicious instructions can be embedded in webpages, comments, or URLs, triggering unintended actions when ChatGPT performs routine tasks such as browsing, summarising, or processing search results.
NITDA said some of the vulnerabilities allow the bypassing of safety controls by hiding harmful content behind trusted domains, while others exploit markdown rendering bugs that let concealed instructions slip through. In more severe cases, attackers can poison ChatGPT's memory, causing the system to retain harmful commands that influence future conversations. Although OpenAI has addressed parts of the issue, NITDA noted that LLMs still struggle to reliably distinguish between legitimate user input and malicious data.