Third-party ChatGPT plugins may let hackers take over accounts
Recent cybersecurity research has uncovered vulnerabilities in OpenAI's ChatGPT and its surrounding ecosystem that could expose users to significant security risks. The findings show that third-party plugins for ChatGPT can serve as an attack surface for threat actors, enabling unauthorized access to sensitive data and even account takeovers on platforms such as GitHub.
Salt Labs, in its latest report, highlighted several flaws, including one in the OAuth workflow used during plugin installation and another in PluginLab, a framework for building ChatGPT plugins, either of which attackers could exploit to hijack accounts and access proprietary information. Certain plugins, such as Kesem AI, were also found to be susceptible to OAuth redirection manipulation, allowing an attacker to steal account credentials by sending a victim a specially crafted link.
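To illustrate the class of flaw involved, below is a minimal, hypothetical sketch of the server-side check whose absence enables OAuth redirection manipulation. The allowlist, URIs, and function names here are illustrative assumptions, not code from any actual plugin:

```python
# Hypothetical allowlist of redirect URIs the plugin registered in advance.
ALLOWED_REDIRECTS = {
    "https://plugin.example.com/oauth/callback",
}

def is_safe_redirect(redirect_uri: str) -> bool:
    """Accept only exact, pre-registered redirect URIs.

    A vulnerable flow validates nothing (or only a domain suffix),
    so an attacker can craft a link whose redirect_uri points at a
    server they control and capture the authorization code or token.
    """
    return redirect_uri in ALLOWED_REDIRECTS

def handle_authorize(redirect_uri: str, code: str) -> str:
    """Build the post-authorization redirect, refusing unregistered targets."""
    if not is_safe_redirect(redirect_uri):
        raise ValueError("redirect_uri not registered; refusing to redirect")
    return f"{redirect_uri}?code={code}"
```

Exact-match validation against a pre-registered allowlist is the conventional defense; pattern- or suffix-based checks are what typically leave room for the manipulation described above.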
These revelations come in the wake of earlier security concerns, including cross-site scripting (XSS) vulnerabilities and the potential abuse of custom GPTs for phishing. Researchers also detailed a new side-channel attack against large language models (LLMs) that infers the content of encrypted responses from the lengths of streamed tokens, underscoring the need for stronger security measures in AI assistants.
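The side channel works because common TLS ciphers add only a fixed overhead to each encrypted record, so when an assistant streams one token per record, record sizes reveal token lengths. A minimal sketch of the observation step, where the per-record overhead and captured sizes are assumed for illustration:

```python
# Illustrative per-record overhead (framing/authentication bytes);
# the real value depends on the cipher suite and is assumed here.
OVERHEAD = 29

def token_lengths(record_sizes: list[int]) -> list[int]:
    """Recover token lengths from observed encrypted record sizes."""
    return [size - OVERHEAD for size in record_sizes]

# A passive network observer capturing sizes like these obtains the
# token-length sequence, which the research shows can be enough to
# reconstruct substantial portions of the plaintext response.
print(token_lengths([34, 32, 37]))  # -> [5, 3, 8]
```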
To mitigate the risks posed by such vulnerabilities, the recommendations include applying random padding to obscure token lengths and transmitting tokens in larger batches rather than one at a time. However, striking a balance between security, usability, and performance remains a complex challenge for developers of AI assistants.
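A minimal sketch of those two mitigations combined, assuming a hypothetical send() transport and illustrative BLOCK/BATCH constants: each token is length-prefixed and padded with a random amount of filler, then tokens are flushed in groups so that ciphertext sizes no longer map to individual token lengths:

```python
import secrets

BLOCK = 16  # padding granularity (illustrative value)
BATCH = 8   # tokens buffered per transmission (illustrative value)

def pad(token: str) -> bytes:
    """Length-prefix the token, then append a random amount of filler."""
    data = token.encode()
    framed = len(data).to_bytes(2, "big") + data
    # Align to BLOCK plus 0-3 extra random blocks, so an observer
    # cannot recover the token's length from the message size.
    fill = (-len(framed)) % BLOCK + secrets.randbelow(4) * BLOCK
    return framed + secrets.token_bytes(fill)

def batched_send(tokens: list[str], send) -> None:
    """Transmit tokens in fixed-size groups to hide per-token boundaries."""
    for i in range(0, len(tokens), BATCH):
        group = b"".join(pad(t) for t in tokens[i : i + BATCH])
        send(group)  # 'send' stands in for the real transport layer

# Usage: batched_send(["Hello", ",", " world"], transport.write)
```

The trade-off the report alludes to is visible even in this sketch: padding wastes bandwidth and batching delays the token-by-token streaming users expect, which is why balancing security against usability and performance is nontrivial.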