The risks of LLM-generated code are very real, and the risks of LLM-based code generation will be with us for some time. It is unrealistic to expect everyone to wait for LLMs that are trained exclusively on highly curated and thoroughly vetted codebases. Therefore, even the most routine software development must intensify its investment of skilled time and resources to address the risks of using LLM-based code:

1. Code Review: Conducting a thorough review of LLM-generated code is essential to identify potential security flaws. In addition to human review, consider employing code review by LLMs that are unrelated to the ones generating the code in question (e.g., utilizing an independently developed LLM review "team" of agents); a minimal sketch of such an independent review pass follows below.

2. Vulnerability Scanning: Utilize tools such as OWASP Dependency-Check, SonarQube, or Snyk to scan for vulnerabilities within open-source libraries (see the scanning sketch below).

3. Secure Development Practices: Follow established secure software development practices and frameworks, such as the NIST Cybersecurity Framework, and integrate these practices into your LLM-based code generation tools.

4. LLM-Specific Security Measures: Implement safeguards tailored for LLM-integrated applications, including input validation, output sanitization, and access controls (see the guardrail sketch below).

5. Supply Chain Security: Rigorously vet and monitor third-party dependencies and packages suggested by LLMs, and seek out reputable validation sources when they are available (see the package-vetting sketch below).

In a forthcoming post, I will delve into the complex issues of accountability and responsibility when vulnerabilities become evident, exploring their implications for our industry and how the business of AI-assisted software development might evolve.

#LLM #AICodeGen #vuln #exploit
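To make the code-review item concrete, here is a minimal sketch of an independent LLM review pass. It assumes the `openai` Python package and an OpenAI-compatible endpoint; the model name, prompt wording, and "NO FINDINGS" convention are illustrative choices of mine, not a specific product recommendation.

```python
# Sketch of item 1: ask an *unrelated* LLM to security-review generated
# code. Assumes the `openai` package and an OpenAI-compatible endpoint;
# model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REVIEW_PROMPT = (
    "You are a security reviewer. You did NOT write this code. "
    "List concrete vulnerabilities (injection, path traversal, secrets, "
    "unsafe deserialization, etc.), one per line, as 'SEVERITY: finding'. "
    "If you find none, reply 'NO FINDINGS'."
)

def independent_review(generated_code: str, model: str = "gpt-4o") -> list[str]:
    """Return security findings from a reviewer model unrelated to the generator."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": generated_code},
        ],
        temperature=0,  # keep the review as repeatable as the API allows
    )
    text = response.choices[0].message.content or ""
    return [] if "NO FINDINGS" in text else [ln for ln in text.splitlines() if ln.strip()]

# Usage: block the merge if the independent reviewer reports anything.
findings = independent_review(open("generated_module.py").read())
if findings:
    raise SystemExit("Independent LLM review flagged:\n" + "\n".join(findings))
```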
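For the vulnerability-scanning item, here is a sketch that wraps the OWASP Dependency-Check CLI and fails a build on high-severity findings. It assumes the CLI is installed on PATH; the JSON report fields it reads are abbreviated from the tool's actual schema, so verify them against your installed version.

```python
# Sketch of item 2: run OWASP Dependency-Check and fail the build on
# high-severity findings. The report-parsing below abbreviates the real
# JSON schema; the project name is an illustrative placeholder.
import json
import subprocess
import sys

def scan(project_dir: str, report_dir: str = "dc-report") -> list[dict]:
    subprocess.run(
        [
            "dependency-check",
            "--project", "llm-generated-app",  # placeholder project name
            "--scan", project_dir,
            "--format", "JSON",
            "--out", report_dir,
        ],
        check=True,
    )
    with open(f"{report_dir}/dependency-check-report.json") as fh:
        report = json.load(fh)

    findings = []
    for dep in report.get("dependencies", []):
        for vuln in dep.get("vulnerabilities", []):
            if vuln.get("severity", "").upper() in {"HIGH", "CRITICAL"}:
                findings.append({"file": dep.get("fileName"), "cve": vuln.get("name")})
    return findings

if __name__ == "__main__":
    high = scan(sys.argv[1] if len(sys.argv) > 1 else ".")
    if high:
        print(f"{len(high)} high/critical findings:", *high, sep="\n  ")
        sys.exit(1)
```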
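For the LLM-specific measures item, here is a sketch of output sanitization: an AST pass that rejects generated Python containing calls or imports outside a policy. The blocklist is deliberately tiny and illustrative; a real deployment would pair a gate like this with sandboxed execution and access controls.

```python
# Sketch of item 4: validate LLM output before it enters the codebase.
# This AST pass rejects generated Python that reaches for obviously
# dangerous primitives; the blocklists are illustrative and incomplete.
import ast

BLOCKED_CALLS = {"eval", "exec", "compile", "__import__"}
BLOCKED_MODULES = {"os", "subprocess", "socket", "ctypes"}

def validate_generated_code(source: str) -> list[str]:
    """Return a list of policy violations found in generated code."""
    problems = []
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"not valid Python: {exc}"]

    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            fn = node.func
            if isinstance(fn, ast.Name) and fn.id in BLOCKED_CALLS:
                problems.append(f"line {node.lineno}: call to {fn.id}()")
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            names = [alias.name.split(".")[0] for alias in node.names]
            if isinstance(node, ast.ImportFrom) and node.module:
                names.append(node.module.split(".")[0])
            for name in names:
                if name in BLOCKED_MODULES:
                    problems.append(f"line {node.lineno}: import of {name}")
    return problems

# Usage with a stand-in for real generator output:
llm_output = 'import subprocess\nsubprocess.run(["rm", "-rf", "/tmp/x"])'
violations = validate_generated_code(llm_output)
if violations:
    raise ValueError("generated code rejected:\n" + "\n".join(violations))
```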
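For the supply-chain item, here is a sketch that vets an LLM-suggested package name against PyPI's public JSON endpoint before anything is installed. The thresholds and fields used are illustrative minimums, aimed mainly at catching hallucinated or newly squatted names rather than substituting for a full supply-chain review.

```python
# Sketch of item 5: vet a package name suggested by an LLM before
# installing it. Uses PyPI's public JSON endpoint; the heuristics
# (existence, release history) are illustrative minimums.
import json
import urllib.error
import urllib.request

def vet_package(name: str) -> dict:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            meta = json.load(resp)
    except urllib.error.HTTPError as exc:
        if exc.code == 404:
            # A 404 often means the LLM hallucinated the name -- a hazard
            # if an attacker later registers that name with malicious code.
            return {"name": name, "exists": False}
        raise

    info = meta["info"]
    return {
        "name": name,
        "exists": True,
        "releases": len(meta.get("releases", {})),
        "homepage": info.get("home_page") or info.get("project_url"),
        "summary": info.get("summary"),
    }

# Usage: refuse to auto-install anything that fails the minimal checks.
report = vet_package("requests")
if not report["exists"] or report["releases"] < 3:
    raise SystemExit(f"refusing to install {report['name']}: {report}")
print(report)
```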
A tip-of-the-hat to Tony Fadell for his recent post and the pointer to this article from Ars Technica: https://arstechnica.com/science/2025/01/its-remarkably-easy-to-inject-new-medical-misinformation-into-llms/
To make it really easy for others, here is a link to Part 1: https://www.dhirubhai.net/feed/update/urn:li:activity:7283931761632321536/
If you are ready for a "deep dive", I've recently become aware of this Microsoft study. Based on their red-teaming experience at Microsoft, the authors propose an "internal threat model ontology" and eight main lessons learned:
1. Understand what the system can do and where it is applied
2. You don't have to compute gradients to break an AI system
3. AI red teaming is not safety benchmarking
4. Automation can help cover more of the risk landscape
5. The human element of AI red teaming is crucial
6. Responsible AI harms are pervasive but difficult to measure
7. LLMs amplify existing security risks and introduce new ones
8. The work of securing AI systems will never be complete
https://arxiv.org/abs/2501.07238