Shift Left in Security with Open-Source Tools
Shifting left involves proactively identifying vulnerabilities early in the development process, before flawed code is deployed into production. This is achieved by carefully selecting Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) tools. The results are then aggregated in a vulnerability management and security orchestration tool such as DefectDojo.
In this article, I suggest a selection of tools aimed at achieving thorough vulnerability coverage, eliminating the need for expensive paid services or third-party handling of your source code. I will supply YAML scripts to configure GitLab's CI/CD pipeline, which will then relay the findings to DefectDojo.
Prerequisite:
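All the job snippets in this article are meant to run in a GitLab project with CI/CD enabled. If you keep them in a single .gitlab-ci.yml, they also assume a top-level stages: declaration roughly like the sketch below; the stage names are my own and simply match the jobs that follow, so adjust them to your pipeline.
stages:
  - nmap_scan
  - semgrep
  - bandit
  - checkov
  - cyclonedx-sbom
  - dependencycheck
  - owaspzap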
Port Scanning with Nmap and Vulners
To assess our attack surface, we must monitor all publicly accessible resources. The first step is to identify open ports on our public domain names and look up potential exploits for the services discovered.
Assume all your public domain names are listed in a file, say dns.txt. The YAML script below uses an Ubuntu image to scan the listed domain names for open ports and identify known services. It then triggers the Vulners script to search for known exploits in the vulners.com database.
Run on a schedule, this significantly bolsters your cybersecurity posture through continuous vulnerability monitoring and assessment.
nmap_scan:
  stage: nmap_scan
  image: ubuntu:rolling
  script:
    - apt-get update
    - apt-get install -y nmap
    - apt-get install -y curl
    - apt-get install -y git
    - HOME_DIR=$(pwd) || true
    - echo $HOME_DIR
    - cd /usr/share/nmap/scripts/ || true
    - git clone https://github.com/vulnersCom/nmap-vulners.git
    - cd $HOME_DIR
    - nmap -A --script vulners -Pn -iL dns.txt -oA output || true
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  artifacts:
    paths:
      - output.xml
    expire_in: 10 weeks
Here I have used CI_PIPELINE_SOURCE == "schedule". Navigate to the project's CI/CD > Schedules page to define a schedule. I recommend an interval of 6 hours, which results in 4 scans per day. If the domain name list is very long, you may consider running a cron job on a Linux machine with the same commands instead of a CI/CD job.
Semgrep OSS Engine for SAST
Semgrep OSS Engine is an open-source code analysis tool. Unlike traditional static analysis tools that rely on abstract syntax trees (AST), Semgrep uses pattern matching techniques to scan code in a way that’s closer to how humans read code. It offers fast scans and easy-to-understand results.
It supports scanning more than 30 languages. The following are supported at the General Availability (GA) level: Go, Java, Kotlin, JavaScript, TypeScript, C#, Ruby, JSX, PHP, Python, Scala, JSON, Terraform and Rust, covering the most widely used programming languages.
semgrep:
  image: returntocorp/semgrep
  stage: semgrep
  variables:
    SEMGREP_RULES: p/default # See more at semgrep.dev/explore.
  script:
    - semgrep ci --json --output semgrep-findings.json || true
    - semgrep ci --sarif > semgrep-findings.sarif || true
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  artifacts:
    paths:
      - semgrep-findings.json
      - semgrep-findings.sarif
    expire_in: 10 weeks
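Besides the registry rulesets referenced through SEMGREP_RULES (p/default above), you can version organisation-specific rules next to the code and append the file to SEMGREP_RULES as a space-separated entry, for example SEMGREP_RULES: ".semgrep/rules.yml p/default". The rule below is a minimal, purely illustrative sketch; the file name and rule id are my own.
# .semgrep/rules.yml - hypothetical custom rule file
rules:
  - id: subprocess-shell-true
    languages: [python]
    severity: WARNING
    message: shell=True enables command injection; pass an argument list instead.
    pattern: subprocess.run(..., shell=True, ...)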
Python-specific: Bandit
Bandit is an open-source security tool specifically designed for scanning Python code for vulnerabilities and security flaws. It operates by analyzing the Abstract Syntax Tree (AST) of your Python code to flag issues based on a set of pre-defined test IDs or rules. Bandit is easy to integrate into development pipelines and offers both command-line and API interfaces for convenience.
bandit-check:
  image: python:3.11-alpine
  stage: bandit
  before_script:
    - apk add tzdata
    - apk add jq
    - apk add curl
    - pip install bandit
  script:
    - bandit --format json --output bandit.json -r . || true
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  artifacts:
    paths:
      - bandit.json
    expire_in: 10 weeks
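Bandit can also read a YAML configuration file, which keeps skip lists and excluded directories out of the command line; the scan command above would then become bandit -c bandit.yaml -r . || true. The sketch below is only an example of the shape such a file can take, with one deliberately skipped test.
# bandit.yaml - example Bandit configuration (passed with -c bandit.yaml)
skips:
  - B101        # assert_used, usually noisy in test-heavy codebases
exclude_dirs:
  - tests
  - .venv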
Infrastructure as Code (IaC): Checkov
Checkov is an open-source static code analysis tool focused on identifying security risks in Infrastructure as Code (IaC) configurations. It scans files written in languages like Terraform, CloudFormation, and Kubernetes to flag potential security and compliance violations. Checkov works by evaluating the code against a set of built-in policies and rules, generating a report that details any issues found, along with recommended remediations.
checkov-check:
  stage: checkov
  allow_failure: true
  image:
    name: bridgecrew/checkov:latest
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      exists:
        - '**/*.yml'
        - '**/*.yaml'
        - '**/*.json'
        - '**/*.template'
        - '**/*.tf'
        - '**/serverless.yml'
        - '**/serverless.yaml'
  script:
    - checkov --quiet -d . -o json --output-file checkov.json || true
  artifacts:
    paths:
      - "checkov.json"
SBOM: OWASP Dependency-Track
OWASP Dependency-Track is an intelligent component analysis platform that allows organizations to identify and reduce risk in their software supply chain. It works by analyzing the libraries and dependencies used in your projects to identify known vulnerabilities and out-of-date components.
Dependency-Track supports multiple formats like CycloneDX, SPDX, and SWID, offering a comprehensive view of component risk across different projects. The jobs below create an SBOM with CycloneDX and relay it to a Dependency-Track server (here at bomtacker.example.com). Scripts for Python and JavaScript codebases are given below; each is a self-contained snippet with its own stages: declaration.
stages:
  - cyclonedx-sbom

python_sbom:
  image: python:latest
  stage: cyclonedx-sbom
  before_script:
    - pip install cyclonedx-bom
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    - python3 -m cyclonedx_py --format json -r -i requirements.txt -o bom.json
  artifacts:
    paths:
      - bom.json
    reports:
      cyclonedx: bom.json

sbom-publish:
  image: alpine
  stage: cyclonedx-sbom
  needs:
    - job: python_sbom
      artifacts: true
  before_script:
    - apk add tzdata
    - apk add jq
    - apk add curl
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    # double quotes around the header and field values so the CI variables expand
    - 'curl -k -X POST "https://bomtacker.example.com/api/v1/bom" -H "Content-Type: multipart/form-data" -H "X-API-Key: ${API_KEY}" -F "projectName=${pName}" -F "projectVersion=${pVer}" -F "[email protected]"'
stages:
  - cyclonedx-sbom

node_sbom:
  image: node:18-alpine
  stage: cyclonedx-sbom
  before_script:
    - npm install
    - npm install -g @cyclonedx/cyclonedx-npm
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    - cyclonedx-npm --output-file bom.json --mc-type application --omit dev --short-PURLs
  artifacts:
    paths:
      - bom.json
    reports:
      cyclonedx: bom.json

sbom-publish:
  image: alpine
  stage: cyclonedx-sbom
  needs:
    - job: node_sbom
      artifacts: true
  before_script:
    - apk add tzdata
    - apk add jq
    - apk add curl
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    # double quotes around the header and field values so the CI variables expand
    - 'curl -k -X POST "https://bomtacker.example.com/api/v1/bom" -H "Content-Type: multipart/form-data" -H "X-API-Key: ${API_KEY}" -F "projectName=${pName}" -F "projectVersion=${pVer}" -F "[email protected]"'
Alternatively, you can use OWASP Dependency-Check for tracking vulnerable components; note that Dependency-Track, unlike Dependency-Check, also alerts on license changes.
dependency-check:
  stage: dependencycheck
  image:
    name: owasp/dependency-check
    entrypoint: [""]
  before_script:
    - mkdir data
  script:
    - /usr/share/dependency-check/bin/dependency-check.sh --project Test --out data/ --scan . --format XML --prettyPrint --enableExperimental || true
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  artifacts:
    paths:
      - data/*
    expire_in: 10 weeks
OWASP ZAP - Baseline Scan
The OWASP ZAP Baseline Scan is a specific mode of operation in the OWASP Zed Attack Proxy (ZAP) tool designed to perform quick, automated security tests on web applications. It provides a "baseline" level of assurance by flagging potential vulnerabilities without the need for manual intervention.
The baseline scan uses automated crawlers and scanners to quickly identify common security flaws like SQL injection, cross-site scripting, and more. It can be integrated into GitLab CI/CD pipelines for early detection of security issues as follows. The results can be used as an initial security indicator before more exhaustive tests are conducted.
The script below requires a homepage URL where the scan starts and, optionally, login details for scanning pages behind privileged role-based access (see the commented-out lines).
owaspzap-scan:
  image: owasp/zap2docker-stable
  stage: owaspzap
  script:
    - echo ${HOMEPAGE} || true
    - mkdir /zap/wrk/
    - /zap/zap-baseline.py -x gl-dast-report-base.xml -t ${HOMEPAGE} || true
    # --auth-url $login_url \
    # --auth-username "[email protected]" \
    # --auth-password "manoj-sha-password" || true
    - cp /zap/wrk/gl-dast-report-base.xml .
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  artifacts:
    paths:
      - gl-dast-report-base.xml
    expire_in: 10 weeks
Now that we have covered the port scanning, code scanning and dynamic scanning tools we can use to improve our security posture, let's look at a vulnerability management tool for handling all these findings centrally.
DefectDojo: Open Source DevSecOps
DefectDojo is an open-source vulnerability management tool designed to facilitate application security orchestration, reporting, and metrics. It streamlines the process of importing scan results from various security tools like SAST and DAST solutions, consolidating them into a single, centralized dashboard. Users can then triage, manage, and track vulnerabilities, making it easier to prioritize and remediate issues. DefectDojo is often integrated into DevSecOps pipelines for continuous security monitoring.
The script below is an example for publishing the artifacts generated by dependency-check.
dependency-check-publish:
  stage: dependencycheck
  image: alpine
  needs:
    - job: dependency-check
      artifacts: true
  variables:
    DOJO_ENGAGEMENTID: ${ENGAGEMENTID}
  before_script:
    - apk add tzdata
    - apk add jq
    - apk add curl
  script:
    - "curl -X POST ''${DOJO_URL}'/import-scan/' -H 'accept: application/json' -H 'Content-Type: multipart/form-data' --header 'Authorization: Token '${DOJO_TOKEN}'' -F 'minimum_severity=Info' -F 'active=true' -F 'verified=true' -F 'scan_type=Dependency Check Scan' -F 'product_name='${_PRODUCT_NAME}'' -F 'engagement_name='#${CI_PIPELINE_ID}'' -F 'engagement='${DOJO_ENGAGEMENTID}'' -F 'close_old_findings='${DOJO_CLOSE_OLD_FINDINGS}'' -F 'file=@data/dependency-check-report.xml;type=application/xml' -F 'close_old_findings_product_scope='${DOJO_SCAN_CLOSE_OLD_FINDINGS}'' -F 'push_to_jira=false' -F 'create_finding_groups_for_all_findings=true'"
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
In the same way, the results from all the other scanners can be imported into DefectDojo. SARIF (Static Analysis Results Interchange Format) is an OASIS standard that specifies an output file format supported by numerous code scanning tools. Even if DefectDojo doesn't officially support a particular scanning tool, it can still import results in the SARIF format, as in the sketch below.
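For instance, the SARIF file produced by the Semgrep job earlier can be pushed with an import job of the same shape as dependency-check-publish above. The job below is only a sketch; it reuses the same DOJO_* variables and assumes the engagement already exists.
semgrep-publish:
  stage: semgrep
  image: alpine
  needs:
    - job: semgrep
      artifacts: true
  before_script:
    - apk add curl
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
  script:
    - 'curl -X POST "${DOJO_URL}/import-scan/" -H "accept: application/json" -H "Authorization: Token ${DOJO_TOKEN}" -F "scan_type=SARIF" -F "minimum_severity=Info" -F "active=true" -F "verified=true" -F "product_name=${_PRODUCT_NAME}" -F "engagement=${DOJO_ENGAGEMENTID}" -F "file=@semgrep-findings.sarif"'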
A DOJO_ENGAGEMENTID is needed before importing files into the DefectDojo server. An engagement can be created with a cURL command authenticated by DOJO_TOKEN, as shown below; wrapping it into a YAML job is intentionally left as an exercise for the reader.
curl --fail --location --request POST "${DOJO_URL}/engagements/" --header "Authorization: Token ${DOJO_TOKEN}" --header 'Content-Type: application/json' --data-raw "{
  \"tags\": [\"${CI_COMMIT_REF_NAME}\"],
  \"name\": \"#${CI_PIPELINE_ID}\",
  \"description\": \"\",
  \"version\": \"\",
  \"first_contacted\": \"${TODAY}\",
  \"target_start\": \"${TODAY}\",
  \"target_end\": \"${ENDDAY}\",
  \"reason\": \"string\",
  \"tracker\": \"${CI_PROJECT_URL}/-/issues\",
  \"threat_model\": \"${DOJO_ENGAGEMENT_TM}\",
  \"api_test\": \"${DOJO_ENGAGEMENT_API_TEST}\",
  \"pen_test\": \"${DOJO_ENGAGEMENT_PEN_TEST}\",
  \"check_list\": \"${DOJO_ENGAGEMENT_CHECK_LIST}\",
  \"status\": \"${DOJO_ENGAGEMENT_STATUS}\",
  \"engagement_type\": \"CI/CD\",
  \"build_id\": \"${CI_PIPELINE_ID}\",
  \"commit_hash\": \"${CI_COMMIT_SHORT_SHA}\",
  \"branch_tag\": \"${CI_COMMIT_REF_NAME}\",
  \"deduplication_on_engagement\": \"${DEDUP}\",
  \"product\": \"${DOJO_PRODUCTID}\",
  \"source_code_management_uri\": \"${CI_PROJECT_URL}/-/tree/${CI_COMMIT_REF_NAME}\",
  \"build_server\": ${BUILD_SERVER},
  \"source_code_management_server\": ${CODE_SERVER},
  \"orchestration_engine\": ${ORCHESTRATION_ENGINE}
}" | jq -r '.id'
# Note: This assumes that all your environment variables are correctly set.
I hope this article equips DevSecOps engineers with the essential information needed to initiate their journey and develop effective 'Shift Left' strategies.