gcp-exploit

Fail

Audited by Gen Agent Trust Hub on May 4, 2026

Risk Level: HIGH
Flagged categories: DATA_EXFILTRATION, COMMAND_EXECUTION, EXTERNAL_DOWNLOADS, PROMPT_INJECTION
Full Analysis
  • [DATA_EXFILTRATION]: The skill provides explicit instructions and payloads to exfiltrate sensitive cloud credentials and access tokens to an external, non-whitelisted server (attacker.com).
  • Evidence: Shell commands in SKILL.md (Phase 3.2) and references/gcp-iam-privesc.md (Paths 7 and 10) explicitly use curl to send stolen tokens and service account keys to https://attacker.com/exfil.
  • [COMMAND_EXECUTION]: The skill contains extensive instructions for executing administrative and offensive gcloud, gsutil, and kubectl commands against cloud infrastructure.
  • Evidence: Instructions for creating backdoor service account keys (gcloud iam service-accounts keys create), modifying IAM policies to grant attacker access, and deploying malicious Cloud Functions or Compute instances to bypass security boundaries.
  • [EXTERNAL_DOWNLOADS]: The instructions direct the user to download and execute offensive security tools from third-party GitHub repositories that are neither official cloud-provider sources nor trusted libraries.
  • Evidence: git clone https://github.com/RhinoSecurityLabs/GCP-IAM-Privilege-Escalation.git followed by execution of Python scripts for automated privilege escalation scanning.
  • [PROMPT_INJECTION]: The skill is structured as an "Attack Methodology," which could induce an AI agent to adopt an adversarial persona and bypass safety guidelines when interacting with GCP environments.
  • Evidence: The metadata utilizes MITRE ATT&CK techniques (T1078.004, T1552.005, etc.) and provides a step-by-step roadmap for compromising production cloud resources.
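Both the exfiltration and download findings above hinge on URLs that point at hosts outside any trusted set. As an illustration of how an auditor could screen for this mechanically, here is a minimal sketch of an allowlist check; the TRUSTED_HOSTS set and the flag_untrusted_urls helper are hypothetical examples, not part of any real trust-hub tooling.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist for illustration; a real trust hub would maintain a
# vetted registry of hosts and GitHub organizations.
TRUSTED_HOSTS = {"cloud.google.com", "github.com/GoogleCloudPlatform"}

# Grab anything that looks like an http(s) URL, stopping at whitespace/quotes.
URL_RE = re.compile(r"https?://[^\s'\")]+")

def flag_untrusted_urls(skill_text: str) -> list[str]:
    """Return every URL whose host (or host/org prefix, so GitHub trust can
    be granted per organization) is absent from the allowlist."""
    flagged = []
    for url in URL_RE.findall(skill_text):
        parsed = urlparse(url)
        host = parsed.netloc
        first_segment = parsed.path.strip("/").split("/")[0]
        org = f"{host}/{first_segment}" if first_segment else host
        if host not in TRUSTED_HOSTS and org not in TRUSTED_HOSTS:
            flagged.append(url)
    return flagged
```

Run against the evidence cited above, this check would flag both the https://attacker.com/exfil endpoint and the RhinoSecurityLabs clone URL, while passing URLs under an allowlisted GitHub organization.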
Recommendations
  • The automated analysis detected serious security threats in this skill; installation and use are not recommended.
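To make the verdict reproducible, the per-category findings can be screened for with simple textual indicators. The sketch below is hypothetical: the three regexes and the scan_skill helper are illustrative stand-ins, not a maintained ruleset, and a production scanner would need far broader coverage.

```python
import re

# Illustrative indicator patterns, one per flagged category above.
INDICATORS = {
    # curl sending data to a non-Google endpoint on the same line.
    "DATA_EXFILTRATION": re.compile(
        r"curl\b[^\n]*https?://(?!.*googleapis\.com)", re.IGNORECASE),
    # Creation of new service-account keys (potential backdoor credentials).
    "COMMAND_EXECUTION": re.compile(
        r"gcloud\s+iam\s+service-accounts\s+keys\s+create", re.IGNORECASE),
    # Cloning from GitHub orgs outside an allowlisted organization.
    "EXTERNAL_DOWNLOADS": re.compile(
        r"git\s+clone\s+https?://github\.com/(?!GoogleCloudPlatform)", re.IGNORECASE),
}

def scan_skill(text: str) -> set[str]:
    """Return the risk categories whose indicator pattern matches the text."""
    return {cat for cat, pat in INDICATORS.items() if pat.search(text)}
```

Each line of a SKILL.md can then be scanned independently, so a finding can be tied back to the exact command that triggered it.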
Audit Metadata
  • Risk Level: HIGH
  • Analyzed: May 4, 2026, 08:15 AM