Orchestrating GCP Packet Mirroring with Gemini CLI and Google MCP
By Michael Elias on January 1, 2026
One of the most persistent challenges I’ve faced in cloud computing is mirroring traffic. Whether it’s AWS with VXLAN or GCP using Andromeda, it always takes work.
I’ve architected distributed packet capture solutions using Terraform in both AWS and GCP. They have their purpose—robust, immutable infrastructure. But there are times when I just want to spin up a quick collector without dealing with Terraform state files, CI/CD pipelines, and the "provisioning rigor" that comes with IaC.
As troubleshooters, we are usually trying to figure out someone else's problem, and we need to act fast.
So, I thought it would be fun to put the Gemini CLI to the test. My goal? Orchestrate a complete PCAP collector and all the GCP features required to create a Packet Mirroring solution—without writing a single line of Terraform.
This quickly evolved into two simultaneous exercises: one in Prompt Engineering and the other in GCP Network Architecture.
The Architecture: What We Are Building
We are building a standard GCP Packet Mirroring setup: a mirrored source (VM/Subnet), a collector VM (RHEL 9) running tcpdump, and an Internal Load Balancer (ILB) to act as the next hop for mirrored traffic.

Figure 1: High-level architecture of the GCP Packet Mirroring solution orchestrated by Gemini.
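To ground the architecture before the AI takes over: everything in Figure 1 maps onto a short chain of gcloud resources. Here is a minimal sketch of that chain — all names, the region, and the "default" network are illustrative placeholders, not what the LLM actually ran:

```bash
# 1. Collector VM: RHEL 9 instance that will run tcpdump.
gcloud compute instances create pcap-collector \
  --zone=us-central1-a --machine-type=e2-standard-2 \
  --image-family=rhel-9 --image-project=rhel-cloud

# 2. Unmanaged instance group to serve as the ILB backend.
gcloud compute instance-groups unmanaged create pcap-collector-ig \
  --zone=us-central1-a
gcloud compute instance-groups unmanaged add-instances pcap-collector-ig \
  --zone=us-central1-a --instances=pcap-collector

# 3. Internal load balancer: health check, backend service, and a
#    forwarding rule flagged as a Packet Mirroring collector.
gcloud compute health-checks create tcp pcap-hc \
  --region=us-central1 --port=22
gcloud compute backend-services create pcap-bes \
  --region=us-central1 --load-balancing-scheme=internal --protocol=TCP \
  --health-checks=pcap-hc --health-checks-region=us-central1
gcloud compute backend-services add-backend pcap-bes \
  --region=us-central1 \
  --instance-group=pcap-collector-ig --instance-group-zone=us-central1-a
gcloud compute forwarding-rules create pcap-fr \
  --region=us-central1 --load-balancing-scheme=internal \
  --network=default --subnet=default --ip-protocol=TCP --ports=all \
  --backend-service=pcap-bes --is-mirroring-collector

# 4. Packet Mirroring policy: steer the subnet's traffic to the ILB.
#    (You also need a firewall rule admitting the mirrored traffic.)
gcloud compute packet-mirrorings create pcap-mirror \
  --region=us-central1 --network=default \
  --collector-ilb=pcap-fr --mirrored-subnets=default
```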
The Decision: Raw CLI vs. Model Context Protocol (MCP)
One of the first architectural decisions for this AI experiment was how to let the LLM interact with the cloud. Do I let it run gcloud commands directly, or do I leverage the Google Cloud MCP Server?
There are inherent risks in letting an LLM execute shell commands directly. Unless you know the exact command syntax and validate what the LLM thinks is the right command, it might be the death of your project. Blindly trusting the LLM is, well, blind.
Additionally, gcloud output is verbose and unstructured. It forces the LLM to use "trial and error" or ingest superfluous data that sucks the life out of your context window.
The Google Cloud MCP Server (Model Context Protocol) removes the guesswork. It offers a structured toolset that makes the LLM highly effective and efficient. Here is how they compare:
- Data Format: The Direct CLI outputs unstructured text or ASCII tables, while MCP provides Structured JSON objects that are easy for the AI to parse.
- Agent Integration: The CLI requires complex regex or parsing logic. MCP uses Native Tool Calling, which is much more reliable.
- Security: The CLI is high-risk as it effectively requires open shell access. MCP is Controlled, allowing access only to pre-defined tools.
- Error Handling: With the CLI, the AI must interpret stderr text. MCP provides Structured error reporting.
- Efficiency: The CLI often has high token usage due to verbose output. MCP is designed for Optimized token usage.
- Discovery: The AI often "guesses" CLI commands (leading to hallucinations), whereas MCP explicitly lists valid tools and arguments.
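To make the data-format point concrete, compare the default human-oriented table with structured output. With plain gcloud you can approximate what MCP-style tooling trades in via `--format`:

```bash
# Default: an ASCII table the model has to "read" like a human.
gcloud compute instances list

# Structured: machine-parseable JSON, analogous to what an MCP tool returns.
gcloud compute instances list --format="json(name,zone,status)"
```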
For this exercise, using the MCP server was a no-brainer.

Figure 2: The Gemini CLI requesting permission to execute a specific gcloud command via the MCP server.
The Setup
Our environment is a Red Hat Enterprise Linux (RHEL) 9 server (4 vCPUs / 16GB RAM) running in GCP with Gemini CLI installed.
Prerequisites:
- `gcloud` installed and authenticated: run `gcloud auth login` and `gcloud auth application-default login`.
- Install the Gemini extension: `gemini extensions install https://github.com/gemini-cli-extensions/gcloud`
- Verify the connection: `gemini mcp list` (you should see a "connected" status).
Phase 1: The "Prompt Engineer" Persona
Keeping in mind that I wanted a reusable asset, I asked the LLM to take on two roles: a Prompt Engineer and a GCP Engineer.
My Initial Prompt:
"You are a prompt engineer and a Google Cloud engineer developing a prompt to deploy a packet mirroring solution with the following attributes:
- Review GCP documentation on packet mirroring requirements and configurations.
- Provision 1 small RHEL 9 instance (8GB RAM, 50GB disk) as the collector.
- Enforce security best practices when asking the user for SSH public keys.
- Ask the user to collect the required data before you start (Project, Region, Zone, etc.)."
Gemini returned an impressive, multi-step, 117-line prompt template.
Prompt Title: GCP Packet Mirroring Solution Deployment
Objective: You are an expert GCP Cloud Engineer. Your task is to deploy a complete packet capturing solution using Google Cloud's Packet Mirroring service. You will provision a collector instance, configure it, set up the necessary networking, and create the mirroring policy.
Follow these steps precisely.
---
Step 1: Prerequisites and User Input
Before starting, gather the following information from the user. Do not proceed until you have these details:
1. Project ID: The target GCP Project ID.
2. Region & Zone: The Region and Zone for deployment (e.g., us-central1, us-central1-a).
3. Network & Subnet: The VPC Network and Subnet where the collector instance will reside.
4. Public SSH Key: Ask the user for their public SSH key. Instruct them as follows:
* "Please provide your public SSH key (typically found in ~/.ssh/id_rsa.pub). This will be added to the instance metadata for secure access. Do not, under any circumstances,
provide your private key."
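For context on where that key ends up: a public key gathered this way is typically attached to the instance's `ssh-keys` metadata. A minimal sketch, with an assumed username and instance name:

```bash
# The ssh-keys metadata value format is "USERNAME:PUBLIC_KEY".
# "analyst" and "pcap-collector" are illustrative.
gcloud compute instances add-metadata pcap-collector \
  --zone=us-central1-a \
  --metadata=ssh-keys="analyst:$(cat ~/.ssh/id_rsa.pub)"
```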
Phase 2: The Execution
I started a fresh Gemini session and pasted the generated prompt, watching closely as the LLM crafted each command it sent to the MCP server.

Figure 3: The LLM acting as a Cloud Engineer, prompting the user for necessary infrastructure details.
It prompted me for the Project ID, Region, Zone, VPC, Subnet, and my public SSH key. It also asked for the Mirrored Source (Instance, Subnet, or Tag).

Figure 4: The AI requesting the specific target for traffic mirroring.
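Each of those source types (instance, subnet, or tag) maps to a different flag on the mirroring policy. A sketch with illustrative names:

```bash
# Mirror a specific VM; swap in --mirrored-subnets or --mirrored-tags
# to mirror by subnet or by network tag instead.
gcloud compute packet-mirrorings create demo-mirror \
  --region=us-central1 --network=default --collector-ilb=pcap-fr \
  --mirrored-instances=projects/my-project/zones/us-central1-a/instances/web-1
```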
The "Oops" Moment
While watching the execution, I noticed a logic gap. The LLM configured the Internal Load Balancer (ILB) but failed to set the specific flag defining this LB as a "Packet Mirroring" backend.
I stopped the execution and challenged the LLM. It responded with its usual gracious (if slightly condescending) tone: "You are absolutely right, that is a critical field."
While the LLM likely would have eventually caught the error via the MCP's structured error reporting, it would have wasted time and tokens. This highlights why human-in-the-loop is still critical for AI orchestration.
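For the record, the missing field corresponds to gcloud's `--is-mirroring-collector` flag on the ILB's forwarding rule. Since that setting isn't something you can patch onto an existing rule, the usual correction is a delete-and-recreate (illustrative names):

```bash
# Recreate the forwarding rule with the mirroring-collector field set.
gcloud compute forwarding-rules delete pcap-fr --region=us-central1 --quiet
gcloud compute forwarding-rules create pcap-fr \
  --region=us-central1 --load-balancing-scheme=internal \
  --network=default --subnet=default --ip-protocol=TCP --ports=all \
  --backend-service=pcap-bes \
  --is-mirroring-collector  # the field the LLM initially omitted
```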
After the correction, it built the complete solution end-to-end. I logged into the collector instance it built, ran tcpdump, and successfully saw the mirrored traffic.

Figure 5: Successful creation of the collector instance using structured MCP tools.

Figure 6: Verification of the packet mirror—ICMP traffic visible in tcpdump on the collector.
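Verification on the collector is as simple as watching the mirrored packets arrive (assuming eth0, the default interface on GCP's RHEL images):

```bash
# Watch mirrored ICMP live, or write everything to a pcap for later analysis.
sudo tcpdump -i eth0 -nn icmp
sudo tcpdump -i eth0 -nn -w /tmp/mirror.pcap
```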
The result? The LLM built a solution in 10 minutes that usually takes me a day of reading documentation and context-switching to configure manually.
Phase 3: The Refinement (AI Improving AI)
I wanted to take it to the next level. In the same session, I asked the LLM to switch back to its Prompt Engineer persona to evaluate the session and critique the original prompt.
It suggested:
- Hardcoding the Mirror Flag: Ensuring the ILB is always correctly flagged.
- Loop Prevention: Adding a validation step to ensure the user doesn't accidentally mirror the collector itself (which creates a traffic feedback loop).

Figure 7: The "Prompt Engineer" persona refining the logic to prevent traffic loops.
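The loop-prevention idea boils down to a single guardrail evaluated before the policy is created. A hypothetical sketch of the check:

```bash
# Refuse to proceed if the mirrored instance IS the collector:
# mirroring the collector re-mirrors its own mirrored traffic, recursively.
COLLECTOR="pcap-collector"   # illustrative
MIRROR_TARGET="web-1"        # illustrative
if [ "$MIRROR_TARGET" = "$COLLECTOR" ]; then
  echo "ERROR: mirroring the collector itself creates a traffic loop." >&2
  exit 1
fi
```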
I ran the revised prompt in a new session, and it worked perfectly. Here is the revised prompt incorporating the suggested improvements:
Objective: You are an expert GCP Cloud Engineer. Your task is to deploy a complete packet capturing solution using Google Cloud's Packet Mirroring service. You will provision a collector instance, configure it, set up the necessary networking, and create the mirroring policy.
Follow these steps precisely.
---
Step 1: Prerequisites and User Input
Before starting, gather the following information from the user. Do not proceed until you have these details:
1. Project ID: The target GCP Project ID.
2. Region & Zone: The Region and Zone for deployment (e.g., us-central1, us-central1-a).
3. Network & Subnet: The VPC Network and Subnet where the collector instance will reside.
4. Public SSH Key: Ask the user for their public SSH key. Instruct them as follows:
* "Please provide your public SSH key (typically found in ~/.ssh/id_rsa.pub). This will be added to the instance metadata for secure access. Do not, under any
circumstances,
provide your private key."Paste the Revised LLM-Engineered Prompt here.
The Teardown
Finally, we needed to clean up. I asked the LLM to generate a "Teardown Prompt":
"You are a prompt engineer and a Google Cloud engineer. Create a prompt template to remove all packet mirroring features you deployed. It should:
- Prompt for Project ID.
- List all configured mirrors.
- Ask which to destroy.
- Destroy all dependencies (including the collector).
- Verify and report."
It generated a precise, 65-line teardown prompt that worked on the first try.
Objective: You are an expert GCP Cloud Engineer. Your task is to guide a user through the complete and safe removal of a Google Cloud Packet Mirroring configuration and all its associated resources.
Follow these steps precisely, ensuring you get user confirmation before any destructive actions.
---
### Step 1: Identify the Target Project and Region
First, you must confirm the scope of the operation with the user.
1. Ask the user for the **GCP Project ID** they wish to operate on.
2. Ask the user for the **Region** where the Packet Mirroring policy is deployed (e.g., `us-east4`).
Do not proceed without this information.
### Step 2: Discover and Confirm the Deletion Target
Before deleting anything, you must identify all Packet Mirroring policies in the specified project and region and have the user select the one to destroy.
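Under the hood, the discovery and teardown steps correspond to a handful of gcloud calls, deleted in dependency order. A sketch with illustrative names, using the `us-east4` example region:

```bash
# Step 2 discovery: list the policies in scope.
gcloud compute packet-mirrorings list --filter="region:us-east4"

# Teardown: policy first, then the ILB pieces, then the collector itself.
gcloud compute packet-mirrorings delete pcap-mirror --region=us-east4 --quiet
gcloud compute forwarding-rules delete pcap-fr --region=us-east4 --quiet
gcloud compute backend-services delete pcap-bes --region=us-east4 --quiet
gcloud compute instance-groups unmanaged delete pcap-collector-ig \
  --zone=us-east4-a --quiet
gcloud compute health-checks delete pcap-hc --region=us-east4 --quiet
gcloud compute instances delete pcap-collector --zone=us-east4-a --quiet
```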
Conclusion
This experiment proved that Gemini CLI + MCP is a potent combination for cloud engineering. By using the MCP server, we moved away from fragile shell script parsing to robust, structured tool execution.
This post focused on the infrastructure—getting the traffic where it needs to be. For the analysis side of the equation, check out the related post: Orchestrating Packet Analysis with Gemini and TShark. There, I demonstrate how to apply this same AI-driven approach to perform real-time PCAP analysis on the collector we just deployed.
Look for upcoming posts that incorporate both solutions, where we will combine this automated mirroring infrastructure with real-time TShark analysis for a complete end-to-end workflow.
Michael Elias is a Senior Principal Operations Engineer at Dun & Bradstreet with a history of entrepreneurship in the ISP and consulting spaces. A veteran of the dot-com era with certifications from Cisco, Red Hat, and Fortinet, Michael specializes in high-compliance infrastructure and enterprise architecture.