Orchestrating GCP Packet Mirroring with Gemini CLI and Google MCP
By Michael Elias on January 1, 2026
One of the most persistent challenges I’ve faced in cloud computing is mirroring traffic. Whether it’s AWS with VXLAN or GCP using Andromeda, it always takes work.
I’ve architected distributed packet capture solutions using Terraform in both AWS and GCP. They have their purpose—robust, immutable infrastructure. But there are times when I just want to spin up a quick collector without dealing with Terraform state files, CI/CD pipelines, and the "provisioning rigor" that comes with IaC.
As troubleshooters, we are usually trying to figure out someone else's problem, and we need to act fast.
So, I thought it would be fun to put the Gemini CLI to the test. My goal? Orchestrate a complete PCAP collector and all the GCP features required to create a Packet Mirroring solution—without writing a single line of Terraform.
This quickly evolved into two simultaneous exercises: one in Prompt Engineering and the other in GCP Network Architecture.
The Architecture: What We Are Building
We are building a standard GCP Packet Mirroring setup: a mirrored source (VM/Subnet), a collector VM (RHEL 9) running tcpdump, and an Internal Load Balancer (ILB) to act as the next hop for mirrored traffic.

Figure 1: High-level architecture of the GCP Packet Mirroring solution orchestrated by Gemini.
The Decision: Raw CLI vs. Model Context Protocol (MCP)
One of the first architectural decisions for this AI experiment was how to let the LLM interact with the cloud. Do I let it run gcloud commands directly, or do I leverage the Google Cloud MCP Server?
There are inherent risks in letting an LLM execute shell commands directly. Unless you know the exact command syntax and validate what the LLM thinks is the right command, it might be the death of your project. Blindly trusting the LLM is, well, blind.
Additionally, gcloud output is verbose and unstructured. It forces the LLM to use "trial and error" or ingest superfluous data that sucks the life out of your context window.
The Google Cloud MCP Server (Model Context Protocol) removes the guesswork. It offers a structured toolset that makes the LLM highly effective and efficient. Here is how they compare:
- Data Format: The Direct CLI outputs unstructured text or ASCII tables, while MCP provides Structured JSON objects that are easy for the AI to parse.
- Agent Integration: The CLI requires complex regex or parsing logic. MCP uses Native Tool Calling, which is much more reliable.
- Security: The CLI is high-risk as it effectively requires open shell access. MCP is Controlled, allowing access only to pre-defined tools.
- Error Handling: With the CLI, the AI must interpret raw stderr text. MCP provides structured error reporting.
- Efficiency: The CLI often has high token usage due to verbose output. MCP is designed for optimized token usage.
- Discovery: The AI often "guesses" CLI commands (leading to hallucinations), whereas MCP explicitly lists valid tools and arguments.
For this exercise, using the MCP server was a no-brainer.

Figure 2: The Gemini CLI requesting permission to execute a specific gcloud command via the MCP server.
The Setup
Our environment is a Red Hat Enterprise Linux (RHEL) 9 server (4 vCPUs / 16GB RAM) running in GCP with Gemini CLI installed.
Prerequisites:
- gcloud installed and authenticated:
gcloud auth login
gcloud auth application-default login
- Install the Gemini extension:
gemini extensions install https://github.com/gemini-cli-extensions/gcloud
- Verify the connection:
gemini mcp list (you should see a "connected" status).
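Putting it together, the full prerequisite sequence looks roughly like this (just the commands from the list above, in order):

# Authenticate gcloud (user credentials plus Application Default Credentials)
gcloud auth login
gcloud auth application-default login

# Install the gcloud extension for the Gemini CLI
gemini extensions install https://github.com/gemini-cli-extensions/gcloud

# Confirm the MCP server shows a "connected" status
gemini mcp list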
Phase 1: The "Prompt Engineer" Persona
Keeping in mind that I wanted a reusable asset, I asked the LLM to take on two roles: a Prompt Engineer and a GCP Engineer.
My Initial Prompt:
"You are a prompt engineer and a Google Cloud engineer developing a prompt to deploy a packet mirroring solution with the following attributes:
- Review GCP documentation on packet mirroring requirements and configurations.
- Provision 1 small RHEL 9 instance (8GB RAM, 50GB disk) as the collector.
- Enforce security best practices when asking the user for SSH public keys.
- Ask the user to collect the required data before you start (Project, Region, Zone, etc.)."
Gemini returned an impressive, multi-step, 117-line prompt template.
Prompt Title: GCP Packet Mirroring Solution Deployment
Objective: You are an expert GCP Cloud Engineer. Your task is to deploy a complete packet capturing solution using Google Cloud's Packet Mirroring service. You will provision a
collector instance, configure it, set up the necessary networking, and create the mirroring policy.
Follow these steps precisely.
---
Step 1: Prerequisites and User Input
Before starting, gather the following information from the user. Do not proceed until you have these details:
1. Project ID: The target GCP Project ID.
2. Region & Zone: The Region and Zone for deployment (e.g., us-central1, us-central1-a).
3. Network & Subnet: The VPC Network and Subnet where the collector instance will reside.
4. Public SSH Key: Ask the user for their public SSH key. Instruct them as follows:
* "Please provide your public SSH key (typically found in ~/.ssh/id_rsa.pub). This will be added to the instance metadata for secure access. Do not, under any circumstances,
provide your private key."
Step 2: Provision the Collector Virtual Machine
Create a Red Hat Enterprise Linux 9 VM that will act as the "collector" for the mirrored packets.
* Instance Name: packet-mirroring-collector
* Machine Type: e2-standard-2 (Provides 8GB RAM)
* Boot Disk: 50GB, RHEL 9 (--image-project=rhel-cloud --image-family=rhel-9)
* Metadata: Add the user's public SSH key to the instance metadata. The key should be in the format user:key-string.
Use a gcloud command similar to this template:
gcloud compute instances create packet-mirroring-collector \
--project=[PROJECT_ID] \
--zone=[ZONE] \
--machine-type=e2-standard-2 \
--metadata="ssh-keys=eliasm:$(cat ~/.ssh/id_rsa.pub)" \
--image-project=rhel-cloud \
--image-family=rhel-9 \
--boot-disk-size=50GB \
--network=[VPC_NETWORK] \
--subnet=[SUBNET]
Step 3: Create Firewall Rules
Packet Mirroring requires two specific firewall rules to function correctly:
1. Allow Health Checks: Create a rule to allow GCP's health checkers to reach the collector.
* Name: fw-allow-health-checks
* Source IP Ranges: 35.191.0.0/16, 130.211.0.0/22, 209.85.152.0/22, 209.85.204.0/22
* Protocols/Ports: tcp
2. Allow Mirrored Traffic: Create a rule to allow the mirrored packets from Google's special source range to reach the collector.
* Name: fw-allow-mirrored-traffic
* Source IP Ranges: 35.199.192.0/19
* Protocols/Ports: all (or specify tcp and udp if preferred)
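For reference, the two firewall rules above map to gcloud commands along these lines (a sketch; the source ranges mirror the values listed above, and --direction=INGRESS is the default but shown for clarity):

gcloud compute firewall-rules create fw-allow-health-checks \
--project=[PROJECT_ID] \
--network=[VPC_NETWORK] \
--direction=INGRESS \
--allow=tcp \
--source-ranges=35.191.0.0/16,130.211.0.0/22,209.85.152.0/22,209.85.204.0/22

gcloud compute firewall-rules create fw-allow-mirrored-traffic \
--project=[PROJECT_ID] \
--network=[VPC_NETWORK] \
--direction=INGRESS \
--allow=all \
--source-ranges=35.199.192.0/19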
Step 4: Configure the Collector Instance Group
The collector instance must be part of an unmanaged instance group, which will be the target for the internal load balancer.
1. Create an unmanaged instance group.
* Name: packet-mirroring-ig
2. Add the packet-mirroring-collector VM to this group.
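A sketch of the corresponding gcloud commands, reusing the names above:

# Create the unmanaged instance group in the collector's zone
gcloud compute instance-groups unmanaged create packet-mirroring-ig \
--project=[PROJECT_ID] \
--zone=[ZONE]

# Add the collector VM to the group
gcloud compute instance-groups unmanaged add-instances packet-mirroring-ig \
--project=[PROJECT_ID] \
--zone=[ZONE] \
--instances=packet-mirroring-collector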
Step 5: Configure the Internal Load Balancer
Packet Mirroring sends traffic to a collector via a destination Internal Load Balancer. Configure this now.
1. Create a Health Check:
* Name: packet-mirroring-hc
2. Create a Backend Service:
* Name: packet-mirroring-backend
* Protocol: TCP
* Health Check: Attach the packet-mirroring-hc you just created.
* Backend: Add the packet-mirroring-ig instance group.
3. Create a Forwarding Rule: This is the frontend of the load balancer.
* Name: packet-mirroring-forwarding-rule
* Load Balancing Scheme: INTERNAL
* Backend Service: Attach the packet-mirroring-backend.
* Ports: ALL
* Crucially, add the `--is-mirroring-collector` flag to this command.
Use a gcloud command similar to this template for the forwarding rule:
gcloud compute forwarding-rules create packet-mirroring-forwarding-rule \
--project=[PROJECT_ID] \
--region=[REGION] \
--load-balancing-scheme=INTERNAL \
--backend-service=packet-mirroring-backend \
--ports=ALL \
--is-mirroring-collector \
--network=[VPC_NETWORK] \
--subnet=[SUBNET]
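For completeness, the health check and backend service from items 1 and 2 above can be created with commands roughly like these (a sketch; TCP port 22 for the health check is my assumption, since the collector at least runs SSH):

gcloud compute health-checks create tcp packet-mirroring-hc \
--project=[PROJECT_ID] \
--region=[REGION] \
--port=22

gcloud compute backend-services create packet-mirroring-backend \
--project=[PROJECT_ID] \
--region=[REGION] \
--load-balancing-scheme=INTERNAL \
--protocol=TCP \
--network=[VPC_NETWORK] \
--health-checks=packet-mirroring-hc \
--health-checks-region=[REGION]

gcloud compute backend-services add-backend packet-mirroring-backend \
--project=[PROJECT_ID] \
--region=[REGION] \
--instance-group=packet-mirroring-ig \
--instance-group-zone=[ZONE]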
Step 6: Create the Packet Mirroring Policy
Finally, create the policy that ties the mirrored sources to the collector destination.
1. Gather Source Information: Ask the user how they want to identify the traffic to be mirrored. They must choose one of the following options:
* Subnet(s): Provide a list of subnet names.
* Network Tag(s): Provide a list of network tags.
* Instance Name(s): Provide a list of specific VM instance names.
2. Create the Policy: Use the user's input to create the policy. The destination will always be the forwarding rule you created in the previous step.
Use a gcloud command similar to this template (example uses subnets):
gcloud compute packet-mirrorings create packet-mirroring-policy \
--project=[PROJECT_ID] \
--region=[REGION] \
--network=[VPC_NETWORK] \
--mirrored-subnets=[SUBNET_1],[SUBNET_2] \
--collector-ilb=projects/[PROJECT_ID]/regions/[REGION]/forwardingRules/packet-mirroring-forwarding-rule
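A quick way to confirm the policy landed as intended is to describe it (name matches the template above):

gcloud compute packet-mirrorings describe packet-mirroring-policy \
--project=[PROJECT_ID] \
--region=[REGION]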
---
Conclusion: Once all steps are complete, inform the user that the packet mirroring solution is deployed. Mirrored traffic from the specified sources is now being sent to the
packet-mirroring-collector instance. They can SSH into the collector and use tools like tcpdump to inspect the traffic.
Phase 2: The Execution
I started a fresh Gemini session and pasted the generated prompt. I watched meticulously as the LLM crafted each command sent to the MCP.

Figure 3: The LLM acting as a Cloud Engineer, prompting the user for necessary infrastructure details.
It prompted me for the Project ID, Region, Zone, VPC, Subnet, and my public SSH key. It also asked for the Mirrored Source (Instance, Subnet, or Tag).

Figure 4: The AI requesting the specific target for traffic mirroring.
The "Oops" Moment
While watching the execution, I noticed a logic gap. The LLM configured the Internal Load Balancer (ILB) but failed to set the specific flag defining this LB as a "Packet Mirroring" backend.
I stopped the execution and challenged the LLM. It responded with its usual gracious (if slightly condescending) tone: "You are absolutely right, that is a critical field."
While the LLM likely would have eventually caught the error via the MCP's structured error reporting, it would have wasted time and tokens. This highlights why human-in-the-loop is still critical for AI orchestration.
After the correction, it built the complete solution end-to-end. I logged into the collector instance it built, ran tcpdump, and successfully saw the mirrored traffic.

Figure 5: Successful creation of the collector instance using structured MCP tools.

Figure 6: Verification of the packet mirror—ICMP traffic visible in tcpdump on the collector.
The result? The LLM built a solution in 10 minutes that usually takes me a day of reading documentation and context-switching to configure manually.
Phase 3: The Refinement (AI Improving AI)
I wanted to take it to the next level. In the same session, I asked the LLM to switch back to its Prompt Engineer persona to evaluate the session and critique the original prompt.
It suggested:
- Hardcoding the Mirror Flag: Ensuring the ILB is always correctly flagged.
- Loop Prevention: Adding a validation step to ensure the user doesn't accidentally mirror the collector itself (which creates a traffic feedback loop).

Figure 7: The "Prompt Engineer" persona refining the logic to prevent traffic loops.
I ran the revised prompt in a new session, and it worked to perfection. Here is the final, optimized prompt incorporating the suggested improvements:
Objective: You are an expert GCP Cloud Engineer. Your task is to deploy a complete packet capturing solution using Google Cloud's Packet Mirroring service. You will provision a collector instance, configure it, set up the necessary networking, and create the mirroring policy.
Follow these steps precisely.
---
Step 1: Prerequisites and User Input
Before starting, gather the following information from the user. Do not proceed until you have these details:
1. Project ID: The target GCP Project ID.
2. Region & Zone: The Region and Zone for deployment (e.g., us-central1, us-central1-a).
3. Network & Subnet: The VPC Network and Subnet where the collector instance will reside.
4. Public SSH Key: Ask the user for their public SSH key. Instruct them as follows:
* "Please provide your public SSH key (typically found in ~/.ssh/id_rsa.pub). This will be added to the instance metadata for secure access. Do not, under any
circumstances,
provide your private key."
Step 1.5: Validate Mirrored Source (Agent Action)
After gathering the source information (e.g., instance name, subnet names, or network tags), immediately validate its existence or correctness using appropriate `gcloud`
commands (e.g., `gcloud compute instances describe [INSTANCE_NAME] --zone=[ZONE] --format=json`). If the source does not exist or is incorrectly specified, inform the user and
request correction before proceeding.
Step 1.6: Critical Pre-flight Check - Avoid Mirroring the Collector (Agent Action)
This is the most important validation step. Before proceeding, you must ensure that the collector instance itself will not be mirrored, which would create a traffic feedback loop. The check depends on how the user has chosen to
identify mirrored sources.
1. If mirroring by Subnet:
* Compare the "Network & Subnet for Collector" (from Step 1, item 3) with the list of "Mirrored Source" subnets.
* If the collector's subnet is included in the list of subnets to be mirrored, you MUST stop.
* Politely inform the user that this configuration is invalid because it would create a traffic feedback loop. Explain that the collector cannot reside in a subnet that is being mirrored entirely.
* Request that they either choose a different subnet for the collector or provide a different list of subnets to mirror. Do not proceed until this conflict is resolved.
2. If mirroring by Network Tag:
* This configuration is valid, as Google Cloud prevents a collector from mirroring its own traffic even if it has a mirrored tag. No further validation is needed for this case.
3. If mirroring by Instance Name:
* This configuration is valid, as Google Cloud prevents a collector from mirroring its own traffic. No further validation is needed for this case.
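To make the subnet case concrete, a minimal pre-flight check could look like this shell sketch (the variable names are illustrative placeholders, not part of the generated prompt):

# Hypothetical pre-flight check for the subnet case: abort if the collector's
# subnet appears in the list of subnets to be mirrored.
COLLECTOR_SUBNET="[SUBNET]"               # subnet hosting packet-mirroring-collector
MIRRORED_SUBNETS="[SUBNET_1] [SUBNET_2]"  # subnets the user wants to mirror

for s in $MIRRORED_SUBNETS; do
  if [ "$s" = "$COLLECTOR_SUBNET" ]; then
    echo "ERROR: collector subnet '$s' is in the mirrored set; this would create a traffic feedback loop." >&2
    exit 1
  fi
done
echo "Pre-flight check passed: the collector subnet is not being mirrored."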
Step 2: Provision the Collector Virtual Machine
Create a Red Hat Enterprise Linux 9 VM that will act as the "collector" for the mirrored packets.
* Instance Name: packet-mirroring-collector
* Machine Type: e2-standard-2 (Provides 8GB RAM)
* Boot Disk: 50GB, RHEL 9 (--image-project=rhel-cloud --image-family=rhel-9)
* Metadata: Add the user's public SSH key to the instance metadata. The key should be in the format user:key-string.
Use a gcloud command similar to this template:
gcloud compute instances create packet-mirroring-collector \
--project=[PROJECT_ID] \
--zone=[ZONE] \
--machine-type=e2-standard-2 \
--metadata="ssh-keys=eliasm:$(cat ~/.ssh/id_rsa.pub)" \
--image-project=rhel-cloud \
--image-family=rhel-9 \
--boot-disk-size=50GB \
--network=[VPC_NETWORK] \
--subnet=[SUBNET]
Step 3: Create Firewall Rules
Packet Mirroring requires two specific firewall rules to function correctly:
1. Allow Health Checks: Create a rule to allow GCP's health checkers to reach the collector.
* Name: fw-allow-health-checks
* Source IP Ranges: 35.191.0.0/16, 130.211.0.0/22, 209.85.152.0/22, 209.85.204.0/22
* Protocols/Ports: tcp
2. Allow Mirrored Traffic: Create a rule to allow the mirrored packets from Google's special source range to reach the collector.
* Name: fw-allow-mirrored-traffic
* Source IP Ranges: 35.199.192.0/19
* Protocols/Ports: all (or specify tcp and udp if preferred)
* **Important Note:** Mirrored traffic does not originate from the source VM's IP, but from Google Cloud's internal mirroring infrastructure using the `35.199.192.0/19`
range. An explicit ingress firewall rule allowing this range to the collector is essential.
Step 4: Configure the Collector Instance Group
The collector instance must be part of an unmanaged instance group, which will be the target for the internal load balancer.
1. Create an unmanaged instance group.
* Name: packet-mirroring-ig
2. Add the packet-mirroring-collector VM to this group.
Step 5: Configure the Internal Load Balancer
Packet Mirroring sends traffic to a collector via a destination Internal Load Balancer. Configure this now.
1. Create a Health Check:
* Name: packet-mirroring-hc
2. Create a Backend Service:
* Name: packet-mirroring-backend
* Protocol: TCP
* Health Check: Attach the packet-mirroring-hc you just created.
* Backend: Add the packet-mirroring-ig instance group.
* **Important Note:** The health check status of the collector instance in the backend service will likely be `UNHEALTHY`. This is expected behavior for a packet mirroring collector, as it typically doesn't run a service that responds to TCP health checks. The `--is-mirroring-collector` flag ensures traffic is forwarded regardless of health status.
3. Create a Forwarding Rule: This is the frontend of the load balancer.
* Name: packet-mirroring-forwarding-rule
* Load Balancing Scheme: INTERNAL
* Backend Service: Attach the packet-mirroring-backend.
* Ports: ALL
* Crucially, add the `--is-mirroring-collector` flag to this command.
Use a gcloud command similar to this template for the forwarding rule:
gcloud compute forwarding-rules create packet-mirroring-forwarding-rule \
--project=[PROJECT_ID] \
--region=[REGION] \
--load-balancing-scheme=INTERNAL \
--backend-service=packet-mirroring-backend \
--ports=ALL \
--is-mirroring-collector \
--network=[VPC_NETWORK] \
--subnet=[SUBNET]
Step 6: Create the Packet Mirroring Policy
Finally, create the policy that ties the mirrored sources to the collector destination.
1. Gather Source Information: Ask the user how they want to identify the traffic to be mirrored. They must choose one of the following options:
* Subnet(s): Provide a list of subnet names.
* Network Tag(s): Provide a list of network tags.
* Instance Name(s): Provide a list of specific VM instance names.
2. Create the Policy: Use the user's input to create the policy. The destination will always be the forwarding rule you created in the previous step.
Use a gcloud command similar to this template (example uses instances and requires a zone for each instance):
gcloud compute packet-mirrorings create packet-mirroring-policy \
--project=[PROJECT_ID] \
--region=[REGION] \
--network=[VPC_NETWORK] \
--mirrored-instances=instance-1:zone-a,instance-2:zone-b \
--collector-ilb=projects/[PROJECT_ID]/regions/[REGION]/forwardingRules/packet-mirroring-forwarding-rule
Step 7: Verify Traffic Flow
After the policy is created, verify that traffic is indeed being mirrored.
1. **Generate Traffic on Source:** Instruct the user to SSH into the **source instance** (e.g., `pcap-gemini-test`).
* First, retrieve the source instance's external IP for SSH access: `gcloud compute instances describe [INSTANCE_NAME] --project=[PROJECT_ID] --zone=[ZONE] --format='value(networkInterfaces[0].accessConfigs[0].natIP)'`
* Then, instruct the user to run a traffic-generating command on the source, such as: `ping -c 20 google.com` or `curl -v telnet.p.ota.to`.
2. **Capture Traffic on Collector:** Simultaneously, instruct the user to SSH into the **collector instance** (`packet-mirroring-collector`).
* First, retrieve the source instance's **internal IP** for the `tcpdump` filter: `gcloud compute instances describe [INSTANCE_NAME] --project=[PROJECT_ID] --zone=[ZONE] --format='value(networkInterfaces[0].networkIP)'`
* Then, instruct the user to run a `tcpdump` command on the collector, filtering by the source's internal IP (assuming `eth0` is the primary interface): `sudo tcpdump -i eth0 host [SOURCE_INTERNAL_IP] -nnvvXS`
3. **Confirm:** Ask the user to confirm if the generated traffic is visible in the `tcpdump` output on the collector. If not, assist them in troubleshooting based on the observed symptoms.
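Condensed into a single sequence, the verification amounts to something like the following sketch (run the describe from any machine with gcloud, the capture on the collector; eth0 as the primary NIC is an assumption):

# From a machine with gcloud: resolve the mirrored source VM's internal IP
SRC_IP=$(gcloud compute instances describe [INSTANCE_NAME] \
--project=[PROJECT_ID] --zone=[ZONE] \
--format='value(networkInterfaces[0].networkIP)')
echo "$SRC_IP"   # copy this value into the collector session

# On the collector (packet-mirroring-collector): capture the mirrored traffic
sudo tcpdump -i eth0 host [SOURCE_INTERNAL_IP] -nnvvXS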
---
Conclusion: Once all steps are complete and traffic flow is verified, inform the user that the packet mirroring solution is deployed. Mirrored traffic from the specified
sources is now being sent to the `packet-mirroring-collector` instance. They can SSH into the collector and use tools like tcpdump to inspect the traffic.
The Teardown
Finally, we needed to clean up. I asked the LLM to generate a "Teardown Prompt":
"You are a prompt engineer and a Google Cloud engineer. Create a prompt template to remove all packet mirroring features you deployed. It should:
- Prompt for Project ID.
- List all configured mirrors.
- Ask which to destroy.
- Destroy all dependencies (including the collector).
- Verify and report."
It generated a precise 65-line cleanup prompt that worked on the first try.
Objective: You are an expert GCP Cloud Engineer. Your task is to guide a user through the complete and safe removal of a Google Cloud Packet Mirroring configuration and all its
associated resources.
Follow these steps precisely, ensuring you get user confirmation before any destructive actions.
---
### Step 1: Identify the Target Project and Region
First, you must confirm the scope of the operation with the user.
1. Ask the user for the **GCP Project ID** they wish to operate on.
2. Ask the user for the **Region** where the Packet Mirroring policy is deployed (e.g., `us-east4`).
Do not proceed without this information.
### Step 2: Discover and Confirm the Deletion Target
Before deleting anything, you must identify all Packet Mirroring policies in the specified project and region and have the user select the one to destroy.
1. Execute the following command to list all packet mirroring policies:
`gcloud compute packet-mirrorings list --project=[PROJECT_ID] --region=[REGION]`
2. **Analyze the Output:**
* If no policies are found, inform the user and terminate the process.
* If one or more policies are found, display the list to the user.
3. Ask the user to **type the exact name of the policy** they wish to destroy. If they do not provide a name that matches the list, do not proceed.
### Step 3: Destroy Associated Resources Sequentially
Once the user has confirmed the policy to delete, proceed to destroy all associated resources. It is critical to delete them in the correct order to avoid dependency errors.
**Resource Naming Convention:** For the following steps, assume the associated resources follow the naming convention established in the deployment prompt (e.g.,
`packet-mirroring-collector`, `packet-mirroring-ig`, `packet-mirroring-forwarding-rule`, etc.).
Execute the following commands one by one.
1. **Delete the Packet Mirroring Policy:**
`gcloud compute packet-mirrorings delete [POLICY_NAME] --project=[PROJECT_ID] --region=[REGION] --quiet`
2. **Delete the Forwarding Rule:**
`gcloud compute forwarding-rules delete packet-mirroring-forwarding-rule --project=[PROJECT_ID] --region=[REGION] --quiet`
3. **Delete the Backend Service:**
`gcloud compute backend-services delete packet-mirroring-backend --project=[PROJECT_ID] --region=[REGION] --quiet`
4. **Delete the Health Check:**
`gcloud compute health-checks delete packet-mirroring-hc --project=[PROJECT_ID] --region=[REGION] --quiet`
5. **Delete the Collector Compute Instance:**
`gcloud compute instances delete packet-mirroring-collector --project=[PROJECT_ID] --zone=[ZONE] --quiet`
6. **Delete the Instance Group:**
`gcloud compute instance-groups unmanaged delete packet-mirroring-ig --project=[PROJECT_ID] --zone=[ZONE] --quiet`
7. **Delete the Firewall Rules:**
`gcloud compute firewall-rules delete fw-allow-mirrored-traffic --project=[PROJECT_ID] --quiet`
`gcloud compute firewall-rules delete fw-allow-health-checks --project=[PROJECT_ID] --quiet`
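For a manual run outside the LLM session, the same teardown collapses into a short paste-able sequence (a sketch; placeholders and the dependency order follow the commands above):

PROJECT=[PROJECT_ID]; REGION=[REGION]; ZONE=[ZONE]; POLICY=[POLICY_NAME]

# Delete in dependency order: policy -> forwarding rule -> backend service ->
# health check -> collector VM -> instance group -> firewall rules
gcloud compute packet-mirrorings delete "$POLICY" --project="$PROJECT" --region="$REGION" --quiet
gcloud compute forwarding-rules delete packet-mirroring-forwarding-rule --project="$PROJECT" --region="$REGION" --quiet
gcloud compute backend-services delete packet-mirroring-backend --project="$PROJECT" --region="$REGION" --quiet
gcloud compute health-checks delete packet-mirroring-hc --project="$PROJECT" --region="$REGION" --quiet
gcloud compute instances delete packet-mirroring-collector --project="$PROJECT" --zone="$ZONE" --quiet
gcloud compute instance-groups unmanaged delete packet-mirroring-ig --project="$PROJECT" --zone="$ZONE" --quiet
gcloud compute firewall-rules delete fw-allow-mirrored-traffic fw-allow-health-checks --project="$PROJECT" --quiet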
### Step 4: Final Confirmation
After successfully executing all deletion commands, inform the user that the entire packet mirroring configuration, including the policy, load balancer, instance group, collector instance, and firewall rules, has been completely removed.
Conclusion
This experiment proved that Gemini CLI + MCP is a potent combination for cloud engineering. By using the MCP server, we moved away from fragile shell script parsing to robust, structured tool execution.
This post focused on the infrastructure—getting the traffic where it needs to be. For the analysis side of the equation, check out the related post: Orchestrating Packet Analysis with Gemini and TShark. There, I demonstrate how to apply this same AI-driven approach to perform real-time PCAP analysis on the collector we just deployed.
Look for upcoming posts that incorporate both solutions, where we will combine this automated mirroring infrastructure with real-time TShark analysis for a complete end-to-end workflow.
Michael Elias is a Senior Principal Operations Engineer at Dun & Bradstreet with a history of entrepreneurship in the ISP and consulting spaces. A veteran of the dot-com era with certifications from Cisco, Red Hat, and Fortinet, Michael specializes in high-compliance infrastructure and enterprise architecture.