How to Build a Secure Linux MCP Server with FastMCP & Gemini CLI
By Michael Elias on January 27, 2026
In this post, I want to demonstrate how to use the Gemini CLI to troubleshoot Linux systems. As in my other posts, I am leveraging a Model Context Protocol (MCP) server as the conduit to the Linux platforms.
My primary goal was to build a robust Linux MCP server that was both safe and secure. I needed a solution that would strictly limit the tools allowed, preventing Gemini from hallucinating dangerous commands like rm -rf * or chaining operations together in risky ways. While I found several examples on GitHub, I thought this was the perfect use case to roll up my sleeves and build my own.
Here is the high-level architecture of what we are building:

The Setup: Installing FastMCP on RHEL9 with Pyenv
FastMCP seems to be the de facto platform folks are using to build MCP servers right now. So, off to the races I went on a stormy weekend.
Installing FastMCP was our first obstacle. The library requires Python 3.10 or higher, but the RHEL 9 box I am running on was stuck on version 3.9.
In came pyenv. Pyenv is a fantastic tool that lets you install and manage multiple versions of Python on a single system. It gave us the ability to run FastMCP on our preferred version (we chose 3.11) without breaking the system Python.
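For anyone following along, the pyenv dance looked roughly like this. It is only a sketch: the version number and package name are examples, and it assumes pyenv is already installed and initialized, with the project living in the same /opt/fastmcp-troubleshooter directory used later for the service:

```bash
# Build a modern interpreter alongside the RHEL 9 system Python 3.9
pyenv install 3.11.9
cd /opt/fastmcp-troubleshooter
pyenv local 3.11.9                  # pin this directory to the new interpreter

# Isolate the project and pull in FastMCP
python -m venv venv
./venv/bin/pip install --upgrade pip fastmcp
```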
Designing the MCP Server
Next, we needed a script to define our Linux troubleshooting server and integrate it into the Gemini CLI.
Building a Linux MCP is right up my alley, having messed with Linux for over three decades. The requirements were strict: the toolbox needed to be "read-only." We are diagnosing issues here, not fixing them (yet). We configured a whitelist of ALLOWED_BINARIES that the MCP server would enforce.
I know Perl and Bash pretty well, but my Python skills are weak at best. A friend of mine in the late 90s, Mike Schwartz, once said to me, "Mike, you really need to learn Python." I thought I could do everything a good SA needs with Perl and Bash. I was wrong.
I turned to Gemini to help me write the code to get the server up and running. In my experience, giving Gemini a loosely defined prompt and then asking it to enhance that prompt works wonders; the refined prompt is what actually delivers good code.
After a few tweaks, I had the prompt to build the initial server. My original draft looked something like this:
PROMPT: You are a DevOps expert in Python and FastMCP. You are collaborating with the user to build a Linux troubleshooting MCP server using FastMCP, with the following requirements:
- Create code that meets best practices for security requirements
- Search for the common read-only troubleshooting commands
- Prevent control operators, e.g. "&&" and "||"
- Facilitate remote execution via SSH with key-only authentication
- Never prompt the user for a private key or password
- Make any other suggestions
Gemini refined this into a structured specification, prioritizing security and strict command whitelisting.
# Building a Secure Remote Diagnostic Tool with `fastmcp` for Gemini CLI
## Persona
You are an expert DevOps engineer, proficient in Python and deeply skilled in the `fastmcp` framework. Your primary focus is on security, robustness, and creating production-ready code that is easy to understand and maintain.
## Objective
Your task is to develop a comprehensive Python script for a `fastmcp` server. This server will enable secure, read-only diagnostic commands to be executed on remote Linux systems via SSH, seamlessly integrating with Gemini CLI.
## Core Requirements
1. **Prioritize Security:** Every aspect of the script's design and implementation must uphold the highest security standards.
2. **Strict Command Whitelisting:**
- Implement a strict, non-configurable whitelist named `ALLOWED_BINARIES` (e.g., `ALLOWED_BINARIES = {"uptime", "df", ...}`) containing only safe, read-only command binaries.
- Before execution, use `shlex.split()` on the incoming `command` string to safely separate the binary from its arguments.
- If the extracted binary is *not* in `ALLOWED_BINARIES`, the script must immediately return a clear `SECURITY BLOCK` error message and prevent command execution.
3. **Prevent Shell Injection:** Utilize `subprocess.run` with `shell=False` to rigorously prevent shell injection vulnerabilities and command chaining (e.g., stopping `uptime ; rm -rf /` from executing multiple malicious commands).
4. **Remote Execution via SSH:** All commands will be executed on a remote machine via SSH.
5. **Tool Naming and Parameters (Crucial for `fastmcp` and Security):**
- The primary diagnostic tool function must be named `safe_execution`.
- It must accept the following parameters: [...]

*Refined prompt generated by the LLM (excerpt)*
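Before looking at the generated script, it helps to see what requirements 2 and 3 boil down to in practice. This is a minimal sketch of my own (the function name and error strings are illustrative), not the code Gemini produced:

```python
import shlex

# Read-only binaries the server will agree to run (subset shown).
ALLOWED_BINARIES = {"uptime", "free", "df", "ls", "tail", "journalctl", "ps"}

def validate_command(command: str) -> list[str]:
    """Split the incoming string safely and reject anything off-whitelist."""
    parts = shlex.split(command)                  # "df -h" -> ["df", "-h"]
    if not parts:
        raise ValueError("SECURITY BLOCK: empty command")
    if parts[0] not in ALLOWED_BINARIES:
        raise ValueError(f"SECURITY BLOCK: '{parts[0]}' is not a whitelisted binary")
    # Block chaining and redirection even when it hides inside an argument.
    if any(ch in token for token in parts for ch in "&|;<>`$"):
        raise ValueError("SECURITY BLOCK: shell control operators are not allowed")
    return parts
```

With a check like this, `validate_command("uptime && rm -rf /")` is rejected because the `&&` token trips the operator filter, while `validate_command("df -h")` simply returns `["df", "-h"]`.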
Security First
Once Gemini created my script, I wanted to ensure it was safe, secure, and followed best practices. I used my handy "Comprehensive Python Code Review for Security and Quality" prompt—which you can find in my previous Zabbix monitoring post.
The audit identified two critical vulnerabilities: one involving a potential man-in-the-middle (MITM) attack and another involving SSH command injection. The agent thanked me for pointing them out and fixed the code accordingly.
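The post doesn't reproduce the exact diff, but the usual fixes for those two findings look roughly like this. It is a sketch of mine rather than the author's code: the option names are standard OpenSSH client flags, and the identifier regex is an assumed example:

```python
import re
import shlex
import subprocess

_NAME_RE = re.compile(r"^[A-Za-z0-9._-]+$")      # simple username/hostname filter

def run_remote(username: str, hostname: str, parts: list[str]) -> str:
    """Execute an already-validated argv on a remote host over SSH."""
    if not (_NAME_RE.match(username) and _NAME_RE.match(hostname)):
        raise ValueError("SECURITY BLOCK: invalid username or hostname")
    argv = [
        "ssh",
        "-o", "BatchMode=yes",                   # key-only auth, never prompt for a password
        "-o", "StrictHostKeyChecking=yes",       # unknown host keys fail, so no silent MITM
        "-o", "ConnectTimeout=10",
        f"{username}@{hostname}",
        shlex.join(parts),                       # every remote argument individually quoted
    ]
    # shell=False: the local shell never parses the command string at all.
    result = subprocess.run(argv, capture_output=True, text=True,
                            timeout=30, check=False, shell=False)
    return result.stdout if result.returncode == 0 else f"ERROR: {result.stderr}"
```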
Here is the final, secured Python script:
from mcp.server.fastmcp import FastMCP
import re
import subprocess
import shlex
from typing import Optional
import time
# Initialize the MCP Server
mcp = FastMCP("Remote Linux Troubleshooter")
# --- SECURITY GUARDRAILS ---
# A strict whitelist of allowed binaries for remote execution.
ALLOWED_BINARIES = {
"uptime",
"free",
"df",
"ls",
"tail",
"journalctl",
"ps",Running as a Service
We copied the script to the FastMCP directory and ran it manually from our pyenv environment. It didn't run correctly the first time, but after a few code-assist tweaks we had it running cleanly within its own environment.
Now we were ready to integrate it into the Gemini CLI.
My original thought was to have Gemini start the server per session. This became problematic, either because of my own neophyte understanding or because the LLM simply couldn't figure it out. Even after feeding the LLM the FastMCP documentation, we hit a wall.
I abandoned that idea and decoupled the server process from the agent process. In hindsight, this is the more robust solution: the server runs as a persistent systemd service, and the agent connects to it when launched.
We created a systemd unit file at /etc/systemd/system/fastmcp-troubleshooter.service:
[Unit]
Description=FastMCP Linux Troubleshooter Server
After=network.target
[Service]
User=foo
Group=bar
WorkingDirectory=/opt/fastmcp-troubleshooter
ExecStart=/bin/bash -c "/opt/fastmcp-troubleshooter/venv/bin/python -m fastmcp.cli run /opt/fastmcp-troubleshooter/linux_troubleshooter.py"
Restart=on-failure
RestartSec=5s
PrivateTmp=false
[Install]
WantedBy=multi-user.target
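Since this is a brand-new unit file, it's good practice to tell systemd to re-read its configuration before enabling the service:

```bash
sudo systemctl daemon-reload
```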
We then enabled and started the process with standard Linux commands:
sudo systemctl enable fastmcp-troubleshooter.service
sudo systemctl start fastmcp-troubleshooter.service
We verified the status to ensure it was listening:
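In practice that amounts to a couple of standard commands; the port here matches the URL we hand to Gemini CLI in the next section:

```bash
sudo systemctl status fastmcp-troubleshooter.service
ss -tlnp | grep 8000                                   # is the MCP endpoint listening?
journalctl -u fastmcp-troubleshooter.service -n 20     # recent service output
```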

Integration and Testing
With a successful MCP server running, we just needed to integrate it into the Gemini CLI. This was as simple as adding the following entry to the mcpServers section of the settings.json file under the .gemini directory:
"fastmcp_troubleshooter": {
"url": "http://127.0.0.1:8000/mcp"
}Now for the moment of truth. We started the Gemini CLI to verify we could see the MCP server and the tools offered.

Testing the MCP server can be done by simply invoking the tool run_safe_diagnostic against a Linux endpoint. The LLM will ask the user for a username and host, but I thought it would be more practical to build a workflow prompt to collect the basics first.
Here is the prompt we loaded into the agent:
# Interactive Linux Server Troubleshooting Workflow
## Persona
You are an expert Linux System Administrator and DevOps engineer. Your goal is to interactively guide me, the user, through a systematic process to diagnose performance issues or general problems on a remote Linux server.
## Tool
You have access to a single, powerful tool:
- `run_safe_diagnostic(username: str, hostname: str, command: str, ssh_key_path: Optional[str] = None) -> str`
This tool securely executes a whitelisted, read-only command on a remote server. You must use this tool for all diagnostic steps.
## Workflow
Begin the session by asking me for the initial connection details:
1. **Username:** The SSH username.
2. **Hostname:** The hostname or IP address of the server.
3. **Path to SSH Key:** The path to the SSH private key.
Once you have these details, start the troubleshooting process. Follow this sequence, explaining the purpose of each step and showing me the output from the `run_safe_diagnostic` tool.
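Conceptually, every diagnostic step the agent takes boils down to a single call to that tool; the values below are made up purely for illustration:

```python
# Hypothetical example of one tool invocation the agent makes per step
run_safe_diagnostic(
    username="ops",
    hostname="web01.example.com",
    command="df -h",
    ssh_key_path="/home/ops/.ssh/id_ed25519",
)
```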
We ran the @linux_troubleshooting_workflow.md command. The agent walked me through the setup, asking for the target credentials.

The LLM walked me through six steps, checking the basic system health of the following (a sketch of the likely underlying commands follows the list):
- Uptime
- Memory Usage
- Disk Space
- Top Processes
- Recent System Logs
- Network Connections
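The agent chooses the exact flags at run time, but based on the whitelist the calls look roughly like this (my reconstruction; `ss` in particular would need to be on the whitelist for the network step):

```bash
uptime                    # load averages and how long the box has been up
free -m                   # memory and swap usage
df -h                     # disk space per filesystem
ps aux --sort=-%cpu       # processes ordered by CPU (no pipes allowed, so no "| head")
journalctl -p err -n 50   # the 50 most recent error-level journal entries
ss -tuln                  # listening TCP/UDP sockets
```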
Here is the summary output generated by the agent:

If the LLM detected any anomalies, it would suggest additional tools to run. The user could also ask the LLM for deeper analysis of a specific area, for example: "Use sar to analyze the load for the past 7 days and help identify which processes contributed to the load."
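Under the hood, a request like that translates into single sar invocations against the daily sysstat files (assuming sysstat is installed and `sar` is on the whitelist); for example:

```bash
sar -q -f /var/log/sa/sa21    # run-queue length and load averages recorded on the 21st
```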
Conclusion
Executing this prompt really tested our security implementation. At one point, the LLM wanted to run a for loop on the server, but thanks to the security conditions we defined, the loop was blocked. The LLM responded by finding an alternative approach: ingesting the raw data and doing the computation within its own context. While this uses more tokens, it keeps the server safe.
This was quite a journey. Sure, I could have grabbed a pre-built MCP server for Linux, but look at the fun we had rolling our own. We successfully installed FastMCP, learned how to manage Python versions with pyenv, created a fully functional (and hardened) read-only MCP server, and tested it against a live environment.
What a great way to spend a snowy weekend in January. I am always open to comments and criticisms—it’s the stuff that helps drive more robust solutions.
Michael Elias is a Senior Principal Operations Engineer at Dun & Bradstreet with a history of entrepreneurship in the ISP and consulting spaces. A veteran of the dot-com era with certifications from Cisco, Red Hat, and Fortinet, Michael specializes in high-compliance infrastructure and enterprise architecture.