FRP: The Complete Guide to Exposing Local Servers to the Internet

Tunnel Your Local Services to the Internet Without the Headaches

In today’s interconnected world, accessing your local services from anywhere has become increasingly important. Whether you’re a developer needing to showcase work to clients, a system administrator managing remote infrastructure, or simply someone wanting to access home devices while traveling, the challenge remains the same: How do you securely expose services behind NAT networks and firewalls to the internet?

This is where FRP (Fast Reverse Proxy) comes in—a powerful, flexible, and user-friendly solution for creating secure tunnels to your local services. In this comprehensive guide, we’ll explore everything you need to know about FRP, from basic concepts to advanced deployments with practical, real-world examples.

Understanding the Problem: NAT and Firewalls

Before diving into FRP, let’s understand the problem it solves.

Most home and office networks use NAT (Network Address Translation) to share a single public IP address among multiple devices. While this works well for outgoing connections, it creates challenges for incoming connections because the router doesn’t know which internal device should receive incoming traffic.

Additionally, firewalls block incoming connections by default as a security measure. This combination of NAT and firewalls makes it difficult to access your local services from the outside world.

Traditional solutions include:

  1. Port forwarding: Configuring your router to forward specific ports to internal devices
  2. Dynamic DNS: Using services that keep track of your changing public IP address
  3. VPN: Setting up a Virtual Private Network for secure access to your entire network

Each approach has limitations—port forwarding requires router access and static internal IPs; dynamic DNS doesn’t help with restrictive firewalls; and VPNs can be complex to set up and maintain.

Enter FRP: A Better Solution

FRP addresses these challenges with an elegant client-server architecture that works across virtually any network setup. Here’s how it works:

  1. The FRP server (frps) runs on a machine with a public IP address
  2. The FRP client (frpc) runs on your local machine behind NAT/firewall
  3. The client initiates and maintains a connection to the server
  4. When someone connects to the server, the traffic is forwarded through the established tunnel to your local service

This approach offers several advantages:

  • Works regardless of your local network configuration
  • Doesn’t require router configuration or port forwarding
  • Provides secure, encrypted connections
  • Supports multiple protocols (TCP, UDP, HTTP, HTTPS)
  • Offers advanced features like load balancing and connection pooling

Getting Started with FRP: Basic Setup

Let’s walk through a complete setup of FRP, starting with the basics.

Step 1: Download FRP

First, download the appropriate version for your operating system from the FRP releases page. FRP is available for Windows, macOS, Linux, and more.
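
For example, on a 64-bit Linux machine the steps look roughly like this (adjust the version number and architecture to match the latest release):

    # Download and unpack a release; the version shown is only an example
    wget https://github.com/fatedier/frp/releases/download/v0.61.0/frp_0.61.0_linux_amd64.tar.gz
    tar -xzf frp_0.61.0_linux_amd64.tar.gz
    cd frp_0.61.0_linux_amd64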

Extract the downloaded archive to get the following files:

  • frps: The server executable
  • frpc: The client executable
  • frps.toml: Server configuration template
  • frpc.toml: Client configuration template

Step 2: Set Up the FRP Server (frps)

The server component needs to run on a machine with a public IP address. This could be a cloud VPS, a dedicated server, or any machine accessible from the internet.

Create a basic server configuration file named frps.toml:
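
    # frps.toml
    bindPort = 7000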

This configuration tells frps to listen on port 7000 for client connections.

For more security, you might want to add authentication:
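
    # frps.toml
    bindPort = 7000

    # Token authentication: every client must present the same secret.
    # The value below is a placeholder; use your own long random string.
    auth.method = "token"
    auth.token = "use-a-long-random-secret"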

Start the server with:
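
    # Run from the directory containing the extracted binaries (frps.exe on Windows)
    ./frps -c ./frps.toml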

If all goes well, you should see output indicating that frps is running and listening on port 7000.

Step 3: Set Up the FRP Client (frpc)

Now, on your local machine (the one with services you want to expose), create a client configuration file named frpc.toml:
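
    # frpc.toml
    serverAddr = "x.x.x.x"    # replace with the public IP of your frps machine
    serverPort = 7000

    auth.method = "token"
    auth.token = "use-a-long-random-secret"    # must match the server's token

    [[proxies]]
    name = "ssh"
    type = "tcp"
    localIP = "127.0.0.1"
    localPort = 22
    remotePort = 6000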

This configuration:

  • Connects to your FRP server at the specified IP and port
  • Creates a TCP proxy named “ssh”
  • Forwards traffic from port 6000 on your server to port 22 (SSH) on your local machine

Start the client with:
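
    # Run from the directory containing the extracted binaries (frpc.exe on Windows)
    ./frpc -c ./frpc.toml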

If the connection is successful, you should see output indicating that the client has connected to the server and registered the ssh proxy.

Step 4: Connect to Your Service

Now, anyone can connect to your local SSH service using:
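
    # Replace "user" with a username on your local machine
    ssh -p 6000 user@your-server-ip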

Traffic arriving at port 6000 on your server will be securely forwarded through the FRP tunnel to port 22 on your local machine.

Real-World Examples: Beyond Basic SSH

While SSH forwarding is a common use case, FRP is capable of much more. Let’s explore several practical examples to showcase its versatility.

Example 1: Exposing a Web Development Server

As a web developer, you often need to show your work-in-progress to clients or colleagues. With FRP, you can expose your local development server without deploying to staging.
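
A sketch of the relevant proxy section in frpc.toml (the domain is an example; use one you control):

    # frpc.toml
    [[proxies]]
    name = "web-dev"
    type = "http"
    localIP = "127.0.0.1"
    localPort = 3000
    customDomains = ["dev.yourdomain.com"]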

In this example:

  1. Your local web server is running on port 3000
  2. FRP exposes it via the HTTP protocol
  3. It’s accessible through the custom domain dev.yourdomain.com

For this to work:

  • Add a DNS A record for dev.yourdomain.com pointing to your server IP
  • Configure your frps.toml to include vhostHTTPPort = 80

Now clients can view your development website by simply visiting http://dev.yourdomain.com.

Example 2: Hosting a Game Server for Friends

Want to host a game server for friends without renting dedicated hardware? FRP makes it easy:
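
    # frpc.toml
    [[proxies]]
    name = "minecraft"
    type = "tcp"
    localIP = "127.0.0.1"
    localPort = 25565     # Minecraft's default port
    remotePort = 25565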

With this configuration:

  1. Your local Minecraft server runs on port 25565
  2. It is exposed on the same port on your FRP server
  3. Friends connect using your-server-ip:25565 as the server address

This approach works for virtually any game server that uses TCP or UDP protocols.

Example 3: Securing Access with HTTPS

For web services that contain sensitive information, adding HTTPS encryption is crucial:
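
One way to do this is frpc's https2http plugin, which terminates TLS on your machine using your own certificate and forwards plain HTTP to the local service. The domain and certificate paths below are examples:

    # frpc.toml
    [[proxies]]
    name = "web-secure"
    type = "https"
    customDomains = ["secure.yourdomain.com"]

    [proxies.plugin]
    type = "https2http"
    localAddr = "127.0.0.1:8080"
    crtPath = "./server.crt"    # your certificate and key
    keyPath = "./server.key"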

This configuration:

  1. Takes your local web service running on port 8080
  2. Exposes it with HTTPS encryption
  3. Uses your own SSL certificate for secure connections

You’ll need to:

  • Configure your frps.toml to include vhostHTTPSPort = 443
  • Obtain an SSL certificate for your domain
  • Set up the appropriate DNS records

Example 4: Creating a Private Service with Authentication

Sometimes you want to expose a service but restrict who can access it. FRP offers several ways to do this:
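
The simplest is HTTP Basic Authentication on an HTTP proxy (the domain and credentials below are placeholders):

    # frpc.toml
    [[proxies]]
    name = "private-web"
    type = "http"
    localPort = 8080
    customDomains = ["private.yourdomain.com"]
    httpUser = "admin"
    httpPassword = "change-me"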

This adds HTTP Basic Authentication to your web service, requiring users to enter the username and password before accessing the content.

Example 5: Load Balancing Multiple Instances

Running multiple instances of your application for redundancy or performance? FRP can distribute traffic between them:
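
On each machine, frpc declares a proxy entry like the one below; only the proxy name should differ between instances, and the group key is a placeholder shared by all members:

    # frpc.toml (instance 1; use name = "web-service-2" on the second machine)
    [[proxies]]
    name = "web-service-1"
    type = "tcp"
    localPort = 8080
    remotePort = 6080
    loadBalancer.group = "web-service"
    loadBalancer.groupKey = "shared-group-secret"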

With these configurations running on different machines:

  1. Both instances register to the same group “web-service”
  2. FRP distributes incoming requests between them
  3. If one instance fails, traffic automatically routes to the healthy one

This provides simple load balancing and high availability without additional infrastructure.

Advanced FRP Features

Now that we’ve covered basic and practical examples, let’s explore some of FRP’s more advanced features.

Feature 1: Health Checking

For critical services, you can add health checking to ensure availability:
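
    # frpc.toml
    [[proxies]]
    name = "critical-web"
    type = "http"
    localPort = 8080
    customDomains = ["app.yourdomain.com"]    # example domain
    healthCheck.type = "http"
    healthCheck.path = "/health"
    healthCheck.intervalSeconds = 10
    healthCheck.maxFailed = 3
    healthCheck.timeoutSeconds = 3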

With this configuration:

  1. FRP checks your service’s /health endpoint every 10 seconds
  2. If it fails 3 times in a row, the proxy is temporarily removed
  3. When the service becomes healthy again, the proxy is automatically restored

This prevents users from being directed to a non-functioning service.

Feature 2: Bandwidth Limiting

To prevent a single service from consuming all available bandwidth:
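
    # frpc.toml
    [[proxies]]
    name = "file-transfer"
    type = "tcp"
    localPort = 8000
    remotePort = 6001
    transport.bandwidthLimit = "5MB"    # per-proxy limit of 5 MB/s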

This limits the bandwidth to 5 MB/s, ensuring other services have adequate network resources.

Feature 3: P2P Mode for Direct Connections

For large file transfers or low-latency applications, the P2P mode can bypass the FRP server once the connection is established:
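
P2P tunnels use the xtcp proxy type: the machine that owns the service (Machine A) registers an xtcp proxy, and the machine that wants to reach it (Machine B) runs a matching visitor. The secret key below is a placeholder and must be identical on both sides:

    # frpc.toml on Machine A (exposes local SSH over P2P)
    [[proxies]]
    name = "p2p-ssh"
    type = "xtcp"
    secretKey = "shared-p2p-secret"
    localIP = "127.0.0.1"
    localPort = 22

    # frpc.toml on Machine B (reaches Machine A's SSH at 127.0.0.1:6000)
    [[visitors]]
    name = "p2p-ssh-visitor"
    type = "xtcp"
    serverName = "p2p-ssh"
    secretKey = "shared-p2p-secret"
    bindAddr = "127.0.0.1"
    bindPort = 6000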

With this setup:

  1. Machine B can connect to services on Machine A
  2. After initial connection through the FRP server, traffic flows directly between the machines
  3. This reduces latency and server bandwidth usage

Feature 4: Port Multiplexing

FRP allows multiple services to share the same port through protocol detection:
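
For example, recent frps releases can accept HTTPS vhost traffic on the same port used for client connections, because the two protocols can be distinguished on the wire. This is a hedged sketch; confirm that your frp version supports this kind of port reuse before relying on it:

    # frps.toml
    bindPort = 7000
    vhostHTTPSPort = 7000    # HTTPS vhost traffic shares the client connection port
    vhostHTTPPort = 80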

With this server configuration, both HTTP and HTTPS traffic can use the same ports, with FRP automatically directing traffic to the appropriate service based on the protocol.

Feature 5: Custom Subdomain Routing

For teams or organizations with multiple services:
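
This relies on the subdomainHost option on the server plus a subdomain on each client proxy (the domain below is an example):

    # frps.toml
    subdomainHost = "frp.yourdomain.com"
    vhostHTTPPort = 80

    # frpc.toml
    [[proxies]]
    name = "user1-web"
    type = "http"
    localPort = 8080
    subdomain = "user1"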

This makes the service available at http://user1.frp.yourdomain.com, allowing each team member or department to have their own subdomain.

Setting Up a Complete FRP Infrastructure

Now let’s put everything together to create a complete FRP infrastructure for a small organization or development team.

Server Configuration (frps.toml)
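
Here is a sketch of such a configuration. The token, dashboard credentials, and domain are placeholders, and the option names follow the TOML format used by recent frp releases:

    # frps.toml
    bindPort = 7000
    vhostHTTPPort = 80
    vhostHTTPSPort = 443
    subdomainHost = "frp.yourdomain.com"

    # Client authentication
    auth.method = "token"
    auth.token = "use-a-long-random-secret"

    # Web dashboard for monitoring proxies
    webServer.addr = "0.0.0.0"
    webServer.port = 7500
    webServer.user = "admin"
    webServer.password = "change-me"

    # Restrict which remote ports clients may request
    allowPorts = [
      { start = 6000, end = 6100 },
      { single = 25565 }
    ]

    # Logging
    log.to = "./frps.log"
    log.level = "info"
    log.maxDays = 7

    # Performance: connection pool size offered to clients
    transport.maxPoolCount = 5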

This comprehensive server configuration:

  1. Listens for client connections on port 7000
  2. Serves HTTP and HTTPS traffic on standard ports
  3. Requires token authentication for client connections
  4. Provides a web dashboard for monitoring proxies
  5. Restricts which ports can be used for remote port mapping
  6. Includes logging and performance optimizations

Client Configurations for Different Use Cases

Development Team Member (frpc-dev.toml)
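
A developer might publish a local dev server under a personal subdomain (all values below are examples; the token must match the server's):

    # frpc-dev.toml
    serverAddr = "frp.yourdomain.com"
    serverPort = 7000
    auth.method = "token"
    auth.token = "use-a-long-random-secret"

    [[proxies]]
    name = "john-dev-web"
    type = "http"
    localPort = 3000
    subdomain = "john-dev"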

Operations Team (frpc-ops.toml)
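
The operations team might tunnel SSH plus an internal dashboard, using a remote port from the allowed range and Basic Authentication for the web UI (again, all values are examples):

    # frpc-ops.toml
    serverAddr = "frp.yourdomain.com"
    serverPort = 7000
    auth.method = "token"
    auth.token = "use-a-long-random-secret"

    [[proxies]]
    name = "ops-ssh"
    type = "tcp"
    localPort = 22
    remotePort = 6022

    [[proxies]]
    name = "ops-monitoring"
    type = "http"
    localPort = 3000
    subdomain = "ops"
    httpUser = "ops"
    httpPassword = "change-me"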

Marketing Team (frpc-marketing.toml)
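
Marketing might share a work-in-progress site behind HTTP Basic Authentication (example values):

    # frpc-marketing.toml
    serverAddr = "frp.yourdomain.com"
    serverPort = 7000
    auth.method = "token"
    auth.token = "use-a-long-random-secret"

    [[proxies]]
    name = "marketing-preview"
    type = "http"
    localPort = 8080
    subdomain = "preview"
    httpUser = "marketing"
    httpPassword = "change-me"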

With these configurations, each team can expose their specific services while maintaining security and isolation between different parts of the organization.

Logical Flow of FRP Connections

To fully understand FRP, it helps to examine the logical flow of connections:

  1. Initialization Phase
    • frps starts on the server, listening on the bindPort (7000 in our examples)
    • frpc starts on the client machine and connects to frps
    • frpc authenticates using the configured method (token in our examples)
    • frpc registers its configured proxies with frps
    • frps acknowledges the registrations and prepares to accept external connections
  2. Connection Phase
    • An external user attempts to connect to a service (e.g., visiting http://john-dev.frp.yourdomain.com)
    • The request reaches frps on the vhostHTTPPort (80)
    • frps examines the host header to determine which proxy should receive the connection
    • frps identifies the client (frpc) that registered the matching proxy
  3. Tunneling Phase
    • frps creates a new connection request and sends it to the appropriate frpc
    • frpc receives the connection request
    • frpc establishes a connection to the local service (e.g., localhost:3000)
    • frpc begins relaying data between the local service and frps
    • frps relays data between frpc and the external user
  4. Maintenance Phase
    • frpc sends periodic heartbeats to frps to maintain the connection
    • frps monitors proxy health through configured health checks
    • If a client disconnects, frps removes its registered proxies
    • When a client reconnects, it re-registers its proxies

This logical flow ensures secure and reliable tunneling of traffic from the internet to your local services, regardless of NAT or firewall restrictions.

Best Practices for FRP Deployment

To get the most out of FRP while ensuring security and performance, follow these best practices:

Security Best Practices

  1. Always use authentication: Configure token authentication at minimum, and consider OIDC for enterprise deployments
  2. Enable TLS for the frpc-frps connection: This encrypts the control channel between client and server (see the example after this list)
  3. Restrict allowed ports: Use the allowPorts configuration to prevent abuse
  4. Add HTTP authentication for sensitive web services
  5. Regularly update FRP to get the latest security patches
  6. Run frpc with limited privileges: Don’t run it as root/administrator unless necessary
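
As mentioned in item 2, encrypting the control channel is a single client-side option in recent releases:

    # frpc.toml
    transport.tls.enable = true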

Performance Best Practices

  1. Enable TCP multiplexing: This reduces the number of connections needed
  2. Configure appropriate bandwidth limits: Prevent a single service from consuming all bandwidth
  3. Use connection pooling for services with many short-lived connections
  4. Monitor dashboard metrics to identify performance bottlenecks
  5. Consider P2P mode for high-bandwidth applications to reduce server load

Reliability Best Practices

  1. Implement health checks for critical services
  2. Set up load balancing for important applications
  3. Configure logging to help troubleshoot issues
  4. Use a systemd service (Linux) or Windows Service to ensure FRP starts automatically (a sample unit file follows this list)
  5. Consider running redundant frps instances for high-availability deployments
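
As an example of item 4, a minimal systemd unit for frps might look like this (the paths are placeholders; adjust them to wherever you installed FRP):

    # /etc/systemd/system/frps.service
    [Unit]
    Description=frp server
    After=network.target

    [Service]
    Type=simple
    ExecStart=/usr/local/bin/frps -c /etc/frp/frps.toml
    Restart=on-failure
    RestartSec=5s

    [Install]
    WantedBy=multi-user.target

Enable it with systemctl enable --now frps, and create an equivalent unit for frpc on client machines.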

Troubleshooting Common FRP Issues

Even with the best setup, issues can arise. Here’s how to troubleshoot common problems:

1. Connection Failures

If frpc can’t connect to frps:

  • Verify network connectivity (can you ping the server?)
  • Check that frps is running and listening on the configured port
  • Confirm firewall rules allow the connection (both server and client)
  • Verify authentication credentials match between client and server

2. Proxy Registration Failures

If proxies aren’t registering:

  • Check for name conflicts (each proxy name must be unique across all clients)
  • Verify the remotePort isn’t being used by another proxy or service
  • Look for typos in the configuration files

3. Service Accessibility Issues

If you can’t access your service through the proxy:

  • Confirm the local service is running and accessible directly on the client
  • Verify the localIP and localPort in your configuration
  • Check for any authorization or firewall rules that might block the connection
  • Look at the logs for both frpc and frps for specific error messages

4. Performance Problems

If you’re experiencing slow connections:

  • Check your bandwidth settings
  • Consider enabling compression for text-based protocols
  • Look at server resource usage (CPU, memory, network)
  • Try the P2P mode for bandwidth-intensive applications

Extending FRP with Plugins

FRP’s plugin system allows you to extend its functionality beyond simple port forwarding. Here are some useful built-in plugins:

HTTP Proxy Plugin

This creates an HTTP proxy server that can be used to route web traffic through your FRP server—useful for accessing region-restricted content or bypassing network restrictions.
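
A sketch of the frpc.toml entry (the remote port and credentials are placeholders):

    # frpc.toml
    [[proxies]]
    name = "http-proxy"
    type = "tcp"
    remotePort = 6010

    [proxies.plugin]
    type = "http_proxy"
    httpUser = "proxyuser"
    httpPassword = "change-me"

Point your browser or system proxy settings at your-server-ip:6010 to use it.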

SOCKS5 Plugin

Similar to the HTTP proxy but using the SOCKS5 protocol, which supports a wider range of applications including non-web traffic.
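
A corresponding sketch (again with placeholder port and credentials):

    # frpc.toml
    [[proxies]]
    name = "socks5-proxy"
    type = "tcp"
    remotePort = 6011

    [proxies.plugin]
    type = "socks5"
    username = "proxyuser"
    password = "change-me"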

Static File Server

This creates a simple file server, allowing you to share files from the client machine through the FRP server.
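
A sketch that shares a local directory (the path, URL prefix, and credentials are placeholders):

    # frpc.toml
    [[proxies]]
    name = "file-share"
    type = "tcp"
    remotePort = 6012

    [proxies.plugin]
    type = "static_file"
    localPath = "/home/user/shared"
    stripPrefix = "files"
    httpUser = "files"
    httpPassword = "change-me"

The directory should then be browsable at http://your-server-ip:6012/files/.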

Unix Domain Socket Forwarding

This allows you to expose Unix domain sockets as TCP services—particularly useful for Docker API access or other socket-based services.
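
A sketch that exposes the local Docker socket (the remote port is a placeholder; be careful exposing this without additional protection):

    # frpc.toml
    [[proxies]]
    name = "docker-api"
    type = "tcp"
    remotePort = 6013

    [proxies.plugin]
    type = "unix_domain_socket"
    unixPath = "/var/run/docker.sock"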

Comparing FRP with Alternatives

While FRP is an excellent solution for exposing local services, it’s worth comparing it with alternatives to understand its strengths and limitations:

FRP vs. ngrok

ngrok:

  • Commercial service with free tier
  • Simple setup with minimal configuration
  • Built-in analytics and inspection
  • Managed infrastructure

FRP:

  • Completely free and open-source
  • Self-hosted with full control
  • More flexible configuration options
  • Support for more protocols and features

FRP vs. Cloudflare Tunnel (formerly Argo Tunnel)

Cloudflare Tunnel:

  • Integrated with Cloudflare’s security services
  • Simple setup for web services
  • Built-in DDoS protection
  • Primarily designed for HTTP/HTTPS traffic; other protocols require additional client-side tooling

FRP:

  • Supports more protocols (TCP, UDP, HTTP, HTTPS)
  • No external dependencies or accounts required
  • More configuration options
  • Can be used in environments without internet access

FRP vs. SSH Tunnels

SSH Tunnels:

  • Built into most systems
  • Simple for technical users
  • Limited to port forwarding

FRP:

  • More user-friendly configuration
  • Advanced features like load balancing and health checks
  • Better handling of connection interruptions
  • Web dashboard for monitoring

Conclusion: Building Your FRP Infrastructure

FRP offers a powerful, flexible solution for exposing local services to the internet, whether you’re a developer, system administrator, or enthusiast. By understanding its capabilities and following best practices, you can create a secure, reliable infrastructure for accessing your services from anywhere.

The key advantages of FRP include:

  1. Simplicity: Easy to set up and configure
  2. Flexibility: Supports multiple protocols and configurations
  3. Security: Provides authentication and encryption options
  4. Performance: Offers features like compression and P2P mode
  5. Extensibility: Plugin system for additional functionality

Start with a simple setup and gradually explore more advanced features as your needs grow. With the examples and best practices in this guide, you’re well-equipped to leverage FRP for all your port forwarding needs.

Remember to stay up-to-date with the latest FRP releases, as the project is actively developed with new features and security improvements regularly added.

Happy tunneling!
