May 17, 2024 13 min read

So we built a Reverse Tunnel in Go over HTTP/3 and QUIC

George MacRorie
Reverst: Reverse Tunnel in Go

We recently open-sourced Reverst, a reverse tunnel implementation built on QUIC and HTTP/3.

This new project sits at the heart of Flipt Hybrid Cloud to provide efficient, scalable, and secure connections from our cloud to your instances of Flipt.

This blog post aims to explain a little bit of the why (or rather asks why not?) and a lot more on the how, specifically the how in Go.

Building Flipt Cloud

If you don't already know, we build Flipt, the open-source, Cloud Native feature management solution. Flipt is designed to be self-hosted on your infrastructure. It is simple to deploy and scale on most modern containerized platforms to serve your applications and workloads. We believe you get the most benefits by running it yourself, and we want you to keep doing that.

This past week we announced our new Hybrid Cloud offering for the Flipt ecosystem. We say hybrid because we want you to keep running Flipt for feature flag evaluations on your infrastructure. Self-hosting flags has so many benefits, including lower costs, improved stability and performance, as well as security. However, we believe we can make all the other non-functional requirements for managing your Flipt a lot easier by integrating with our Hybrid Cloud offering.

Flipt exposes an API and a user interface that consumes it. We've built our platform to safely expose this API and in particular the UI on the public internet, all without the need to manage DNS, TLS, load-balancers, OAuth, or session credentials. We've simplified the process down to a few clicks and an API key. This means you can get right down to deploying and scaling Flipt while organizing your teams' access in our remote platform.

To make it simple, we built our system around reverse tunnel technology.

Don't call us, we'll call you

There are a lot of commercial and open-source projects doing reverse tunneling. Admittedly, we did not do our homework well enough before setting out on this adventure. We evaluated two managed, predominantly commercial (they have open-source components) offerings: Ngrok and Inlets. Both are amazing and you should definitely read more about them, and probably use them if you want to leverage tunnels for your own things.

However, blinded by that inner feeling of "how hard can this be?" and a shallow understanding of a shiny (not that) new thing (HTTP/3 and QUIC) we built Reverst instead. Why in Go, you ask? It’s all I know, stop asking silly questions, I’m too dumb for Rust. Why HTTP/3 and QUIC? I’ll get to that later.

But in all seriousness, Go is actually pretty great for this, in large part thanks to the quic-go project. I want to give a massive shoutout to the quic-go folks for leading the charge with QUIC for Go. Otherwise, I would probably have used something off the shelf and saved everyone from Reverst and this blog post. Thank you for the awesome hard work, for being a welcoming place to contribute back to, and for enabling us to build silly things.

Establishing the Tunnel

In this section, we’re going to Go (not sorry) deep into some of the inner workings of Reverst. There will be lots of Go snippets; you have been warned.

Reverst Overview
Overview of how Reverst works internally

Reverst is built around the same core principles as most other reverse tunnels. Instead of building a service that listens for and accepts connections within its own deployment environment, you have it dial out to an instance of a reverse tunnel server (Reverst, in this case). The remote tunnel server then authenticates the request and registers the connection, and each party switches roles: server becomes client and vice-versa.

💡 This is why Ngrok is so great for local development. Services running in hard-to-reach networks (e.g. behind NATs like your home network or on mobile devices) can establish themselves on tunnels deployed in reachable network locations.

In Reverst, a client wishing to establish a listening tunnel opens a QUIC connection to a Reverst server, then starts an initial stream and writes a RegisterListenerRequest. The purpose of this request is twofold:

  1. Authenticate the caller with Reverst
  2. Identify a target tunnel group

The following Go code is adapted directly from our client listener code, which is attempting to register itself on a remote Reverst instance:

stream, err := conn.OpenStream()
if err != nil {
	return fmt.Errorf("opening stream: %w", err)
}

defer stream.Close()

enc := protocol.NewEncoder[protocol.RegisterListenerRequest](stream)
defer enc.Close()

req := &protocol.RegisterListenerRequest{
	Version:     protocol.Version,
	TunnelGroup: s.TunnelGroup,
}

auth := defaultAuthenticator
if s.Authenticator != nil {
	auth = s.Authenticator
}

if err := auth.Authenticate(stream.Context(), req); err != nil {
	return fmt.Errorf("registering new connection: %w", err)
}

if err := enc.Encode(req); err != nil {
	return fmt.Errorf("encoding register listener request: %w", err)
}

Tunnel groups sit at the heart of Reverst's ability to load-balance proxied requests across a number of tunneled listeners. At its simplest, a tunnel group is the name that associates these connections with one another. Each named tunnel group carries additional configuration, including which authentication mechanisms and credentials it has enabled, along with which hostnames the proxy API should route to the group. When an API request that needs proxying reaches Reverst, the request’s hostname is used to identify the tunnel group and route the request to a relevant connection in the pool.

Reverst Tunnel Groups
Overview of a single Reverst tunnel group

In order for Reverst to know which tunnel groups it can serve, which hostnames to route to each group, and which authentication mechanisms and credentials to use, it needs this information encoded somewhere. Configuring these parameters is achieved through a YAML configuration file, which looks much like the following:

  "a": # tunnel group name
      - "" # hostname for routing proxied HTTP requests
        username: "user"
        password: "pass"
      - ""
        token: "somesecretkey"

Reverst can source this configuration from either the file system or directly from a Kubernetes ConfigMap. It can be configured to watch either of these sources and update its tunnel group and routing details without needing to restart the process. You won’t need three guesses to figure out which popular container orchestrator we’re using to deploy our instances of Reverst.

When a Reverst server instance receives a new connection it attempts to:

  • Accept an initial stream
  • Read and decode a RegisterListenerRequest
  • Verify that the tunnel group is one known to Reverst
  • Authenticate the request using the tunnel group’s authentication credentials

The next Go snippet is adapted from the code in Reverst which handles this process:

stream, err := conn.AcceptStream(conn.Context())
if err != nil {
	return fmt.Errorf("accepting stream: %w", err)
}

defer stream.Close()

dec := protocol.NewDecoder[protocol.RegisterListenerRequest](stream)
defer dec.Close()

req, err := dec.Decode()
if err != nil {
	return fmt.Errorf("decoding register listener request: %w", err)
}

w := &responseWriter{enc: protocol.NewEncoder[protocol.RegisterListenerResponse](stream)}
defer func() {
	// .. cleanup resources and emit telemetry
}()

// tripper here is the name we have for our connection pool
// it's weird I know, but it relates to the fact we handle
// our connections as a set of http.RoundTripper instances
tripper, ok := s.trippers[req.TunnelGroup]
if !ok {
	// respond with an appropriate error
	return err
}

if err := s.handler.Authenticate(&req); err != nil {
	// respond with an appropriate error
	return err
}

// encodes a RegisterListenerResponse with CodeOK
if err := w.write(nil, protocol.CodeOK); err != nil {
	return fmt.Errorf("encoding register listener response: %w", err)
}

// add the connection to the pool

You can see at the end of this code block that Reverst encodes a RegisterListenerResponse back onto the stream, which is then finally closed. This is when and where the associated connection is added to a “pool” of connections. To understand how the pool is modeled in Reverst, it helps to reflect on how Go’s standard http.Client works, as Reverst uses these clients to perform the proxying of requests.

type Client struct {
	// Transport specifies the mechanism by which individual
	// HTTP requests are made.
	// If nil, DefaultTransport is used.
	Transport RoundTripper
	// ...
}
The Go net/http client is a wrapper around the http.RoundTripper interface. A round-tripper is the thing that actually writes the request to, and reads the response from, some underlying connection(s). The higher-level http.Client handles HTTP details such as the normalization of certain headers, cookies, and redirects.

The quic-go project uses the http.RoundTripper interface as its point of integration for supporting HTTP/3. Its sub-package http3 exports its own implementation, http3.RoundTripper. Reverst goes a step further and implements its own wrapper around http3.RoundTripper, which allows it to maintain its own set of connections.

Where do these connections come from? You guessed it, they get added by that line at the end of the registration process:

// add the connection to the pool

The Reverst implementation of http.RoundTripper does one last thing, and that is to implement round-robin style load-balancing. This ensures that each proxied inbound request is distributed evenly over the set of open connections that have been registered:

func (r *roundRobbinTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	defer func() { /* some cleanup */ }()

	for {
		// set is an internal type that yields each http3.RoundTripper from the
		// set, in order, on each subsequent call to Next(...)
		// it supports safe concurrent addition, removal and yielding of entries
		rt, ok, err := r.set.Next(req.Context())
		if err != nil {
			// can only be a context error
			return nil, err
		}

		if !ok {
			return nil, net.ErrClosed
		}

		resp, err := rt.RoundTrip(req)
		if err != nil {
			if errors.Is(err, net.ErrClosed) {
				// the connection has gone away; retry on the
				// next round-tripper in the set
				continue
			}
		}

		return resp, err
	}
}
Congratulations, your connection is in the pool, and now it's time for both sides of the connection to switch roles.

// register server as a listener on remote tunnel
if err := s.register(conn); err != nil {
	return err
}

log.Info("Starting server")

return (&http3.Server{Handler: s.Handler}).ServeQUICConn(conn)

From here, quic-go and its http3 implementation do all of the hard work. The client switches to serving incoming HTTP/3 requests from accepted streams. The Reverst server already has the RoundTripper pool embedded inside an http.Client, which is stored in a map keyed by the hostnames defined in the tunnel group configuration.

func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	start := time.Now().UTC()

	log := slog.With("method", r.Method, "path", r.URL.Path)
	log.Debug("Handling request")

	// bit of host normalization necessary for deployment behind proxies
	host, _, perr := net.SplitHostPort(r.Host)
	if perr != nil {
		host = r.Host
	}

	if forwarded := r.Header.Get("X-Forwarded-Host"); forwarded != "" {
		host = forwarded
	}

	// get http client with embedded round tripper
	client, ok := s.clients[host]
	if !ok {
		log.Debug("Unexpected client host requested", "host", host)
		http.Error(w, "bad gateway", http.StatusBadGateway)
		return
	}

	// clone the request, force HTTPS and change the target host to that
	// of the tunnel's serving host address
	r = r.Clone(r.Context())
	r.URL.Scheme = "https"
	r.URL.Host = s.conf.TunnelAddress
	r.RequestURI = ""

	resp, err := client.Do(r)
	// ... copy the response back to w

Here we see the Reverst server’s HTTP handler for proxying requests to the set of http.Client instances with their embedded http.RoundTripper pools. The calling request’s target hostname is used to look up the client; the request is then copied, manipulated, and forwarded on to the identified client.

In Flipt Cloud we deploy and configure Reverst so you don’t have to (you don’t even have to know it exists). We then tightly integrate it with our user management system and API gateway. This allows us to provide a point-and-click experience for managing SSO providers and configuring your internal organization and its users.

Why HTTP/3 and QUIC?

Finally, why have we chosen to build on HTTP/3 and QUIC instead of alternatives? Most other popular tunneling solutions I've come across are built on top of web sockets and TCP.

HTTP/3 is the newest standard in the HTTP protocol family. The primary differentiator for HTTP/3 is that it is built on top of the QUIC protocol rather than TCP (which underpins both HTTP/1 and HTTP/2).

HTTP/2 brought a much-needed improvement over the HTTP/1 protocol. It addressed a deficiency of the request/response protocol known as head-of-line (HOL) blocking by multiplexing multiple requests over a single TCP connection (amongst other general improvements). However, there still exists another HOL blocking problem within TCP itself.

TCP handles the reliable, ordered delivery of packets to a destination. The internet has a habit of losing packets from time to time. It's TCP's job to identify this situation and re-transmit these lost packets. It is in this retransmission phase that TCP actually has another HOL-blocking situation. While in a state of retransmission, the protocol will block other packets being sent until order is restored.

To mitigate this issue in TCP, a different protocol is required. However, TCP is ubiquitous: the lingua franca for stateful, reliable, bidirectional connections on the internet. Replacing something like TCP would require changing all the networked systems out there serving the internet. This is where the group behind QUIC chose another strategy: build the new protocol on top of UDP instead (TCP's other widely deployed, less reliable sibling).

QUIC offers secure (TLS 1.3 is required) and persistent connections with bidirectional streaming over the existing internet's UDP fabric, delivering the benefits of TCP without the HOL blocking problem.

Reverst takes advantage of these new protocols to build persistent connections, which can handle multiple concurrent requests over unreliable networks.

Wrapping Up

If we haven’t lost you by this point, thanks for sticking it out!

There are actually a lot of details I have omitted (believe it or not), including how we deploy Reverst in Kubernetes, the custom controller which operates it, how we scale it, and how we route connections based on TLS SNI. Don’t worry, I’ll probably bore you all with that in the next blog post.

We’re not afraid at Flipt to experiment with something new, and we’re really excited to see how this one plays out. You can sign up to Flipt Hybrid Cloud Beta and also learn more about it in our documentation.

If you want to check out the code for Reverst, you can find it on GitHub. We’d love to hear your thoughts on it, and if you have any ideas for improvements, please open an issue or a pull request.

If you want to chat with us about it in real time, come join our Discord.