March 22, 2023

Generating the Flipt Go SDK

George MacRorie

Photo by Lenny Kuhne on Unsplash


This blog post describes the journey we took to build our new Go SDK, and how we developed our protoc plugin in Go to achieve that. It first explores our motivations and then gets into some of the details of how we built it and the tools that go into the automation around it.

We're releasing the new Go SDK today! You should be able to fetch it using go get go.flipt.io/flipt/sdk/go.

Getting Serious About Clients

Recently, we decided to revise and improve our API client offering, starting with our support for Go.

Go is the core language used to develop Flipt's backend service. It is something the team has a long history with, so this was an obvious place for us to start.

With the recent addition of different authentication methods for Flipt, we wondered how we could better support integrating these authentication mechanisms into clients.

Our documentation explains how to manually wire static client tokens into a gRPC or HTTP client. However, we now also support authenticating with Kubernetes service-account tokens, which is a more involved process with an extra API handshake in the middle, so it is something we ideally want our clients to handle for you with minimal configuration.
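
For a sense of the boilerplate involved, here is a minimal sketch of that manual wiring in Go. The address and token are placeholders; it simply attaches a static client token via the Authorization header for HTTP and via request metadata for gRPC.

package auth

import (
	"context"
	"net/http"

	"google.golang.org/grpc/metadata"
)

// clientToken is a placeholder; in practice it comes from your configuration.
const clientToken = "your-static-client-token"

// newHTTPRequest attaches the token to an HTTP API call via the Authorization header.
func newHTTPRequest(ctx context.Context) (*http.Request, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "http://localhost:8080/api/v1/flags", nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+clientToken)
	return req, nil
}

// withClientToken attaches the same token to outgoing gRPC calls via request metadata.
func withClientToken(ctx context.Context) context.Context {
	return metadata.AppendToOutgoingContext(ctx, "authorization", "Bearer "+clientToken)
}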

Status Quo

Flipt’s core service and APIs are defined using gRPC and protobuf. This allows us to define our APIs as a set of RPCs and messages in the proto language and then use a set of protoc tools and plugins to scaffold domain models, servers and clients in different languages. This is a superpower for us. It allows us to reach other programming ecosystems and languages without prior knowledge or experience in them, opening Flipt up to a wider audience.

However, gRPC alone is not for everyone or every ecosystem. While it has a big reach, not every language is catered to equally. In particular, the browser hasn’t had the best support, so we also wanted an equivalent HTTP API. The API should also be understandable by developers, allowing them to write their own clients using an HTTP library in their language of choice.

This is where grpc-gateway comes into the mix. Flipt has used this project since the beginning to present an equivalent REST-like API using JSON as the message serialization format. The grpc-gateway tooling supports server generation (not client) in Go.

This is where we were until very recently: a protobuf-generated gRPC client for the Go community, with no additional bells or whistles, just a thin shim around the Flipt gRPC endpoints and no support for invoking the HTTP API in Go. While we’re uncertain of the use cases, it seemed a shame to mandate gRPC when the HTTP API is also available.

Our Goals

  • A single consistent SDK API, which abstracts the details of the transport from the caller. While we want our users to have the choice, we don’t think the transport details matter when it comes to consuming the client's functionality in your application code.
  • Two implementations of the transport abstraction, one for gRPC and one for HTTP.
  • To generate as much of this as possible from the original proto definitions. We invested in gRPC and protobuf for the tooling and automation. Having our SDK advance with these definitions will keep us up to date.
  • Develop an abstraction around authenticating our SDK, so that consumers can access Flipt’s APIs with minimal configuration.

Flipt SDK Architecture

The entry point to our new client is the SDK type. This type encapsulates the three sections of the Flipt RPC APIs known as Flipt (core API), Auth (authentication-related APIs) and Meta (server metadata APIs). All of these types are generated from the original proto definitions.

These SDK types are (mostly) abstracted away from the transport details. The specifics are hidden behind a set of Transport Go interfaces. Each of these interfaces has an equivalent implementation in both the grpc and http packages.

type Transport interface {
	AuthClient() AuthClient
	FliptClient() flipt.FliptClient
	MetaClient() meta.MetadataServiceClient
}

For the most part, the grpc transport implementation is identical to the grpc-go generated client; the package contains just some thin wrapping code to glue our SDK together. Since grpc-go has already done the heavy lifting, the Transport interfaces are intentionally kept very close to these concrete gRPC implementations, minimizing both our effort and the amount of generated code.
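
To give a feel for the end result, here is a rough usage sketch. The constructor and accessor names (sdk.New, sdkgrpc.NewTransport and the section accessors) mirror the structure described above, but treat them as illustrative and check the package documentation for the exact API.

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	sdk "go.flipt.io/flipt/sdk/go"
	sdkgrpc "go.flipt.io/flipt/sdk/go/grpc"
)

func main() {
	// Dial the Flipt gRPC endpoint (plaintext here purely for brevity).
	conn, err := grpc.Dial("localhost:9000", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Illustrative: construct the SDK over the gRPC Transport implementation.
	s := sdk.New(sdkgrpc.NewTransport(conn))

	// Each section of the Flipt RPC APIs is reached via an accessor on the SDK type.
	_ = s.Flipt() // core flag management and evaluation APIs
	_ = s.Auth()  // authentication related APIs
	_ = s.Meta()  // server metadata APIs
}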

The http implementation on the other hand required some more work. For a start, grpc-gateway does not ship a generated HTTP Go client, so we had to implement this ourselves. To invoke the API generated by grpc-gateway, we need to produce code that can adapt our RPC messages into the appropriate HTTP requests. We also need to provide a Go implementation that conforms to the Transport interface definitions derived from gRPC.

The grpc-gateway ecosystem has a collection of additional annotations and metadata which describe how the RPC methods should be represented as HTTP requests, for example:

http:
  rules:
    # ...
    - selector: flipt.Flipt.CreateVariant
      post: /api/v1/flags/{flag_key}/variants
      body: "*"

Our custom protoc plugin parses this additional metadata and uses it to generate Go code which can produce an equivalent HTTP request.

func (x *FliptClient) CreateVariant(ctx context.Context, v *flipt.CreateVariantRequest, _ ...grpc.CallOption) (*flipt.Variant, error) {
	var body io.Reader
	var values url.Values
	// Serialize the RPC request message as JSON, as grpc-gateway expects.
	reqData, err := protojson.Marshal(v)
	if err != nil {
		return nil, err
	}
	body = bytes.NewReader(reqData)
	// Interpolate path parameters (here the flag key) into the annotated HTTP route.
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, x.addr+fmt.Sprintf("/api/v1/flags/%v/variants", v.FlagKey), body)
	if err != nil {
		return nil, err
	}
	req.URL.RawQuery = values.Encode() // no query parameters for this particular RPC
	resp, err := x.client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var output flipt.Variant
	respData, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	// Surface any non-OK responses as errors before attempting to decode.
	if err := checkResponse(resp, respData); err != nil {
		return nil, err
	}
	// Decode the JSON response back into the RPC response message.
	if err := protojson.Unmarshal(respData, &output); err != nil {
		return nil, err
	}
	return &output, nil
}
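
One piece not shown above is checkResponse. A hypothetical version might look like the following sketch; it assumes the grpc-gateway style JSON error body (code and message fields) and maps it back onto a gRPC status error, though the generated helper may differ in detail.

package sdkhttp

import (
	"encoding/json"
	"fmt"
	"net/http"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// checkResponseSketch is a hypothetical stand-in for the generated checkResponse
// helper. It treats any non-2xx status as an error and attempts to surface the
// grpc-gateway style error body.
func checkResponseSketch(resp *http.Response, body []byte) error {
	if resp.StatusCode >= 200 && resp.StatusCode < 300 {
		return nil
	}

	// grpc-gateway serializes errors as {"code": <gRPC code>, "message": "..."}.
	var errBody struct {
		Code    int32  `json:"code"`
		Message string `json:"message"`
	}
	if err := json.Unmarshal(body, &errBody); err != nil || errBody.Message == "" {
		return fmt.Errorf("unexpected status %d: %s", resp.StatusCode, string(body))
	}

	return status.Error(codes.Code(errBody.Code), errBody.Message)
}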

I suspect we haven’t managed to support the full breadth of protobuf, gRPC and grpc-gateway features. However, we believe we have implemented enough to cover the functionality Flipt takes advantage of.

How does one write a protobuf Go code generator?

https://pkg.go.dev/google.golang.org/protobuf/compiler/protogen is a Go package designed to aid in the development of protoc plugins. It is used by grpc-gateway and grpc-go to parse requests and generate their respective client and server implementations.

Protoc

protoc is the entry point to all things protobuf generation. It parses and validates your protobuf files and invokes any configured plugins with a request to generate code on stdin.

https://pkg.go.dev/google.golang.org/protobuf/types/pluginpb#CodeGeneratorRequest

The message you are provided is a protobuf serialized structure itself. Specifically, it is a pluginpb.CodeGeneratorRequest. This contains the parsed definitions of your protobuf files collected by protoc.

It is up to the plugin to interpret this request and produce a response. The response is written to stdout as a protobuf encoded pluginpb.CodeGeneratorResponse. The response contains all the files you have produced and where they should end up (their file names).
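
Stripped of any framework, that whole contract fits in a handful of lines of Go. The sketch below is illustrative of the raw flow rather than how our plugin is actually written:

package main

import (
	"io"
	"os"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/pluginpb"
)

func main() {
	// protoc writes a serialized CodeGeneratorRequest to our stdin.
	in, err := io.ReadAll(os.Stdin)
	if err != nil {
		panic(err)
	}

	var req pluginpb.CodeGeneratorRequest
	if err := proto.Unmarshal(in, &req); err != nil {
		panic(err)
	}

	// ... inspect req.GetProtoFile() and decide what to generate ...

	resp := &pluginpb.CodeGeneratorResponse{
		File: []*pluginpb.CodeGeneratorResponse_File{{
			Name:    proto.String("example.gen.go"),
			Content: proto.String("package example\n"),
		}},
	}

	// protoc reads the serialized CodeGeneratorResponse back from our stdout.
	out, err := proto.Marshal(resp)
	if err != nil {
		panic(err)
	}
	os.Stdout.Write(out)
}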

The protogen package provides a framework that hides many of the hairy details of writing a protoc plugin, simplifying the wiring and providing useful utilities specifically for generating Go code.
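
With protogen, the stdin/stdout handling from the raw sketch above is done for you, and you work with typed descriptions of each file instead. A minimal skeleton looks roughly like this (the file naming is illustrative):

package main

import "google.golang.org/protobuf/compiler/protogen"

func main() {
	protogen.Options{}.Run(func(gen *protogen.Plugin) error {
		for _, f := range gen.Files {
			// Only emit code for the files protoc asked us to generate,
			// not for their transitive dependencies.
			if !f.Generate {
				continue
			}
			g := gen.NewGeneratedFile(f.GeneratedFilenamePrefix+".sdk.gen.go", f.GoImportPath)
			g.P("package ", f.GoPackageName)
			// ... emit the rest of the generated file with g.P(...) ...
		}
		return nil
	})
}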

There is an excellent blog post by Rotem Tamir, which gave us the foundational knowledge we needed to get started on our custom SDK wrapper and HTTP client. If building something with protogen interests you, I highly recommend reading it first.

Some protogen pointers

I won’t repeat all the wonderful tidbits and details in that blog post; however, I do have a couple of extra pointers that I discovered along the way while developing our SDK generators.

Firstly, we use proto version 3 and, in particular, the optional keyword on some message fields. This is a protobuf feature that was introduced after the initial release of version 3. If you run your plugin against proto files that use the optional keyword without first declaring it as a “supported feature”, you will be confronted with a warning each time the plugin is invoked:

Warning: plugin "go-flipt-sdk" does not support required features.
  Feature "proto3 optional" is required by 1 file(s):
    auth/auth.proto

It was a little unclear at first, but the trick is to set the proto3 optional feature bit on *protogen.Plugin’s SupportedFeatures field:

protogen.Options{}.Run(func(gen *protogen.Plugin) error {
	gen.SupportedFeatures |= uint64(pluginpb.CodeGeneratorResponse_FEATURE_PROTO3_OPTIONAL)
	// ... the rest of the generation logic ...
	return nil
})

SupportedFeatures is a bitmask and the nice, human-readable names for the feature bits can be found at google.golang.org/protobuf/types/pluginpb.

Secondly, managing imports in your generated Go files can be unclear at first. It turns out that protogen has a really nice feature that does a lot of the heavy lifting around declaring imports and qualifying identifiers in your generated output:

for _, f := range gen.Files {
  if !f.Generate {
    continue
  }

  filename := string(f.GoPackageName) + ".sdk.gen.go"
  // myTargetPackageName is the protogen.GoImportPath of the package
  // we're generating into.
  g := gen.NewGeneratedFile(filename, myTargetPackageName)

  // QualifiedGoIdent will handle adding the GoImportPath
  // to the target GeneratedFile (g) in the `import` declaration
  // at the top. It even handles when you reference two packages
  // with the same name by adding a numeric suffix to distinguish them.
  httpClient := g.QualifiedGoIdent(protogen.GoIdent{
    GoImportPath: protogen.GoImportPath("net/http"),
    GoName:       "Client",
  })

  g.P("type MyClient struct {")
  g.P("client *", httpClient)
  g.P("}")
}

protogen.GeneratedFile.QualifiedGoIdent ensures that the path you’re referencing is correctly imported at the top of the resulting generated Go file. It also handles any imports with duplicate package names by adding a numeric suffix (e.g. importing both text/template and html/template would resolve to template "text/template" and template1 "html/template"). The string it returns is the fully qualified identifier, so the correct package name is prepended onto the type when httpClient is passed to g.P("...", httpClient).

The example generator code above would produce something similar to:

package client

import http "net/http"

type MyClient struct {
	client *http.Client
}

Putting it all together

So Flipt now has a single protoc plugin, which produces three target packages.

The plugin lives here and it generates the following three packages into the Flipt repository:

  • sdk/go/http containing HTTP implementations of the sdk.Transport interfaces.
  • sdk/go/grpc containing gRPC implementations of the sdk.Transport interfaces.
  • sdk/go, the core of our Go SDK.

To bring all of this together, we use a tool called buf to orchestrate the invocations of the protoc toolchain. The protobuf toolchain is a fiddly beast and buf greatly simplifies this process for us.

We have a single addition to our buf.gen.yaml which drives this tool. It looks like this:

plugins:
  # ...
  - name: go-flipt-sdk
    out: sdk/go
    opt:
      - paths=source_relative
    strategy: all

Each of the entries under plugins is treated as a protoc plugin, which buf passes along when it invokes the protoc toolchain. When protoc sees the name go-flipt-sdk, it knows to prepend protoc-gen- and look for that binary in your PATH. In our case, it looks for protoc-gen-go-flipt-sdk.

Where next?

We’re hoping this automation keeps us on a good footing with each change to Flipt’s API, allowing us to generate a client in lockstep with the server API, which has support for all the RPC definitions we create.

Beyond just generating the client, we plan to add support for the Kubernetes authentication method, reducing it to some configuration with sensible defaults.

Finally, we have begun the work to build out a new integration test suite to ensure our SDK and Flipt are behaving as we intend. Our next blog post on this topic will explore how we’re building out a new CI pipeline using Dagger. Dagger is enabling us to define a pipeline we can execute anywhere, which is both fast and reproducible. We’re very excited about the future of Dagger and can’t wait to talk about how we plan to leverage it in the Flipt build process.

We hope you’ll find this new SDK useful, and we’re happy to answer any questions that you may have in our community Discord or via GitHub issues/discussions.

We welcome all feedback on how to improve the SDK or Flipt as a whole, so feel free to reach out! You can find us on GitHub, Twitter, Mastodon, and Discord.
