Envoy Codebase Overview

Envoy is an open-source, high-performance C++ edge and service proxy designed for cloud-native applications. Originally built at Lyft and now a CNCF-graduated project, it serves as the data plane for service mesh architectures (Istio, Consul Connect), API gateways, and ingress controllers. Envoy handles L3/L4/L7 traffic with dynamic configuration via xDS APIs, eliminating the need for restarts on config changes.

What is this project?

Envoy (envoyproxy/envoy) is a self-contained process that runs alongside every application server in a mesh topology (sidecar model) or as an edge proxy. Written in modern C++, it provides transparent network functionality including HTTP/1.1, HTTP/2, HTTP/3 (QUIC), gRPC proxying, automatic retries, circuit breaking, rate limiting, load balancing, observability (stats, tracing, logging), and TLS termination/origination. All configuration is delivered dynamically via the xDS (discovery service) protocol family.

graph TD
    A[Control Plane<br/>Istio/xDS Server] -->|xDS gRPC| B[Envoy Proxy<br/>source/server/server.cc]
    B --> C[Listener Manager<br/>source/server/listener_manager_impl.cc]
    C --> D[Filter Chains<br/>Network + HTTP Filters]
    D --> E[HTTP Connection Manager<br/>source/extensions/filters/network/http_connection_manager]
    E --> F[Router Filter<br/>source/common/router/router.cc]
    F --> G[Cluster Manager<br/>source/common/upstream/cluster_manager_impl.cc]
    G --> H[Load Balancing<br/>Round Robin/Maglev/Ring Hash]
    H --> I[Upstream Connections<br/>source/common/upstream/]
    style B fill:#f9f,stroke:#333,stroke-width:2px

Why does it exist?

Before Envoy, service-to-service communication relied on language-specific libraries (Finagle, Hystrix) or hardware load balancers, creating inconsistent observability, fragmented retry logic, and tight coupling between application code and networking concerns. Envoy solves this by extracting networking into a language-agnostic, out-of-process proxy with:

  • Dynamic configuration: xDS APIs allow control planes to push updates without proxy restarts, enabling zero-downtime deployments.
  • Uniform observability: Every request produces consistent stats, traces, and access logs regardless of application language.
  • Extensibility: A filter chain architecture lets operators compose L4/L7 behaviors (auth, rate limiting, Wasm plugins) without modifying core code.

Trade-offs: Running a sidecar per service adds latency (~0.5-1ms per hop) and memory overhead (~30-50MB per instance). C++ delivers performance but imposes a steeper contributor learning curve than Go-based alternatives. The xDS protocol is powerful but complex to implement correctly.

Who uses it?

Envoy targets platform engineers building service meshes, API gateways, and cloud infrastructure. It is deployed at massive scale by:

  • Service mesh data planes: Istio, Consul Connect, AWS App Mesh, and Gloo Mesh all use Envoy as their proxy.
  • API gateways: Envoy Gateway, Ambassador/Emissary, and Contour build on Envoy for north-south traffic.
  • Cloud providers: Google Cloud (Traffic Director, Cloud Run), AWS (App Mesh, ECS), and Azure use Envoy.
  • Large-scale deployments: Lyft, Airbnb, Stripe, Pinterest, Slack, and Salesforce run Envoy in production handling millions of RPS.

Use cases range from simple HTTP reverse proxying to complex multi-cluster service mesh topologies with mTLS, fault injection, traffic mirroring, and canary deployments.

Key Concepts

To navigate the Envoy codebase effectively, understand these foundational abstractions:

  1. Listeners & Filter Chains: A listener (source/server/listener_manager_impl.cc) binds to an address and dispatches connections through filter chains. Each chain is a pipeline of network filters (L4) and HTTP filters (L7). Filter chain matching selects chains by SNI, ALPN, destination port, etc.

  2. xDS Protocol: The discovery service family (LDS, RDS, CDS, EDS, SDS) in api/envoy/ defines how control planes push configuration. Envoy subscribes via gRPC streaming or REST polling. Delta xDS enables incremental updates, so large resource sets need not be resent wholesale. This dynamic configuration model is the key differentiator from traditionally file-configured proxies such as NGINX and HAProxy.

  3. HTTP Connection Manager (HCM): The most important network filter (source/extensions/filters/network/http_connection_manager/). Manages HTTP codec (HTTP/1.1, HTTP/2, HTTP/3), request routing, access logging, and the HTTP filter chain.

  4. Cluster Manager: Manages upstream clusters (source/common/upstream/cluster_manager_impl.cc)—groups of backend hosts with associated load balancing policies, health checking, circuit breaking, and outlier detection.

  5. Threading Model: Envoy uses a single-threaded-per-worker model with a main thread for management. Workers handle connections independently, communicating via TLS (thread-local storage) slots (source/common/thread_local/) for lock-free hot paths.

Project Structure

The monorepo is built with Bazel, with core source under source/, API definitions in api/, and extensions as pluggable components:

envoy/
├── api/                        # xDS API protobuf definitions
│   └── envoy/
│       ├── config/             # Bootstrap, listener, cluster configs
│       ├── extensions/         # Filter/transport socket configs
│       └── service/            # xDS service definitions
├── source/
│   ├── exe/                    # Main entrypoint (main.cc)
│   ├── server/                 # Server lifecycle, listener manager
│   ├── common/                 # Shared libraries
│   │   ├── upstream/           # Cluster manager, load balancing
│   │   ├── router/             # HTTP router filter
│   │   ├── http/               # HTTP codec, connection manager utils
│   │   ├── network/            # L4 connection handling
│   │   ├── config/             # xDS subscription, config utilities
│   │   ├── thread_local/       # Thread-local slot mechanism
│   │   └── event/              # libevent wrapper (event loop)
│   └── extensions/             # All pluggable filters/transports
│       ├── filters/
│       │   ├── network/        # L4 filters (HCM, TCP proxy, etc.)
│       │   └── http/           # L7 filters (router, RBAC, JWT, etc.)
│       ├── transport_sockets/  # TLS, QUIC transport
│       ├── clusters/           # Custom cluster types
│       └── health_checkers/    # HTTP, gRPC health checks
├── test/                       # Unit, integration, fuzz tests
├── tools/                      # Build tooling, code generators
└── bazel/                      # Bazel build configs, external deps

Notable patterns:

  • Extension registry: Filters self-register via REGISTER_FACTORY macros, and source/extensions/extensions_build_config.bzl controls which extensions are compiled into the binary, enabling compile-time inclusion/exclusion.
  • Interface-driven: Nearly everything implements abstract interfaces originally under include/envoy/ (since moved to the top-level envoy/ directory), allowing dependency injection and testability.
  • Protobuf-first config: All configuration is defined as protobuf messages, auto-generating C++ types and validation.

Start with source/exe/main.cc -> source/server/server.cc for initialization, then source/extensions/filters/network/http_connection_manager/ for the HTTP processing pipeline.