Open source technologies
Architecture
Envoy is an L7 proxy and communication bus designed for large modern service oriented architectures. The project was born out of the belief that:
The network should be transparent to applications. When network and application problems do occur it should be easy to determine the source of the problem.
In practice, achieving the previously stated goal is incredibly difficult. Envoy attempts to do so by providing the following high level features:
Out of process architecture: Envoy is a self contained process that is designed to run alongside every application server. All of the Envoys form a transparent communication mesh in which each application sends and receives messages to and from localhost and is unaware of the network topology. The out of process architecture has two substantial benefits over the traditional library approach to service to service communication:
Envoy works with any application language. A single Envoy deployment can form a mesh between Java, C++, Go, PHP, Python, etc. It is becoming increasingly common for service oriented architectures to use multiple application frameworks and languages. Envoy transparently bridges the gap.
As anyone that has worked with a large service oriented architecture knows, deploying library upgrades can be incredibly painful. Envoy can be deployed and upgraded quickly across an entire infrastructure transparently.
HTTP/3 support (currently in alpha): As of 1.19.0, Envoy now supports HTTP/3 upstream and downstream, and translating between any combination of HTTP/1.1, HTTP/2 and HTTP/3 in either direction.
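As a rough sketch of what enabling HTTP/3 downstream looks like (field names follow the Envoy v3 API; the port, certificate paths, and cluster name are placeholders and should be checked against the current Envoy docs), a QUIC listener is a UDP listener with a QUIC transport socket and an HTTP connection manager using the HTTP/3 codec:

```yaml
# Sketch: HTTP/3 (QUIC) downstream listener, Envoy v3 API.
# Certificate paths, port, and cluster name are placeholders.
static_resources:
  listeners:
  - name: https_h3
    address:
      socket_address: { address: 0.0.0.0, port_value: 443, protocol: UDP }
    udp_listener_config:
      quic_options: {}            # enables QUIC on this UDP listener
    filter_chains:
    - transport_socket:
        name: envoy.transport_sockets.quic
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.quic.v3.QuicDownstreamTransport
          downstream_tls_context:
            common_tls_context:
              tls_certificates:
              - certificate_chain: { filename: /etc/envoy/cert.pem }
                private_key: { filename: /etc/envoy/key.pem }
      filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_h3
          codec_type: HTTP3          # parse HTTP/3 from downstream
          http3_protocol_options: {}
          route_config:
            name: local
            virtual_hosts:
            - name: all
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: backend }   # upstream may speak HTTP/1.1 or HTTP/2
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```

The protocol translation mentioned above falls out of this structure: the listener codec and the upstream cluster's protocol options are configured independently, so any downstream/upstream combination can be bridged.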
Kong is built on top of Nginx as a plugin-based API gateway. It extends Nginx functionality by adding custom plugins for various features such as authentication, rate-limiting, logging, and more.
Here's how Kong typically integrates with Nginx:
OpenResty: Kong uses OpenResty, which is a distribution of Nginx bundled with additional Lua modules. OpenResty allows developers to extend Nginx's functionality using Lua scripting.
Custom Lua code: Kong's core functionality and plugins are written in Lua. These Lua scripts run within the Nginx worker processes and handle various aspects of request processing, such as routing, authentication, and response transformation.
Proxying requests: Kong acts as a reverse proxy, receiving incoming requests and forwarding them to the appropriate upstream services. It leverages Nginx's proxying capabilities to perform this task efficiently.
Plugin architecture: Kong's plugin architecture allows developers to extend its functionality by writing custom Lua plugins. These plugins can hook into different phases of the request-response lifecycle and modify the behavior of the API gateway.
Configuration management: Kong manages its configuration using a declarative configuration file (DB-less mode) or an Admin API backed by a database. Changes made through the Admin API take effect at runtime, without restarting Nginx or dropping in-flight traffic.
Overall, Kong builds on top of Nginx by leveraging its powerful proxying capabilities and extending its functionality using Lua scripting and custom plugins. It does not modify the Nginx source code but rather integrates with it through Lua scripts and configuration management.
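To make the declarative side concrete, here is a minimal sketch of a DB-less configuration file (the service name, route path, and upstream URL are placeholders) that proxies a route and attaches two of the bundled Lua plugins mentioned above:

```yaml
# kong.yml — minimal declarative configuration (DB-less mode).
# Service name, path, and upstream URL are placeholders.
_format_version: "3.0"
services:
- name: example-service
  url: http://upstream.internal:8080   # where Kong proxies matched requests
  routes:
  - name: example-route
    paths:
    - /api                             # requests to /api are routed here
  plugins:
  - name: key-auth                     # require an API key on this service
  - name: rate-limiting                # bundled Lua plugin
    config:
      minute: 60                       # allow 60 requests/minute per consumer
      policy: local
```

Kong loads this at startup via the `declarative_config` setting; the same service, route, and plugin objects can equally be created through the Admin API when running with a database.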
Istio is an open-source service mesh platform that provides advanced networking, security, and observability features for microservices-based applications running in Kubernetes or other container orchestration platforms. It aims to simplify the management of microservices communication by adding a transparent layer of infrastructure between services.
Technically, Istio functions as a service networking layer that automates networking and communication between microservices. It is language-independent and platform-agnostic, and supports a variety of programming languages, enabling seamless intercommunication among microservices built with different technologies. In addition, Istio seamlessly integrates with Kubernetes, the most popular orchestrator for containerized applications, as well as virtual machine (VM) technology utilized by legacy applications.
Key features of Istio include:
Traffic Management: Istio allows you to control the flow of traffic between services using features like intelligent routing, load balancing, and traffic shifting. It supports various deployment strategies such as canary deployments, A/B testing, and blue-green deployments.
Security: Istio provides robust security features to protect service-to-service communication within the mesh. This includes authentication, authorization, and encryption using mutual TLS (mTLS) between services. It also offers fine-grained access control policies and secure service communication across heterogeneous environments.
Observability: Istio enhances observability by collecting telemetry data, including metrics, logs, and traces, from all services in the mesh. It integrates with popular observability tools like Prometheus, Grafana, Jaeger, and Kiali to provide insights into the performance, health, and behavior of microservices.
Policy Enforcement: Istio enables you to enforce policies for traffic management, security, and compliance across the entire service mesh. This includes rate limiting, access control, circuit breaking, and retry mechanisms to ensure reliability and resilience of microservices.
Resilience: Istio helps improve the resilience of microservices by providing features like automatic retries, timeouts, and fault injection. It allows you to configure circuit breakers to prevent cascading failures and gracefully handle errors in distributed systems.
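These features are driven by Kubernetes custom resources. As an illustrative sketch (the `reviews` host, subset names, and namespace are placeholders), a weighted canary split with retries and a timeout, plus a mesh-wide strict mTLS policy, look roughly like:

```yaml
# 90/10 canary split with retries and a timeout.
# Host and subset names are placeholders; the v1/v2 subsets
# would be defined in a companion DestinationRule.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination: { host: reviews, subset: v1 }
      weight: 90                    # 90% of traffic to the stable version
    - destination: { host: reviews, subset: v2 }
      weight: 10                    # 10% to the canary
    retries:
      attempts: 3
      perTryTimeout: 2s
    timeout: 10s
---
# Mesh-wide strict mutual TLS: sidecars only accept mTLS traffic.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```

Shifting the weights over time implements a gradual canary rollout; setting them to 0/100 completes it.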
What is Envoy
L3/L4 filter architecture: At its core, Envoy is an L3/L4 network proxy. A pluggable filter chain mechanism allows filters to be written to perform different TCP/UDP proxy tasks and inserted into the main server. Filters have already been written to support various tasks such as raw TCP proxy, UDP proxy, HTTP proxy, TLS client certificate authentication, Redis, MongoDB, Postgres, etc.
HTTP L7 filter architecture: HTTP is such a critical component of modern application architectures that Envoy supports an additional HTTP L7 filter layer. HTTP filters can be plugged into the HTTP connection management subsystem to perform different tasks such as buffering, rate limiting, routing/forwarding, sniffing Amazon's DynamoDB, etc.
First class HTTP/2 support: When operating in HTTP mode, Envoy supports both HTTP/1.1 and HTTP/2. Envoy can operate as a transparent HTTP/1.1 to HTTP/2 proxy in both directions. This means that any combination of HTTP/1.1 and HTTP/2 clients and target servers can be bridged. The recommended service to service configuration uses HTTP/2 between all Envoys to create a mesh of persistent connections that requests and responses can be multiplexed over.
HTTP L7 routing: When operating in HTTP mode, Envoy supports a routing subsystem that is capable of routing and redirecting requests based on path, authority, content type, runtime values, etc. This functionality is most useful when using Envoy as a front/edge proxy but is also leveraged when building a service to service mesh.
gRPC support: gRPC is an RPC framework from Google that uses HTTP/2 or above as the underlying multiplexed transport. Envoy supports all of the HTTP/2 features required to be used as the routing and load balancing substrate for gRPC requests and responses. The two systems are very complementary.
Service discovery and dynamic configuration: Envoy optionally consumes a layered set of dynamic configuration APIs for centralized management. The layers provide an Envoy with dynamic updates about: hosts within a backend cluster, the backend clusters themselves, HTTP routing, listening sockets, and cryptographic material. For a simpler deployment, backend host discovery can be done through DNS resolution (or even skipped entirely), with the further layers replaced by static config files.
Health checking: The recommended way of building an Envoy mesh is to treat service discovery as an eventually consistent process. Envoy includes a health checking subsystem which can optionally perform active health checking of upstream service clusters. Envoy then uses the union of service discovery and health checking information to determine healthy load balancing targets. Envoy also supports passive health checking via an outlier detection subsystem.
Advanced load balancing: Load balancing among different components in a distributed system is a complex problem. Because Envoy is a self contained proxy instead of a library, it is able to implement advanced load balancing techniques in a single place and have them be accessible to any application. Currently Envoy includes support for automatic retries, circuit breaking, global rate limiting via an external rate limiting service, request shadowing, and outlier detection. Future support is planned for request racing.
Front/edge proxy support: There is substantial benefit in using the same software at the edge (observability, management, identical service discovery and load balancing algorithms, etc.). Envoy has a feature set that makes it well suited as an edge proxy for most modern web application use cases. This includes TLS termination, HTTP/1.1, HTTP/2 and HTTP/3 support, as well as HTTP L7 routing.
Best in class observability: As stated above, the primary goal of Envoy is to make the network transparent. However, problems occur both at the network level and at the application level. Envoy includes robust statistics support for all subsystems. statsd (and compatible providers) is the currently supported statistics sink, though plugging in a different one would not be difficult. Statistics are also viewable via the administration port. Envoy also supports distributed tracing via third-party providers.
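Several of these features come together even in a minimal static configuration. As a sketch (the listener port, route prefix, upstream hostname, and health check path are placeholders), an HTTP listener routing to an actively health-checked, DNS-discovered cluster looks roughly like:

```yaml
# Sketch: static Envoy config — L7 routing plus active health checking.
# Addresses, ports, prefixes, and the /healthz path are placeholders.
static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http      # prefix for emitted statistics
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/api" }          # L7 path-based routing
                route: { cluster: backend_service, timeout: 5s }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: backend_service
    type: STRICT_DNS                      # simple DNS-based host discovery
    load_assignment:
      cluster_name: backend_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: backend.internal, port_value: 8080 }
    health_checks:                        # active health checking
    - timeout: 1s
      interval: 5s
      unhealthy_threshold: 3
      healthy_threshold: 2
      http_health_check: { path: "/healthz" }
```

Only hosts passing the health check receive traffic, and every subsystem shown (listener, routes, cluster) emits statistics under the configured prefixes, viewable on the administration port.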
Istio is a leading open source platform for service mesh, and is instrumental in managing the infrastructure for the next generation of microservices applications. Istio can help development and operations teams manage distributed, cloud native applications at large scale across hybrid cloud and multi-cloud environments.
Envoy and Kong comparison: