About this guide
This guide covers using the Kroxylicious Operator to configure, deploy, secure, and operate the Kroxylicious proxy on Kubernetes. Refer to other Kroxylicious guides for information on running the proxy outside Kubernetes or for advanced topics such as plugin development.
1. Kroxylicious Operator overview
Kroxylicious Proxy is an Apache Kafka protocol-aware ("Layer 7") proxy designed to enhance Kafka-based systems.
The Kroxylicious Operator is an operator for Kubernetes which simplifies deploying and operating the Kroxylicious Proxy.
2. API concepts
2.1. API resources used by the Kroxylicious Proxy
The operator takes these custom resources and core Kubernetes resources as inputs:

KafkaProxy
- Defines an instance of the proxy.

VirtualKafkaCluster
- Represents a logical Kafka cluster that will be exposed to Kafka clients.

KafkaProxyIngress
- Configures how a virtual cluster is exposed on the network to Kafka clients.

KafkaService
- Specifies a backend Kafka cluster for a virtual cluster.

KafkaProtocolFilter
- Specifies filter mechanisms for use with a virtual cluster.

Secret
- KafkaService and KafkaProtocolFilter resources may reference a Secret to provide security-sensitive data such as TLS certificates or passwords.

ConfigMap
- KafkaService and KafkaProtocolFilter resources may reference a ConfigMap to provide non-sensitive configuration such as trusted CA certificates.
Based on the input resources just described, the operator generates the core Kubernetes resources needed to deploy the Kroxylicious proxy, such as:

ConfigMap
- Provides the proxy configuration file mounted into the proxy container.

Deployment
- Manages the proxy Pod and container.

Service
- Exposes the proxy over the network to other workloads in the same Kubernetes cluster.
The API is decomposed into multiple custom resources in a similar way to the Kubernetes Gateway API, and for similar reasons. You can use Kubernetes Role-Based Access Control (RBAC) to divide responsibility for different aspects of the overall proxy functionality among different roles (people) in your organization.
For example, you might grant networking engineers the ability to configure KafkaProxy and KafkaProxyIngress resources, while giving application developers the ability to configure VirtualKafkaCluster, KafkaService, and KafkaProtocolFilter resources.
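Such a division can be sketched with standard Kubernetes RBAC. This is an illustrative sketch only: the role name is made up, and the plural resource names and the single kroxylicious.io API group are assumptions inferred from the apiVersion values used elsewhere in this guide.

```yaml
# Sketch: a ClusterRole granting application developers access to the
# developer-facing Kroxylicious resources. The role name, resource plurals,
# and API group are assumptions.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kafka-app-developer
rules:
  - apiGroups: ["kroxylicious.io"]
    resources: ["virtualkafkaclusters", "kafkaservices", "kafkaprotocolfilters"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```

You would then bind such a role to the relevant users or groups with a RoleBinding or ClusterRoleBinding in the usual way.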
2.2. Compatibility
2.2.1. Custom resource APIs
Kroxylicious custom resource definitions are packaged and deployed alongside the operator. Currently, there is only a single version of the custom resource APIs: v1alpha1.
Future updates to the operator may introduce new versions of the custom resource APIs. When that happens, the operator will remain backwards compatible with older versions of those APIs, and an upgrade procedure will be provided to migrate existing custom resources to the new API version.
3. Installing the operator
This section provides instructions for installing the Kroxylicious Operator.
Installation options and procedures are demonstrated using the example files included with Kroxylicious.
3.1. Install prerequisites
To install Kroxylicious, you need the following:
- A Kubernetes 1.31 or later cluster.
- The kubectl command-line tool, installed and configured to connect to the running cluster.
For more information on the tools available for running Kubernetes, see Install Tools in the Kubernetes documentation.
3.2. Kroxylicious release artifacts
To use YAML manifest files to install Kroxylicious, download the kroxylicious-operator-0.13.zip or kroxylicious-operator-0.13.tar.gz file from the GitHub releases page, and extract the files as appropriate (for example, using unzip or tar -xzf).
Each of these archives contains:
- Installation files: the install directory contains the YAML manifests needed to install the operator.
- Examples: the examples directory contains examples of the custom resources that can be used to deploy a proxy once the operator has been installed.
3.3. Installing the Kroxylicious Operator
This procedure shows how to install the Kroxylicious Operator in your Kubernetes cluster.
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole) resources.
- You have downloaded kroxylicious-operator-0.13.zip or kroxylicious-operator-0.13.tar.gz and extracted the contents into the current directory.
- Edit the Kroxylicious installation files to use the namespace the operator is going to be installed into. For example, in this procedure the operator is installed into the namespace my-kroxylicious-operator-namespace.
  On Linux, use:
  sed -i 's/namespace: .*/namespace: my-kroxylicious-operator-namespace/' install/*.yaml
  On macOS, use:
  sed -i '' 's/namespace: .*/namespace: my-kroxylicious-operator-namespace/' install/*.yaml
- Deploy the Kroxylicious operator:
  kubectl create -f install
- Check the status of the deployment:
  kubectl get deployments -n my-kroxylicious-operator-namespace
  Output shows the deployment name and readiness:
  NAME                    READY   UP-TO-DATE   AVAILABLE
  kroxylicious-operator   1/1     1            1
  READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
4. Deploying a proxy
Deploy a basic proxy instance with a single virtual cluster exposed to Kafka clients on the same Kubernetes cluster.
4.1. Prerequisites
- The operator must be installed in the Kubernetes cluster.
- A Kafka cluster to be proxied.
4.2. The required resources
4.2.1. Proxy configuration to host virtual clusters
A KafkaProxy resource represents an instance of the Kroxylicious Proxy.
Conceptually, it is the top-level resource that links together KafkaProxyIngress, VirtualKafkaCluster, KafkaService, and KafkaProtocolFilter resources to form a complete working proxy.
KafkaProxy resources are referenced by KafkaProxyIngress and VirtualKafkaCluster resources to define how the proxy is exposed and what it proxies.
KafkaProxy configuration:

kind: KafkaProxy
apiVersion: kroxylicious.io/v1alpha1
metadata:
  namespace: my-proxy
  name: simple
spec: {} (1)

(1) An empty spec creates a proxy with default configuration.
4.2.2. Networking configuration for on-cluster access
A KafkaProxyIngress resource defines the networking configuration that allows Kafka clients to connect to a VirtualKafkaCluster.
It is uniquely associated with a single KafkaProxy instance, but it is not uniquely associated with a VirtualKafkaCluster; it can be used by multiple VirtualKafkaCluster instances.
This example shows a KafkaProxyIngress for exposing virtual clusters to Kafka clients running in the same Kubernetes cluster as the proxy.
KafkaProxyIngress configuration:

kind: KafkaProxyIngress
apiVersion: kroxylicious.io/v1alpha1
metadata:
  namespace: my-proxy
  name: cluster-ip
spec:
  proxyRef: (1)
    name: simple
  clusterIP: (2)
    protocol: TCP (3)

(1) The proxyRef names the KafkaProxy resource that this ingress is part of. It must be in the same namespace as the KafkaProxyIngress.
(2) This ingress uses clusterIP networking, which uses Kubernetes Service resources with type: ClusterIP to configure Kubernetes DNS names for the virtual cluster.
(3) The protocol is set to accept plain TCP connections. Use TLS for encrypted client-proxy communication.
4.2.3. Configuration for proxied Kafka clusters
A proxied Kafka cluster is configured in a KafkaService resource, which specifies how the proxy connects to the cluster.
The Kafka cluster may or may not be running in the same Kubernetes cluster as the proxy; network connectivity is all that is required.
This example shows a KafkaService defining how to connect to a Kafka cluster at kafka.example.com.
KafkaService configuration:

kind: KafkaService
metadata:
  # ...
spec:
  bootstrapServers: kafka.example.com:9092 (1)
  nodeIdRanges: (2)
    - name: brokers (3)
      start: 0 (4)
      end: 5 (5)
# ...

(1) The bootstrapServers property is a comma-separated list of addresses in <host>:<port> format. Including multiple broker addresses helps clients connect when one is unavailable.
(2) nodeIdRanges declares the IDs of all the broker nodes in the Kafka cluster.
(3) name is optional, but specifying it can make errors easier to diagnose.
(4) The start of the ID range, inclusive.
(5) The end of the ID range, inclusive.
4.2.4. Virtual cluster configuration for in-cluster access without TLS
A VirtualKafkaCluster resource defines a logical Kafka cluster that is accessible to clients over the network.
The virtual cluster references the following:
- A KafkaProxy resource that the virtual cluster is associated with.
- One or more KafkaProxyIngress resources that expose the virtual cluster to Kafka clients.
- A KafkaService resource that defines the backend Kafka cluster.
- Zero or more KafkaProtocolFilter resources that apply filters to the Kafka protocol traffic passing between clients and the backend Kafka cluster.
This example shows a VirtualKafkaCluster exposed to Kafka clients running on the same Kubernetes cluster.
It uses plain TCP (as opposed to TLS) as the transport protocol.
VirtualKafkaCluster configuration:

kind: VirtualKafkaCluster
apiVersion: kroxylicious.io/v1alpha1
metadata:
  name: my-cluster
  namespace: my-proxy
spec:
  proxyRef: (1)
    name: simple
  targetKafkaServiceRef: (2)
    name: my-cluster
  ingresses:
    - ingressRef: (3)
        name: cluster-ip

(1) The proxyRef names the KafkaProxy hosting this virtual cluster. It must be in the same namespace as the VirtualKafkaCluster.
(2) The KafkaService that is proxied by the virtual cluster. It must be in the same namespace as the VirtualKafkaCluster.
(3) Ingresses to expose the virtual cluster. Each ingress names a KafkaProxyIngress, which must be in the same namespace as the VirtualKafkaCluster.
4.3. Filters
A KafkaProtocolFilter resource represents a Kroxylicious Proxy filter.
It is not uniquely associated with a VirtualKafkaCluster or KafkaProxy instance; it can be used by a number of VirtualKafkaCluster instances in the same namespace.
A KafkaProtocolFilter is similar to one of the items in a proxy configuration's filterDefinitions:
- The resource's metadata.name corresponds directly to the name of a filterDefinitions item.
- The resource's spec.type corresponds directly to the type of a filterDefinitions item.
- The resource's spec.configTemplate corresponds to the config of a filterDefinitions item, but is subject to interpolation by the operator.
5. Operating a proxy
Operate a deployed proxy by configuring its resource allocations.
This section assumes you have a running Kroxylicious proxy instance.
5.1. Configuring Proxy container CPU and memory resource limits and requests
When you define a KafkaProxy resource, a number of Kubernetes Pods are created, each with a proxy container.
Each of these containers runs a single Kroxylicious Proxy process.
By default, these proxy containers are defined without resource limits.
For more information, see the Kubernetes documentation on resource management.
To manage CPU and memory consumption in your environment, modify the proxyContainer section within your KafkaProxy specification.
KafkaProxy configuration with proxy container resource specification:

kind: KafkaProxy
apiVersion: kroxylicious.io/v1alpha1
metadata:
  namespace: my-proxy
  name: simple
spec:
  infrastructure:
    proxyContainer:
      resources:
        requests:
          cpu: '400m'
          memory: '656Mi'
        limits:
          cpu: '500m'
          memory: '756Mi'
6. Securing a proxy
Secure proxies by using TLS and storing sensitive values in external resources.
6.2. Securing the client-to-proxy connection
Secure client-to-proxy communications using TLS.
6.2.1. TLS configuration for client-to-proxy connections
This example shows a VirtualKafkaCluster exposed to Kafka clients running on the same Kubernetes cluster.
It uses TLS as the transport protocol so that communication between Kafka clients and the proxy is encrypted.
VirtualKafkaCluster configuration:

kind: VirtualKafkaCluster
apiVersion: kroxylicious.io/v1alpha1
metadata:
  name: my-cluster
  namespace: my-proxy
spec:
  proxyRef: (1)
    name: simple
  targetKafkaServiceRef: (2)
    name: my-cluster
  ingresses:
    - ingressRef: (3)
        name: cluster-ip
      tls: (4)
        certificateRef:
          name: server-certificate
          kind: Secret

(1) The proxyRef names the KafkaProxy resource that this virtual cluster is part of. It must be in the same namespace as the VirtualKafkaCluster.
(2) The virtual cluster names the KafkaService to be proxied. It must be in the same namespace as the VirtualKafkaCluster.
(3) The virtual cluster can be exposed by one or more ingresses. Each ingress must reference a KafkaProxyIngress in the same namespace as the VirtualKafkaCluster.
(4) If the ingress supports TLS, the tls property configures the TLS server certificate to use.
Within a VirtualKafkaCluster, an ingress's tls property configures TLS for that ingress.
The tls.certificateRef specifies the Secret resource holding the TLS server certificate that the proxy uses for clients connecting through this ingress.
The referenced KafkaProxyIngress also needs to be configured for TLS.
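The referenced Secret is an ordinary Kubernetes TLS secret. A sketch, assuming the conventional kubernetes.io/tls secret type and with the certificate data elided:

```yaml
kind: Secret
apiVersion: v1
metadata:
  name: server-certificate
  namespace: my-proxy            # same namespace as the VirtualKafkaCluster
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded server certificate chain>
  tls.key: <base64-encoded private key>
```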
KafkaProxyIngress configuration for TLS:

kind: KafkaProxyIngress
apiVersion: kroxylicious.io/v1alpha1
metadata:
  name: cluster-ip
  namespace: my-proxy
spec:
  proxyRef: (1)
    name: simple
  clusterIP: (2)
    protocol: TLS (3)

(1) The ingress must reference a KafkaProxy in the same namespace as the KafkaProxyIngress.
(2) Exposes the proxy to Kafka clients inside the same Kubernetes cluster using a ClusterIP service.
(3) The ingress uses TLS as the transport protocol.
6.2.2. Mutual TLS configuration for client-to-proxy connections
You can configure a virtual cluster ingress to request or require Kafka clients to authenticate to the proxy using TLS. This configuration is known as mutual TLS (mTLS), because both the client and the proxy authenticate each other using TLS.
VirtualKafkaCluster configuration requiring clients to present a trusted certificate:

kind: VirtualKafkaCluster
metadata:
  # ...
spec:
  # ...
  ingresses:
    - ingressRef:
        name: cluster-ip
      tls:
        certificateRef:
          # ...
        trustAnchorRef: (1)
          kind: ConfigMap (2)
          name: trusted-cas (3)
          key: trusted-cas.pem (4)
        tlsClientAuthentication: REQUIRED (5)

(1) References a separate Kubernetes resource containing the trusted CA certificates.
(2) The kind is optional and defaults to ConfigMap.
(3) Name of the resource of the given kind, which must exist in the same namespace as the VirtualKafkaCluster.
(4) Key identifying the entry in the given resource. The corresponding value must be a set of CA certificates. Supported formats for the bundle are PEM, PKCS#12, and JKS.
(5) Specifies whether client authentication is required (REQUIRED), requested (REQUESTED), or disabled (NONE). If a trustAnchorRef is specified, the default is REQUIRED.
6.2.3. TLS version configuration for client-to-proxy connections
Some older versions of TLS (and SSL before it) are now considered insecure. These versions remain enabled by default in order to maximize interoperability between TLS clients and servers that only support older versions.
If the Kafka clients that connect to the proxy support newer TLS versions, you can disable the proxy's support for older, insecure versions. For example, if the clients support TLSv1.1, TLSv1.2, and TLSv1.3, you might choose to enable only TLSv1.3 support. This reduces susceptibility to a TLS downgrade attack.
It is good practice to disable insecure protocol versions.
You can restrict which TLS protocol versions the proxy supports for client-to-proxy connections by configuring the protocols property.
VirtualKafkaCluster with restricted TLS protocol versions:

kind: VirtualKafkaCluster
metadata:
  # ...
spec:
  # ...
  ingresses:
    - ingressRef:
        name: cluster-ip
      tls:
        certificateRef:
          # ...
        protocols: (1)
          allow: (2)
            - TLSv1.3

(1) Configures the TLS protocol versions used by the proxy.
(2) Lists the protocol versions explicitly allowed for TLS negotiation.
Alternatively, you can use deny to specify protocol versions to exclude.
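For example, a deny list might look like the following sketch within an ingress's tls configuration (which versions you deny depends on what your clients support):

```yaml
tls:
  # ...
  protocols:
    deny:           # protocol versions excluded from TLS negotiation
      - TLSv1
      - TLSv1.1
```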
The names of the TLS protocol versions supported depend on the JVM in the proxy container image. See https://docs.oracle.com/en/java/javase/17/docs/specs/security/standard-names.html#sslcontext-algorithms.
6.2.4. TLS cipher suite configuration for client-to-proxy connections
A cipher suite is a set of cryptographic algorithms that together provide the security guarantees offered by TLS. During TLS negotiation, a server and client agree on a common cipher suite that they both support.
Some older cipher suites are now considered insecure, but may be enabled on the Kafka cluster to allow older clients to connect.
The cipher suites enabled by default in the proxy depend on the JVM used in the proxy image and the TLS protocol version that is negotiated.
To prevent TLS downgrade attacks, you can disable cipher suites known to be insecure or no longer recommended. However, the proxy and the cluster must support at least one cipher suite in common.
It is good practice to disable insecure cipher suites.
You can restrict which TLS cipher suites the proxy uses when negotiating client-to-proxy connections by configuring the cipherSuites property.
VirtualKafkaCluster configuration using cipherSuites to allow specific ciphers:

kind: VirtualKafkaCluster
metadata:
  # ...
spec:
  # ...
  ingresses:
    - ingressRef:
        name: cluster-ip
      tls:
        certificateRef:
          # ...
        cipherSuites: (1)
          allow: (2)
            - TLS_AES_128_GCM_SHA256
            - TLS_AES_256_GCM_SHA384

(1) Configures the cipher suites used by the proxy.
(2) Lists the cipher suites explicitly allowed for TLS negotiation.
Alternatively, you can use deny to specify cipher suites to exclude.
The names of the cipher suites supported depend on the JVM in the proxy container image. See https://docs.oracle.com/en/java/javase/21/docs/specs/security/standard-names.html#jsse-cipher-suite-names.
6.3. Securing the proxy-to-broker connection
Secure proxy-to-broker communication using TLS.
6.3.1. TLS trust configuration for proxy-to-cluster connections
By default, the proxy uses the platform’s default trust store when connecting to the proxied cluster over TLS. This works if the cluster’s TLS certificates are signed by a well-known public Certificate Authority (CA), but fails if they’re signed by a private CA instead.
It is good practice to configure trust explicitly, even when the proxied cluster's TLS certificates are signed by a public CA.
This example configures a KafkaService to trust TLS certificates signed by any Certificate Authority (CA) listed in the trusted-cas.pem entry of the ConfigMap named trusted-cas.
KafkaService configuration for trusting certificates:

kind: KafkaService
metadata:
  # ...
spec:
  bootstrapServers: kafka.example.com:9092
  tls:
    trustAnchorRef: (1)
      kind: ConfigMap (2)
      name: trusted-cas (3)
      key: trusted-cas.pem (4)
# ...

(1) The trustAnchorRef property references a separate Kubernetes resource which contains the CA certificates to be trusted.
(2) The kind is optional and defaults to ConfigMap.
(3) The name of the resource of the given kind. This resource must exist in the same namespace as the KafkaService.
(4) The key identifies the entry in the given resource. The corresponding value must be a PEM-encoded set of CA certificates.
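The referenced ConfigMap might look like the following sketch (the namespace is assumed and the PEM content is elided):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: trusted-cas
  namespace: my-proxy        # must match the KafkaService namespace
data:
  trusted-cas.pem: |
    -----BEGIN CERTIFICATE-----
    <PEM-encoded CA certificate(s)>
    -----END CERTIFICATE-----
```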
6.3.2. TLS authentication to proxied Kafka clusters
Some Kafka clusters require mutual TLS (mTLS) authentication.
You can configure the proxy to present a TLS client certificate using the KafkaService
resource.
The TLS client certificate you provide must have been issued by a Certificate Authority (CA) that’s trusted by the proxied cluster.
This example configures a KafkaService
to use a TLS client certificate stored in a Secret
named tls-cert-for-kafka.example.com
.
KafkaService configuration with TLS client authentication:

kind: KafkaService
metadata:
  # ...
spec:
  bootstrapServers: kafka.example.com:9092
  tls:
    trustAnchorRef:
      kind: ConfigMap
      name: trusted-cas
      key: trusted-cas.pem
    certificateRef: (1)
      kind: Secret (2)
      name: tls-cert-for-kafka.example.com (3)
# ...

(1) The certificateRef property identifies the TLS client certificate to use.
(2) The kind is optional and defaults to Secret. The Secret should have type: kubernetes.io/tls.
(3) The name is the name of the resource of the given kind. This resource must exist in the same namespace as the KafkaService.
6.3.3. TLS version configuration for proxy-to-cluster connections
Some older versions of TLS (and SSL before it) are now considered insecure. These versions remain enabled by default in order to maximize interoperability between TLS clients and servers that only support older versions.
If the Kafka cluster that you want to connect to supports newer TLS versions, you can disable the proxy's support for older, insecure versions. For example, if the Kafka cluster supports TLSv1.1, TLSv1.2, and TLSv1.3, you might choose to enable only TLSv1.3 support. This reduces susceptibility to a TLS downgrade attack.
It is good practice to disable insecure protocol versions.
This example configures a KafkaService to allow only TLS v1.3 when connecting to kafka.example.com.
KafkaService with restricted TLS protocol versions:

kind: KafkaService
metadata:
  # ...
spec:
  bootstrapServers: kafka.example.com:9092
  tls:
    # ...
    protocols: (1)
      allow: (2)
        - TLSv1.3

(1) The protocols property configures the TLS protocol versions.
(2) allow lists the versions of TLS which are permitted.
The protocols property also supports deny, if you prefer to list the versions to exclude instead.
The names of the TLS protocol versions supported depend on the JVM in the proxy container image. See https://docs.oracle.com/en/java/javase/17/docs/specs/security/standard-names.html#sslcontext-algorithms.
6.3.4. TLS cipher suite configuration for proxy-to-cluster connections
A cipher suite is a set of cryptographic algorithms that together provide the security guarantees offered by TLS. During TLS negotiation, a server and client agree on a common cipher suite that they both support.
Some older cipher suites are now considered insecure, but may be enabled on the Kafka cluster to allow older clients to connect.
The cipher suites enabled by default in the proxy depend on the JVM used in the proxy image and the TLS protocol version that is negotiated.
To prevent TLS downgrade attacks, you can disable cipher suites known to be insecure or no longer recommended. However, the proxy and the cluster must support at least one cipher suite in common.
It is good practice to disable insecure cipher suites.
KafkaService configured so that the proxy negotiates TLS connections using only the listed ciphers:

kind: KafkaService
metadata:
  # ...
spec:
  bootstrapServers: kafka.example.com:9092
  tls:
    # ...
    cipherSuites: (1)
      allow: (2)
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384

(1) The cipherSuites object configures the cipher suites.
(2) allow lists the cipher suites which are permitted.
The cipherSuites property also supports deny, if you prefer to list the cipher suites to exclude instead.
The names of the cipher suites supported depend on the JVM in the proxy container image. See https://docs.oracle.com/en/java/javase/21/docs/specs/security/standard-names.html#jsse-cipher-suite-names.
6.4. Securing filters
Secure filters by using the security features provided by each filter and storing sensitive values in external resources such as a Kubernetes Secret
.
6.4.1. Security-sensitive values in filter resources
Template use and value interpolation
Interpolation is supported in spec.configTemplate for the automatic substitution of placeholder values at runtime.
This allows security-sensitive values, such as passwords or keys, to be specified in Kubernetes Secret resources rather than directly in the KafkaProtocolFilter resource.
Likewise, things like trusted CA certificates can be defined in ConfigMap resources.
The operator determines which Secret and ConfigMap resources are referenced by a KafkaProtocolFilter resource and declares them as volumes in the proxy Pod, mounted into the proxy container.
This example shows how to configure the RecordEncryptionFilter using a Vault KMS deployed in the same Kubernetes cluster.
KafkaProtocolFilter configuration:

kind: KafkaProtocolFilter
metadata:
  # ...
spec:
  type: RecordEncryption (1)
  configTemplate: (2)
    kms: VaultKmsService
    kmsConfig:
      vaultTransitEngineUrl: http://vault.vault.svc.cluster.local:8200/v1/transit
      vaultToken:
        password: ${secret:vault:token} (3)
    selector: TemplateKekSelector
    selectorConfig:
      template: "$(topicName)" (4)

(1) The type is the Java class name of the proxy filter. If the unqualified name is ambiguous, it must be qualified by the filter package name.
(2) The KafkaProtocolFilter requires a configTemplate, which supports interpolation references.
(3) The password uses an interpolation reference, enclosed by ${ and }, instead of a literal value. The operator supplies the value at runtime from the specified Secret.
(4) The selector template is interpreted by the proxy. It uses different delimiters, $( and ), than the interpolation reference.
Structure of interpolation references
Let’s look at the example interpolation reference ${secret:vault:token}
in more detail.
It starts with ${
and ends with }
. Between these, it is broken into three parts, separated by colons (:
):
- secret is a provider. Supported providers are secret and configmap (note the use of lower case).
- vault is a path. The interpretation of the path depends on the provider.
- token is a key. The interpretation of the key also depends on the provider.

For both secret and configmap providers:
- The path is interpreted as the name of a Secret or ConfigMap resource in the same namespace as the KafkaProtocolFilter resource.
- The key is interpreted as a key in the data property of the Secret or ConfigMap resource.
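Under this interpretation, the reference ${secret:vault:token} from the earlier example resolves against a Secret like the following sketch (the namespace and token value are illustrative; stringData is the plain-text convenience form that Kubernetes stores under data):

```yaml
kind: Secret
apiVersion: v1
metadata:
  name: vault              # the path part of ${secret:vault:token}
  namespace: my-proxy      # same namespace as the KafkaProtocolFilter
stringData:
  token: <vault-token>     # the key part; the value shown is a placeholder
```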
7. Monitoring
Kroxylicious supports key observability features to help you understand the performance and health of your proxy instances.
The Kroxylicious Proxy and Kroxylicious Operator generate metrics for real-time monitoring and alerting, as well as logs that capture their actions and behavior. You can integrate these metrics with a monitoring system like Prometheus for ingestion and analysis, while configuring log levels to control the granularity of logged information.
7.1. Overview of proxy metrics
The proxy provides metrics for both connections and messages. These metrics are categorized into downstream (client-side) and upstream (broker-side) groups. They allow users to assess the impact of the proxy and its filters on their Kafka system.
- Connection metrics count the connections made from the downstream (incoming connections from clients) and the connections made by the proxy to the upstream (outgoing connections to the Kafka brokers).
- Message metrics count the number of Kafka protocol request and response messages that flow through the proxy.
7.1.1. Connection metrics
Connection metrics count the TCP connections made from the client to the proxy (kroxylicious_client_to_proxy_connections_total) and from the proxy to the broker (kroxylicious_proxy_to_server_connections_total).
These metrics count connection attempts, so the connection count is incremented even if the connection attempt ultimately fails.
In addition to the count metrics, there are error metrics:
- If an error occurs while the proxy is accepting a connection from a client, the kroxylicious_client_to_proxy_errors_total metric is incremented by one.
- If an error occurs while the proxy is attempting a connection to a broker, the kroxylicious_proxy_to_server_errors_total metric is incremented by one.
Connection and connection error metrics include the following labels: virtual_cluster (the virtual cluster's name) and node_id (the broker's node ID).
When the client connects to the bootstrap endpoint of the virtual cluster, a node ID value of bootstrap is recorded.
The kroxylicious_client_to_proxy_errors_total metric also counts connection errors that occur before a virtual cluster has been identified.
For these specific errors, the virtual_cluster and node_id labels are set to an empty string ("").
Error conditions signaled within the Kafka protocol response (such as RESOURCE_NOT_FOUND or UNKNOWN_TOPIC_ID) are not classed as errors by these metrics.
Metric Name | Type | Labels | Description
---|---|---|---
kroxylicious_client_to_proxy_connections_total | Counter | virtual_cluster, node_id | Incremented by one every time a connection is accepted from a client by the proxy.
kroxylicious_client_to_proxy_errors_total | Counter | virtual_cluster, node_id | Incremented by one every time a connection is closed due to any downstream error.
kroxylicious_proxy_to_server_connections_total | Counter | virtual_cluster, node_id | Incremented by one every time a connection is made to the server from the proxy.
kroxylicious_proxy_to_server_errors_total | Counter | virtual_cluster, node_id | Incremented by one every time a connection is closed due to any upstream error.
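As an example of using these metrics, a PromQL query such as the following (the virtual cluster name my-cluster is an assumption) graphs the per-broker rate of new proxy-to-server connections:

```promql
sum by (node_id) (
  rate(kroxylicious_proxy_to_server_connections_total{virtual_cluster="my-cluster"}[5m])
)
```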
7.1.2. Message metrics
Message metrics count, and record the sizes of, the Kafka protocol requests and responses that flow through the proxy.
Use these metrics to help understand:
- the number of messages flowing through the proxy.
- the overall volume of data through the proxy.
- the effect the filters are having on the messages.
Downstream metrics:
- kroxylicious_client_to_proxy_request_total counts requests as they arrive from the client.
- kroxylicious_proxy_to_client_response_total counts responses as they are returned to the client.
- kroxylicious_client_to_proxy_request_size_bytes is incremented by the size of each request as it arrives from the client.
- kroxylicious_proxy_to_client_response_size_bytes is incremented by the size of each response as it is returned to the client.

Upstream metrics:
- kroxylicious_proxy_to_server_request_total counts requests as they go to the broker.
- kroxylicious_proxy_to_server_response_total counts responses as they are returned by the broker.
- kroxylicious_proxy_to_server_request_size_bytes is incremented by the size of each request as it goes to the broker.
- kroxylicious_proxy_to_server_response_size_bytes is incremented by the size of each response as it is returned by the broker.
The size recorded is the encoded size of the protocol message. It includes the 4-byte message size.
Filters can alter the flow of messages through the proxy or the content of the messages. This is apparent through the metrics:
- If a filter sends a short-circuit response, or closes a connection, the downstream message counters will exceed the upstream counters.
- If a filter changes the size of a message, the downstream size metrics will differ from the upstream size metrics.
Message metrics include the following labels: virtual_cluster (the virtual cluster's name), node_id (the broker's node ID), api_key (the message type), api_version, and decoded (a flag indicating if the message was decoded by the proxy).
When the client connects to the bootstrap endpoint of the virtual cluster, metrics are recorded with a node ID value of bootstrap.
Metric Name | Type | Labels | Description
---|---|---|---
kroxylicious_client_to_proxy_request_total | Counter | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by one every time a request arrives at the proxy from a client.
kroxylicious_proxy_to_server_request_total | Counter | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by one every time a request goes from the proxy to a server.
kroxylicious_proxy_to_server_response_total | Counter | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by one every time a response arrives at the proxy from a server.
kroxylicious_proxy_to_client_response_total | Counter | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by one every time a response goes from the proxy to a client.
kroxylicious_client_to_proxy_request_size_bytes | Distribution | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by the size of the message each time a request arrives at the proxy from a client.
kroxylicious_proxy_to_server_request_size_bytes | Distribution | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by the size of the message each time a request goes from the proxy to a server.
kroxylicious_proxy_to_server_response_size_bytes | Distribution | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by the size of the message each time a response arrives at the proxy from a server.
kroxylicious_proxy_to_client_response_size_bytes | Distribution | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by the size of the message each time a response goes from the proxy to a client.
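For example, a PromQL query such as the following (the virtual cluster name my-cluster is an assumption) shows the request rate through the proxy broken down by message type:

```promql
sum by (api_key) (
  rate(kroxylicious_client_to_proxy_request_total{virtual_cluster="my-cluster"}[5m])
)
```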
7.2. Overview of operator metrics
The Kroxylicious Operator is implemented using the Java Operator SDK. The Java Operator SDK exposes metrics that allow its behavior to be understood. These metrics are enabled by default in the Kroxylicious Operator.
Refer to the Java Operator SDK metric documentation to learn more about metrics.
7.3. Ingesting metrics
Metrics from the Kroxylicious Proxy and Kroxylicious Operator can be ingested into your Prometheus instance.
The proxy and the operator each expose an HTTP endpoint for Prometheus metrics at the `/metrics` path.
The endpoint does not require authentication.
For the proxy, the port that exposes the scrape endpoint is named `management`.
For the operator, the port is named `http`.
Prometheus can be configured to ingest the metrics from the scrape endpoints.
This guide assumes you are using the Prometheus Operator to configure Prometheus.
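Before wiring up Prometheus, you can sanity-check the exposition format by hand. The sketch below filters a saved scrape for a single metric family. The sample content is illustrative (only the `kroxylicious_build_info` name appears later in this guide); in a real cluster you would obtain the scrape with `kubectl port-forward` plus `curl` against the `/metrics` path:

```shell
# Save an illustrative /metrics scrape (Prometheus exposition format) to a file.
# Against a real cluster you would instead run something like:
#   curl -s http://localhost:<forwarded-port>/metrics > /tmp/sample_metrics.txt
cat > /tmp/sample_metrics.txt <<'EOF'
# HELP kroxylicious_build_info Build metadata
# TYPE kroxylicious_build_info gauge
kroxylicious_build_info{version="0.0.0-example"} 1.0
EOF

# Filter for one metric family; the HELP/TYPE comment lines are skipped
# because they start with '#'.
grep '^kroxylicious_build_info' /tmp/sample_metrics.txt
# → kroxylicious_build_info{version="0.0.0-example"} 1.0
```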
7.3.1. Ingesting operator metrics
This procedure describes how to ingest metrics from the Kroxylicious Operator into Prometheus.
Prerequisites
- Kroxylicious Operator is installed.
- Prometheus Operator is installed, and a Prometheus instance has been created using the `Prometheus` custom resource.
Procedure
- Apply the `PodMonitor` configuration:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: operator
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kroxylicious
      app.kubernetes.io/component: operator
  podMetricsEndpoints:
    - path: /metrics
      port: http
```
The Prometheus Operator reconfigures Prometheus automatically, and Prometheus begins to scrape the Kroxylicious Operator’s metrics regularly.
- Check that the metrics are being ingested using a PromQL query such as:

```
operator_sdk_reconciliations_queue_size_kafkaproxyreconciler{kind="KafkaProxy", group="kroxylicious.io"}
```
7.3.2. Ingesting proxy metrics
This procedure describes how to ingest metrics from the Kroxylicious Proxy into Prometheus.
Prerequisites
- Kroxylicious Operator is installed.
- Prometheus Operator is installed, and a Prometheus instance has been created using the `Prometheus` custom resource.
- An instance of Kroxylicious has been deployed by the operator.
Procedure
- Apply the `PodMonitor` configuration:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: proxy
spec:
  selector:
    matchLabels:
      app.kubernetes.io/application: kroxylicious
      app.kubernetes.io/component: proxy
  podMetricsEndpoints:
    - path: /metrics
      port: management
```
The Prometheus Operator reconfigures Prometheus automatically, and Prometheus begins to scrape the proxy’s metrics regularly.
- Check that the metrics are being ingested using a PromQL query such as:

```
kroxylicious_build_info
```
7.4. Setting log levels
You can independently control the logging level of both the Kroxylicious Operator and the Kroxylicious Proxy.
In both cases, logging levels are controlled using two environment variables:
- `KROXYLICIOUS_APP_LOG_LEVEL` controls the logging of the application (`io.kroxylicious` loggers). It defaults to `INFO`.
- `KROXYLICIOUS_ROOT_LOG_LEVEL` controls the logging level at the root. It defaults to `WARN`.
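The same variables can also be set declaratively rather than with `kubectl set env`. A minimal sketch of the relevant part of a pod template follows; the container name used here is illustrative, not taken from this guide:

```yaml
# Fragment of a Deployment pod template; the container name is illustrative.
containers:
  - name: proxy
    env:
      - name: KROXYLICIOUS_APP_LOG_LEVEL
        value: DEBUG
      - name: KROXYLICIOUS_ROOT_LOG_LEVEL
        value: WARN
```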
When trying to diagnose a problem, start by raising the logging level of `KROXYLICIOUS_APP_LOG_LEVEL`.
If more detailed diagnostics are required, try raising `KROXYLICIOUS_ROOT_LOG_LEVEL`.
Both the proxy and the operator use Apache Log4j2 and accept the logging levels it understands: `TRACE`, `DEBUG`, `INFO`, `WARN`, and `ERROR`.
WARNING: Running the operator or the proxy at elevated logging levels, such as `DEBUG` or `TRACE`, can generate a large volume of logs, which may consume significant storage and affect performance.
Run at these levels only as long as necessary.
7.4.1. Overriding proxy logging levels
This procedure describes how to override the logging level of the Kroxylicious Proxy.
Prerequisites
- An instance of Kroxylicious deployed by the Kroxylicious Operator.
Procedure
- Set the `KROXYLICIOUS_APP_LOG_LEVEL` or `KROXYLICIOUS_ROOT_LOG_LEVEL` environment variable on the proxy’s Kubernetes `Deployment` resource:

```shell
kubectl set env -n <namespace> deployment <deployment_name> KROXYLICIOUS_APP_LOG_LEVEL=DEBUG
```

The `Deployment` resource has the same name as the `KafkaProxy`. Kubernetes recreates the proxy pod automatically.
- Verify that the new logging level has taken effect:

```shell
kubectl logs -f -n <namespace> deployment/<deployment_name>
```
Reverting proxy logging levels
This procedure describes how to revert the logging level of the Kroxylicious Proxy back to its defaults.
Prerequisites
- An instance of Kroxylicious deployed by the Kroxylicious Operator.
Procedure
- Remove the `KROXYLICIOUS_APP_LOG_LEVEL` or `KROXYLICIOUS_ROOT_LOG_LEVEL` environment variable from the proxy’s Kubernetes `Deployment`:

```shell
kubectl set env -n <namespace> deployment <deployment_name> KROXYLICIOUS_APP_LOG_LEVEL-
```

Kubernetes recreates the proxy pod automatically.
- Verify that the logging level has reverted to its default:

```shell
kubectl logs -f -n <namespace> deployment/<deployment_name>
```
7.4.2. Overriding the operator logging level (operator installed by bundle)
This procedure describes how to override the logging level of the Kroxylicious Operator. It applies when the operator was installed from the YAML bundle.
Prerequisites
- Kroxylicious Operator installed from the YAML bundle.
Procedure
- Set the `KROXYLICIOUS_APP_LOG_LEVEL` or `KROXYLICIOUS_ROOT_LOG_LEVEL` environment variable on the operator’s Kubernetes `Deployment`:

```shell
kubectl set env -n kroxylicious-operator deployment kroxylicious-operator KROXYLICIOUS_APP_LOG_LEVEL=DEBUG
```

Kubernetes recreates the operator pod automatically.
- Verify that the new logging level has taken effect:

```shell
kubectl logs -f -n kroxylicious-operator deployment/kroxylicious-operator
```
Reverting operator logging levels
This procedure describes how to revert the logging level of the Kroxylicious Operator back to its defaults.
Prerequisites
- Kroxylicious Operator installed from the YAML bundle.
Procedure
- Remove the `KROXYLICIOUS_APP_LOG_LEVEL` or `KROXYLICIOUS_ROOT_LOG_LEVEL` environment variable from the operator’s Kubernetes `Deployment`:

```shell
kubectl set env -n kroxylicious-operator deployment kroxylicious-operator KROXYLICIOUS_APP_LOG_LEVEL-
```

Kubernetes recreates the operator pod automatically.
- Verify that the logging level has reverted to its default:

```shell
kubectl logs -f -n kroxylicious-operator deployment/kroxylicious-operator
```
8. Glossary
- API
-
Application Programming Interface.
- CA
-
Certificate Authority. An organization that issues certificates.
- CR
-
Custom Resource. An instance resource of a CRD. In other words, a resource of a kind that is not built into Kubernetes.
- CRD
-
Custom Resource Definition. A Kubernetes API for defining Kubernetes API extensions.
- KMS
-
Key Management System. A dedicated system for controlling access to cryptographic material, and providing operations which use that material.
- mTLS
-
Mutual Transport Layer Security. A configuration of TLS in which the client also presents a certificate to the server, which the server authenticates.
- TLS
-
Transport Layer Security. A secure transport protocol in which a server presents a certificate to a client, which the client authenticates. TLS was previously known as Secure Sockets Layer (SSL).
- TCP
-
The Transmission Control Protocol.
9. Trademark notice
-
Apache Kafka is a registered trademark of The Apache Software Foundation.
-
Kubernetes is a registered trademark of The Linux Foundation.
-
Prometheus is a registered trademark of The Linux Foundation.
-
Strimzi is a trademark of The Linux Foundation.
-
HashiCorp Vault is a registered trademark of HashiCorp, Inc.
-
AWS Key Management Service is a trademark of Amazon.com, Inc. or its affiliates.
-
Fortanix and Data Security Manager are trademarks of Fortanix, Inc.