About this guide
This guide covers using the Kroxylicious Operator to configure, deploy, secure, and operate the Kroxylicious proxy on Kubernetes. Refer to other Kroxylicious guides for information on running the proxy outside Kubernetes or for advanced topics such as plugin development.
1. Kroxylicious Operator overview
Kroxylicious Proxy is an Apache Kafka protocol-aware ("Layer 7") proxy designed to enhance Kafka-based systems.
The Kroxylicious Operator is a Kubernetes operator that simplifies deploying and operating the Kroxylicious Proxy.
2. API concepts
2.1. API resources used by the Kroxylicious Proxy
The operator takes these custom resources and core Kubernetes resources as inputs:
- KafkaProxy: Defines an instance of the proxy.
- VirtualKafkaCluster: Represents a logical Kafka cluster that will be exposed to Kafka clients.
- KafkaProxyIngress: Configures how a virtual cluster is exposed on the network to Kafka clients.
- KafkaService: Specifies a backend Kafka cluster for a virtual cluster.
- KafkaProtocolFilter: Specifies filter mechanisms for use with a virtual cluster.
- Secret: KafkaService and KafkaProtocolFilter resources may reference a Secret to provide security-sensitive data such as TLS certificates or passwords.
- ConfigMap: KafkaService and KafkaProtocolFilter resources may reference a ConfigMap to provide non-sensitive configuration such as trusted CA certificates.
Based on the input resources just described, the operator generates the core Kubernetes resources needed to deploy the Kroxylicious proxy, such as:
- ConfigMap: Provides the proxy configuration file mounted into the proxy container.
- Deployment: Manages the proxy Pod and container.
- Service: Exposes the proxy over the network to other workloads in the same Kubernetes cluster.
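Once a proxy is deployed, you can inspect these generated resources directly. A minimal sketch, assuming the proxy was deployed into the my-proxy namespace used in the examples later in this guide:

kubectl get deployment,service,configmap -n my-proxy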
The API is decomposed into multiple custom resources in a similar way to the Kubernetes Gateway API, and for similar reasons. You can use Kubernetes Role-Based Access Control (RBAC) to divide responsibility for different aspects of the overall proxy functionality among different roles (people) in your organization.
For example, you might grant networking engineers the ability to configure KafkaProxy and KafkaProxyIngress resources, while giving application developers the ability to configure VirtualKafkaCluster, KafkaService, and KafkaProtocolFilter resources.
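The following ClusterRole is a minimal sketch of the application-developer side of such a split. The role name is hypothetical, and the lowercase plural resource names are assumed from the kind names; verify them against the installed CRDs (for example, with kubectl api-resources --api-group=kroxylicious.io):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kafka-app-developer # hypothetical name
rules:
  - apiGroups: ["kroxylicious.io"]
    resources: ["virtualkafkaclusters", "kafkaservices", "kafkaprotocolfilters"] # assumed plural names
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]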
2.2. Compatibility
2.2.1. Custom resource APIs
Kroxylicious custom resource definitions are packaged and deployed alongside the operator. Currently, there's only a single version of the custom resource APIs: v1alpha1.
Future updates to the operator may introduce new versions of the custom resource APIs. At that time the operator will be backwards compatible with older versions of those APIs and an upgrade procedure will be used to upgrade existing custom resources to the new API version.
3. Installing the operator
This section provides instructions for installing the Kroxylicious Operator.
Installation options and procedures are demonstrated using the example files included with Kroxylicious.
3.1. Install prerequisites
To install Kroxylicious, you need the following:

- A Kubernetes 1.31 or later cluster. For development purposes, Minikube may be used.
- The kubectl command-line tool, installed and configured to connect to the running cluster.
For more information on the tools available for running Kubernetes, see Install Tools in the Kubernetes documentation.
3.2. Kroxylicious release artifacts
To use YAML manifest files to install Kroxylicious, download the kroxylicious-operator-0.13.zip or kroxylicious-operator-0.13.tar.gz file from the GitHub releases page, and extract the files as appropriate (for example, using unzip or tar -xzf).
Each of these archives contains:
- Installation files: The install directory contains the YAML manifests needed to install the operator.
- Examples: The examples directory contains examples of the custom resources that can be used to deploy a proxy once the operator has been installed.
3.3. Installing the Kroxylicious Operator
This procedure shows how to install the Kroxylicious Operator in your Kubernetes cluster.
- You need an account with permission to create and manage CustomResourceDefinition and RBAC (ClusterRole) resources.
- You have downloaded kroxylicious-operator-0.13.zip or kroxylicious-operator-0.13.tar.gz and extracted the contents into the current directory.
- Edit the Kroxylicious installation files to use the namespace the operator is going to be installed into. For example, in this procedure the operator is installed into the namespace my-kroxylicious-operator-namespace.

  On Linux, use:

  sed -i 's/namespace: .*/namespace: my-kroxylicious-operator-namespace/' install/*.yaml

  On macOS, use:

  sed -i '' 's/namespace: .*/namespace: my-kroxylicious-operator-namespace/' install/*.yaml
- Deploy the Kroxylicious Operator:

  kubectl create -f install

- Check the status of the deployment:

  kubectl get deployments -n my-kroxylicious-operator-namespace

  The output shows the deployment name and readiness:

  NAME                    READY   UP-TO-DATE   AVAILABLE
  kroxylicious-operator   1/1     1            1

  READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1.
4. Deploying a proxy
Deploy a basic proxy instance with a single virtual cluster exposed to Kafka clients on the same Kubernetes cluster.
4.1. Prerequisites
- The Kroxylicious Operator is installed in the Kubernetes cluster.
- A Kafka cluster is available to be proxied.
- TLS certificate generation capability is available for ingress configurations that require TLS.
- DNS management access is available for ingress configurations that require off-cluster access.
4.2. The required resources
4.2.1. Proxy configuration to host virtual clusters
A KafkaProxy resource represents an instance of the Kroxylicious Proxy.
Conceptually, it is the top-level resource that links together KafkaProxyIngress, VirtualKafkaCluster, KafkaService, and KafkaProtocolFilter resources to form a complete working proxy.
KafkaProxy resources are referenced by KafkaProxyIngress and VirtualKafkaCluster resources to define how the proxy is exposed and what it proxies.
KafkaProxy configuration:

kind: KafkaProxy
apiVersion: kroxylicious.io/v1alpha1
metadata:
  namespace: my-proxy
  name: simple
spec: {} (1)
1 | An empty spec creates a proxy with default configuration. |
4.2.2. Networking configuration
A KafkaProxyIngress resource defines the networking configuration that allows Kafka clients to connect to a VirtualKafkaCluster.
It is uniquely associated with a single KafkaProxy instance, but it is not uniquely associated with a VirtualKafkaCluster and can be used by multiple VirtualKafkaCluster instances.
The KafkaProxyIngress resource supports the following ingress types to configure networking access to the virtual cluster:

- clusterIP exposes the virtual cluster to applications running inside the same Kubernetes cluster as the proxy.
- loadBalancer exposes the virtual cluster to applications running outside the Kubernetes cluster.
The clusterIP ingress type supports both TCP (plain) and TLS connections.
The loadBalancer type exclusively supports TLS.
When using TLS, you specify a TLS server certificate in the ingress configuration of the VirtualKafkaCluster resource.
When using loadBalancer, changes to your DNS may be required.
The following table summarizes the supported ingress types.

Ingress Type | Use case | Supported Transport | Requires DNS changes?
---|---|---|---
clusterIP | On-cluster applications | TCP/TLS | No
loadBalancer | Off-cluster applications | TLS only | Yes

TLS is recommended when connecting applications in a production environment.
clusterIP ingress type

The clusterIP ingress type exposes virtual clusters to Kafka clients running in the same Kubernetes cluster as the proxy.
It supports both TCP (plain) and TLS connections.
The clusterIP ingress type uses Kubernetes Service resources of type ClusterIP to enable on-cluster access.
KafkaProxyIngress configuration for clusterIP with TCP:

kind: KafkaProxyIngress
apiVersion: kroxylicious.io/v1alpha1
metadata:
  namespace: my-proxy
  name: cluster-ip
spec:
  proxyRef: (1)
    name: simple
  clusterIP: (2)
    protocol: TCP (3)
1 | Identifies the KafkaProxy resource that this ingress is part of. |
2 | Specifies clusterIP networking. |
3 | Defines the connection protocol as plain TCP. Use TLS to enable encrypted communication between clients and the proxy. |
KafkaProxyIngress configuration for clusterIP with TLS:

kind: KafkaProxyIngress
apiVersion: kroxylicious.io/v1alpha1
metadata:
  namespace: my-proxy
  name: cluster-ip
spec:
  proxyRef: (1)
    name: simple
  clusterIP: (2)
    protocol: TLS (3)

When using TLS, specify a TLS server certificate in the ingress configuration of the VirtualKafkaCluster resource using a certificateRef.
loadBalancer ingress type

The loadBalancer ingress type allows applications running off-cluster to connect to the virtual cluster.
TLS must be used with this ingress type.
The loadBalancer ingress type uses Kubernetes Service resources of type LoadBalancer to enable off-cluster access.
When using a loadBalancer ingress, the proxy uses SNI (Server Name Indication) to match the client’s requested host name to the correct virtual cluster and broker within the proxy. This means that every virtual cluster and every broker within the virtual cluster must be uniquely identifiable within DNS. To accomplish this, the following configuration must be provided:
- A unique bootstrapAddress. This is the address that clients initially use to connect to the virtual cluster.
- An advertisedBrokerAddressPattern that generates unique broker addresses, which clients use to connect to individual brokers.
You decide how to formulate the bootstrapAddress and the advertisedBrokerAddressPattern to best fit the networking conventions of your organization.
The advertisedBrokerAddressPattern must contain the token $(nodeId).
The proxy replaces this token with the broker's node ID.
This ensures that client connections are correctly routed to the intended broker.
Both bootstrapAddress and advertisedBrokerAddressPattern may contain the token $(virtualClusterName).
If this is present, it is replaced by the virtual cluster's name.
This token is necessary when the KafkaProxyIngress is shared by many virtual clusters.
One possible scheme is to use the virtual cluster's name as a subdomain within your organization's domain name:
$(virtualClusterName).kafkaproxy.example.com
You can then use a further subdomain for each broker:
broker-$(nodeId).$(virtualClusterName).kafkaproxy.example.com
You can use other naming schemes, as long as each address remains unique.
KafkaProxyIngress configuration for loadBalancer:

kind: KafkaProxyIngress
apiVersion: kroxylicious.io/v1alpha1
metadata:
  namespace: my-proxy
  name: load-balancer
spec:
  proxyRef: (1)
    name: simple
  loadBalancer: (2)
    bootstrapAddress: "$(virtualClusterName).kafkaproxy.example.com" (3)
    advertisedBrokerAddressPattern: "broker-$(nodeId).$(virtualClusterName).kafkaproxy.example.com" (4)
1 | Identifies the KafkaProxy resource that this ingress is part of. |
2 | Specifies loadBalancer networking. |
3 | The bootstrap address for clients to connect to the virtual cluster. |
4 | The advertised broker address pattern used by the proxy to generate the individual broker addresses presented to the client. |
When using TLS, specify a TLS server certificate in the ingress configuration of the VirtualKafkaCluster resource using a certificateRef.
You must also configure DNS so that the bootstrap and broker addresses resolve from the network used by the applications.
4.2.3. Configuration for proxied Kafka clusters
A proxied Kafka cluster is configured in a KafkaService resource, which specifies how the proxy connects to the cluster.
The Kafka cluster may or may not be running in the same Kubernetes cluster as the proxy: network connectivity is all that's required.
This example shows a KafkaService defining how to connect to a Kafka cluster at kafka.example.com.
KafkaService configuration:

kind: KafkaService
metadata:
  # ...
spec:
  bootstrapServers: kafka.example.com:9092 (1)
  nodeIdRanges: (2)
    - name: brokers (3)
      start: 0 (4)
      end: 5 (5)
# ...
1 | The bootstrapServers property is a comma-separated list of addresses in <host>:<port> format. Including multiple broker addresses helps clients connect when one is unavailable. |
2 | nodeIdRanges declares the IDs of all the broker nodes in the Kafka cluster. |
3 | name is optional, but specifying it can make errors easier to diagnose. |
4 | The start of the ID range, inclusive. |
5 | The end of the ID range, inclusive. |
4.2.4. Virtual cluster configuration
A VirtualKafkaCluster resource defines a logical Kafka cluster that is accessible to clients over the network.
The virtual cluster references the following resources, which must be in the same namespace:

- A KafkaProxy resource that the virtual cluster is part of.
- One or more KafkaProxyIngress resources that expose the virtual cluster to Kafka clients and provide virtual-cluster-specific configuration to the ingress (such as TLS certificates and other parameters).
- A KafkaService resource that defines the backend Kafka cluster.
- Zero or more KafkaProtocolFilter resources that apply filters to the Kafka protocol traffic passing between clients and the backend Kafka cluster.
This example shows a VirtualKafkaCluster exposed to Kafka clients running on the same Kubernetes cluster.
It uses plain TCP (as opposed to TLS) as the transport protocol.
VirtualKafkaCluster configuration with single clusterIP ingress:

kind: VirtualKafkaCluster
apiVersion: kroxylicious.io/v1alpha1
metadata:
  name: my-cluster
  namespace: my-proxy
spec:
  proxyRef: (1)
    name: simple
  targetKafkaServiceRef: (2)
    name: my-cluster
  ingresses:
    - ingressRef: (3)
        name: cluster-ip
1 | Identifies the KafkaProxy resource that this virtual cluster is part of. |
2 | The KafkaService that defines the Kafka cluster proxied by the virtual cluster. |
3 | Ingresses that expose the virtual cluster. Each ingress references a KafkaProxyIngress by name. |
This example shows a VirtualKafkaCluster exposed to Kafka clients running both on- and off-cluster, in both cases using TLS.
Because TLS is used, each ingress configuration must reference a TLS server certificate.
VirtualKafkaCluster configuration with two ingresses using TLS:

kind: VirtualKafkaCluster
apiVersion: kroxylicious.io/v1alpha1
metadata:
  name: my-cluster
  namespace: my-proxy
spec:
  proxyRef:
    name: simple
  targetKafkaServiceRef:
    name: my-cluster
  ingresses:
    - ingressRef:
        name: cluster-ip
      certificateRef:
        name: 'cluster-ip-server-cert' (1)
        kind: Secret
    - ingressRef:
        name: load-balancer
      certificateRef:
        name: 'external-server-cert' (2)
        kind: Secret
1 | Reference to a secret containing the server certificate for the clusterIP ingress. |
2 | Reference to a secret containing the server certificate for the loadBalancer ingress. |
Generating TLS certificates for clusterIP ingress type
When using the clusterIP ingress type with the TLS protocol, you must provide suitable TLS certificates to secure communication.
The basic steps are as follows:

- Generate a TLS server certificate that covers the service names assigned to the virtual cluster by the ingress.
- Provide the certificate to the virtual cluster using a Kubernetes Secret of type kubernetes.io/tls.
The exact procedure for generating the certificate depends on the tooling and processes used by your organization.
The certificate must meet the following criteria:
- The certificate needs to be signed by a CA that is trusted by the on-cluster applications that connect to the virtual cluster.
- The format of the certificate must be PKCS#8 encoded PEM (Privacy Enhanced Mail). It must not be password protected.
- The certificate must use SANs (Subject Alternative Names) to list all service names, or use a wildcard TLS certificate that covers them all. Assuming a virtual cluster name of my-cluster, an ingress name of cluster-ip, and a Kafka cluster using node IDs 0-2, the following SANs must be listed in the certificate:

  my-cluster-cluster-ip-bootstrap.<namespace>.svc.cluster.local
  my-cluster-cluster-ip-0.<namespace>.svc.cluster.local
  my-cluster-cluster-ip-1.<namespace>.svc.cluster.local
  my-cluster-cluster-ip-2.<namespace>.svc.cluster.local
Create a secret for the certificate using the following command:

kubectl create secret tls <secret-name> --namespace <namespace> \
  --cert=<path/to/cert/file> --key=<path/to/key/file>

<secret-name> is the name of the secret to be created, <namespace> is the name of the namespace where the proxy is to be deployed, and <path/to/cert/file> and <path/to/key/file> are the paths to the certificate and key files.
Generating TLS certificates for loadBalancer ingress type
When using the loadBalancer ingress type, you must provide suitable TLS certificates to secure communication.
The basic steps are as follows:

- Generate a TLS server certificate that covers the bootstrap and broker names assigned to the virtual cluster by the ingress.
- Provide the certificate to the virtual cluster using a Kubernetes Secret of type kubernetes.io/tls.
The exact procedure for generating the certificate depends on the tooling and processes used by your organization.
The certificate must meet the following criteria:
- The certificate needs to be signed by a CA that is trusted by the off-cluster applications that connect to the virtual cluster.
- The format of the certificate must be PKCS#8 encoded PEM (Privacy Enhanced Mail). It must not be password protected.
- The certificate must use SANs (Subject Alternative Names) to list the bootstrap and all the broker names, or use a wildcard TLS certificate that covers them all. Assuming a bootstrapAddress of $(virtualClusterName).kafkaproxy.example.com, an advertisedBrokerAddressPattern of broker-$(nodeId).$(virtualClusterName).kafkaproxy.example.com, a Kafka cluster using node IDs 0-2, and a virtual cluster name of my-cluster, the following SANs must be listed in the certificate:

  my-cluster.kafkaproxy.example.com
  broker-0.my-cluster.kafkaproxy.example.com
  broker-1.my-cluster.kafkaproxy.example.com
  broker-2.my-cluster.kafkaproxy.example.com
Create a secret for the certificate using the following command:

kubectl create secret tls <secret-name> --namespace <namespace> \
  --cert=<path/to/cert/file> --key=<path/to/key/file>

<secret-name> is the name of the secret to be created, <namespace> is the name of the namespace where the proxy is to be deployed, and <path/to/cert/file> and <path/to/key/file> are the paths to the certificate and key files.
Configuring DNS for load balancer ingress

When using the loadBalancer ingress type, you must ensure that both the bootstrapAddress and the names generated from advertisedBrokerAddressPattern resolve to the external address of the Kubernetes Service underlying the load balancer on the network where the off-cluster applications run.
- The Kroxylicious Operator is installed.
- KafkaProxy, VirtualKafkaCluster, and KafkaProxyIngress resources are deployed.
- The VirtualKafkaCluster and KafkaProxyIngress resources are configured to use a loadBalancer ingress.
- DNS can be configured on the network where the off-cluster applications run.
- Network traffic can flow from the application network to the external addresses provided by the Kubernetes cluster.
- If using Minikube as your Kubernetes environment, enable the Minikube load balancer tunnel by running the following command. Use a separate console window to do this, as the command needs to stay running for the tunnel to work.

  minikube tunnel

- Run the following command to discover the external address being used by the load balancer:

  kubectl get service -n <namespace> <proxy-name>-sni -o=jsonpath='{.status.loadBalancer.ingress[0]}'

  Replace <namespace> with the name of the Kubernetes namespace where the resources are deployed and replace <proxy-name> with the name of the KafkaProxy resource.

  Depending on your Kubernetes environment, the command returns an IP address or a hostname. This is the external address of the load balancer.

- Configure your DNS so that the bootstrap and broker names resolve to the external address. Assuming a bootstrapAddress of $(virtualClusterName).kafkaproxy.example.com, an advertisedBrokerAddressPattern of broker-$(nodeId).$(virtualClusterName).kafkaproxy.example.com, a Kafka cluster using node IDs 0-2, and a virtual cluster name of my-cluster, the following DNS mappings are required:

  my-cluster.kafkaproxy.example.com => <external address>
  broker-0.my-cluster.kafkaproxy.example.com => <external address>
  broker-1.my-cluster.kafkaproxy.example.com => <external address>
  broker-2.my-cluster.kafkaproxy.example.com => <external address>

  The exact steps vary by environment and network setup.

- Confirm that the names resolve from the application network:

  nslookup my-cluster.kafkaproxy.example.com
  nslookup broker-0.my-cluster.kafkaproxy.example.com
4.3. Filters
A KafkaProtocolFilter resource represents a Kroxylicious Proxy filter.
It is not uniquely associated with a VirtualKafkaCluster or KafkaProxy instance; it can be used by any number of VirtualKafkaCluster instances in the same namespace.
A KafkaProtocolFilter is similar to one of the items in a proxy configuration's filterDefinitions:

- The resource's metadata.name corresponds directly to the name of a filterDefinitions item.
- The resource's spec.type corresponds directly to the type of a filterDefinitions item.
- The resource's spec.configTemplate corresponds to the config of a filterDefinitions item, but is subject to interpolation by the operator.
5. Operating a proxy
Operate a deployed proxy by configuring its resource allocations.
This section assumes you have a running Kroxylicious proxy instance.
5.1. Configuring Proxy container CPU and memory resource limits and requests
When you define a KafkaProxy resource, a number of Kubernetes Pods are created, each with a proxy container.
Each of these containers runs a single Kroxylicious Proxy process.
By default, these proxy containers are defined without resource limits.
For more information about requests and limits, see the Kubernetes documentation on resource management for pods and containers.
To manage CPU and memory consumption in your environment, modify the proxyContainer section within your KafkaProxy specification.
KafkaProxy configuration with proxy container resource specification:

kind: KafkaProxy
apiVersion: kroxylicious.io/v1alpha1
metadata:
  namespace: my-proxy
  name: simple
spec:
  infrastructure:
    proxyContainer:
      resources:
        requests:
          cpu: '400m'
          memory: '656Mi'
        limits:
          cpu: '500m'
          memory: '756Mi'
6. Securing a proxy
Secure proxies by using TLS and storing sensitive values in external resources.
6.1. Prerequisites
- A running Kroxylicious proxy instance.
6.2. Securing the client-to-proxy connection
Secure client-to-proxy communications using TLS.
6.2.1. TLS configuration for client-to-proxy connections
This example shows a VirtualKafkaCluster exposed to Kafka clients running on the same Kubernetes cluster.
It uses TLS as the transport protocol so that communication between Kafka clients and the proxy is encrypted.
VirtualKafkaCluster configuration:

kind: VirtualKafkaCluster
apiVersion: kroxylicious.io/v1alpha1
metadata:
  name: my-cluster
  namespace: my-proxy
spec:
  proxyRef: (1)
    name: simple
  targetKafkaServiceRef: (2)
    name: my-cluster
  ingresses:
    - ingressRef: (3)
        name: cluster-ip
      tls: (4)
        certificateRef:
          name: server-certificate
          kind: Secret
1 | Identifies the KafkaProxy resource that this virtual cluster is part of. It must be in the same namespace as the VirtualKafkaCluster. |
2 | The virtual cluster names the KafkaService to be proxied. It must be in the same namespace as the VirtualKafkaCluster. |
3 | The virtual cluster can be exposed by one or more ingresses. Each ingress must reference a KafkaProxyIngress in the same namespace as the VirtualKafkaCluster. |
4 | If the ingress supports TLS, the tls property configures the TLS server certificate to use. |
Within a VirtualKafkaCluster, an ingress's tls property configures TLS for that ingress.
The tls.certificateRef specifies the Secret resource holding the TLS server certificate that the proxy uses for clients connecting through this ingress.
The referenced KafkaProxyIngress also needs to be configured for TLS.
KafkaProxyIngress configuration for TLS:

kind: KafkaProxyIngress
apiVersion: kroxylicious.io/v1alpha1
metadata:
  name: cluster-ip
  namespace: my-proxy
spec:
  proxyRef: (1)
    name: simple
  clusterIP: (2)
    protocol: TLS (3)
1 | The ingress must reference a KafkaProxy in the same namespace as the KafkaProxyIngress. |
2 | Exposes the proxy to Kafka clients inside the same Kubernetes cluster using a ClusterIP service. |
3 | The ingress uses TLS as the transport protocol. |
6.2.2. Mutual TLS configuration for client-to-proxy connections
You can configure a virtual cluster ingress to request or require Kafka clients to authenticate to the proxy using TLS. This configuration is known as mutual TLS (mTLS), because both the client and the proxy authenticate each other using TLS.
VirtualKafkaCluster configuration requiring clients to present a trusted certificate:

kind: VirtualKafkaCluster
metadata:
  # ...
spec:
  # ...
  ingresses:
    - ingressRef:
        name: cluster-ip
      tls:
        certificateRef:
          # ...
        trustAnchorRef: (1)
          kind: ConfigMap (2)
          name: trusted-cas (3)
          key: trusted-cas.pem (4)
        tlsClientAuthentication: REQUIRED (5)
1 | References a separate Kubernetes resource containing the trusted CA certificates. |
2 | The kind is optional and defaults to ConfigMap. |
3 | Name of the resource of the given kind, which must exist in the same namespace as the VirtualKafkaCluster. |
4 | Key identifying the entry in the given resource. The corresponding value must be a set of CA certificates. Supported formats for the bundle are: PEM, PKCS#12, and JKS. |
5 | Specifies whether client authentication is required (REQUIRED), requested (REQUESTED), or disabled (NONE). If a trustAnchorRef is specified, the default is REQUIRED. |
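The trusted-cas ConfigMap referenced above can be created from a PEM bundle with a command like the following (a sketch, assuming the CA bundle is in a local file named trusted-cas.pem and the my-proxy namespace):

kubectl create configmap trusted-cas --namespace my-proxy --from-file=trusted-cas.pem=trusted-cas.pem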
6.2.3. TLS version configuration for client-to-proxy connections
Some older versions of TLS (and SSL before it) are now considered insecure. These versions remain enabled by default in order to maximize interoperability between TLS clients and servers that only support older versions.
If the Kafka cluster that you want to connect to supports newer TLS versions, you can disable the proxy's support for older, insecure versions. For example, if the Kafka cluster supports TLSv1.1, TLSv1.2, and TLSv1.3, you might choose to enable only TLSv1.3 support. This reduces susceptibility to a TLS downgrade attack.

It is good practice to disable insecure protocol versions.
You can restrict which TLS protocol versions the proxy supports for client-to-proxy connections by configuring the protocols property.
VirtualKafkaCluster with restricted TLS protocol versions:

kind: VirtualKafkaCluster
metadata:
  # ...
spec:
  # ...
  ingresses:
    - ingressRef:
        name: cluster-ip
      tls:
        certificateRef:
          # ...
        protocols: (1)
          allow: (2)
            - TLSv1.3
1 | Configures the TLS protocol versions used by the proxy. |
2 | Lists the protocol versions explicitly allowed for TLS negotiation. |
Alternatively, you can use deny to specify protocol versions to exclude.
The names of the TLS protocol versions supported depend on the JVM in the proxy container image. See https://docs.oracle.com/en/java/javase/17/docs/specs/security/standard-names.html#sslcontext-algorithms.
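A minimal sketch of the deny form, assuming the JVM protocol names TLSv1 and TLSv1.1:

kind: VirtualKafkaCluster
metadata:
  # ...
spec:
  # ...
  ingresses:
    - ingressRef:
        name: cluster-ip
      tls:
        certificateRef:
          # ...
        protocols:
          deny:
            - TLSv1
            - TLSv1.1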
6.2.4. TLS cipher suite configuration for client-to-proxy connections
A cipher suite is a set of cryptographic algorithms that together provide the security guarantees offered by TLS. During TLS negotiation, a server and client agree on a common cipher suite that they both support.
Some older cipher suites are now considered insecure, but may be enabled on the Kafka cluster to allow older clients to connect.
The cipher suites enabled by default in the proxy depend on the JVM used in the proxy image and the TLS protocol version that is negotiated.
To prevent TLS downgrade attacks, you can disable cipher suites known to be insecure or no longer recommended. However, the proxy and the cluster must support at least one cipher suite in common.
It is good practice to disable insecure cipher suites.
You can restrict which TLS cipher suites the proxy uses when negotiating client-to-proxy connections by configuring the cipherSuites property.
VirtualKafkaCluster configuration using cipherSuites to allow specific ciphers:

kind: VirtualKafkaCluster
metadata:
  # ...
spec:
  # ...
  ingresses:
    - ingressRef:
        name: cluster-ip
      tls:
        certificateRef:
          # ...
        cipherSuites: (1)
          allow: (2)
            - TLS_AES_128_GCM_SHA256
            - TLS_AES_256_GCM_SHA384
1 | Configures the cipher suites used by the proxy. |
2 | Lists the cipher suites explicitly allowed for TLS negotiation. |
Alternatively, you can use deny to specify cipher suites to exclude.
The names of the cipher suites supported depend on the JVM in the proxy container image. See https://docs.oracle.com/en/java/javase/21/docs/specs/security/standard-names.html#jsse-cipher-suite-names.
6.3. Securing the proxy-to-broker connection
Secure proxy-to-broker communication using TLS.
6.3.1. TLS trust configuration for proxy-to-cluster connections
By default, the proxy uses the platform’s default trust store when connecting to the proxied cluster over TLS. This works if the cluster’s TLS certificates are signed by a well-known public Certificate Authority (CA), but fails if they’re signed by a private CA instead.
It is good practice to configure trust explicitly, even when the proxied cluster's TLS certificates are signed by a public CA.
This example configures a KafkaService to trust TLS certificates signed by any Certificate Authority (CA) listed in the trusted-cas.pem entry of the ConfigMap named trusted-cas.
KafkaService configuration for trusting certificates:

kind: KafkaService
metadata:
  # ...
spec:
  bootstrapServers: kafka.example.com:9092
  tls:
    trustAnchorRef: (1)
      kind: ConfigMap (2)
      name: trusted-cas (3)
      key: trusted-cas.pem (4)
# ...
1 | The trustAnchorRef property references a separate Kubernetes resource which contains the CA certificates to be trusted. |
2 | The kind is optional and defaults to ConfigMap. |
3 | The name of the resource of the given kind. This resource must exist in the same namespace as the KafkaService. |
4 | The key identifies the entry in the given resource. The corresponding value must be a PEM-encoded set of CA certificates. |
6.3.2. TLS authentication to proxied Kafka clusters
Some Kafka clusters require mutual TLS (mTLS) authentication.
You can configure the proxy to present a TLS client certificate using the KafkaService
resource.
The TLS client certificate you provide must have been issued by a Certificate Authority (CA) that’s trusted by the proxied cluster.
This example configures a KafkaService
to use a TLS client certificate stored in a Secret
named tls-cert-for-kafka.example.com
.
KafkaService configuration with TLS client authentication:

kind: KafkaService
metadata:
  # ...
spec:
  bootstrapServers: kafka.example.com:9092
  tls:
    trustAnchorRef:
      kind: ConfigMap
      name: trusted-cas
      key: trusted-cas.pem
    certificateRef: (1)
      kind: Secret (2)
      name: tls-cert-for-kafka.example.com (3)
# ...
1 | The certificateRef property identifies the TLS client certificate to use. |
2 | The kind is optional and defaults to Secret. The Secret should have type: kubernetes.io/tls. |
3 | The name is the name of the resource of the given kind. This resource must exist in the same namespace as the KafkaService. |
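The referenced Secret can be created from a client certificate and key with a command like the following (a sketch, assuming the certificate and key are in local files; kubectl create secret tls produces a Secret of type kubernetes.io/tls):

kubectl create secret tls tls-cert-for-kafka.example.com --namespace <namespace> \
  --cert=<path/to/client/cert> --key=<path/to/client/key>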
6.3.3. TLS version configuration for proxy-to-cluster connections
Some older versions of TLS (and SSL before it) are now considered insecure. These versions remain enabled by default in order to maximize interoperability between TLS clients and servers that only support older versions.
If the Kafka cluster that you want to connect to supports newer TLS versions, you can disable the proxy's support for older, insecure versions. For example, if the Kafka cluster supports TLSv1.1, TLSv1.2, and TLSv1.3, you might choose to enable only TLSv1.3 support. This reduces susceptibility to a TLS downgrade attack.

It is good practice to disable insecure protocol versions.
This example configures a KafkaService to allow only TLSv1.3 when connecting to kafka.example.com.
KafkaService with restricted TLS protocol versions:

kind: KafkaService
metadata:
  # ...
spec:
  bootstrapServers: kafka.example.com:9092
  tls:
    # ...
    protocols: (1)
      allow: (2)
        - TLSv1.3
1 | The protocols property configures the TLS protocol versions. |
2 | allow lists the versions of TLS which are permitted. |
The protocols property also supports deny, if you prefer to list the versions to exclude instead.
The names of the TLS protocol versions supported depend on the JVM in the proxy container image. See https://docs.oracle.com/en/java/javase/17/docs/specs/security/standard-names.html#sslcontext-algorithms.
6.3.4. TLS cipher suite configuration for proxy-to-cluster connections
A cipher suite is a set of cryptographic algorithms that together provide the security guarantees offered by TLS. During TLS negotiation, a server and client agree on a common cipher suite that they both support.
Some older cipher suites are now considered insecure, but may be enabled on the Kafka cluster to allow older clients to connect.
The cipher suites enabled by default in the proxy depend on the JVM used in the proxy image and the TLS protocol version that is negotiated.
To prevent TLS downgrade attacks, you can disable cipher suites known to be insecure or no longer recommended. However, the proxy and the cluster must support at least one cipher suite in common.
It is good practice to disable insecure cipher suites.
KafkaService configured so that the proxy negotiates TLS connections using only the listed ciphers:

kind: KafkaService
metadata:
  # ...
spec:
  bootstrapServers: kafka.example.com:9092
  tls:
    # ...
    cipherSuites: (1)
      allow: (2)
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
1 | The cipherSuites object configures the cipher suites. |
2 | allow lists the cipher suites which are permitted. |
The cipherSuites property also supports deny, if you prefer to list the cipher suites to exclude instead.
The names of the cipher suites supported depend on the JVM in the proxy container image. See https://docs.oracle.com/en/java/javase/21/docs/specs/security/standard-names.html#jsse-cipher-suite-names.
6.4. Securing filters
Secure filters by using the security features provided by each filter and storing sensitive values in external resources such as a Kubernetes Secret.
6.4.1. Security-sensitive values in filter resources
Template use and value interpolation
Interpolation is supported in spec.configTemplate for the automatic substitution of placeholder values at runtime.
This allows security-sensitive values, such as passwords or keys, to be specified in Kubernetes Secret resources rather than directly in the KafkaProtocolFilter resource.
Likewise, things like trusted CA certificates can be defined in ConfigMap resources.
The operator determines which Secret and ConfigMap resources are referenced by a KafkaProtocolFilter resource and declares them as volumes in the proxy Pod, mounted into the proxy container.
This example shows how to configure the RecordEncryption filter using a Vault KMS deployed in the same Kubernetes cluster.
KafkaProtocolFilter configuration:

kind: KafkaProtocolFilter
metadata:
  # ...
spec:
  type: RecordEncryption (1)
  configTemplate: (2)
    kms: VaultKmsService
    kmsConfig:
      vaultTransitEngineUrl: http://vault.vault.svc.cluster.local:8200/v1/transit
      vaultToken:
        password: ${secret:vault:token} (3)
    selector: TemplateKekSelector
    selectorConfig:
      template: "$(topicName)" (4)
1 | The type is the Java class name of the proxy filter. If the unqualified name is ambiguous, it must be qualified by the filter package name. |
2 | The KafkaProtocolFilter requires a configTemplate , which supports interpolation references. |
3 | The password uses an interpolation reference, enclosed by ${ and }, instead of a literal value. The operator supplies the value at runtime from the specified Secret. |
4 | The selector template is interpreted by the proxy. It uses different delimiters, $( and ), than the interpolation reference. |
Structure of interpolation references

Let's look at the example interpolation reference ${secret:vault:token} in more detail.
It starts with ${ and ends with }. Between these, it is broken into three parts, separated by colons (:):

- secret is a provider. Supported providers are secret and configmap (note the use of lower case).
- vault is a path. The interpretation of the path depends on the provider.
- token is a key. The interpretation of the key also depends on the provider.
For both secret and configmap providers:

- The path is interpreted as the name of a Secret or ConfigMap resource in the same namespace as the KafkaProtocolFilter resource.
- The key is interpreted as a key in the data property of the Secret or ConfigMap resource.
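For the example above, the proxy therefore resolves ${secret:vault:token} from a Secret named vault with a data key named token in the filter's namespace. A sketch of creating such a Secret, with a placeholder token value:

kubectl create secret generic vault --namespace <namespace> --from-literal=token=<vault-token>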
7. Monitoring
Kroxylicious supports key observability features to help you understand the performance and health of your proxy instances.
The Kroxylicious Proxy and Kroxylicious Operator generate metrics for real-time monitoring and alerting, as well as logs that capture their actions and behavior. You can integrate these metrics with a monitoring system like Prometheus for ingestion and analysis, while configuring log levels to control the granularity of logged information.
7.1. Overview of proxy metrics
The proxy provides metrics for both connections and messages. These metrics are categorized into downstream (client-side) and upstream (broker-side) groups. They allow users to assess the impact of the proxy and its filters on their Kafka system.
- Connection metrics count the connections made from the downstream (incoming connections from the clients) and the connections made by the proxy to the upstream (outgoing connections to the Kafka brokers).
- Message metrics count the number of Kafka protocol request and response messages that flow through the proxy.
7.1.1. Connection metrics
Connection metrics count the TCP connections made from the client to the proxy (kroxylicious_client_to_proxy_connections_total) and from the proxy to the broker (kroxylicious_proxy_to_server_connections_total).
These metrics count connection attempts, so the connection count is incremented even if the connection attempt ultimately fails.
In addition to the count metrics, there are error metrics.

- If an error occurs whilst the proxy is accepting a connection from the client, the kroxylicious_client_to_proxy_errors_total metric is incremented by one.
- If an error occurs whilst the proxy is attempting a connection to a broker, the kroxylicious_proxy_to_server_errors_total metric is incremented by one.
Connection and connection error metrics include the following labels: virtual_cluster (the virtual cluster's name) and node_id (the broker's node ID).
When the client connects to the bootstrap endpoint of the virtual cluster, a node ID value of bootstrap is recorded.
The kroxylicious_client_to_proxy_errors_total metric also counts connection errors that occur before a virtual cluster has been identified.
For these specific errors, the virtual_cluster and node_id labels are set to an empty string ("").
Error conditions signaled within the Kafka protocol response (such as RESOURCE_NOT_FOUND or UNKNOWN_TOPIC_ID) are not classed as errors by these metrics.
Metric Name | Type | Labels | Description
---|---|---|---
kroxylicious_client_to_proxy_connections_total | Counter | virtual_cluster, node_id | Incremented by one every time a connection is accepted from a client by the proxy.
kroxylicious_client_to_proxy_errors_total | Counter | virtual_cluster, node_id | Incremented by one every time a connection is closed due to any downstream error.
kroxylicious_proxy_to_server_connections_total | Counter | virtual_cluster, node_id | Incremented by one every time a connection is made to the server from the proxy.
kroxylicious_proxy_to_server_errors_total | Counter | virtual_cluster, node_id | Incremented by one every time a connection is closed due to any upstream error.
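Assuming the metrics are ingested into Prometheus, a PromQL query like the following sketch reports the per-virtual-cluster rate of client connection errors over the last five minutes:

sum by (virtual_cluster) (rate(kroxylicious_client_to_proxy_errors_total[5m]))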
7.1.2. Message metrics
Message metrics count, and record the sizes of, the Kafka protocol requests and responses that flow through the proxy.
Use these metrics to help understand:

- the number of messages flowing through the proxy
- the overall volume of data through the proxy
- the effect the filters are having on the messages
Downstream metrics:

- kroxylicious_client_to_proxy_request_total counts requests as they arrive from the client.
- kroxylicious_proxy_to_client_response_total counts responses as they are returned to the client.
- kroxylicious_client_to_proxy_request_size_bytes is incremented by the size of each request as it arrives from the client.
- kroxylicious_proxy_to_client_response_size_bytes is incremented by the size of each response as it is returned to the client.

Upstream metrics:

- kroxylicious_proxy_to_server_request_total counts requests as they go to the broker.
- kroxylicious_proxy_to_server_response_total counts responses as they are returned by the broker.
- kroxylicious_proxy_to_server_request_size_bytes is incremented by the size of each request as it goes to the broker.
- kroxylicious_proxy_to_server_response_size_bytes is incremented by the size of each response as it is returned by the broker.
The size recorded is the encoded size of the protocol message. It includes the 4-byte message size.
Filters can alter the flow of messages through the proxy or the content of the messages. This is apparent through the metrics:

- If a filter sends a short-circuit response, or closes a connection, the downstream message counters will exceed the upstream counters.
- If a filter changes the size of a message, the downstream size metrics will differ from the upstream size metrics.
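Assuming ingestion into Prometheus, a PromQL query like the following sketch estimates the rate of client requests that filters answered or dropped without forwarding to the brokers:

sum by (virtual_cluster) (rate(kroxylicious_client_to_proxy_request_total[5m])) - sum by (virtual_cluster) (rate(kroxylicious_proxy_to_server_request_total[5m]))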
Message metrics include the following labels: virtual_cluster (the virtual cluster's name), node_id (the broker's node ID), api_key (the message type), api_version, and decoded (a flag indicating if the message was decoded by the proxy).
When the client connects to the bootstrap endpoint of the virtual cluster, metrics are recorded with a node ID value of bootstrap.
Metric Name | Type | Labels | Description
---|---|---|---
kroxylicious_client_to_proxy_request_total | Counter | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by one every time a request arrives at the proxy from a client.
kroxylicious_proxy_to_server_request_total | Counter | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by one every time a request goes from the proxy to a server.
kroxylicious_proxy_to_server_response_total | Counter | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by one every time a response arrives at the proxy from a server.
kroxylicious_proxy_to_client_response_total | Counter | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by one every time a response goes from the proxy to a client.
kroxylicious_client_to_proxy_request_size_bytes | Distribution | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by the size of the message each time a request arrives at the proxy from a client.
kroxylicious_proxy_to_server_request_size_bytes | Distribution | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by the size of the message each time a request goes from the proxy to a server.
kroxylicious_proxy_to_server_response_size_bytes | Distribution | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by the size of the message each time a response arrives at the proxy from a server.
kroxylicious_proxy_to_client_response_size_bytes | Distribution | virtual_cluster, node_id, api_key, api_version, decoded | Incremented by the size of the message each time a response goes from the proxy to a client.
7.2. Overview of operator metrics
The Kroxylicious Operator is implemented using the Java Operator SDK. The Java Operator SDK exposes metrics that allow its behavior to be understood. These metrics are enabled by default in the Kroxylicious Operator.
Refer to the Java Operator SDK metric documentation to learn more about metrics.
7.3. Ingesting metrics
Metrics from the Kroxylicious Proxy and Kroxylicious Operator can be ingested into your Prometheus instance.
The proxy and the operator each expose an HTTP endpoint for Prometheus metrics at the /metrics path.
The endpoint does not require authentication.
For the proxy, the port that exposes the scrape endpoint is named management.
For the operator, the port is named http.
Prometheus can be configured to ingest the metrics from the scrape endpoints.
This guide assumes you are using the Prometheus Operator to configure Prometheus.
7.3.1. Ingesting operator metrics
This procedure describes how to ingest metrics from the Kroxylicious Operator into Prometheus.
- The Kroxylicious Operator is installed.
- The Prometheus Operator is installed, and a Prometheus instance has been created using the Prometheus custom resource.
- Apply the PodMonitor configuration:

  apiVersion: monitoring.coreos.com/v1
  kind: PodMonitor
  metadata:
    name: operator
  spec:
    selector:
      matchLabels:
        app.kubernetes.io/name: kroxylicious
        app.kubernetes.io/component: operator
    podMetricsEndpoints:
      - path: /metrics
        port: http

  The Prometheus Operator reconfigures Prometheus automatically. Prometheus begins to scrape the Kroxylicious Operator's metrics regularly.

- Check the metrics are being ingested using a PromQL query such as:

  operator_sdk_reconciliations_queue_size_kafkaproxyreconciler{kind="KafkaProxy", group="kroxylicious.io"}
7.3.2. Ingesting proxy metrics
This procedure describes how to ingest metrics from the Kroxylicious Proxy into Prometheus.
- The Kroxylicious Operator is installed.
- The Prometheus Operator is installed, and a Prometheus instance has been created using the Prometheus custom resource.
- An instance of Kroxylicious is deployed by the operator.
- Apply the PodMonitor configuration:

  apiVersion: monitoring.coreos.com/v1
  kind: PodMonitor
  metadata:
    name: proxy
  spec:
    selector:
      matchLabels:
        app.kubernetes.io/name: kroxylicious
        app.kubernetes.io/component: proxy
    podMetricsEndpoints:
      - path: /metrics
        port: management

  The Prometheus Operator reconfigures Prometheus automatically. Prometheus begins to scrape the proxy's metrics regularly.

- Check the metrics are being ingested using a PromQL query such as:

  kroxylicious_build_info
7.4. Setting log levels
You can independently control the logging level of both the Kroxylicious Operator and the Kroxylicious Proxy.
In both cases, logging levels are controlled using two environment variables:

- KROXYLICIOUS_APP_LOG_LEVEL controls the logging of the application (io.kroxylicious loggers). It defaults to INFO.
- KROXYLICIOUS_ROOT_LOG_LEVEL controls the logging level at the root. It defaults to WARN.
When trying to diagnose a problem, start by raising the logging level of KROXYLICIOUS_APP_LOG_LEVEL.
If more detailed diagnostics are required, try raising the KROXYLICIOUS_ROOT_LOG_LEVEL.
Both the proxy and the operator use Apache Log4j2 and accept the logging levels it understands: TRACE, DEBUG, INFO, WARN, and ERROR.
WARNING: Running the operator or the proxy at elevated logging levels, such as DEBUG or TRACE, can generate a large volume of logs, which may consume significant storage and affect performance. Run at these levels only as long as necessary.
7.4.1. Overriding proxy logging levels
This procedure describes how to override the logging level of the Kroxylicious Proxy.
- An instance of Kroxylicious deployed by the Kroxylicious Operator.
- Apply the KROXYLICIOUS_APP_LOG_LEVEL or KROXYLICIOUS_ROOT_LOG_LEVEL environment variable to the proxy's Kubernetes Deployment resource:

  kubectl set env -n <namespace> deployment <deployment_name> KROXYLICIOUS_APP_LOG_LEVEL=DEBUG

  The Deployment resource has the same name as the KafkaProxy. Kubernetes recreates the proxy pod automatically.

- Verify that the new logging level has taken effect:

  kubectl logs -f -n <namespace> deployment/<deployment_name>
Reverting proxy logging levels
This procedure describes how to revert the logging level of the Kroxylicious Proxy back to its defaults.
- An instance of Kroxylicious deployed by the Kroxylicious Operator.

- Remove the KROXYLICIOUS_APP_LOG_LEVEL or KROXYLICIOUS_ROOT_LOG_LEVEL environment variable from the proxy's Kubernetes Deployment:

  kubectl set env -n <namespace> deployment <deployment_name> KROXYLICIOUS_APP_LOG_LEVEL-

  Kubernetes recreates the proxy pod automatically.

- Verify that the logging level has reverted to its default:

  kubectl logs -f -n <namespace> deployment/<deployment_name>
7.4.2. Overriding the operator logging level (operator installed by bundle)
This procedure describes how to override the logging level of the Kroxylicious Operator. It applies when the operator was installed from the YAML bundle.
- Kroxylicious Operator installed from the YAML bundle.

- Apply the KROXYLICIOUS_APP_LOG_LEVEL or KROXYLICIOUS_ROOT_LOG_LEVEL environment variable to the operator's Kubernetes Deployment:

  kubectl set env -n kroxylicious-operator deployment kroxylicious-operator KROXYLICIOUS_APP_LOG_LEVEL=DEBUG

  Kubernetes recreates the operator pod automatically.

- Verify that the new logging level has taken effect:

  kubectl logs -f -n kroxylicious-operator deployment/kroxylicious-operator
Reverting operator logging levels
This procedure describes how to revert the logging level of the Kroxylicious Operator back to its defaults.
- Kroxylicious Operator installed from the YAML bundle.

- Remove the KROXYLICIOUS_APP_LOG_LEVEL or KROXYLICIOUS_ROOT_LOG_LEVEL environment variable from the operator's Kubernetes Deployment:

  kubectl set env -n kroxylicious-operator deployment kroxylicious-operator KROXYLICIOUS_APP_LOG_LEVEL-

  Kubernetes recreates the operator pod automatically.

- Verify that the logging level has reverted to its default:

  kubectl logs -f -n kroxylicious-operator deployment/kroxylicious-operator
8. Glossary
- API: Application Programmer Interface.
- CA: Certificate Authority. An organization that issues certificates.
- CR: Custom Resource. An instance of a CRD. In other words, a resource of a kind that is not built into Kubernetes.
- CRD: Custom Resource Definition. A Kubernetes API for defining Kubernetes API extensions.
- KMS: Key Management System. A dedicated system for controlling access to cryptographic material and providing operations that use that material.
- mTLS: Mutual Transport Layer Security. A configuration of TLS where the client presents a certificate to a server, which the server authenticates.
- TLS: Transport Layer Security. A secure transport protocol where a server presents a certificate to a client, which the client authenticates. TLS was previously known as the Secure Sockets Layer (SSL).
- TCP: Transmission Control Protocol.
9. Trademark notice
- Apache Kafka is a registered trademark of The Apache Software Foundation.
- Kubernetes is a registered trademark of The Linux Foundation.
- Prometheus is a registered trademark of The Linux Foundation.
- Strimzi is a trademark of The Linux Foundation.
- HashiCorp Vault is a registered trademark of HashiCorp, Inc.
- AWS Key Management Service is a trademark of Amazon.com, Inc. or its affiliates.
- Fortanix and Data Security Manager are trademarks of Fortanix, Inc.