## Keycloak Server Architecture
### Realm-Based Multi-Tenancy
Keycloak organizes identity management into isolated units called **Realms**. Each Realm is an independent authentication domain that manages its own:
- Users and credentials
- Clients (applications)
- Roles and permissions
- Identity providers and federation
- Authentication flows and policies
The `master` Realm is the top-level administrative Realm used to manage other Realms and the Keycloak instance itself.
### Session and Cache Management
Keycloak maintains distributed session and cache data across cluster nodes. In multi-instance deployments, session state is shared using an embedded Infinispan (JGroups) cluster. The cache configuration can be customized through a `ConfigMap` referenced by the `Keycloak` CR.
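As a sketch, a custom cache configuration can be wired in through the `spec.cache.configMapFile` field of the `Keycloak` CR (the ConfigMap name `keycloak-cache-config` and key `cache-ispn.xml` below are illustrative and must match resources in your cluster):

```yaml
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: example-keycloak
spec:
  instances: 2
  cache:
    # Reference a ConfigMap that holds a custom Infinispan configuration.
    # "key" selects the file entry inside the ConfigMap.
    configMapFile:
      name: keycloak-cache-config
      key: cache-ispn.xml
```

The referenced file replaces the default Infinispan configuration, so it should define all caches the server expects, not just the ones being tuned.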
### Authentication Flows
Keycloak supports fully customizable authentication flows. A flow is a sequence of authentication steps and sub-flows (for example, username/password check followed by OTP verification). Flows can be configured per Realm and per client.
## Operator-Based Lifecycle Management
### Custom Resource Definitions
Two CRDs define the Kubernetes API surface for Alauda Build of Keycloak:
| CRD | Kind | Purpose |
|:----|:-----|:--------|
| `keycloaks.k8s.keycloak.org` | `Keycloak` | Defines and manages a Keycloak server instance |
| `keycloakrealmimports.k8s.keycloak.org` | `KeycloakRealmImport` | Declaratively imports a Realm configuration into a running Keycloak instance |
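A minimal `Keycloak` CR illustrates the shape of the first API. The hostname, namespace, and database Secret names are placeholders for values in your environment:

```yaml
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: example-keycloak
  namespace: keycloak
spec:
  # Number of Keycloak Pods to run.
  instances: 1
  hostname:
    hostname: keycloak.example.com
  db:
    vendor: postgres
    host: postgres-db
    # Credentials are read from an existing Kubernetes Secret.
    usernameSecret:
      name: keycloak-db-secret
      key: username
    passwordSecret:
      name: keycloak-db-secret
      key: password
```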
### Reconciliation Loop
The Operator continuously reconciles the actual state of Keycloak resources with the desired state declared in the CRs:
1. User creates or updates a `Keycloak` CR.
2. The Operator detects the change and computes the required cluster state (Deployment, Service, Ingress, Secrets).
3. The Operator applies the necessary Kubernetes resources.
4. Health checks (liveness and readiness probes) verify the instance is running.
5. For `KeycloakRealmImport`, the Operator triggers an import Job that loads the Realm configuration into the Keycloak server.
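The realm-import side of this loop is driven by a `KeycloakRealmImport` CR. A minimal sketch, where `example-keycloak` must name an existing `Keycloak` CR in the same namespace and `my-realm` is an illustrative Realm name:

```yaml
apiVersion: k8s.keycloak.org/v2alpha1
kind: KeycloakRealmImport
metadata:
  name: example-realm-import
spec:
  # Name of the target Keycloak CR the Realm is imported into.
  keycloakCRName: example-keycloak
  # Inline Realm representation; any field of a Realm export can appear here.
  realm:
    realm: my-realm
    enabled: true
```

Note that the import runs as a one-shot Job: it creates the Realm if it does not exist, and the CR must be recreated to re-trigger an import.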
## High Availability
For production deployments, Alauda Build of Keycloak supports high-availability configurations:
- **Multiple Replicas**: Set `spec.instances` to 2 or more to run multiple Keycloak Pods. Kubernetes distributes traffic across all healthy replicas.
- **Session Sharing**: Infinispan cluster communication ensures user sessions are shared across all replicas. If one Pod fails, users are transparently routed to another replica without re-authentication.
- **Database HA**: The database is the single source of truth for persistent data. Use a highly available PostgreSQL setup (for example, with replication or a managed database service) to eliminate the database as a single point of failure.
- **Pod Scheduling**: Use `spec.scheduling` to configure node affinity, tolerations, and topology spread constraints to distribute Keycloak Pods across failure domains (nodes, availability zones).
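The HA-related fields above can be combined in one CR fragment. The constraint below, assuming Keycloak Pods carry an `app: keycloak` label (label keys may differ in your deployment), spreads replicas across availability zones:

```yaml
spec:
  # Run three replicas behind the Service.
  instances: 3
  scheduling:
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        # Prefer spreading, but still schedule if a zone is unavailable.
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: keycloak
```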
## Network and Security
### TLS
Keycloak supports two TLS modes:
- **HTTPS on the Keycloak Service**: Configure `spec.http.tlsSecret` to terminate TLS at the Keycloak Pod. Suitable when direct pod-to-client encrypted communication is required.
- **TLS at the Ingress**: Configure `spec.ingress.tlsSecret` for TLS termination at the Ingress controller. Keycloak itself can serve plain HTTP inside the cluster when `spec.http.httpEnabled` is set to `true`.
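The two modes can be sketched as CR fragments. In both, `keycloak-tls` stands for a `kubernetes.io/tls` Secret (containing `tls.crt` and `tls.key`) that must already exist:

```yaml
# Mode 1: TLS terminated at the Keycloak Pod.
spec:
  http:
    tlsSecret: keycloak-tls

---
# Mode 2: TLS terminated at the Ingress controller,
# plain HTTP between the Ingress and Keycloak.
spec:
  http:
    httpEnabled: true
  ingress:
    enabled: true
    tlsSecret: keycloak-tls
```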
### Network Policy
The `spec.networkPolicy` field controls which sources are allowed to reach the Keycloak HTTP, HTTPS, and management ports. This enables fine-grained ingress traffic control at the Kubernetes network layer.
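As a minimal sketch, assuming the field exposes an `enabled` toggle as in the upstream Keycloak Operator (additional per-port source rules would be build-specific):

```yaml
spec:
  networkPolicy:
    # Have the Operator create a NetworkPolicy restricting ingress
    # to the Keycloak HTTP, HTTPS, and management ports.
    enabled: true
```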
### Security Contexts
Alauda Build of Keycloak enforces security hardening through Pod security contexts:
- Running as non-root
- Dropping all Linux capabilities
- Applying `RuntimeDefault` seccomp profiles