Document tested and supported configurations for single-cluster deployments

Closes #42304

Signed-off-by: Ryan Emerson <remerson@ibm.com>
Signed-off-by: Alexander Schwartz <aschwart@redhat.com>
Co-authored-by: Alexander Schwartz <aschwart@redhat.com>
This commit is contained in:
Ryan Emerson 2025-09-09 20:49:22 +01:00 committed by GitHub
parent 4382072d89
commit a3c95a2a34
8 changed files with 95 additions and 48 deletions

View File

@@ -2,7 +2,7 @@
== Upgrading {project_name}
This guide describes how to upgrade {project_name}. Use the following procedures in this order:
This {section} describes how to upgrade {project_name}. Use the following procedures in this order:
. Review the migration changes from the previous version of {project_name}.
. Upgrade the {project_name} server.

View File

@@ -12,7 +12,7 @@ for your deployments.
== Architectures
The following architectures are supported by {project_name}.
This document describes two architectures to deploy {project_name}: single-cluster deployments and multi-cluster deployments.
=== Single-cluster deployments
@@ -20,7 +20,19 @@ Deploy {project_name} in a single cluster, optionally across multiple availabili
Advantages::
* No external dependencies
<@profile.ifProduct>
* Deployment in a single {kubernetes} cluster
</@profile.ifProduct>
<@profile.ifCommunity>
* Deployment in a single {kubernetes} cluster or a set of virtual machines with transparent networking
</@profile.ifCommunity>
* Tolerate availability-zone failures if deployed to multiple availability zones
Disadvantages::
@@ -49,7 +61,7 @@ Disadvantages::
=== Next Steps
To learn more about the different high-availability architectures, please consult the individual guides.
To learn more about the different high-availability architectures and their supported configurations, please consult the individual {sections}.
<@profile.ifCommunity>
* <@links.ha id="single-cluster-introduction" />

View File

@@ -6,7 +6,7 @@ title="Deploying {project_name} for HA with the Operator"
summary="Deploy {project_name} for high availability with the {project_name} Operator as a building block."
tileVisible="false" >
This guide describes advanced {project_name} configurations for {kubernetes} which are load tested and will recover from single Pod failures.
This {section} describes advanced {project_name} configurations for {kubernetes} which are load tested and will recover from single Pod failures.
These instructions are intended for use with the setup described in the <@links.ha id="multi-cluster-concepts"/> {section}.
Use it together with the other building blocks outlined in the <@links.ha id="multi-cluster-building-blocks"/> {section}.

View File

@@ -7,7 +7,7 @@ title="Multi-cluster deployments"
summary="Connect multiple {project_name} deployments in independent {kubernetes} clusters" >
{project_name} supports deployments that consist of multiple {project_name} instances that connect to each other using its embedded Infinispan caches. Load balancers can distribute the load evenly across those instances.
Those setups are intended for a transparent networks, see <@links.ha id="single-cluster-introduction" /> for more details.
Those setups are intended for transparent networks; see <@links.ha id="single-cluster-introduction" /> for more details.
A multi-cluster setup adds components that allow non-transparent networks to be bridged,
providing additional high availability that may be needed for some environments.
@@ -27,44 +27,51 @@ AWS Region or an equivalent low-latency setup.
* Fit within a defined user and request count.
* Can accept the impact of periodic outages.
<@profile.ifCommunity>
[#multi-cluster-tested-configuration]
== Tested Configuration
We regularly test {project_name} with the following configuration:
</@profile.ifCommunity>
<@profile.ifProduct>
[#multi-cluster-supported-configuration]
== Supported Configuration
</@profile.ifProduct>
* Two OpenShift single-AZ clusters, in the same AWS Region
** Provisioned with https://www.redhat.com/en/technologies/cloud-computing/openshift/aws[Red Hat OpenShift Service on AWS] (ROSA),
<@profile.ifProduct>
either ROSA HCP or ROSA classic.
</@profile.ifProduct>
<@profile.ifCommunity>
using ROSA HCP.
</@profile.ifCommunity>
** Each OpenShift cluster has all its workers in a single Availability Zone.
** OpenShift version
** All worker nodes reside in a single Availability Zone.
** OpenShift version 4.17.
* Amazon Aurora PostgreSQL database
** High availability with a primary DB instance in one availability zone, and a synchronously replicated reader in the second availability zone
** Version ${properties["aurora-postgresql.version"]}
* AWS Global Accelerator, sending traffic to both ROSA clusters
* AWS Lambda triggered by ROSA's Prometheus and Alert Manager to automate failover
[#multi-cluster-supported-configuration]
== Supported Configuration
The following configurations are supported:
* Two {kubernetes} single-AZ clusters, in the same AWS Region
** Provisioned with https://www.redhat.com/en/technologies/cloud-computing/openshift/aws[Red Hat OpenShift Service on AWS] (ROSA),
either ROSA HCP or ROSA classic.
** Each {kubernetes} cluster has all its workers in a single Availability Zone.
<@profile.ifProduct>
** OpenShift version
4.17 (or later).
</@profile.ifProduct>
<@profile.ifCommunity>
4.17.
** Kubernetes version 1.30
</@profile.ifCommunity>
* Amazon Aurora PostgreSQL database
** High availability with a primary DB instance in one Availability Zone, and a synchronously replicated reader in the second Availability Zone
** High availability with a primary DB instance in one availability zone, and a synchronously replicated reader in the second availability zone
** Version ${properties["aurora-postgresql.version"]}
* AWS Global Accelerator, sending traffic to both ROSA clusters
* AWS Lambda
<@profile.ifCommunity>
triggered by ROSA's Prometheus and Alert Manager
</@profile.ifCommunity>
to automate failover
* AWS Lambda to automate failover
<#include "/high-availability/partials/configuration-disclaimer.adoc" />
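The failover automation above (an AWS Lambda triggered by ROSA's Prometheus and Alert Manager) can be sketched in a few lines. This is a minimal sketch under stated assumptions, not the shipped automation: the cluster names, endpoint ARNs, alert label keys, and webhook payload shape below are all hypothetical placeholders.

```python
import json

# Hypothetical Global Accelerator endpoints for the two ROSA clusters;
# a real deployment would use the ARNs of each cluster's load balancer.
ENDPOINTS = {
    "cluster-a": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/cluster-a/aaaa",
    "cluster-b": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/cluster-b/bbbb",
}

def surviving_endpoints(alert: dict) -> list[dict]:
    """Given a firing Alertmanager alert that names the failed cluster in a
    (hypothetical) `cluster` label, return the endpoint configurations that
    should keep receiving traffic."""
    failed = alert.get("labels", {}).get("cluster")
    return [
        {"EndpointId": endpoint_id, "Weight": 100}
        for name, endpoint_id in ENDPOINTS.items()
        if name != failed
    ]

def handler(event, context):
    # Alertmanager webhook payloads carry a list of alerts.
    payload = json.loads(event["body"])
    for alert in payload.get("alerts", []):
        if alert.get("status") != "firing":
            continue
        import boto3  # imported lazily so the pure logic above is testable offline
        client = boto3.client("globalaccelerator")
        client.update_endpoint_group(
            EndpointGroupArn="arn:aws:globalaccelerator::123456789012:accelerator/example/listener/ex/endpoint-group/ex",  # hypothetical
            EndpointConfigurations=surviving_endpoints(alert),
        )
```

The pure `surviving_endpoints` function keeps the traffic-shifting decision separate from the AWS API call, so the failover logic can be exercised without cloud credentials.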

View File

@@ -1,5 +1,5 @@
<@profile.ifProduct>
Any deviation from the configuration above is not supported and any issue must be replicated in that environment for support.
Any deviation from the configuration above is not supported and any issue must be replicated in a supported environment for support.
</@profile.ifProduct>
<@profile.ifCommunity>
While equivalent setups should work, you will need to verify the performance and failure behavior of your environment.

View File

@@ -1,5 +1,6 @@
<#import "/templates/guide.adoc" as tmpl>
<#import "/templates/links.adoc" as links>
<#import "/templates/profile.adoc" as profile>
<@tmpl.guide
title="Concepts for single-cluster deployments"
@@ -12,11 +13,21 @@ It outlines the requirements of the high availability architecture and describes
[#single-cluster-when-to-use]
== When to use this setup
Use this setup to provide {project_name} deployments that are deployed to a setup with transparent networking.
<@profile.ifProduct>
Use this setup to deploy {project_name} to an {kubernetes} cluster.
</@profile.ifProduct>
<@profile.ifCommunity>
Use this setup to deploy {project_name} to a setup with transparent networking.
To provide a more concrete example, the following chapter assumes a deployment contained within a single {kubernetes} cluster.
The same concepts could be applied to a set of virtual or physical machines and a manual or scripted deployment.
</@profile.ifCommunity>
== Single or multiple availability zones
The behaviour and high-availability guarantees of the {project_name} deployment are ultimately determined by the configuration of

View File

@@ -6,7 +6,7 @@ title="Deploying {project_name} across multiple availability-zones with the Oper
summary="Deploy {project_name} for high availability with the {project_name} Operator as a building block."
tileVisible="false" >
This guide describes advanced {project_name} configurations for {kubernetes} which are load tested and will recover availability-zone
This {section} describes advanced {project_name} configurations for {kubernetes} which are load tested and will recover from availability-zone
failures.
These instructions are intended for use with the setup described in the <@links.ha id="single-cluster-concepts"/> {section}.

View File

@@ -23,39 +23,56 @@ AWS Region or an equivalent low-latency setup.
* Fit within a defined user and request count.
* Can accept the impact of periodic outages.
<@profile.ifCommunity>
[#single-cluster-tested-configuration]
== Tested Configuration
We regularly test {project_name} with the following configuration:
</@profile.ifCommunity>
<@profile.ifProduct>
[#single-cluster-supported-configuration]
== Supported Configuration
</@profile.ifProduct>
* An OpenShift cluster deployed across three availability-zones
* An OpenShift cluster deployed across three availability zones
** Provisioned with https://www.redhat.com/en/technologies/cloud-computing/openshift/aws[Red Hat OpenShift Service on AWS] (ROSA),
<@profile.ifProduct>
either ROSA HCP or ROSA classic.
</@profile.ifProduct>
<@profile.ifCommunity>
using ROSA HCP.
</@profile.ifCommunity>
** At least one worker node for each availability zone
** OpenShift version
<@profile.ifProduct>
4.17 (or later).
</@profile.ifProduct>
<@profile.ifCommunity>
4.17.
</@profile.ifCommunity>
** OpenShift version 4.17.
* Amazon Aurora PostgreSQL database
** High availability with a primary DB instance in one Availability Zone, and synchronously replicated readers in the other Availability Zones
** High availability with a primary DB instance in one availability zone, and synchronously replicated readers in the other availability zones
** Version ${properties["aurora-postgresql.version"]}
[#single-cluster-supported-configuration]
== Supported Configurations
The following configurations are supported:
<@profile.ifProduct>
* {project_name} deployed on an OpenShift cluster version 4.17 or later
** For cloud setups, Pods can be scheduled across up to three availability zones within the same region
if OpenShift supports spanning multiple availability zones in that environment and {project_name}'s latency requirements are met.
** For on-premises setups, Pods can be scheduled across up to three datacenters
if OpenShift supports spanning multiple datacenters in that environment and {project_name}'s latency requirements are met.
</@profile.ifProduct>
<@profile.ifCommunity>
* {project_name} deployed on a {kubernetes} cluster
** For cloud setups, Pods can be scheduled across multiple availability zones within the same region
if {project_name}'s latency requirements are met.
** For on-premises setups, Pods can be scheduled across multiple datacenters
if {project_name}'s latency requirements are met.
* {project_name} deployed on virtual machines or bare metal
** Instances can be scheduled across multiple availability zones within the same cloud-provider region or multiple datacenters if {project_name}'s latency requirements are met.
</@profile.ifCommunity>
* Deployments require a P50 round-trip latency of less than 10 ms between {project_name} instances.
* Database
** For a list of supported databases, see <@links.server id="db"/>.
** Deployments spanning multiple availability zones must use a database that tolerates zone failures
and synchronously replicates data between replicas.
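The P50 round-trip requirement of less than 10 ms can be verified from collected latency samples; P50 is simply the median. A minimal sketch (the sample values are illustrative and could come from `ping` or an application-level probe):

```python
import statistics

def p50_ms(samples_ms: list[float]) -> float:
    """Median (P50) of round-trip latency samples, in milliseconds."""
    return statistics.median(samples_ms)

def meets_latency_requirement(samples_ms: list[float], threshold_ms: float = 10.0) -> bool:
    """True when the median round-trip time between instances stays under the threshold."""
    return p50_ms(samples_ms) < threshold_ms

# Illustrative round-trip times (ms) measured between two instances.
samples = [2.1, 2.4, 1.9, 3.0, 2.2, 8.7, 2.0]
print(meets_latency_requirement(samples))  # prints True: median is 2.2 ms
```

Note that the requirement targets the median, so occasional slow samples (like the 8.7 ms outlier above) do not by themselves violate it.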
<#include "/high-availability/partials/configuration-disclaimer.adoc" />
Read more on each item in the <@links.ha id="single-cluster-building-blocks" /> {section}.