1 Introduction
This document describes the network details of an Ericsson Centralized User Database (CUDB) node, and gives an overview of the interworking among the CUDB nodes that comprise the CUDB system.
1.1 Document Purpose and Scope
The purpose of this document is to describe the general network infrastructure of a CUDB system, from the internal components to the node integration with the customer network environment. Most of the network details covered by this document are independent of the available CUDB infrastructure types.
The document covers the following topics in detail:
The following topics are, however, out of the scope of this document.
1.2 Revision Information
Other than editorial changes, this document has been revised as follows:
BSP 8100 Software for CUDB Systems Deployed on Native BSP 8100: Added GEP7L where GEP5 is referred to.
General Overview: Updated Figure 3 CUDB Node on BSP 8100 Hardware, General Network Overview by replacing the phrase "GEP3/GEP5 blades" with "GEP3/GEP5/GEP7L blades".
Internal Networks: Added GEP7L where GEP5 is referred to.
External Connectivity for CUDB Systems Deployed on Native BSP 8100: Added GEP7L where GEP5 is referred to.
1.3 Target Groups
1.4 Typographic Conventions
Typographic Conventions can be found in the following document:
2 Solution Overview
This section describes the networking principles applicable to any CUDB system. Mapping these general principles to actual deployments is out of the scope of this document.
2.1 CUDB System Overview
CUDB is a distributed database system, exposed as a Lightweight Directory Access Protocol (LDAP) directory, physically made up of network-connected (and interconnected) CUDB nodes spread over the operator network. Logically, the CUDB system is divided into two internal layers, depending on the function provided: the Processing Layer (PL) and the Data Store (DS) Layer.
Refer to CUDB Technical Product Description for more information on node components. For CUDB systems deployed on native BSP 8100, refer to CUDB Node Hardware Description for more information on CUDB hardware.
2.2 Network Overview
This section provides an overview of the hardware and software components used to configure the CUDB network, as well as a general network description depending on the type of deployment.
For CUDB systems deployed on native BSP 8100, the following elements are covered:
For CUDB systems deployed on a cloud infrastructure, the following elements are covered:
2.2.1 Hardware Components
This section describes the hardware components of the CUDB network for CUDB systems deployed on native BSP 8100.
2.2.1.1 CMX
The CMX supplies 10 Gigabit Ethernet (GbE) switching to all the backplane ports, used for connectivity of all Generic Ericsson Processor (GEP) blades in the EGEM2 subrack, as well as 10GbE/40GbE to four front ports, 10GbE to another four front ports, and 1GbE to four front ports, in compliance with the applicable parts of the current IEEE 802.3, IEEE 802.1D, and IEEE 802.1Q standards. The CMX resides on a Component Main Switch Board version 3 (CMXB3) hardware component.
The CMX software runs on the CMXB3, and provides a 1GbE/10GbE/40GbE switching and routing solution for nodes based on EGEM2-10 and EGEM2-40 shelves, and for Cabinet Aggregation Switch (CAX) 2.0 units in Cabinet Aggregation Shelves (CAS). In addition to L2 switching and L3 routing functions, the CMX also provides a comprehensive set of operation and maintenance (OAM) functions, such as board management of HWM, SWM, SYS, PM, CM, SYNC, and logging.
The CMX switch is designed to run in a network environment with separated management and payload networks, or in other words, on separate control and data planes. This means that some of the interfaces are dedicated for control traffic, while others are for data traffic.
The control interfaces are divided into two categories:
2.2.1.2 SCX
The SCX is an Ethernet switch with hardware and software functions. The backplane contains 26 1GbE ports. The front ports, in turn, are used for inter-System Control Switch Board (SCXB) connectivity and for connection towards the CMX.
The SCXB has seven front ports: four 10GbE ports (E1-E4) and three 1GbE ports (GE1-GE3). In the backplane, the two SCXBs of the same subrack are connected by two backplane ports for DMX monitoring. Also, the two SCXBs in the lower magazine are connected by a cable in one front port for redundancy purposes.
SCX provides functions for blade Preboot eXecution Environment (PXE) or Dynamic Host Configuration Protocol (DHCP) boot infrastructure and blade hardware equipment management in an EGEM2 subrack, such as power, fan control, supervision, and other operation and management tasks.
The SCXs are internal CUDB node control plane switches only, and are not connected to external virtual networks.
Although there is an interconnection between the Ethernet bridges of the SCXBs through the backplane, it is reserved for control operations.
2.2.2 Software Components
This section lists the software components of the CUDB network.
2.2.2.1 BSP 8100 Software for CUDB Systems Deployed on Native BSP 8100
BSP 8100 is an infrastructure management product that provides the backbone for an Ericsson multi-application site solution. It offers services and resources to streamline user interface and implementation for common infrastructure tasks for different Ericsson Applications. Applications that are running on BSP are implemented as BSP Tenants.
The main objectives of BSP 8100 as a product are to provide hardware and software decoupling and also decoupling of External Network and Tenant Network Connectivity.
The BSP software consists of the following:
2.2.2.2 eVIP
Traffic distribution is provided by the eVIP framework. Some of the main terms related to eVIP are explained in the following list:
| Abstract Load Balancer (ALB) |
The ALB is a logical container for eVIP addresses. The ALB concept is comparable to the VR concept available in commercial routers. Load balancer functions are used to distribute traffic (for example, TCP connections) according to a distribution policy (for example, round robin) among a defined set of targets. Such a set of targets is referred to as a "pool of targets". The ALB also serves as a structuring entity that compartmentalizes scalable Server Load Balancing (SLB) resources and external interfaces. An ALB, therefore, can be viewed as the equivalent of a commercial SLB box which is embedded in the cluster using eVIP. However, an important difference is that commercial SLB boxes are deployed as external appliance boxes in the Customer Premises Equipment (CPE): they are individually installed like other independent network appliance boxes, such as routers and firewall boxes. |
|
| Front-End Element |
The Front-End Elements handle the communication with the external Data Communication Network and are responsible for routing traffic towards and from VIP addresses. |
|
| Gateway Routers |
Any component capable of doing Equal Cost Multi Path (ECMP). For CUDB systems deployed on native BSP 8100, this task is carried out by CMX routers. For CUDB systems deployed on a cloud infrastructure, Gateway Routers are external entities, not part of the Virtualized Network Function (VNF). |
|
| Load Balancer |
IP Servers where load distribution is done. IP Servers are transparent, hiding all the eVIP details from applications. |
|
| Security Element |
The Security Element is a part of the IPsec implementation. It applies security policies on the traffic flows, and encrypts or decrypts traffic if necessary. |
|
As a consequence of using the eVIP software, a set of additional virtual networks is required. These virtual networks serve internal purposes and are not routable from the customer network.
Details about eVIP and its architecture can be found in eVIP Management Guide.
2.2.3 Network
2.2.3.1 General Overview
For CUDB systems deployed on native BSP 8100, Figure 3 provides a general overview of how the previously described components connect to each other to shape the BSP 8100 hardware network. The aim of this figure is to help understand the rest of the topics in this document; it is not an exhaustive or detailed figure.
For CUDB systems deployed on a cloud infrastructure, Figure 4 provides a general overview of the virtual network layout. The aim of this figure is to help understand the rest of the topics in this document; it is not an exhaustive or detailed figure.
2.2.3.2 Network Description
This section gives an overview of the different internal networks configured within a CUDB node. These internal networks are used to separate different types of traffic.
Additionally, the CUDB node can be connected to several outer networks defined in the external network site infrastructure. These networks are either defined to separate OAM, provisioning, inter-CUDB, and application FE traffic, or to fulfill other reasons, like ensuring better manageability, traffic flow control, and applying QoS.
Therefore, available CUDB virtual networks are classified into the following categories, depending on their use and the region they belong to:
As an example, Figure 5 shows a simplified CUDB deployment with three sites. The figure details only a CUDB node and one site, and does not show specific but required non-CUDB equipment. The application FEs and external systems can be connected to public virtual networks as described in the following sections.
Table 1 in Address Plan Summary provides a summary of the information included in the following sections.
2.2.3.2.1 Internal Networks
Internal networks provide internal blade or VM cluster connectivity, and therefore exist in each CUDB node, having exactly the same addresses in each of them. These virtual networks are configured in LDE and (when applicable) in switches and traffic distribution components. All of them are mandatory.
If a Provisioning Gateway (PG) is collocated with a CUDB node, other PG-specific networks must be configured.
Due to the different naming requirements, the virtual networks can be identified using different names in LDE configuration, in switches, and in documentation. The following sections explain the virtual networks in detail.
This section describes the cluster internal network and, for CUDB systems deployed on native BSP 8100, the boot networks.
LDE Cluster Internal Network
The LDE cluster internal network is dedicated to internal LDE cluster traffic, and is therefore used exclusively by LDE.
In LDE, it is mandatory to name the internal network internal.
The IP range of this virtual network is always on the 192.168.0.0/24 network. It is the same in all CUDB nodes, and is neither visible nor routable outside the CUDB nodes. Therefore, the following line defining the internal network exists in the cluster.conf LDE configuration file:
network internal 192.168.0.0/24
Boot Networks for CUDB Systems Deployed on Native BSP 8100
Boot networks are specific networks for CUDB nodes based on GEP3/GEP5/GEP7L blades.
The purpose of these networks is to use 1GbE blade ports instead of the 10GbE ones when booting the second System Controller (SC) and all payload blades on the left and right backplanes.
| Note: | The 10GbE ports of the GEP5/GEP7L blades are not DHCP/PXE bootable. Therefore, use the 1GbE ports to load LDE images with the bootL and bootR networks. |
The IP range of the bootL virtual network is always on the 192.168.10.0/24 network. In case of bootR, the virtual network IP range is always on the 192.168.11.0/24 network. These IP ranges are the same in all CUDB nodes, and are neither visible nor routable outside the CUDB nodes.
In the cluster.conf LDE configuration file, these networks are defined with the following lines:
network bootL 192.168.10.0/24
network bootR 192.168.11.0/24
This network is used for CUDB application-specific traffic. The network transfers the following traffic types from the traffic distribution components to the cluster blades or VMs:
The IP range of this virtual network is always on the 10.22.0.0/24 network. It is the same in all CUDB nodes, and is neither visible nor routable outside the CUDB nodes.
In the cluster.conf LDE configuration file, this network is defined with the application_internal name in its definition:
network application_internal 10.22.0.0/24
The purpose of this network is to transfer the following traffic types:
The IP range of this virtual network is always on the 10.1.1.0/24 network. It is the same in all CUDB nodes, and is neither visible nor routable outside the CUDB nodes.
In the cluster.conf LDE configuration file, this network is defined with the following line:
network ndb_private 10.1.1.0/24
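The internal networks described above always use the same fixed private ranges in every CUDB node. The following Python sketch (illustrative only, not part of the CUDB software; the dictionary layout is an assumption for this example) checks that such an internal address plan is consistent, that is, all ranges are private and mutually non-overlapping:

```python
import ipaddress

# Fixed internal networks of a CUDB node, as listed in this document.
# bootL and bootR apply only to native BSP 8100 deployments.
INTERNAL_NETWORKS = {
    "internal": "192.168.0.0/24",
    "bootL": "192.168.10.0/24",
    "bootR": "192.168.11.0/24",
    "application_internal": "10.22.0.0/24",
    "ndb_private": "10.1.1.0/24",
}

def check_internal_networks(nets):
    """Verify that all networks are private and mutually non-overlapping."""
    parsed = {name: ipaddress.ip_network(cidr) for name, cidr in nets.items()}
    for name, net in parsed.items():
        if not net.is_private:
            raise ValueError(f"{name} ({net}) is not a private range")
    names = list(parsed)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if parsed[a].overlaps(parsed[b]):
                raise ValueError(f"{a} overlaps {b}")
    return parsed

check_internal_networks(INTERNAL_NETWORKS)  # raises ValueError if inconsistent
```

Since these ranges are identical in all CUDB nodes and never routed externally, the check only needs to be run once per address plan.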
2.2.3.2.2 Infrastructure Networks
For CUDB nodes, infrastructure networks provide traffic connectivity, which CUDB supports using the VIP software component.
Traffic separation is managed by the VIP component using separate virtual networks.
| Note: | To improve the readability of this document, from now on, the relevant infrastructure networks are referred to as OAM, PROVISIONING, FE, and SITE. |
For CUDB systems deployed on native BSP 8100, they refer to Cudb_oam_fee, Cudb_prov_fee, Cudb_fe_fee, and Cudb_site_fee, respectively.
For CUDB systems deployed on a cloud infrastructure, they refer to vCUDB_OAM, vCUDB_PROVISIONING, vCUDB_FE, and vCUDB_SITE, respectively.
2.2.3.2.3 VIP Addresses
2.2.3.2.4 SysMGMT Network
SysMGMT is a mandatory network used for system administration purposes. It provides low-level access with root or administrator privileges to the different components of a CUDB node.
The direct management ports of the CUDB node infrastructure are usually disconnected during normal CUDB operation, and are used only when the CUDB node is installed for the first time. In that sense, it is possible to extend the default OAM interface by directly connecting these node infrastructure device management ports to an external network infrastructure, and then build a dedicated system OAM network for better manageability, or for emergency access when the normal OAM access is unavailable for some reason.
The SysMGMT network is connected by using separate links, has regular IP connectivity, and no VIP. The network can be either separated, or included in the customer OAM network during integration. However, take into account that in the latter case, the system addresses become exposed to customer premises. The proper routes to access the SysMGMT network must be applied in the customer network. In any case, it is strongly advised to use the proper security measures.
| Note: | For CUDB systems deployed on native BSP 8100, the BSP-NBI and SYSMGMT virtual networks provide dedicated management access to the BSP subsystem and CUDB. |
2.2.3.3 External Connectivity for CUDB Systems Deployed on Native BSP 8100
For CUDB systems deployed on native BSP 8100 with GEP3 blades, the supported options for CMX external connectivity are as follows:
In both options, each type of traffic can be configured with any of the available ports. CMXs can be connected either to the Active Patch Panel (APP) of the cabinet to convert from the copper cabling used internally within the cabinet to the optical cabling used externally with the site switches or routers (10GbE ports) or directly to the site switches or routers over copper cabling (1GbE ports).
Figure 6 shows the standard default configuration with three uplink ports, where OAM (om_cn_sp1) and Provisioning (om_cn_sp2) traffic share the same physical links and use the same port on each CMX, and an example of a four-uplink-port configuration, where each type of traffic uses a different link and port.
For CUDB systems deployed on native BSP 8100 with GEP5/GEP7L blades, the supported options for CMX external connectivity are as follows:
As Figure 7 shows, OAM (om_cn_sp1) and Provisioning (om_cn_sp2) share the same physical links and use the same port on each CMX. The two types of traffic share the port, but use different virtual networks and tags. FE (sig_data_sync_sp1) and SITE (sig_data_sync_sp2) traffic share the same physical links and use the same port on each CMX as well.
| Note: | Option 1 or Option 2 must be chosen depending on the configuration. |
2.2.3.3.1 Uplink Networks for CUDB Systems Deployed on Native BSP 8100
The placement of the CUDB nodes on the customer network must follow traffic separation principles. Traffic separation is used to isolate various traffic types from each other: for example, OAM traffic and user-related traffic must be strictly separated, mostly due to security and overlapping private IP address ranges, among other reasons. With this approach, the networks described below must exist in the network site where the CUDB node is connected.
Additionally, other networks could also appear related to the specific integration environment of the operator network infrastructure. Such networks are optional, and are not part of the default configuration of a CUDB node. Therefore, they must be deployed as part of an integration project for the customer.
The networks below are defined as virtual networks on the interfaces connected to the external site infrastructure. Assigned IP addresses within these networks are allocated from the operator network IP addressing plan.
None of these virtual networks exist internally in CUDB nodes, so these are not included in the LDE configuration:
CUDB_OAM is a mandatory network, used to separate OAM traffic from regular application FE traffic. The NMS uses this network to get access to the OAM interfaces of the CUDB nodes. By default, CUDB provides an OAM interface to manage CUDB node components through a VIP address that redirects OAM traffic to one of the SCs. Once an OAM session is established on the SCs, it is possible to hop on to any blade or VM of the LDE cluster. All OAM tasks are then centralized on the SCs.
CUDB_PROVISIONING is a mandatory network, used to separate provisioning traffic from regular application FE traffic. Take into account that provisioning traffic is usually OAM-related traffic, but it must be logically or physically separated. In that sense, a devoted network at site level is included to connect PG nodes and CUDB nodes.
CUDB_SITE is a mandatory network used to provide connectivity between the CUDB nodes and any other equipment deployed on the site. Additionally, this network contains the site gateway-router that the CUDB nodes use to route traffic to other nodes in other sites. Therefore, all traffic exchanged among CUDB nodes is transferred by this network, and can traverse its gateway if destination is a CUDB node in a remote site.
2.2.3.4 BFD and VRRP for CUDB Systems Deployed on Native BSP 8100
To guarantee the service, a redundancy mechanism is required at CMX level (towards the customer network). When deployed on native BSP 8100, this redundancy mechanism can be configured either at uplink level, that is, independently for each uplink network, or at CUDB node level, that is, with the same redundancy mechanism for all networks. CUDB supports two protocols that can provide redundancy, VRRP and BFD.
| VRRP |
This protocol is based on a shared IP address; both CMX routers agree on which of them owns the shared IP at a given time. The CMX pair is configured in an Active-Standby configuration (the preferred configuration), sharing the same external IP addresses by means of VRRP. Nevertheless, VRRP between the two routers can be configured to share two IP addresses and achieve an Active-Active scheme, in case the traffic from the site uses ECMP or another mechanism to load-balance traffic over the eVIP gateway routers. In this case, each router has a primary IP address and also holds the primary IP of the other router as backup (a second IP with lower priority in the VRRP configuration). |
|
| BFD |
Contrary to VRRP, the BFD protocol is not based on a shared IP address, and the CMX routers do not need to communicate with each other. Instead, each router independently monitors its next hop on the L3 network. This next hop usually corresponds to a router in the customer network. Some configuration is required in the customer routers to properly support BFD. In the CUDB, there are two possible BFD configuration alternatives: |
|
From the CUDB point of view, there is no significant difference between choosing one protocol or the other. Both VRRP and BFD are able to detect failures in less than a second. Consequently, both of them provide CUDB with a suitable redundancy mechanism. The choice depends on customer requirements.
| Note: | This document provides the instructions to configure BFD on the CUDB side only. The required actions and configurations in the customer network are not described here. |
2.2.3.5 Internal L2 Connectivity for CUDB Systems Deployed on Native BSP 8100
All blades in a subrack are connected to both CMX switches in the subrack through the backplane links.
The SCXs are cross-connected at L2 through the backplane links.
All blade links use an Active-Standby L2 resiliency mechanism based on the Linux bonding driver.
The way to connect the CMX routers is to connect both E3 10GbE front ports of the first subrack SCXs to the E5 10GbE front ports of the first subrack CMX routers, as shown in Figure 8.
2.2.4 Other Considerations for CUDB Systems Deployed on Native BSP 8100
This section provides some considerations regarding hardware alarms and naming conventions for CUDB systems deployed on native BSP 8100.
2.2.4.1 Hardware Alarms
2.2.4.2 Hardware Naming
Throughout this document, the following naming conventions are used for hardware:
| SCX blades: |
SCX-0-0, left SCXB in subrack 0 (SC_2_1, with BSP 8100 software).
SCX-0-25, right SCXB in subrack 0 (SC_2_2, with BSP 8100 software).
SCX-1-0, left SCXB in subrack 1 (blade_1_0).
SCX-1-25, right SCXB in subrack 1 (blade_1_25).
SCX-2-0, left SCXB in subrack 2 (blade_2_0).
SCX-2-25, right SCXB in subrack 2 (blade_2_25). |
|
| Switches and routers: |
CMX-0-26, left switch/router in subrack 0 (blade_0_26).
CMX-1-26, left switch/router in subrack 1 (blade_1_26).
CMX-2-26, left switch/router in subrack 2 (blade_2_26).
CMX-0-28, right switch/router in subrack 0 (blade_0_28).
CMX-1-28, right switch/router in subrack 1 (blade_1_28).
CMX-2-28, right switch/router in subrack 2 (blade_2_28). |
|
2.3 VIP Address Allocation
The following sections describe how to assign the Virtual IP (VIP) addresses to each CUDB node within each network. These VIPs are used as the communication points from the specified public networks. All of them are network addresses that, combined with a protocol and port, provide the access point to the CUDB node services offered to customer equipment and other nodes. Some of them are optional, while others correspond to the minimum set of VIPs that each CUDB node requires to provide its basic service. The conditions are explained in the following sections, along with further details.
Table 1 in Address Plan Summary gives a summary of the information included in the following subsections.
2.3.1 SITE_VIP
SITE_VIP is a mandatory address, corresponding to the CUDB node VIP that each node in the CUDB system uses to communicate with the specific node. Therefore, each CUDB node is assigned to one SITE_VIP, and all nodes know their own SITE_VIP as well as the SITE_VIPs assigned to all other nodes.
Any address can be assigned by the operator as long as it is unique across the whole CUDB system, and is routable within the site and among other sites. This routing must be configured in the CUDB_SITE network gateway. Therefore, provided that the routing is properly set up, the SITE_VIP is not required to belong to any particular network.
Additionally, any outgoing traffic started by the CUDB node towards the SITE_VIP in another node must carry the origin node SITE_VIP as the source address. The only exception is the multiple SITE_VIPs case, where the origin address can be any of the SITE_VIPs.
2.3.1.1 Multiple SITE_VIPs
Each CUDB node needs to be configured with an adequate number of SITE_VIP addresses for communication with other CUDB nodes. Each node has a primary SITE_VIP address and, depending on the size of the CUDB system (at least 10 nodes), can also have secondary and source-equivalent SITE_VIP addresses. The formula to calculate the number of IPs needed is as follows:
Steps
- Decrease the number of nodes in the system by 1.
- Divide the result of the previous step by 8.
- Finally, round up the result to the next larger integer.
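The steps above amount to a ceiling division, which can be sketched in Python as follows (the function name is illustrative, not part of any CUDB tool):

```python
import math

def site_vip_count(num_nodes: int) -> int:
    """SITE_VIP addresses needed per CUDB node: ceil((nodes - 1) / 8)."""
    return math.ceil((num_nodes - 1) / 8)
```

For example, a 9-node system needs one SITE_VIP per node, while a 10-node system needs two, which matches the statement that secondary addresses appear only in systems of at least 10 nodes.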
2.3.2 OAM_VIP
OAM_VIP is a mandatory address. This address corresponds to the VIP within the CUDB_OAM network that the external CUDB_OAM equipment uses to communicate with the specific CUDB node.
The address must be allocated from the customer network plan, and must belong to the OAM operator network in the site. Any outgoing traffic started by a CUDB node towards the CUDB_OAM network carries the origin node OAM_VIP as the source address. Also, any packet sent from a node towards its OAM_VIP returns to the node.
2.3.3 FE_VIP
FE_VIP is a mandatory address. This address corresponds to the VIP within the CUDB_FE network that the external traffic application FEs use to send and receive LDAP traffic to and from the specific CUDB node.
The address must be allocated from the customer network plan and must belong to the CUDB_FE operator network in the site. Any outgoing traffic started by a CUDB node towards the CUDB_FE network carries the origin node FE_VIP as the source address. Also, any packet sent from a node towards its FE_VIP returns to the node.
2.3.4 PROVISIONING_VIP
PROVISIONING_VIP is a mandatory address. This address corresponds to the VIP within the CUDB_PROVISIONING network that the external provisioning equipment uses to send and receive LDAP provisioning traffic to and from the specific CUDB node.
The address must be allocated from the customer network plan, and must belong to the PROVISIONING operator network in the site.
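Since the OAM_VIP, FE_VIP, and PROVISIONING_VIP must each belong to their respective operator network, an address-plan review can verify this membership with the standard Python ipaddress module. This is a sketch only; the addresses shown are hypothetical examples, not taken from any real plan:

```python
import ipaddress

def vip_belongs_to_network(vip: str, network: str) -> bool:
    """Check that a VIP address falls inside its assigned operator network."""
    return ipaddress.ip_address(vip) in ipaddress.ip_network(network)

# Hypothetical provisioning example: the VIP lies inside the /28 subnet.
assert vip_belongs_to_network("192.168.226.5", "192.168.226.0/28")
```

The same check applies unchanged to IPv6 networks, since ipaddress handles both address families.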
2.3.5 VIP Address Mapping with Configuration Model Attributes
The CudbLocalNode and CudbRemoteNode configuration model classes (described in CUDB Node Configuration Data Model Description) include several attributes that are related to the VIP addresses configured in the load balancing solution at every node. These attributes are as follows:
2.4 Address Plan Summary
Table 1 collects the most relevant information for CUDB systems deployed on native BSP 8100.
| Network Common Name | BSP 8100 Virtual Network Name | CUDB Node Internal Configuration Identifier | Purpose | Type | Net/Mask | Allocated VIPs |
|---|---|---|---|---|---|---|
| lotc | cudb_lde_sp | internal | LDE cluster management. | Private | 192.168.0.0/24 | N/A |
| application_internal | cudb_application_internal_sp | application_internal | Provides CUDB node connectivity to handle internal CUDB traffic. | Private | 10.22.0.0/24 | N/A |
| ndb_private | cudb_ndb_private_sp | ndb_private | Specific network required for NDB data nodes traffic and management. | Private | 10.1.1.0/24 | N/A |
| Cudb_oam_fee(1) | cudb_om_sp2 | N/A | Publishing OAM_VIP | Infrastructure | 192.168.218.0/28(2) | One OAM_VIP per CUDB node in the site. |
| Cudb_prov_fee(1) | cudb_om_sp3 | N/A | Publishing PROVISIONING_VIP | Infrastructure | 192.168.226.0/28(2) | One PROVISIONING_VIP per CUDB node in the site. |
| Cudb_fe_fee(1) | cudb_sig_sp1 | N/A | Publishing FE_VIP | Infrastructure | 192.168.216.0/28(2) | One FE_VIP per CUDB node in the site. |
| Cudb_site_fee(1) | cudb_sig_sp2 | N/A | Publishing SITE_VIP | Infrastructure | 192.168.217.0/28(2) | One primary SITE_VIP(3) per CUDB node in the site. It is possible to have multiple SITE_VIPs(3). |
| | sig_data_sync_sp1 | N/A | | Uplink | Assigned from the customer network plan. | N/A |
| | om_cn_sp1 | N/A | OAM network | Uplink | Assigned from the customer network plan. | N/A |
| | om_cn_sp2 | N/A | Provisioning related traffic. | Uplink | Assigned from the customer network plan. | N/A |
| | sig_data_sync_sp2 | N/A | Connects all CUDB nodes and other solution equipment in the same site, and also includes gateways to other sites. | Uplink | Assigned from the customer network plan. | N/A |
| SysMGMT | BSP-NBI and SYSMGMT(1) | sysmgmt | Mandatory network for system administration purposes. Due to a BSP limitation, BSP-NBI can only be an IPv4 network. | SYSMGMT | Assigned from the customer network plan. | None |
| cudb_Control-L and cudb_Control-R | ntp_pdl and ntp_pdr | | NTP virtual network | Private | 192.168.12.0/24 and 192.168.13.0/24 | None |
Table 2 collects the most relevant information for CUDB systems deployed on a cloud infrastructure.
| Network Common Name | CUDB Node Internal Configuration Identifier | Purpose | Type | Net/Mask | Allocated VIPs |
|---|---|---|---|---|---|
| lotc | internal | LDE cluster management. | Private | 192.168.0.0/24 | N/A |
| application_internal | application_internal | Provides CUDB node connectivity to handle internal CUDB traffic. | Private | 10.22.0.0/24 | N/A |
| ndb_private | ndb_private | Specific network required for NDB data nodes traffic and management. | Private | 10.1.1.0/24 | N/A |
| vCUDB_FE(5) | N/A | Publishing FE_VIP | Infrastructure | IPv4 example: 192.168.134.0/26. IPv6 example: fd00:1b70:82c8:aaaa::/64 | One FE_VIP per CUDB node in the site. |
| vCUDB_OAM(5) | N/A | Publishing OAM_VIP | Infrastructure | IPv4 example: 192.168.100.0/28. IPv6 example: fd00:1b70:82c8:bbbb::/64 | One OAM_VIP per CUDB node in the site. |
| vCUDB_PROVISIONING(5) | N/A | Publishing PROVISIONING_VIP | Infrastructure | IPv4 example: 192.168.117.0/26. IPv6 example: fd00:1b70:82c8:cccc::/64 | One PROVISIONING_VIP per CUDB node in the site. |
| vCUDB_SITE(6) | N/A | Publishing SITE_VIP | Infrastructure | IPv4 example: 192.168.151.0/26. IPv6 example: fd00:1b70:82c8:dddd::/64 | One primary SITE_VIP(6) per CUDB node in the site. It is possible to have multiple SITE_VIPs(6). |
| SysMGMT | sysmgmt | Mandatory network for system administration purposes. | SYSMGMT | Assigned from the customer network plan. | None |
IPv6 Support
IPv6 configuration is supported for the SITE_VIP(s), OAM_VIP, PROVISIONING_VIP, FE_VIP, and SYSMGMT addresses. IPv6 is supported only for new installations. Neither upgrade from an IPv4 deployment nor dual stack (the combination of IPv4 and IPv6 addresses) is supported; therefore, all addresses must be of the same type. For more information about IPv6 configuration, refer to CUDB Node Configuration Data Model Description.
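Because dual stack is not supported, all external addresses of one deployment must share a single IP version. A minimal Python sketch of such a consistency check (illustrative only; the function name and sample addresses are assumptions for this example):

```python
import ipaddress

def check_same_family(addresses):
    """Return the common IP version (4 or 6) of all given addresses.

    Raises ValueError on a mixed IPv4/IPv6 plan, which CUDB does not
    support (no dual stack).
    """
    versions = {ipaddress.ip_address(a).version for a in addresses}
    if len(versions) != 1:
        raise ValueError("Mixed IPv4/IPv6 address plan is not supported")
    return versions.pop()

# Hypothetical all-IPv4 plan passes the check:
check_same_family(["192.168.100.2", "192.168.117.3", "192.168.151.4"])
```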
2.5 Time Synchronization
The time synchronization of the blades or VMs in the CUDB node is essential: it enables the use of proper timestamps in database operations, and helps to correlate log files in case of (fault) tracing activities.
Besides the internal time synchronization of single CUDB node components, all the CUDB nodes in the system must be synchronized as well. Therefore, the configured NTP servers must be the same in all CUDB nodes. In case the availability of the NTP servers is limited to specific CUDB node site locations, all local NTP servers on each CUDB site must be synchronized with each other to facilitate the full CUDB system synchronization.
The configuration of the external NTP servers is performed with the cluster.conf LDE configuration file, and is used then by all the blades or VMs in a CUDB node.
| Note: | For CUDB systems deployed on native BSP 8100, two internal NTP virtual networks are needed (NTP Left and NTP Right). The IP ranges of these virtual networks are always on the 192.168.12.0/24 and 192.168.13.0/24 networks. These ranges are the same in all CUDB nodes, and are neither visible nor routable outside the CUDB nodes. |
2.6 Traffic Flows
The following sections describe the traffic flows managed by a CUDB node. The sections contain only general information, as specific details can vary depending on the capabilities provided by the routing and switching solutions of the infrastructure used. Therefore, the section does not cover the following topics:
2.6.1 Incoming Traffic
This section details traffic flows coming from external entities and other CUDB nodes across a CUDB system. For each traffic flow, the following information is provided:
2.6.1.1 Incoming OAM Traffic
Table 3 shows different incoming OAM traffic types.
| Traffic Type | Description | Access Point | Enabled on Networks | Target Pool | Distribution Policy |
|---|---|---|---|---|---|
| | SSH access to the SCs from the OAM customer network for OAM purposes. The accessed SC corresponds to the one holding the OAM North Bound Interface (NBI) platform. This interface is managed by the Common Operation and Maintenance (COM) CBA component, which ensures that the configuration model console (or Command Line Interface, CLI) can be executed in the accessed SC. Any other OAM operation not specifically related to the configuration model can also be executed in the accessed SC. | TCP port 22(1) | | SCs_rr | Round-robin |
| | | TCP port 830(1) | | SCs_rr | Round-robin |
| | | | | SCs_rr | Round-robin |
| | Access to BSP configuration interface. | TCP port 2024 | BSP-NBI | N/A | N/A |
| | Access to CMX (only in case of troubleshooting). | TCP port 22 | BSP-NBI | N/A | N/A |
2.6.1.2 Incoming LDAP Traffic
Table 4 shows different incoming LDAP traffic types.
| Traffic Type | Description | Access Point | Enabled on Networks | Target Pool | Distribution Policy |
|---|---|---|---|---|---|
| CUDB_FE LDAP | Point of access to LDAP FEs for applications from the CUDB_FE customer network. | TCP port 389(1) | FE | PLs_lc | Least connection |
| CUDB_FE LDAPS | Point of access to LDAP FEs for applications from the CUDB_FE customer network, using the secure LDAP protocol. | TCP port 636(1) | FE | PLs_lc | Least connection |
| | Point of access to LDAP FEs from provisioning equipment through the CUDB_PROVISIONING customer network. | TCP port 389(2) | PROVISIONING | PLs_lc | Least connection |
| | Point of access to LDAP FEs from provisioning equipment through the CUDB_PROVISIONING customer network, using the secure LDAP protocol. | TCP port 636(2) | PROVISIONING | PLs_lc | Least connection |
| | Point of access for LDAP proxy traffic from local and other CUDB nodes. | TCP port 389(3) | SITE | PLs_rr | Round-robin |
| | Point of access for LDAP proxy traffic from local and other CUDB nodes, using LDAP with StartTLS. | TCP port 389(3) | SITE | PLs_rr | Round-robin |
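The two distribution policies in the table above can be contrasted with a short sketch: round-robin cycles through pool members in a fixed order, while least-connection picks the member currently serving the fewest connections. The pool member names and the selection classes below are illustrative simplifications, not CUDB components.

```python
# Simplified illustration of the two distribution policies used by the
# incoming LDAP traffic flows: round-robin and least-connection.
from itertools import cycle

class RoundRobinPool:
    """Cycle through pool members in order, regardless of load."""
    def __init__(self, members):
        self._members = cycle(members)

    def select(self):
        return next(self._members)

class LeastConnectionPool:
    """Pick the member with the fewest active connections."""
    def __init__(self, members):
        self.connections = {m: 0 for m in members}

    def select(self):
        member = min(self.connections, key=self.connections.get)
        self.connections[member] += 1
        return member

    def release(self, member):
        self.connections[member] -= 1

pls_rr = RoundRobinPool(["PL-1", "PL-2"])
print([pls_rr.select() for _ in range(4)])  # ['PL-1', 'PL-2', 'PL-1', 'PL-2']

pls_lc = LeastConnectionPool(["PL-1", "PL-2"])
first = pls_lc.select()   # 'PL-1' (ties broken by insertion order)
second = pls_lc.select()  # 'PL-2'
pls_lc.release(first)     # the connection to 'PL-1' ends
print(pls_lc.select())    # 'PL-1' again: it now has the fewest connections
```

Least-connection suits long-lived LDAP sessions whose durations vary, whereas round-robin is adequate when requests are short and uniform.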
2.6.1.3 Incoming BC Traffic
Table 5 shows different incoming Blackboard Coordinator (BC) traffic types.
| Traffic Type | Description | Access Point | Enabled on Networks | Target Pool | Distribution Policy |
|---|---|---|---|---|---|
| | Handling traffic towards the BC server processes running in the node. | TCP port range 9511:9513(1) | SITE | BCServers_pool_rr | Round-robin |
| | Handling traffic from the BC server processes acting as followers towards the BC server process acting as leader running in the node. | TCP port range 4511:4513(1) | SITE | BCServers_pool_rr | Round-robin |
| | Handling traffic for the leader election mechanism of the BC cluster. | TCP port range 5511:5513(1) | SITE | BCServers_pool_rr | Round-robin |
2.6.1.4 Incoming PLDB Traffic
Table 6 shows different incoming PLDB traffic types.
| Traffic Type | Description | Access Point | Enabled on Networks | Target Pool | Distribution Policy |
|---|---|---|---|---|---|
| | Handling traffic towards the database cluster access servers that belong to the PLDB cluster in the node. | TCP port 15000(1) This is the recommended value for the CudbPlGroup.accessPort attribute (refer to CUDB Node Configuration Data Model Description). If this recommendation is not followed, correspondence must be ensured between the pool value and the configuration model. | SITE | PLs_rr_pldb | Round-robin |
| | Handling traffic towards the database cluster master replication server for the first replication channel that belongs to the PLDB cluster in the node. | TCP port 15001(1) This is the recommended value for the CudbPlGroup.masterReplicationChannel1Port attribute (refer to CUDB Node Configuration Data Model Description). If this recommendation is not followed, correspondence must be ensured between the pool value and the configuration model. | SITE | PLs_rr_pldb | Round-robin |
| | Handling traffic towards the database cluster master replication server for the second replication channel that belongs to the PLDB cluster in the node. | TCP port 15002(1) This is the recommended value for the CudbPlGroup.masterReplicationChannel2Port attribute (refer to CUDB Node Configuration Data Model Description). If this recommendation is not followed, correspondence must be ensured between the pool value and the configuration model. | SITE | PLs_rr_pldb | Round-robin |
2.6.1.5 Incoming DSG Traffic
Table 7 shows different incoming Data Store Unit Group (DSG) traffic types.
| Traffic Type | Description | Access Point | Enabled on Networks | Target Pool | Distribution Policy |
|---|---|---|---|---|---|
| | Handling traffic towards the database cluster access servers belonging to a local data store, provided that the node contains a data store hosting data for DSG number <g>. One or more flows of this type are defined, corresponding to the CudbDsGroup objects defined in the configuration data model (refer to CUDB Node Configuration Data Model Description). | TCP port following the formula: 15000 + (<g> * 10)(1) This is the recommended value for the CudbDsGroup.accessPort attribute (refer to CUDB Node Configuration Data Model Description). If this recommendation is not followed, correspondence must be ensured between the pool value and the configuration model. | SITE | PLs_rr_ds<g> | Round-robin |
| | Handling traffic towards the database cluster access server acting as the master for the first replication channel belonging to a local data store, provided that the node contains a data store hosting data for DSG number <g>. One or more flows of this type are defined, corresponding to the CudbDsGroup objects defined in the configuration data model (refer to CUDB Node Configuration Data Model Description). | TCP port following the formula: 15000 + (<g> * 10) + 1(1) This is the recommended value for the CudbDsGroup.masterReplicationChannel1Port attribute (refer to CUDB Node Configuration Data Model Description). If this recommendation is not followed, correspondence must be ensured between the pool value and the configuration model. | SITE | PLs_rr_ds<g> | Round-robin |
| | Handling traffic towards the database cluster access server acting as the master for the second replication channel belonging to a local data store, provided that the node contains a data store hosting data for DSG number <g>. One or more flows of this type are defined, corresponding to the CudbDsGroup objects defined in the configuration data model (refer to CUDB Node Configuration Data Model Description). | TCP port following the formula: 15000 + (<g> * 10) + 2(1) This is the recommended value for the CudbDsGroup.masterReplicationChannel2Port attribute (refer to CUDB Node Configuration Data Model Description). If this recommendation is not followed, correspondence must be ensured between the pool value and the configuration model. | SITE | PLs_rr_ds<g> | Round-robin |
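The recommended PLDB and DSG port assignments above follow a simple scheme: PLDB uses the fixed ports 15000-15002, and each DSG <g> offsets its three ports from 15000 by <g> * 10. The function name and the dictionary labels in the sketch below are illustrative, not part of CUDB.

```python
# Compute the recommended access and replication ports for a DSG,
# following the formulas in Tables 6 and 7 (g = 0 yields the PLDB
# ports 15000-15002).

def dsg_ports(g: int) -> dict[str, int]:
    base = 15000 + g * 10
    return {
        "access": base,                      # CudbDsGroup.accessPort
        "replication_channel_1": base + 1,   # masterReplicationChannel1Port
        "replication_channel_2": base + 2,   # masterReplicationChannel2Port
    }

print(dsg_ports(1))
# {'access': 15010, 'replication_channel_1': 15011, 'replication_channel_2': 15012}
print(dsg_ports(3)["access"])  # 15030
```

Because these are only recommended values, any deployment that deviates from the formulas must keep the load-balancer pool definitions aligned with the configuration model, as the table notes state.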
2.6.2 Outgoing Traffic
This section describes the details of the traffic originating in a CUDB node towards external entities and other CUDB nodes. The subsections are organized by the type of traffic originated from CUDB and by the outgoing gateway used.
For each traffic flow, the following information is provided:
2.6.2.1 Outgoing OAM Traffic
Table 8 shows different outgoing OAM traffic types.
| Traffic Type | Description | Destination IP Address | Destination Port | Source Address | Gateway |
|---|---|---|---|---|---|
| | SNMP traffic generated from the node towards the NMS acting as trap collector. | IP address stated in the ESA V3 trap forwarding table configuration file (/home/cudb/oam/faultMgmt/config/masterAgent/trapDestCfg.xml).(1) | Port stated in the ESA V3 trap forwarding table configuration file (/home/cudb/oam/faultMgmt/config/masterAgent/trapDestCfg.xml).(1) | OAM_VIP | OAM network |
| | NTP requests | NTP servers configured with the ntp parameter in the /cluster/etc/cluster.conf file. | UDP port 123 | OAM_VIP | OAM network |
| | Authentication LDAP queries sent to an external LDAP server.(2) | IP addresses stated in the primaryServer and secondaryServer attributes in the corresponding CudbExternalAuthServer model class instance. | TCP port 389 or 636 | OAM_VIP | OAM network |
2.6.2.2 Outgoing PG Traffic
Table 9 shows outgoing PG traffic types.
| Traffic Type | Description | Destination IP Address | Destination Port | Source Address | Gateway |
|---|---|---|---|---|---|
| | Outgoing provisioning JMX notifications towards the provisioning gateways. There is an outgoing traffic flow of this type per port in the pgNodeIpAddresses multi-valued attribute of the CudbProvisioningGatewayConfig configuration model class instance. For more information, refer to CUDB Node Configuration Data Model Description. | Each IP address configured in the configuration model. | Each port configured in the configuration model. | OAM_VIP | OAM network |
| CUDB Out Provisioning Assurance <port> | Outgoing HTTP traffic towards the provisioning gateways. There is an outgoing traffic flow of this type per port in the replayRequestURL and replayStatusURL attributes of the CudbProvGatewayEndPoint configuration model class instance. For more information, refer to CUDB Node Configuration Data Model Description. | Each IP address configured in the configuration model. | Each port configured in the configuration model. | OAM_VIP | OAM network |
2.6.2.3 Outgoing FE Traffic
Table 10 shows outgoing FE traffic.
| Traffic Type | Description | Destination IP Address | Destination Port | Source Address | Gateway |
|---|---|---|---|---|---|
| CUDB Out NOTIFICATIONS <port> | Outgoing traffic notifications towards application FEs. There is an outgoing flow of this type per port in the CudbNotificationEndPoint configuration model class instance. | IP address or host name stated in the URI attribute in the corresponding CudbNotificationEndPoint class instance. | Port stated in the URI attribute in the corresponding CudbNotificationEndPoint class instance. | FE_VIP | FE network |
2.6.2.4 Outgoing LDAP Traffic
Table 11 shows different outgoing LDAP traffic types.
| Traffic Type | Description | Destination IP Address | Destination Port | Source Address | Gateway |
|---|---|---|---|---|---|
| | | Any IP (0.0.0.0). The specific destination is a remote node SITE_VIP. | TCP port 389 | SITE_VIP(1) | SITE network |
| | | Any IP (0.0.0.0). The specific destination is a remote node SITE_VIP. | TCP port 389 | SITE_VIP(1) | SITE network |
2.6.2.5 Outgoing BC Traffic
Table 12 shows different outgoing BC traffic types.
| Traffic Type | Description | Destination IP Address | Destination Port | Source Address | Gateway |
|---|---|---|---|---|---|
| | Outgoing traffic towards the BC server processes running in other CUDB nodes. | Any IP (0.0.0.0). The specific destination is a remote node SITE_VIP. | TCP port range 9511:9513 | SITE_VIP(1) | SITE network |
| | Outgoing traffic from the BC server processes acting as followers towards the BC server process acting as leader running in another CUDB node. | Any IP (0.0.0.0). The specific destination is a remote node SITE_VIP. | TCP port range 4511:4513 | SITE_VIP(1) | SITE network |
| | Outgoing traffic for the leader election mechanism of the BC cluster. | Any IP (0.0.0.0). The specific destination is a remote node SITE_VIP. | TCP port range 5511:5513 | SITE_VIP(1) | SITE network |
2.6.2.6 Outgoing SSH Traffic
2.6.2.7 Outgoing PLDB Traffic
Table 14 shows different outgoing PLDB traffic types.
| Traffic Type | Description | Destination IP Address | Destination Port | Source Address | Gateway |
|---|---|---|---|---|---|
| | Outgoing connections towards the database cluster access servers belonging to the PLDB in other CUDB nodes. | Any IP (0.0.0.0). The specific destination is a remote node SITE_VIP. | TCP port 15000 | SITE_VIP(1) | SITE network |
| | Outgoing connections towards the database cluster master server for the first PLDB replication channel in other CUDB nodes. | Any IP (0.0.0.0). The specific destination is a remote node SITE_VIP. | TCP port 15001 | SITE_VIP(1) | SITE network |
| | Outgoing connections towards the database cluster master server for the second PLDB replication channel in other CUDB nodes. | Any IP (0.0.0.0). The specific destination is a remote node SITE_VIP. | TCP port 15002 | SITE_VIP(1) | SITE network |
2.6.2.8 Outgoing DSG Traffic
Table 15 shows different outgoing DSG traffic types.
| Traffic Type | Description | Destination IP Address | Destination Port | Source Address | Gateway |
|---|---|---|---|---|---|
| | Outgoing connections towards the database cluster access servers belonging to data stores in other CUDB nodes hosting data for DSG <g>. | Any IP (0.0.0.0). The specific destination is a remote node SITE_VIP. | TCP port following the formula: 15000 + (<g> * 10) | SITE_VIP(1) | SITE network |
| | Outgoing connections towards the database cluster server in other CUDB nodes acting as the master for the first replication channel for DSG <g>. | Any IP (0.0.0.0). The specific destination is a remote node SITE_VIP. | TCP port following the formula: 15000 + (<g> * 10) + 1 | SITE_VIP(1) | SITE network |
| | Outgoing connections towards the database cluster server in other CUDB nodes acting as the master for the second replication channel for DSG <g>. | Any IP (0.0.0.0). The specific destination is a remote node SITE_VIP. | TCP port following the formula: 15000 + (<g> * 10) + 2 | SITE_VIP(1) | SITE network |
2.6.2.9 CUDB OAM Centralized Security Event Logging Traffic
Table 16 shows the outgoing traffic type of the CUDB OAM Centralized Security Event Logging function.
| Traffic Type | Description | Destination IP Address | Destination Port | Source Address | Gateway |
|---|---|---|---|---|---|
| | Security logging events sent to an external server. | IP address stated in the externalLogServerIp attribute in the corresponding CudbExternalLogServer model class instance. | TCP port stated in the externalLogServerPort attribute in the corresponding CudbExternalLogServer model class instance. | CUDB OAM | OAM network |
2.7 Routing
Routing is configured in the gateways belonging to each network existing in the site. Defining the low-level details of this configuration is out of the scope of this document, but the following constraints must be taken into account:
2.8 Firewall Configuration
When an external firewall is used, it must allow the incoming and outgoing traffic flows described in Incoming Traffic and Outgoing Traffic, respectively, to traverse it.
Besides this traffic, the proprietary protocols used in CUDB for the supervision and replication flows carried over the CUDB system, and the external VLAN/VPN, must also be allowed in the external firewalls protecting the CUDB external domain.
| Note: | SOAP-based CUDB notification traffic must also be allowed in the external firewalls. This traffic does not use a fixed port, but the ports configured in the CUDB node. Refer to CUDB Node Configuration Data Model Description for further information. |
Depending on the application, more protocols may be required to pass the firewall.
Table 17 shows the traffic and ports that must be allowed in the external firewalls.
| Traffic | Port | Networks |
|---|---|---|
| LDAP/LDAPS | TCP/389 (In/Out) TCP/636 (In/Out) | CUDB_FE CUDB_SITE(1) CUDB_OAM CUDB_PROVISIONING |
| | UDP/123 (Out) | CUDB_OAM |
| Database Cluster | TCP/15000-17552 (In/Out)(2) | CUDB_SITE |
| | TCP/22 (In/Out) | CUDB_SITE CUDB_OAM |
| | TCP/830 (In) | CUDB_SITE CUDB_OAM |
| BC Servers Access | TCP/9511-9513 (In/Out) | CUDB_SITE |
| BC Servers Followers | TCP/4511-4513 (In/Out) | CUDB_SITE |
| BC Servers Leader Election | TCP/5511-5513 (In/Out) | CUDB_SITE |
| | UDP/<nms_port> (Out)(3) | CUDB_OAM |
| Notifications to PG (JMX protocol) | TCP/<pg_ports> (Out)(4) | CUDB_OAM(5) |
| Provisioning Assurance towards PG | TCP/<pg_ports> (Out)(4) | CUDB_OAM(5) |
| Notifications to applications (SOAP protocol) | TCP/<defined EP ports> (Out)(6) | CUDB_FE |
| IGMP protocols | | All networks |
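The allow-list in Table 17 can serve as machine-readable input for firewall provisioning. The sketch below renders a few of the table's entries as iptables-style accept rules; the rule syntax, the tuple layout, and the use of comments to carry the network name are illustrative assumptions, since the actual firewall product and its configuration language are deployment specific.

```python
# Sketch: render a subset of the Table 17 allow-list as iptables-style
# rules. The syntax shown is illustrative; real deployments use whatever
# firewall product protects the CUDB external domain.

RULES = [
    # (protocol, port_spec, direction, network)
    ("tcp", "389", "in/out", "CUDB_FE"),
    ("tcp", "636", "in/out", "CUDB_FE"),
    ("udp", "123", "out", "CUDB_OAM"),
    ("tcp", "15000:17552", "in/out", "CUDB_SITE"),
    ("tcp", "9511:9513", "in/out", "CUDB_SITE"),
]

def render_rules(rules):
    lines = []
    for proto, ports, direction, network in rules:
        if direction in ("in", "in/out"):
            lines.append(
                f"-A INPUT  -p {proto} --dport {ports} "
                f"-m comment --comment {network} -j ACCEPT")
        if direction in ("out", "in/out"):
            lines.append(
                f"-A OUTPUT -p {proto} --dport {ports} "
                f"-m comment --comment {network} -j ACCEPT")
    return lines

for line in render_rules(RULES):
    print(line)
```

Keeping such a list in one place makes it easier to verify that the firewall configuration matches the ports actually configured in the CUDB node, including the variable ports noted above.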
3 Appendix: Quality of Service
By default, no Quality of Service (QoS) criteria are applied in CUDB to any network traffic flow, but QoS is optionally provided on request. To use QoS, traffic is marked with the proper Differentiated Services Code Point (DSCP). Based on this marking, network elements can prioritize traffic.
For the recommended DSCP marking values, see Table 18.
| Network | Traffic Flow | Port | DSCP Value |
|---|---|---|---|
| SYSMGMT OAM_VIP | CUDB_OAM Out NTP | UDP port 123 | 48 |
| SYSMGMT OAM_VIP SITE_VIP | CUDB_OAM In/Out SSH | TCP port 22 | 16 |
| OAM_VIP | CUDB_OAM In SNMP CUDB_OAM Out SNMP | UDP port 60 UDP port 162 | 32 |
| OAM_VIP SITE_VIP | CUDB_OAM In NETCONF | TCP port 830 | 16 |
| OAM_VIP | CUDB_OAM Out PG Notifications | TCP ports 8994/8099 | 40 |
| FE_VIP | CUDB_FE In LDAP CUDB_FE In LDAPS | TCP port 389 TCP port 636 | 40 |
| FE_VIP | CUDB_FE Out SOAP | TCP port <defined EP ports> | 40 |
| SITE_VIP | CUDB_SITE In/Out Database Cluster | TCP port range 15000:17552 | 8 |
| SITE_VIP | CUDB_SITE In/Out LDAP CUDB_SITE In/Out LDAPS | TCP port 389 TCP port 636 | 40 |
| SITE_VIP | CUDB_SITE In/Out BC Access CUDB_SITE In/Out Followers CUDB_SITE In/Out LeaderElection | TCP port range 9511:9513 TCP port range 4511:4513 TCP port range 5511:5513 | 48 |
| PROVISIONING_VIP | CUDB_PROVISIONING In LDAP CUDB_PROVISIONING In LDAPS | TCP port 389 TCP port 636 | 40 |
| Note: | In CUDB, the DSCP code is not applied to any traffic not included in Table 18. |