CUDB Glossary of Terms and Acronyms

Contents

1 Introduction
1.1 Scope
1.2 Revision Information
2 Terms
Glossary

1   Introduction

This glossary defines the terms, definitions, acronyms and abbreviations used in the Customer Product Information (CPI) of the Ericsson Centralized User Database (CUDB).

1.1   Scope

This document defines the terms, definitions, acronyms and abbreviations in alphabetical order. Some acronyms may have more than one definition. In such cases, all definitions are listed.

1.2   Revision Information


Other than editorial changes, this document has been revised as follows:

Rev. A
Rev. B
Rev. C
Rev. D
Rev. E
Rev. F
Rev. G
Rev. H

2   Terms

Association An association links Multi Service Consumers (MSCs) to each other. Associations are supported in CUDB as Lightweight Directory Access Protocol (LDAP) entries with the following structure, used to contain information on a set of MSCs:

assocId=<assocId>,ou=associations,<CUDB base DN>

Asymmetrical Partition Situation A system split where the number of visible sites differs from the number of non-visible sites. There are two types of asymmetrical split situations for a group:
  • Majority Situation if the number of visible sites is higher than the number of non-visible sites.
  • Minority Situation if the number of non-visible sites is higher than the number of visible sites.
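The classification above reduces to a comparison of two site counts. The following sketch illustrates it (the function and its name are illustrative, not part of any CUDB interface; auto removed sites are assumed to be excluded from both counts before calling it):

```python
def classify_split(visible: int, non_visible: int) -> str:
    """Classify a split situation from site counts (illustrative only)."""
    if visible > non_visible:
        return "majority"
    if visible < non_visible:
        return "minority"
    # Equal counts: the symmetrical (split brain) case.
    return "symmetrical"

print(classify_split(3, 1))  # majority
print(classify_split(1, 3))  # minority
print(classify_split(2, 2))  # symmetrical
```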
Auto Removed Site A site is marked as auto removed (AR) by the nodes of the sites that are part of a majority partition when it is considered non-reachable (that is, not part of the majority).

This can occur either because all the nodes in the auto removed site have failed, or because the communication links towards it are down.

Binary Large Object Binary Large Objects (BLOBs) are collections of binary data stored as a single entity.
CUDB Node The CUDB node is the building unit into which the CUDB system is divided. A CUDB node holds one Processing Layer (PL) Database (PLDB), and one or several Data Store (DS) Units accessed through an LDAP resource layer.
CUDB Site A CUDB site is a set of CUDB nodes interconnected by a virtual network infrastructure. The nodes of a CUDB site share the same network components used to connect to the Internet Protocol (IP) backbone (switches, routers, and so on) that links the CUDB site with other CUDB sites.
CUDB System The CUDB system is a system consisting of interconnected CUDB nodes. CUDB is a distributed database system exposed as an LDAP directory, and made up of network-connected CUDB nodes spread over the operator network. Logically, the CUDB system is divided into two internal tiers depending on function: the Processing Layer (PL) and the Data Store (DS) Layer.
Data Distribution Data distribution means the distribution of subscriber data across different DSGs. CUDB stores all the information of a specific subscriber in just one DSG, but the data of different subscribers can be allocated to different DSGs.
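The one-subscriber-one-DSG property can be illustrated with a deterministic assignment function. The hash scheme below is purely an assumption for illustration; CUDB's actual distribution mechanism is not described here:

```python
import hashlib

def assign_dsg(subscriber_id: str, num_dsgs: int) -> int:
    """Map a subscriber identity to exactly one DSG index (illustrative hash scheme)."""
    digest = hashlib.sha256(subscriber_id.encode()).hexdigest()
    return int(digest, 16) % num_dsgs

# All data for one subscriber lands in a single DSG;
# different subscribers may land in different DSGs.
print(assign_dsg("imsi-240990000000001", 4))
```

Because the function is deterministic, every lookup for the same subscriber resolves to the same DSG.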
Data Reconciliation Data reconciliation is a system-wide process that fixes potential inconsistencies between the PLDB and the DSGs. Such inconsistencies can occur, for example, after mastership changes in the PLDB or the DSG. During data reconciliation, offending data is deleted from the DSG where it was hosted in CUDB.
Data Store Layer The Data Store Layer holds all subscriber information split into disjoint partitions, that is, DSGs. Physically, DSGs are a set of replicated data repositories that can be accessed by the PLDB. DSGs cannot be directly accessed by the CUDB clients.
Default Zone The default zone is a configurable, optional zone assigned to subscribers that are provisioned without a zone in the provisioning request. If no specific default zone is configured in the CUDB system, the Implicit Zone will be used as the default zone.
Distributed Search Distributed search is a search query that encompasses several (potentially all) distributed storage units.
Distribution Entry

A Distribution Entry (DE) is an LDAP entry stored in the PLDB containing distribution information. Its first-level child entries and below are stored in DSGs. CUDB has two types of built-in DEs:

  • Multi-Service Consumers that match the following DN pattern: mscId=<mscId>,ou=multiSCs,<CUDB Root>
  • Associations that match the following DN pattern: assocId=<assocId>,ou=associations,<CUDB Root>.

CUDB also allows the definition of custom DEs.
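The two built-in DE patterns can be checked with simple string matching. This is a sketch only; the regular expressions and the example root DN are illustrative assumptions, not CUDB code:

```python
import re

# Assumed root DN for the example.
CUDB_ROOT = "dc=operator,dc=com"

# Illustrative patterns for the two built-in DE types.
MSC_DE = re.compile(rf"^mscId=[^,]+,ou=multiSCs,{re.escape(CUDB_ROOT)}$")
ASSOC_DE = re.compile(rf"^assocId=[^,]+,ou=associations,{re.escape(CUDB_ROOT)}$")

def de_type(dn: str) -> str:
    """Return which built-in DE pattern a DN matches, if any."""
    if MSC_DE.match(dn):
        return "multi-service consumer"
    if ASSOC_DE.match(dn):
        return "association"
    return "not a built-in DE"

print(de_type(f"mscId=42,ou=multiSCs,{CUDB_ROOT}"))       # multi-service consumer
print(de_type(f"assocId=7,ou=associations,{CUDB_ROOT}"))  # association
```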

Double Geographical Redundancy (1+1 Redundancy) Double geographical redundancy (also known as 1+1 redundancy) is an optional redundancy configuration where each piece of subscriber data is stored in two different database clusters, hosted in different CUDB nodes of the CUDB system.

Consistency among database cluster replicas is achieved through asynchronous replication in a master-slave setup.

DS Data DS data is the data stored in the Data Store Unit Groups (DSG).
DS Entry A DS entry is an LDAP entry stored in the DSGs.
DSG Master Replica The DSG master replica (or simply DS master) is the particular instance of a DSG that processes all traffic. DSG masters are dynamically assigned in the CUDB system. Internally, CUDB reaches the appropriate master replica during any operation.
DSG Slave Replica Every instance of a DSG that is not the DSG master replica is a DSG slave replica (or simply DS slave). Slave replicas can process traffic related to massive search operations, subscriber-specific queries, dirty-reads in split situations, and can also perform counter collection. However, slave replicas do not receive regular subscriber traffic. DSG slave replicas are synchronized with the DSG master by replication.
DS Unit A DS Unit is a clustered database system that stores in memory a replica of the data belonging to a subscriber partition.
Degraded DS Unit A DS Unit is degraded if one of the two database processes in the DS Unit fails.
DS Unit Group A DS Unit Group (DSG) is a subscriber partition, the basic distribution unit of subscriber data. It contains the DSG master replica, and optionally one or two DSG slave replicas.
Entry An entry is a piece of data stored in the databases of the CUDB system that constitutes a single entity.
Fixed Entry A fixed entry is an LDAP entry provided by CUDB during installation as part of the basic Directory Information Tree (DIT) for CUDB.

For example, the following associations branch is a fixed entry of the system: ou=associations,<CUDB Root Entry>

Geographical Redundancy Geographical redundancy is the level of DSG redundancy configured on a CUDB system. It specifies the number of DSG slave replicas configured for each DSG. In terms of redundancy level, the following two configurations are supported:
  • Double geographical redundancy (each DSG has 1 slave replica)
  • Triple geographical redundancy (each DSG has 2 slave replicas)
High Availability The High Availability feature ensures that the CUDB system is prepared to protect and recover data from interruptions or failures automatically, and in a short time.
Identity Identity is the identifier of a subscriber. Identities in CUDB are supported by special LDAP alias entries located under the ou=identities,<CUDB base DN> branch that point to the corresponding subscriber entry.

Identities are optimized for efficient memory occupation and to provide fast access to the subscriber data.
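A minimal sketch of what alias resolution achieves, using a plain dictionary in place of the identities branch (the identityId attribute name and all DNs are illustrative assumptions, not the CUDB schema):

```python
# Illustrative mapping: identity alias DN -> subscriber entry DN.
# In CUDB, identities live under ou=identities,<CUDB base DN> as LDAP aliases.
identities = {
    "identityId=imsi-240990000000001,ou=identities,dc=example":
        "mscId=42,ou=multiSCs,dc=example",
}

def resolve_identity(alias_dn: str) -> str:
    """Follow the alias to the subscriber entry it points to."""
    return identities[alias_dn]

print(resolve_identity("identityId=imsi-240990000000001,ou=identities,dc=example"))
```

A subscriber with several identities simply has several alias entries pointing at the same subscriber entry.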

Implicit Zone An implicit zone is a reserved, non-configurable zone in a CUDB system. Implicit zones are assigned to CUDB nodes that are not assigned to any particular zone when created (or that were created before feature activation).
Inconsistency Inconsistency in CUDB means that a DSG has an orphan entry. An orphan entry is an entry that has no matching parent entry in the PLDB.
LDAP Directory Information Tree The LDAP Directory Information Tree (also called directory tree, or DIT) is a hierarchically organized collection of LDAP entries.

Information in an LDAP directory is organized into one or several hierarchical structures. The top of the hierarchy contains the base entry, while the rest of the entries are organized beneath the base entry in a tree-like structure. Each node in the hierarchical structure is an entry, defined with a Distinguished Name (DN) and several attributes.
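The tree-like organization described above can be pictured by splitting DNs into their RDN components and nesting them, base entry first. This is a toy sketch, not an LDAP client; the example DNs are assumptions:

```python
def dit_insert(tree: dict, dn: str) -> None:
    """Insert a DN into a nested-dict DIT; RDNs are applied base-first down the tree."""
    node = tree
    for rdn in reversed(dn.split(",")):  # base entry first, leaf entry last
        node = node.setdefault(rdn.strip(), {})

tree: dict = {}
for dn in ["ou=multiSCs,dc=example", "mscId=1,ou=multiSCs,dc=example"]:
    dit_insert(tree, dn)

print(tree)  # {'dc=example': {'ou=multiSCs': {'mscId=1': {}}}}
```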

LDAP Front End The LDAP Front End (FE) is an element in a CUDB node that provides LDAP processing. It is in charge of handling all LDAP requests received from application FEs, from the provisioning system, and from other CUDB nodes received through proxy.
Majority Situation In a System Split situation, a CUDB site is in majority split situation if the number of visible sites from the affected CUDB site is greater than half of the total number of CUDB sites in the CUDB system.
Note:  
Auto removed sites are not counted as part of the total number of CUDB sites.

Massive Search Massive searches are search queries to the CUDB system that return multiple (potentially very many) entries from one, several, or potentially all database units.
Minority Situation In a System Split situation, a CUDB site is in minority split situation if the number of visible sites from the affected CUDB site is less than half of the total number of CUDB sites in the CUDB system.
Note:  
Auto removed sites are not counted as part of the total number of CUDB sites.

Multi Service Consumer A Multi Service Consumer (MSC) is a network entity that consumes a set of services from the network. The MSC is identified by one or several identities for each service. In some cases, these identities are shared by more than one service, or linked by a specific relation. MSCs are supported as LDAP subscriber entries with the following structure: mscId=<mscId>,ou=multiSCs,<CUDB base DN>.
NETCONF The Network Configuration Protocol (NETCONF) is a network management protocol developed and standardized by the IETF. NETCONF provides mechanisms to install, manipulate, and delete the configuration of network devices.
Notification Notifications are messages sent from the CUDB system to another system whenever a monitored data element stored in CUDB has changed.
Optimized Search Optimized search is a search query that spans several (potentially all) distributed storage units.
Payload Blade In native deployments on BSP 8100, blades that provide storage capacity for PLDB and DSG, as well as LDAP processing in a CUDB node. They boot over the network and depend on the services provided by the System Controllers (SCs).

Every blade in a CUDB system that runs unit SW processes (as opposed to the SCs) is considered a payload blade.

Payload Virtual Machine In cloud deployments, Virtual Machines (VMs) that provide storage capacity for PLDB and DSG, as well as LDAP processing in a CUDB node. They boot over the network and depend on the services provided by the System Controllers (SCs).

Every VM in a CUDB system that runs unit SW processes (as opposed to the SCs) is considered a payload VM.

Processing Layer Logically, the Processing Layer (PL) is the northbound CUDB layer, processing client LDAP requests with the ability to locate and retrieve any subscriber information requested over the entire Data Store Layer. The Processing Layer on each CUDB node contains the LDAP protocol handling functions and a replicated copy of the PLDB, which stores the relationships of all subscriber identities and the physical location of application FE profiles across the CUDB Data Store Layer.
Processing Layer Database The Processing Layer Database (PLDB) is a clustered database replicated in all CUDB nodes containing the indexing information for the data stored in the DSGs. It also stores the complete set of subscriber identities and can be used to store common subscriber or application FE data, as well as any other data.
PLDB Data PLDB data is the data stored in the PLDB. Such data includes the DEs, administrative data, and the common data of all CUDB nodes.
PLDB Entry PLDB entries are LDAP entries stored in the PLDB.
Reallocation Reallocation is the process of moving stored data from one DSG to another DSG to provide administrative management of the location of subscriber or resource data. Reallocation is also known as geographical subscription and resource management.
Redundancy Level The redundancy level is defined as the number of LDAP FEs that can be taken down without the CUDB node losing its required level of performance.
Replication Channel Replication channels are the communication lines established between the replication servers in each DSG. Master and slave replicas have their own replication channels.
RESTful In computing, Representational State Transfer (REST) is the software architectural style of the World Wide Web. To the extent that systems conform to the constraints of REST, they can be called RESTful.
Safekeeping Safekeeping means that data is stored in such a physical or logical way (or both) that minimizes the chance of unexpected events resulting in data corruption or the complete loss of data.
Scale-out In distributed database terms, scale-out refers to increasing the overall data storage capacity of a database by adding new infrastructure components.
Search Index Search indexes are indexes used to facilitate massive search queries.
Symmetrical Split Situation Also known as split brain situation. In a System Split situation, a CUDB site is in symmetrical split situation if the number of visible sites and non-visible sites from the affected CUDB site is the same.
Note:  
Auto removed sites are not counted as part of the total number of CUDB sites.

System Controller The System Controllers (SCs) centralize the management functions of the Linux cluster in each CUDB node as well as Operation and Maintenance (OAM) and other auxiliary software processes. Each node contains two SCs for redundancy purposes. The SCs boot from the disk storage system and run services to support the payloads in the CUDB node.
System Split Situation A CUDB system can suffer network, node, or site failures that split the system into different subdivisions or partitions, or that cause single-site failures. Every CUDB node is associated with a site. A site is considered visible from another site as long as that other site can connect and report to it. Site visibility is always perceived from one of the CUDB nodes of a site, the site leader. Whenever one site or a group of sites is not visible, the leader considers the system to be in a split situation. Once the split situation is detected, the number of sites in the visible group is compared with the number of sites in the group of non-visible sites, and the type of split situation of the visible group is determined by this comparison.
Triple Geographical Redundancy (1+1+1 Redundancy) Triple geographical redundancy (also known as 1+1+1 redundancy) is an optional redundancy configuration where three copies of the same data partition are replicated to different CUDB nodes in the CUDB system.

One of these copies is considered the master replica and receives all read and write operations. The other two copies are considered slave replicas, and are updated asynchronously by replication whenever the master replica is updated.

Visibility A CUDB node located in one site has visibility of another site if it can communicate with at least one CUDB node located in this other site.

Zone A zone is a geographical area in which a set of CUDB nodes is physically located.

Glossary

3GPP
Third Generation Partnership Project
 
3PP
Third Party Product
 
AAA
Authentication, Authorization and Accounting
 
ACL
Access Control List
 
ADL
Active DS List
 
ALB
Abstract Load Balancer
 
AMC
Automatic Mastership Change
 
AMF
Availability Management Framework
 
aPBF
Advanced Policy-Based Forwarding
 
API
Application Programming Interface
 
AR
Auto Removed
 
ARP
Address Resolution Protocol
 
ASCII
American Standard Code for Information Interchange
 
AuC
Authentication Center
 
BC
Blackboard Coordination
 
BE
Back End
 
BFD
Bidirectional Forwarding Detection
 
BIOS
Basic Input/Output System
 
BLOB
Binary Large Object
 
BSP
Blade Server Platform
 
CA
Certification Authority
 
CAS
Customer Administration System
 
CBA
Component Based Architecture
 
CDC
Collision Detection Counter
 
CEE
Cloud Execution Environment
 
CLI
Command Line Interface
 
CLM
Cluster Management
 
CM
Configuration Management
 
CMW
Core Middleware
 
CMX
Component Main Switch
 
CMXB
Component Main Switch Board
 
CMXB3
Component Main Switch Board, version 3
 
CPI
Customer Product Information
 
CS
Cluster Supervisor
 
CSN
Change Sequence Number
 
CUDB
Ericsson Centralized User Database
 
DAC
Data Availability Coordination
 
DB
Database
 
DE
Distribution Entry
 
DHCP
Dynamic Host Configuration Protocol
 
DIT
Directory Information Tree
 
DL
Data Layer
 
DLA
Data Layered Architecture
 
DM
Data Model
 
DN
Distinguished Name
 
DNS
Domain Name System
 
DS
Data Store
 
DSCP
Differentiated Services Code Point
 
DSG
DS Unit Group
 
ECIM
Ericsson Common Information Model
 
ECMP
Equal-Cost Multipath
 
EGEM2
Enhanced Generic Ericsson Magazine version 2
 
EIR
Equipment Identity Register
 
ENUM
E.164 Number Mapping
 
EPC
Evolved Packet Core
 
ESA
Ericsson SNMP Agent
 
ETSI
European Telecommunications Standards Institute
 
eVIP
Evolved Virtual IP
 
FC
Fault Code
 
FE
Front End
 
FM
Fault Management
 
FQDN
Fully Qualified Domain Name
 
FTP
File Transfer Protocol
 
GbE
Gigabit Ethernet
 
GEP
Generic Ericsson Processor board
 
GEP3
Generic Ericsson Processor version 3
 
GEP5
Generic Ericsson Processor version 5
 
GEP7L
Generic Ericsson Processor version 7, Low Power
 
GPRS
General Packet Radio Service
 
GUI
Graphical User Interface
 
HA
High Availability
 
HLR
Home Location Register
 
HLR-FE
HLR Front End
 
HOT
Heat Orchestration Template
 
HSS
Home Subscriber Server
 
HTTP
Hypertext Transfer Protocol
 
HTTPS
Hypertext Transfer Protocol Secure
 
HW
Hardware
 
IANA
Internet Assigned Numbers Authority
 
ICMP
Internet Control Message Protocol
 
IETF
Internet Engineering Task Force
 
IMS
IP Multimedia Subsystem
 
IMSI
International Mobile Subscriber Identity
 
IoT
Internet of Things
 
IP
Internet Protocol
 
IPMI
Intelligent Platform Management Interface
 
IPSec
IP Security
 
ISDN
Integrated Services Digital Network
 
ISP
In-Service Performance
 
ITU
International Telecommunication Union
 
JMX
Java Management Extensions
 
KPI
Key Performance Indicator
 
L3
Layer 3
 
LAG
Link Aggregation
 
LAN
Local Area Network
 
LAND
Local Area Network Denial
 
LDAP
Lightweight Directory Access Protocol
 
LDAP FE
LDAP Front End
 
LDAPS
LDAP over SSL
 
LDE
Linux® Distribution Extension (This component can also appear in the documentation as LOTC and LDEwS.)
 
LDEwS
Linux® Distribution Extension with SLES (This component can also appear in the documentation as LOTC and LDE.)
 
LDIF
LDAP Data Interchange Format
 
LOTC
Linux Open Telecom Cluster (This component can also appear in the documentation as LDE and LDEwS.)
 
LM
License Manager
 
M2M
Machine-to-Machine
 
MAC
Media Access Control
 
MNP
Mobile Number Portability
 
MSC
Multi Service Consumer
 
MSISDN
Mobile Station ISDN Number
 
MW
Middleware
 
NBI
North Bound Interface
 
NDB
Network Database
 
NE
Network Element
 
NETCONF
Network Configuration Protocol
 
NFS
Network File System
 
NIR
Network Impact Report
 
NMS
Network Management System
 
NTP
Network Time Protocol
 
OAM
Operation and Maintenance
 
OID
Object Identifier
 
OPI
Operating Instruction
 
OS
Operating System
 
OSPF
Open Shortest Path First
 
OU
Organizational Unit
 
PBIST
Programmable Built-In Self-Test
 
PCRF
Policy and Charging Rules Function
 
PDU
Power Distribution Unit
 
PG
Provisioning Gateway
 
PL
Processing Layer
 
PLDB
Processing Layer Database
 
PM
Performance Management
 
POSIX
Portable Operating System Interface
 
PS
Packet Switch
 
PXE
Preboot eXecution Environment
 
QoS
Quality of Service
 
RAID
Redundant Arrays of Inexpensive Disks
 
RDBMS
Relational Database Management System
 
RDN
Relative Distinguished Name
 
RFC
Request For Comments
 
RMI
Remote Method Invocation
 
RPI
Replication Progress Information
 
RPM
Red Hat Package Manager
 
SA
Service Availability
 
SAF
Service Availability Forum
 
SAPC
Service-Aware Policy Controller
 
SASL
Simple Authentication and Security Layer
 
SC
System Controller
 
SCP
Secure Copy Protocol
 
SCXB3
System Control Switch Board version 3
 
SDL
System DS List
 
SDP
Software Delivery Package
 
SFTP
Secure File Transfer Protocol
 
SG
Security Gateway
 
SGSN
Serving GPRS Support Node
 
SLF
Subscription Locator Function
 
SLF4J
Simple Logging Facade for Java
 
SM
System Monitor
 
SNMP
Simple Network Management Protocol
 
SOAP
Simple Object Access Protocol
 
SPoA
Single Point of Access
 
SQL
Structured Query Language
 
SSH
Secure Shell
 
SSL
Secure Sockets Layer
 
SU
Service Unit
 
SW
Software
 
TAM
Take-All-Masters
 
TC
Transaction Coordinator
 
TFTP
Trivial File Transfer Protocol
 
TLS
Transport Layer Security
 
TPS
Transactions Per Second
 
UDC
User Data Consolidation
 
UDP
User Datagram Protocol
 
UPG
User Profile Gateway
 
URI
Uniform Resource Identifier
 
UUID
Universally Unique Identifier
 
vCUDB
Virtualized CUDB
 
VIM
Virtualized Infrastructure Manager
 
VIP
Virtual IP Address
 
VLAN
Virtual Local Area Network
 
VM
Virtual Machine
 
VNF
Virtualized Network Function
 
VNF-LCM
VNF Lifecycle Manager
 
VPN
Virtual Private Network
 
VRRP
Virtual Router Redundancy Protocol
 
VS
Virtual Server
 
WRR
Weighted Round Robin
 
XML
eXtensible Markup Language