1 Introduction
This document describes the data, software and configuration backup and restore procedures available in the Ericsson Centralized User Database (CUDB).
1.1 Document Purpose and Scope
The CUDB software, configuration, and data stored in the database of a CUDB node can be backed up and optionally transferred to an external storage. This backup can be then used to restore CUDB in case of system failure. The purpose of this document is to cover the available CUDB backup and restore procedures allowing secure backup and restore of CUDB software, configuration and data when needed.
1.3 Prerequisites
Before performing any of the operations described in this document, make sure that the solutions to the known issues described in the Release Notes are available.
1.4 Typographic Conventions
Typographic Conventions can be found in the following document:
2 Overview
This section provides an overview of the available backup and restore operations supported by the CUDB system.
The CUDB system supports the following two main types of backups:
For security reasons, only root or users in the cudbadmin group (using the sudo command) can access the created tar files.
Note that no other type of backup or restore procedure is supported. In particular, Virtual Machine (VM) snapshots must not be used as a substitute for either of the types listed above, as doing so can leave a CUDB deployment in an inconsistent state.
2.1 Data Backup and Restore
A CUDB system is composed of several data partitions, including the Processing Layer (PL) Database (PLDB) and a number of Data Store Groups (DSGs). This type of backup and restore is data centric: the data is distributed across the CUDB nodes.
This backup is stored on the disk storage system inside the CUDB node, and is not exported automatically to any external storage area. If the data backup is to be kept in external storage, the backup files can be extracted.
The data backup and restore types are as follows:
Data backup and restore can recover the following typical scenarios:
See Data Backup and Restore for more information on data backup and restore.
2.2 Software and Configuration Backup and Restore
This type of backup and restore is node driven, that is, not distributed: each CUDB node has its own software and configuration backup and restore.
This backup is stored on the disk storage system inside the CUDB node, and is not exported automatically to any external storage area. If the software and configuration backup is to be kept in external storage, the backup files can be extracted.
See Software and Configuration Backup and Restore for more information on SW and configuration backups.
3 Data Backup and Restore
This section describes the data backup and restore operations available in CUDB.
3.1 Data Backup
This section describes data backup operations available in CUDB.
3.1.1 Unit Data Backup
This section describes how to perform a unit data backup in the CUDB system.
3.1.1.1 Description
3.1.1.2 Performing Unit Data Backup
3.1.1.3 Store Backup in an External Storage
For safety reasons, it is recommended to store backup files in external storage.
Run the following command to copy the backup to external storage:
SC_2_1# scp [-l <bandwidth>] <file> <destination>
Where:
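As an illustration of the command above, the following dry-run sketch prints one bandwidth-limited scp invocation per backup piece. The helper name, the destination, and the 8192 Kbit/s limit are assumptions (scp's -l option takes Kbit/s):

```shell
# Dry-run sketch: print one bandwidth-limited scp command per backup piece in a
# backup directory. The destination used in the example is hypothetical; scp's
# -l option limits the used bandwidth in Kbit/s.
print_scp_commands() {
    backup_dir=$1
    dest=$2
    for f in "$backup_dir"/*.tar; do
        [ -e "$f" ] || continue   # skip if no .tar files are present
        echo scp -l 8192 "$f" "$dest"
    done
}

# Example (hypothetical host):
# print_scp_commands /local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2014-05-14_20-11 \
#     backupuser@storage.example.com:/backups/cudb/
```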
3.1.1.4 Alarms
In case any failure occurs during the backup phase, the system raises the Storage Engine, Backup Fault In DS or Storage Engine, Backup Fault In PLDB alarms.
3.1.1.5 Output
The cudbManageStore command creates a fresh backup of the data stored in the selected PLDB or DSG replica. The backup is made up of several files which are created in the following directory located on the disk storage system of every participating blade or VM:
/local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-YYYY-MM-DD_HH-mm
The identifiers of this participating blade or VM are as follows:
The backup components consist of the following files:
Typically, the backup files are generated on all the blades or VMs of the database unit. However, in case of degraded clusters (refer to CUDB High Availability for more information on degradation), the blades or VMs not participating in the backup do not generate the files; these files are created on other blades or VMs of the cluster instead. See Unit Data Restore for further details on how these files are generated and moved to the slave replicas.
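The directory name encodes the backup creation time. Assuming the standard date utility, the expected path for a backup created at the current minute can be sketched as follows:

```shell
# Sketch: build the expected backup directory path for a backup created now,
# following the BACKUP-YYYY-MM-DD_HH-mm naming convention described above.
BACKUP_ROOT=/local/cudb/mysql/ndbd/backup/BACKUP
BACKUP_DIR="$BACKUP_ROOT/BACKUP-$(date +%Y-%m-%d_%H-%M)"
echo "$BACKUP_DIR"
```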
3.1.2 System Data Backup
This section describes the system data backup procedures available in the CUDB system.
3.1.2.1 Description
A CUDB system is composed of several data partitions, including the PLDB and a number of DSGs. According to this structure, a system data backup consists of a number of pieces, each piece being the backup of an individual data partition of the whole CUDB system. One backup of the PLDB and one backup of each of the DSGs make up the full CUDB system data backup.
Refer to CUDB Data Storage Handling for more information.
This type of backup contains the information stored in all database units of the CUDB system.
Unlike most Operation and Maintenance (OAM) operations that affect a single CUDB node, a full CUDB system data backup is a system-wide operation.
The criteria for choosing the replica used to create the backup are the following:
The distribution of these pieces and the topology of the system make the system data backup a “data centric” backup.
To better understand this “data centric” concept, see Figure 1, which shows the subscriber data distribution over the system.
Data backup of the whole CUDB system consists of the following:
System data backup cannot be initiated in the following cases:
| Note: |
In case Self-Ordered Backup and Restore is running on some nodes of the system, system data backup might fail for the DS or PLDB unit that is used as the backup source for the Self-Ordered Backup and Restore. Refer to CUDB Node Commands and Parameters for further information about the status of the Self-Ordered Backup and Restore process. |
The execution of a backup does not affect regular traffic operations, and traffic operations do not affect the backup. If provisioning continues during a system data backup, it can cause misalignment between the data in the Processing Layer (PLDB) backup and the data in the Data Store Layer (DSG) backup. This misalignment only becomes apparent in the unlikely case of a system data restore, and in most usage scenarios the number of affected Lightweight Directory Access Protocol (LDAP) entries is negligible compared to the data loss caused by restoring the system data backup. The misalignment can be partially corrected by the reconciliation process; the remaining misalignment can be detected and corrected by application Front End (FE) level procedures.
| Note: |
In case of group data restore and system data restore operations, reconciliation must be
scheduled manually, unless the automatic execution of Selective Replica Check and Data
Repair processes is disabled by configuration. Check the details of the
CudbDsGroupRepairAndResync class in CUDB Node Configuration Data Model Description for more
information about configuring these processes. |
CUDB offers the function of sending a notification to the Ericsson Provisioning Gateway (PG) when a system data backup is started and finished so that the PG can withhold provisioning operations towards the CUDB system while the backup is being created.
Although this is the default behavior, the operator must decide between ensuring full consistency between the PLDB and DSG backups and avoiding impact on provisioning every time a system data backup is performed. If continuous provisioning is preferred, sending notifications to PG can be disabled by using command-line parameters in the cudbSystemDataBackupAndRestore command. For more information on command parameters for cudbSystemDataBackupAndRestore, refer to CUDB Node Commands and Parameters. For instructions on how to schedule a backup without notifications, refer to Step 3 in Scheduling a Periodic System Data Backup in a CUDB System.
| Note: |
As a prerequisite for the notification sending function,
CUDB must be configured with the necessary PG information. Refer to
CUDB System Administrator Guide for more information
on configuring PG nodes. For alarms related to PG notifications, see Alarms. |
3.1.2.2 Performing System Data Backup
3.1.2.3 Alarms
If it is not possible to inform the PG about the backup, the backup fails and the system raises the Storage Engine, Backup Notification Failure To Provisioning Gateway alarm. Refer to Storage Engine, Backup Notification Failure To Provisioning Gateway for more information.
In case any failure occurs during the backup phase, the system raises the Storage Engine, Backup Fault In DS or Storage Engine, Backup Fault In PLDB alarms.
3.1.2.4 Output
The following output directories are created by the cudbSystemDataBackupAndRestore command:
A sample output of a full CUDB system data backup is shown below:
SC_2_2# cudbSystemDataBackupAndRestore -c
As a result of the example above (where automatic backup file collection was invoked), the following files are generated and stored in the /home/cudb/automatedBackupStorage/ NFS directory:
For the PLDB backup:
/home/cudb/automatedBackupStorage/44/0/BACKUP-2014-05-14_20-11-44.0.3.tar
/home/cudb/automatedBackupStorage/44/0/BACKUP-2014-05-14_20-11-44.0.4.tar
/home/cudb/automatedBackupStorage/44/0/BACKUP-2014-05-14_20-11-44.0.5.tar
/home/cudb/automatedBackupStorage/44/0/BACKUP-2014-05-14_20-11-44.0.6.tar
For DSG1 backup from node 43:
/home/cudb/automatedBackupStorage/43/1/BACKUP-2014-05-14_20-11-43.1.3.tar
/home/cudb/automatedBackupStorage/43/1/BACKUP-2014-05-14_20-11-43.1.4.tar
For DSG2 backup from node 43:
/home/cudb/automatedBackupStorage/43/255/BACKUP-2014-05-14_20-11-43.255.3.tar
/home/cudb/automatedBackupStorage/43/255/BACKUP-2014-05-14_20-11-43.255.4.tar
Example 1
Checking /home/cudb/automatedBackupStorage on local node
--------------------------------
Local node: ... EMPTY
Checking /home/cudb/systemDataBackup on nodes
--------------------------------
44: ... EMPTY
43: ... EMPTY
CREATE PART
--------------------------------
/home/cudb/systemDataBackup
Listening for current PLDB and DSGs status reports (may take upto 2 minutes)
Starting backup...
Before calling pg_notification
BackupStart [INFO] - PS Notification trying stop notification was successfully sent.
PS Notification successfully sent to : 172.31.233.139
PS Notification failed to : NONE
PS not notified : NONE
cudb_backup ver(1.3.4)
BEGIN MANAGEMENT FOR LAST BACKUP
Backup 2014-05-14_20-11 finished successfully in :
PLDB in CUDB node 44
NDB node PL0
NDB node PL1
NDB node PL2
NDB node PL3
DSG#1 in CUDB node 43
NDB node DS1_0
NDB node DS1_1
DSG#2 in CUDB node 43
NDB node DS2_0
NDB node DS2_1
Attempting to de-block Provisioning Gateway. This may take up to a couple of minutes.
COLLECT PART
--------------------------------
Copying backup piece from node 44 for dsg 0 for ndb node id 3 ... OK
Copying backup piece from node 44 for dsg 0 for ndb node id 4 ... OK
Copying backup piece from node 44 for dsg 0 for ndb node id 5 ... OK
Copying backup piece from node 44 for dsg 0 for ndb node id 6 ... OK
Copying backup piece from node 43 for dsg 1 for ndb node id 3 ... OK
Copying backup piece from node 43 for dsg 1 for ndb node id 4 ... OK
Copying backup piece from node 43 for dsg 2 for ndb node id 3 ... OK
Copying backup piece from node 43 for dsg 2 for ndb node id 4 ... OK
Please copy /home/cudb/automatedBackupStorage to a safe location to be able to
restore the backup pieces when needed.
ERROR SUMMARY
--------------------------------
None.
3.1.2.5 Scheduling a Periodic System Data Backup in a CUDB System
The CUDB system supports scheduled periodic data backups.
| Note: |
Scheduled data backups must be configured only in one CUDB
node of the system. If the CUDB node configured to perform periodic
system data backups is down or isolated from the rest of the CUDB
system at the time the periodic system backup is to be taken, the
periodic system backup will not be executed. Do not schedule or perform
other tasks in parallel with periodic data backups, except data consistency
checks: system data backups have no collision with data consistency
checks, so they can run in parallel. |
Perform the following steps to configure a periodic data backup task:
Steps
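The concrete steps are given above. As an illustration only, if the standard cron facility is the scheduling mechanism (an assumption, as is the availability of the command on the PATH), a weekly backup entry on the single designated node could look like the following:

```shell
# Hypothetical crontab entry (assumptions: cron is the scheduler, the command
# is on the PATH, and -c requests automatic backup file collection as in the
# earlier example). Runs a full system data backup every Sunday at 02:00,
# configured on one CUDB node of the system only.
0 2 * * 0 cudbSystemDataBackupAndRestore -c
```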
3.1.2.6 Checking the Configured Periodic System Data Backups
Perform the following steps to print the scheduled data backups configured in the system:
Steps
3.1.2.7 Canceling a Periodic System Data Backup in a CUDB System
Perform the following steps to cancel a periodic data backup task:
Steps
3.2 Data Restore
This section describes how to perform a data restore in the CUDB system.
3.2.1 Unit Data Restore
This section describes how to perform a unit data restore in the CUDB system.
3.2.1.1 Description
CUDB unit data restore utilizes backup files generated by unit data backup to restore the PLDB or a specific DSG cluster.
After restoration, it is necessary to store SQL procedures as described in Recreating Stored Procedures After Data Restore.
Unit data restore commands also allow cluster backups to be restored on clusters that contain a different number of database nodes. This can happen when configuring a system on hybrid hardware with fewer PLDB data nodes in GEP5 nodes, or if one of the involved clusters is degraded. In these cases, the following scenarios apply:
Figure 2 shows the file distribution in case of restoring the backup of a 4-database-node cluster to a 3-database-node cluster.
Figure 3 shows the file distribution in case a 3-database-node cluster backup is restored on a 4-database-node cluster.
Compared to data restores taking place between clusters where the number of blades or VMs is the same, the data restore procedure takes more time if the number of blades or VMs is different in the backup source and restore destination clusters.
3.2.1.2 Performing Unit Data Restore
Perform the following steps to restore a unit data backup:
Steps
3.2.1.3 Recreating Stored Procedures After Data Restore
Stored procedures must be created:
To restore the stored procedures and application counter procedures on PLDB, enter the following command:
shell> cudbManageStore -p -o restorestoredprocedures
To restore the stored procedures and application counter procedures on a DS, enter the following command:
shell> cudbManageStore -d <dsId> -o restorestoredprocedures
| Note: |
The scripts of the stored SQL procedures are kept in the
following location on all CUDB nodes: /home/cudb/oam/performanceMgmt/appCounters/procedures/ |
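Taken together, the two commands above can be applied to the PLDB and all DSGs of a node in one pass. A dry-run sketch, where the DSG ID list is an assumption and must be replaced with the DSG IDs of the actual system:

```shell
# Dry-run sketch: print the commands that recreate the stored procedures on the
# PLDB and on every DSG of this node. DSG_IDS is an assumption; replace it with
# the DSG IDs of the actual system before running the printed commands.
DSG_IDS="1 2 3"
echo "cudbManageStore -p -o restorestoredprocedures"
for dsId in $DSG_IDS; do
    echo "cudbManageStore -d $dsId -o restorestoredprocedures"
done
```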
3.2.2 Group Data Restore
This section describes how to perform a group data restore in the CUDB system.
3.2.2.1 Description
Group data restore is basically a set of unit data restore procedures performed on all replicas of the affected data partition (PLDB or DSG). See Unit Data Restore for more information.
Following a successful group data restore, the replication channel must be synchronized properly.
| Note: |
All replicas of the affected data partition must be restored
from an earlier backup by performing a unit data restore on each replica
belonging to the group, so make sure to use the same backup to restore
all replicas. Data in the restored partition is likely to be inconsistent with the rest of the CUDB system since the restored data is older than the rest. A group data restore in a DSG requires reconciliation for that DSG. A group data restore in PLDB requires reconciliation for all DSGs. In case of group data restore and system data restore operations, reconciliation must be scheduled manually, unless the automatic execution of the Selective Replica Check and Data Repair processes is disabled by configuration. Check the details of the CudbDsGroupRepairAndResync class in CUDB Node Configuration Data Model Description for more information about configuring these processes. When no DSG database slave replicas are available, a group data restore is the same as a unit data restore in such deployments. |
The data restore procedure might change temporary subscriber data, such as location information. Refresh such data on the application FE level.
3.2.2.2 Performing Group Data Restore
To perform a group data restore, follow the steps below:
Steps
3.2.3 System Data Restore
This section describes how to restore a system data backup in a CUDB system.
3.2.3.1 Description
This type of restore affects the information stored in all database units of the CUDB system.
The data restore procedure might change temporary subscriber data, such as location information. Refresh such data on the application FE level.
| Note: |
To avoid collision with the Self-Ordered Backup and Restore
function, do not perform system data restore if a Self-Ordered Backup
and Restore process is running on any node of the system. Refer to the cudbSystemStatus section of CUDB Node Commands and Parameters for further information about the status of the Self-Ordered Backup and Restore process. |
System data restore with the cudbSystemDataBackupAndRestore command is available with two different methods. The main difference between the methods is the location of the backup files:
The difference is related to the system data backup procedure which is available both with and without automatic file collection.
| Note: |
The available system data restore methods are not exclusive
to any system data backup methods. Therefore, for example, system
data can be restored from a single source even if the system data
backup was performed without automatic file collection. |
The following sections describe how to restore system data with both methods. For the purposes of description, an example CUDB system is used, containing three CUDB nodes with triple geographical redundancy configuration. The system contains three DSGs.
The nodes, DSGs and the DSG replicas selected as the source replica during the system data backup creation are indicated in Example 2.
| Note: |
Each replica belonging to a DSG can be placed in a different location on each node. In Example 2, each replica is located in a different position on each node. |
Example 2 Example CUDB System and Backup Fileset
PLDB in CUDB node 81
  PLDB source replica was the replica that resides on Node 81
  NDB node PL0 <-
  NDB node PL1 <-
  NDB node PL2 <-
  NDB node PL3 <-
DSG#1 in CUDB node 81
  NDB node DS1_0
  NDB node DS1_1
DSG#2 in CUDB node 81
  DSG2 source replica was the replica that resides on Node 81
  NDB node DS2_0 <-
  NDB node DS2_1 <-
DSG#3 in CUDB node 81
  NDB node DS3_0
  NDB node DS3_1
PLDB in CUDB node 82
  NDB node PL0
  NDB node PL1
  NDB node PL2
  NDB node PL3
DSG#1 in CUDB node 82
  DSG1 source replica was the replica that resides on Node 82
  NDB node DS3_0 <-
  NDB node DS3_1 <-
DSG#2 in CUDB node 82
  NDB node DS2_0
  NDB node DS2_1
DSG#3 in CUDB node 82
  NDB node DS1_0
  NDB node DS1_1
PLDB in CUDB node 83
  NDB node PL0
  NDB node PL1
  NDB node PL2
  NDB node PL3
DSG#1 in CUDB node 83
  NDB node DS2_0
  NDB node DS2_1
DSG#2 in CUDB node 83
  NDB node DS1_0
  NDB node DS1_1
DSG#3 in CUDB node 83
  DSG3 source replica was the replica that resides on Node 83
  NDB node DS3_0 <-
  NDB node DS3_1 <-
3.2.3.2 Preparing System Data Restore
Before loading a backup to a CUDB node, prepare the system data restore by performing the steps of Restoring from a Single Source or Restoring from a Distributed Source (depending on how the source backup files are stored).
| Note: |
System data restore can be performed only on a system: Also, from a restore perspective, it is irrelevant from which CUDB nodes the backup files were taken, as long as their CUDB Node Identifiers are part of the system where the restore is performed. Finally, notice that automated system data restore works in fully operating systems only. |
During the system restore process, each backup file is transferred automatically to all the CUDB nodes where the specific DSG - referred by the backup file - has a replica. The files are copied to the following NFS directory on the nodes:
/home/cudb/systemDataBackup/
After the copy is finished, the files are extracted and transferred automatically to the local directories of the respective CUDB blades or VMs (see Data Restore for more information). By default, the local directory is the following:
/local/cudb/mysql/ndbd/backup/BACKUP
After the restore is performed, the slave PLDB and DSG replicas are kick-started, that is, forced to connect and synchronize with the master replica.
3.2.3.2.1 Restoring from a Single Source
In case system restore is performed from a single source, copy all backup files of the requested backup to the CUDB node where the system data restore is to be ordered. The guidelines for copying are the following:
In case of restoring the example backup (described in Description) from a single source, the backup files must be copied to an NFS directory (for example, to /home/cudb/automatedBackupStorage/) with the following file names and paths:
PLDB backup files (CUDB node: <nodeID>):
    <NFS path>/81/0/BACKUP-<YYYY-MM-DD_HH-mm>-81.0.3.tar
    <NFS path>/81/0/BACKUP-<YYYY-MM-DD_HH-mm>-81.0.4.tar
    <NFS path>/81/0/BACKUP-<YYYY-MM-DD_HH-mm>-81.0.5.tar
    <NFS path>/81/0/BACKUP-<YYYY-MM-DD_HH-mm>-81.0.6.tar

DSG1 backup files (CUDB node: <nodeID>):
    <NFS path>/82/1/BACKUP-<YYYY-MM-DD_HH-mm>-82.1.3.tar
    <NFS path>/82/1/BACKUP-<YYYY-MM-DD_HH-mm>-82.1.4.tar

DSG2 backup files (CUDB node: <nodeID>):
    <NFS path>/81/2/BACKUP-<YYYY-MM-DD_HH-mm>-81.2.3.tar
    <NFS path>/81/2/BACKUP-<YYYY-MM-DD_HH-mm>-81.2.4.tar

DSG3 backup files (CUDB node: <nodeID>):
    <NFS path>/83/3/BACKUP-<YYYY-MM-DD_HH-mm>-83.3.3.tar
    <NFS path>/83/3/BACKUP-<YYYY-MM-DD_HH-mm>-83.3.4.tar
In the above example, <nodeID> is the ID of the selected CUDB node (same for all files), while <NFS path> is the NFS directory (also the same for all files).
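The <nodeID>/<dsgID>/ path component for each file can be derived from the filename itself. A minimal sketch, in which NFS_PATH and the helper dest_for are hypothetical:

```shell
# Sketch: derive the <NFS path>/<node>/<dsg>/ destination directory for one
# backup file from its own name, which follows the pattern
# BACKUP-<YYYY-MM-DD_HH-mm>-<node>.<dsg>.<ndbId>.tar.
# NFS_PATH below is a placeholder; normally it is the chosen NFS directory.
NFS_PATH=/tmp/automatedBackupStorage
dest_for() {
    suffix=${1#BACKUP-????-??-??_??-??-}   # -> <node>.<dsg>.<ndbId>.tar
    node=${suffix%%.*}
    rest=${suffix#*.}
    dsg=${rest%%.*}
    echo "$NFS_PATH/$node/$dsg"
}
dest_for BACKUP-2014-05-14_20-11-82.1.3.tar   # -> /tmp/automatedBackupStorage/82/1
```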
3.2.3.2.2 Restoring from a Distributed Source
Copy all backup files of the requested backup to the CUDB node where the file was generated during the system data backup procedure. The destination node of each backup file is indicated in the filename itself (see Output for the backup file name template). Copy the backup files from their storage location to the following directory of the target CUDB node:
/home/cudb/systemDataBackup
Before copying the backup files, make sure the /home/cudb/systemDataBackup directory is empty or contains only the backup files of the target node, as shown in the example below.
In case of restoring the example backup (described in Description) from distributed sources, the backup files must be copied to the directory mentioned above with the following file names and the following relative paths:
PLDB backup files (CUDB node: 81):
    /home/cudb/systemDataBackup/BACKUP-<YYYY-MM-DD_HH-mm>-81.0.3.tar
    /home/cudb/systemDataBackup/BACKUP-<YYYY-MM-DD_HH-mm>-81.0.4.tar
    /home/cudb/systemDataBackup/BACKUP-<YYYY-MM-DD_HH-mm>-81.0.5.tar
    /home/cudb/systemDataBackup/BACKUP-<YYYY-MM-DD_HH-mm>-81.0.6.tar

DSG1 backup files (CUDB node: 82):
    /home/cudb/systemDataBackup/BACKUP-<YYYY-MM-DD_HH-mm>-82.1.3.tar
    /home/cudb/systemDataBackup/BACKUP-<YYYY-MM-DD_HH-mm>-82.1.4.tar

DSG2 backup files (CUDB node: 81):
    /home/cudb/systemDataBackup/BACKUP-<YYYY-MM-DD_HH-mm>-81.2.3.tar
    /home/cudb/systemDataBackup/BACKUP-<YYYY-MM-DD_HH-mm>-81.2.4.tar

DSG3 backup files (CUDB node: 83):
    /home/cudb/systemDataBackup/BACKUP-<YYYY-MM-DD_HH-mm>-83.3.3.tar
    /home/cudb/systemDataBackup/BACKUP-<YYYY-MM-DD_HH-mm>-83.3.4.tar
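The copy commands for a distributed-source restore can be generated from the filenames, since each filename encodes its originating node. A dry-run sketch with hypothetical hostnames (the cudb-node-<ID> naming is an assumption):

```shell
# Dry-run sketch (hypothetical hostnames): print one scp command per backup
# file in the current directory, sending each file to the CUDB node encoded in
# its own name (BACKUP-<YYYY-MM-DD_HH-mm>-<node>.<dsg>.<ndbId>.tar).
distribute_cmds() {
    for f in BACKUP-*.tar; do
        [ -e "$f" ] || continue                 # no backup files present
        suffix=${f#BACKUP-????-??-??_??-??-}    # -> <node>.<dsg>.<ndbId>.tar
        node=${suffix%%.*}
        echo scp "$f" "root@cudb-node-$node:/home/cudb/systemDataBackup/"
    done
}
distribute_cmds
```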
3.2.3.3 Performing System Data Restore
To perform the system data restore, follow the steps of Restoring from a Single Source or Restoring from a Distributed Source, depending on how the source backup files are stored.
3.2.3.3.1 Restoring from a Single Source
Log on to the CUDB node where the backup files were copied earlier (see Preparing System Data Restore) and execute the following command:
SC_2_1# cudbSystemDataBackupAndRestore --restore <backupDir>
In the above command, <backupDir> is the folder where backup files are stored, for example the following:
/home/cudb/automatedBackupStorage/
Refer to CUDB Node Commands and Parameters for more information on this command.
| Note: |
In case any failure occurs, the alarms described in Alarms could be raised. |
After the restoration, all stored procedures related to the application counter process are lost, and therefore must be created again. Execute the procedure in Recreating Stored Procedures After Data Restore on all nodes in a CUDB system, for PLDB and for all DSGs.
3.2.3.3.2 Restoring from a Distributed Source
Log on to any of the CUDB nodes in the CUDB system, and run the following command:
SC_2_1# cudbSystemDataBackupAndRestore --restore-date <backup_date>
In the above command, <backup_date> is the date when the backup was created. The date of creation can be checked from any of the backup filenames (see Output for the backup filename template).
Example of backup date: 2014-05-14_20-11
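If only the backup filenames are at hand, the date can be recovered with shell parameter expansion. A minimal sketch:

```shell
# Sketch: recover the backup date from any backup filename of the form
# BACKUP-<YYYY-MM-DD_HH-mm>-<node>.<dsg>.<ndbId>.tar.
f=BACKUP-2014-05-14_20-11-44.0.3.tar
backup_date=${f#BACKUP-}                 # drop the BACKUP- prefix
backup_date=${backup_date%-*.*.*.tar}    # drop -<node>.<dsg>.<ndbId>.tar
echo "$backup_date"                      # 2014-05-14_20-11
```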
Refer to CUDB Node Commands and Parameters for more information on this command.
| Note: |
In case any failure occurs, the alarms described in Alarms could be raised. |
After the restoration, all stored procedures related to the application counter process are lost, and therefore must be created again. Execute the procedure in Recreating Stored Procedures After Data Restore on all nodes in a CUDB system, for PLDB and for all DSGs.
3.2.3.4 Reconciliation
A system data backup can contain misalignments between the data in the PLDB backup and the data in the DSG backup. Therefore, a system data restore requires reconciliation for all DSGs. Refer to the section on requesting reconciliation manually in CUDB System Administrator Guide for more information.
| Note: |
In case of group data restore and system data restore operations, reconciliation must be
scheduled manually, unless the automatic execution of Selective Replica Check and Data
Repair processes is disabled by configuration. Check the details of the
CudbDsGroupRepairAndResync class in CUDB Node Configuration Data Model Description for more
information about configuring these processes. |
3.2.3.5 Alarms
In case any failure occurs during the restore phase, the system sends an error message and raises the Storage Engine, Restore Fault In DS or Storage Engine, Restore Fault In PLDB alarms.
3.2.3.6 Output
Sample outputs of the CUDB system restore process are provided below both for single source (see Output of Restoring from a Single Source) and distributed source (see Output of Restoring from a Distributed Source) restore processes.
3.2.3.6.1 Output of Restoring from a Single Source
CUDB_44 SC_2_2# cudbSystemDataBackupAndRestore --restore /home/cudb/automatedBackupStorage/
WARNING: System data restoration will execute for the complete CUDB system with the data from the system backup. Are you sure you want to continue (y/n)? y
Example 4
Checking /home/cudb/systemDataBackup on nodes
--------------------------------
44: ... EMPTY
43: ... EMPTY
VALIDATE PART
--------------------------------
NODE: 43
DSG: 1
NDB: 3
NDB: 4
DSG: 255
NDB: 3
NDB: 4
NODE: 44
DSG: 0
NDB: 3
NDB: 4
NDB: 5
NDB: 6
DISTRIBUTE PART
--------------------------------
Creating /home/cudb/systemDataBackup on node 44
Node 44 has DSG 0 -> copying ... id3-OK id4-OK id5-OK id6-OK
Node 44 has DSG 1 -> copying ... id3-OK id4-OK
Node 44 has DSG 255 -> copying ... id3-OK id4-OK
Creating /home/cudb/systemDataBackup on node 43
Node 43 has DSG 0 -> copying ... id3-OK id4-OK id5-OK id6-OK
Node 43 has DSG 1 -> copying ... id3-OK id4-OK
Node 43 has DSG 255 -> copying ... id3-OK id4-OK
EXTRACT PART
--------------------------------
Node 44 has DSG 0 -> extracting ... id3-OK id4-OK id5-OK id6-OK
Node 44 has DSG 1 -> extracting ... id3-OK id4-OK
Node 44 has DSG 255 -> extracting ... id3-OK id4-OK
Node 43 has DSG 0 -> extracting ... id3-OK id4-OK id5-OK id6-OK
Node 43 has DSG 1 -> extracting ... id3-OK id4-OK
Node 43 has DSG 255 -> extracting ... id3-OK id4-OK
RESTORE PART
--------------------------------
Starting System Data Backup with command cudbDataRestore -B 2014-05-14_20-25 -L 2>&1 ...
cudbDataRestore ver(1.3.6)
Trying for restore the backup from [2014-05-14_20-25] data files.
Restore command in PL/DS node in CUDBNode#44 sucessfully executed.
Restore command in PL/DS node in CUDBNode#43 sucessfully executed.
Restarting ldap frontends................
Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Restarting of slapd processes is finished
Sucessfully in executing the ldap frontends restart command in CUDBNode#44
Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Restarting of slapd processes is finished
Sucessfully in executing the ldap frontends restart command in CUDBNode#43
KICKSTART PART
--------------------------------
Checking that every DSG is Master or Slave ...
Kickstarting pl on node 43 with command
cudbManageStore -o kickstart -p ...
cudbManageStore stores to process: pl.
Launching order kickstart to pl in dsgroup 0.
Waiting for kickstart order(s) to be completed in CUDB Node 43 for stores : pl.
Replication channel is up for store pl
kickstart order finished successfully in CUDB Node 43 for store pl.
kickstart order(s) completed in CUDB Node 43 for stores : pl.
Stores where order kickstart was successfully completed: pl.
cudbManageStore command successful.
Kickstarting ds unit 1 on node 43 with command
cudbManageStore -o kickstart -d 1 ...
cudbManageStore stores to process: ds1 (in dsgroup1).
Launching order kickstart to ds1 in dsgroup 1.
Waiting for kickstart order(s) to be completed in CUDB Node 43 for stores : ds1.
Replication channel is up for store ds1
kickstart order finished successfully in CUDB Node 43 for store ds1.
kickstart order(s) completed in CUDB Node 43 for stores : ds1.
Stores where order kickstart was successfully completed: ds1.
cudbManageStore command successful.
Kickstarting ds unit 2 on node 43 with command
cudbManageStore -o kickstart -d 2 ...
cudbManageStore stores to process: ds2 (in dsgroup255).
Launching order kickstart to ds2 in dsgroup 255.
Waiting for kickstart order(s) to be completed in CUDB Node 43 for stores : ds2.
Replication channel is up for store ds2
kickstart order finished successfully in CUDB Node 43 for store ds2.
kickstart order(s) completed in CUDB Node 43 for stores : ds2.
Stores where order kickstart was successfully completed: ds2.
cudbManageStore command successful.
ERROR SUMMARY
--------------------------------
None.
3.2.3.6.2 Output of Restoring from a Distributed Source
CUDB_44 SC_2_2# cudbSystemDataBackupAndRestore --restore-date 2014-05-14_20-25
WARNING: System data restoration will execute for the complete CUDB system with the data from the system backup. Are you sure you want to continue (y/n)? y
Checking /home/cudb/systemDataBackup on nodes
--------------------------------
44: ... No unnecessary files found
43: ... No unnecessary files found
VALIDATE PART
--------------------------------
NODE: 44
DSG: 0
NDB: 3
NDB: 4
NDB: 5
NDB: 6
NODE: 43
DSG: 1
NDB: 3
NDB: 4
DSG: 255
NDB: 3
NDB: 4
DISTRIBUTE PART
--------------------------------
Creating /home/cudb/systemDataBackup on node 44
Node 44 has DSG 0 -> copying ... Files already present on this node
Node 44 has DSG 1 -> copying ... id3-OK id4-OK
Node 44 has DSG 255 -> copying ... id3-OK id4-OK
Creating /home/cudb/systemDataBackup on node 43
Node 43 has DSG 0 -> copying ... id3-OK id4-OK id5-OK id6-OK
Node 43 has DSG 1 -> copying ... Files already present on this node
Node 43 has DSG 255 -> copying ... Files already present on this node
EXTRACT PART
--------------------------------
Node 44 has DSG 0 -> extracting ... id3-OK id4-OK id5-OK id6-OK
Node 44 has DSG 1 -> extracting ... id3-OK id4-OK
Node 44 has DSG 255 -> extracting ... id3-OK id4-OK
Node 43 has DSG 0 -> extracting ... id3-OK id4-OK id5-OK id6-OK
Node 43 has DSG 1 -> extracting ... id3-OK id4-OK
Node 43 has DSG 255 -> extracting ... id3-OK id4-OK
RESTORE PART
--------------------------------
Starting System Data Backup with command cudbDataRestore -B 2014-05-14_20-25 -L 2>&1 ...
cudbDataRestore ver(1.3.6)
Trying for restore the backup from [2014-05-14_20-25] data files.
Restore command in PL/DS node in CUDBNode#44 sucessfully executed.
Restore command in PL/DS node in CUDBNode#43 sucessfully executed.
Restarting ldap frontends................
Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Restarting of slapd processes is finished
Sucessfully in executing the ldap frontends restart command in CUDBNode#44
Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Making sure cudbLdapFeMonitor is running (to start slapd)...OK
.Restarting of slapd processes is finished
Sucessfully in executing the ldap frontends restart command in CUDBNode#43
KICKSTART PART
--------------------------------
Checking that every DSG is Master or Slave ...
Kickstarting pl on node 43 with command
cudbManageStore -o kickstart -p ...
cudbManageStore stores to process: pl.
Launching order kickstart to pl in dsgroup 0.
Waiting for kickstart order(s) to be completed in CUDB Node 43 for stores : pl.
Replication channel is up for store pl
kickstart order finished successfully in CUDB Node 43 for store pl.
kickstart order(s) completed in CUDB Node 43 for stores : pl.
Stores where order kickstart was successfully completed: pl.
cudbManageStore command successful.
Kickstarting ds unit 1 on node 43 with command
cudbManageStore -o kickstart -d 1 ...
cudbManageStore stores to process: ds1 (in dsgroup1).
Launching order kickstart to ds1 in dsgroup 1.
Waiting for kickstart order(s) to be completed in CUDB Node 43 for stores : ds1.
Replication channel is up for store ds1
kickstart order finished successfully in CUDB Node 43 for store ds1.
kickstart order(s) completed in CUDB Node 43 for stores : ds1.
Stores where order kickstart was successfully completed: ds1.
cudbManageStore command successful.
Kickstarting ds unit 2 on node 43 with command
cudbManageStore -o kickstart -d 2 ...
cudbManageStore stores to process: ds2 (in dsgroup255).
Launching order kickstart to ds2 in dsgroup 255.
Waiting for kickstart order(s) to be completed in CUDB Node 43 for stores : ds2.
Replication channel is up for store ds2
kickstart order finished successfully in CUDB Node 43 for store ds2.
kickstart order(s) completed in CUDB Node 43 for stores : ds2.
Stores where order kickstart was successfully completed: ds2.
cudbManageStore command successful.
ERROR SUMMARY
--------------------------------
None.
3.3 Combined Data Backup and Restore
This section describes the combined unit data backup and restore procedures available in the CUDB system.
3.3.1 Combined Unit Data Backup and Restore
This section describes how to perform a combined unit data backup and restore in the CUDB system.
3.3.1.1 Description
3.3.1.3 Alarms
If any failure occurs during the backup phase, the system raises the Storage Engine, Backup Fault In DS or the Storage Engine, Backup Fault In PLDB alarm.
4 Software and Configuration Backup and Restore
CUDB supports backup and restore of the software and configuration information stored in the system. The software and configuration backup (or restore) is performed separately on each CUDB node of the CUDB system.
The CUDB software and configuration backup contains the following data:
Note: Software and configuration backup and restore operations cannot be performed in the following cases:
4.1 Software and Configuration Backup
To order a software and configuration backup in a CUDB node, log on to the CUDB node and issue the cudbSwBackup command as follows:
SC_2_1# cudbSwBackup -c <backupName>
In the above command, <backupName> is the name of the software and configuration backup. The resulting backup consists of the following files:
Refer to CUDB Node Commands and Parameters for more information on the cudbSwBackup command.
The software and configuration backup process automatically rotates out the oldest backup once five backups have been created. If a specific software and configuration backup must be preserved, store it in external storage. Refer to Store Backup in an External Storage for more information.
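The five-backup rotation policy described above can be illustrated with plain shell commands. This is a minimal, self-contained sketch of the rotation behavior only, not the CUDB implementation; the directory path and archive names are hypothetical, and only the five-backup limit comes from this document.

```shell
#!/bin/sh
# Sketch of a "keep the five newest, drop the oldest" rotation policy.
# BACKUP_DIR and the backup_N.tar.gz naming are illustrative assumptions.
BACKUP_DIR="${BACKUP_DIR:-/tmp/swbackup-demo}"
mkdir -p "$BACKUP_DIR"
rm -f "$BACKUP_DIR"/*.tar.gz

# Simulate six backup archives created at distinct times
# (touch -t sets the modification time so ordering is deterministic).
for i in 1 2 3 4 5 6; do
  touch -t "20240101000$i" "$BACKUP_DIR/backup_$i.tar.gz"
done

# Rotation: list archives newest first, then delete everything
# after the fifth entry (here, the oldest archive backup_1.tar.gz).
ls -1t "$BACKUP_DIR"/*.tar.gz | tail -n +6 | xargs -r rm -f

ls -1 "$BACKUP_DIR"
```

After the rotation step, only the five newest archives remain in the directory.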
Note: If any fault occurs during the backup procedure, the system raises the Operating System, Server Configuration Backup Fault alarm.
4.2 Software and Configuration Restore
A software and configuration restore must be performed in the CUDB system in the following cases:
4.2.2 Performing Software and Configuration Restore
Follow the steps below to perform a software and configuration restore:
Steps
After This Task
4.3 Periodic Software and Configuration Backup in a CUDB System
The CUDB system supports scheduled periodic software and configuration backups.
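The scheduling itself is configured through CUDB commands (see the following subsections). Purely as an illustration of the schedule semantics, a nightly backup expressed as a standard crontab entry might look as follows; the time of day, command path, and backup naming are assumptions, and only the cudbSwBackup -c syntax comes from Section 4.1. This is a configuration fragment for illustration, not a supported alternative to the CUDB scheduler.

```shell
# Hypothetical illustration only: run a software and configuration backup
# every night at 02:30, naming it with the current date.
# (% must be escaped in crontab entries.)
30 2 * * * /usr/bin/cudbSwBackup -c nightly_$(date +\%Y\%m\%d)
```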
4.3.2 Printing Configured Software and Configuration Backups
Perform the following steps to print the scheduled software and configuration backups configured in the system:
Steps
4.4 BSP 8100 Configuration Backup and Restore
For CUDB systems deployed on native BSP 8100, refer to the Backup and Restore BSP Configuration document in the BSP 8100 CPI for a detailed description of how to create, export, and delete a backup from a BSP 8100 node, and how to import and restore configuration and software from a saved backup.
Reference List
- CUDB Node Commands and Parameters
- CUDB System Administrator Guide
- Storage Engine, Backup Fault In DS
- Storage Engine, Backup Fault In PLDB
- CUDB High Availability
- CUDB Data Storage Handling
- CUDB Node Configuration Data Model Description
- Storage Engine, Backup Notification Failure To Provisioning Gateway
- Storage Engine, Restore Fault In DS
- Storage Engine, Restore Fault In PLDB
- Operating System, Server Configuration Backup Fault
- CUDB Glossary of Terms and Acronyms