CUDB Node Commands and Parameters

Contents

1Introduction
1.1Document Purpose and Scope
1.2Revision Information
1.3Target Groups
1.4Typographic Conventions

2CUDB Commands
2.1CUDB Command
2.2Management Commands
2.2.1cudbAnalyser
2.2.2cudbApplicationCounters
2.2.3cudbApplyConfig
2.2.4cudbCheckConsistency
2.2.5cudbCheckReplication
2.2.6cudbCollectInfo
2.2.7cudbConsistencyMgr
2.2.8cudbDataBackup
2.2.9cudbDbDiskManage
2.2.10cudbDsgMastershipChange
2.2.11cudbDsgProvisioningManage
2.2.12cudbGetLogs
2.2.13cudbHaState
2.2.14cudbLdapFeRestart
2.2.15cudbManageBCServer
2.2.16cudbManageLibDataDist
2.2.17cudbManageMsgSrvServer
2.2.18cudbManageStore
2.2.19cudbPmJobReload
2.2.20cudbPrepareStore
2.2.21cudbReallocate
2.2.22cudbReconciliationMgr
2.2.23cudbRemoteTrust
2.2.24cudbResumeProvisioningNotification
2.2.25cudbServiceContinuity
2.2.26cudbSwBackup
2.2.27cudbSwVersionCheck
2.2.28cudbSystemDataBackupAndRestore
2.2.29cudbSystemStatus
2.2.30cudbTakeAllMasters
2.2.31cudbUnitDataBackupAndRestore
2.2.32cudbUpdateUserInfo
2.3Internal Commands
2.3.1cudbAppCheckerManager
2.3.2cudbBCServersRestart
2.3.3cudbCopyAclsConfFile
2.3.4cudbEvipConfigExtension
2.3.5cudbEvipEncapsulator
2.3.6cudbExecuteAllBlades
2.3.7cudbFollowLdapfeLogs
2.3.8cudbManageDsGroup
2.3.9cudbManageNode
2.3.10cudbManageSite
2.3.11cudbMpstat
2.3.12cudbOomConfigurator
2.3.13cudbParallelCommandRun
2.3.14cudbRemoveNode
2.3.15cudbSdpInfo
2.3.16cudbSetDsgMaster
2.3.17cudbSetPartitionStatus
2.3.18cudbTpsStat

Glossary

Reference List

1   Introduction

This document describes the Ericsson Centralized User Database (CUDB) commands, along with a complete reference on their options.

1.1   Document Purpose and Scope

This document describes the CUDB commands with a complete reference on their options, syntax, and output, including some examples. The following topics are out of the scope of this document:

Most of the commands provide the output by means of logging information. Refer to CUDB Node Logging Events, Reference [2] for more details about logging events in CUDB.

1.2   Revision Information


Rev. A
Rev. B
Rev. C
Rev. D
Rev. E
Rev. F
Rev. G
Rev. H
Rev. J
Rev. K
Rev. L
Rev. M
Rev. N
Rev. S

Other than editorial changes, this document has been revised as follows:

1.3   Target Groups

This document is intended for system administrators and operators who are familiar with Linux systems. Users of this document must possess moderate knowledge of the CUDB system and its Operation and Maintenance (OAM) procedures.

1.4   Typographic Conventions

Typographic Conventions can be found in the following document:

2   CUDB Commands

The following sections list and describe all CUDB commands that can be used within a Secure Shell (SSH) session in the CUDB nodes belonging to a CUDB system. The commands are grouped into the following categories:

The commands in this document can be executed from any CUDB node, unless specific conditions or restrictions are specified in the Requisites section of the related command.

For each command, the following information is included:

The commands described in this document can be used by connecting to any CUDB node through an SSH interface and authenticating to the system. The commands can be executed by the cudbOperator or cudbadmin users through the sudo Linux command. The rights available to these users are as follows:

By default, the location of the commands described in this document is added to the PATH system variable at installation time.

Several commands offer a debug option. This option activates the DEBUG log level; the resulting DEBUG log messages are stored in the same location as the log messages of the other levels. The locations of the log messages for each component are listed in CUDB Node Logging Events, Reference [2].

Note:  
The debugging option of the commands is used only when required by Ericsson personnel for troubleshooting purposes. This is because the amount of logging information can be very high during debugging, which can alter system working conditions.

All commands include a help option as well, used to provide information on the command and its available options.
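For example, the help option can be invoked as shown below (cudbCollectInfo is used here purely for illustration; any command listed in this document can be used in the same way):

<CUDB_node_prompt> cudbCollectInfo -h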

For further information on CUDB system administration procedures, refer to CUDB System Administrator Guide, Reference [5].

2.1   CUDB Command

2.2   Management Commands

Commands in this section can be executed both by operators and Ericsson personnel.

2.2.1   cudbAnalyser

The cudbAnalyser command performs a log analysis on the logs collected by cudbGetLogs. For more information about this command, refer to CUDB Logchecker, Reference [6].

2.2.1.1   Requisites

cudbOperator users have no access to run this command.

2.2.1.2   Syntax

2.2.1.3   Command Options

The following command options can be used:

2.2.1.4   Output

Started on ACTIVE SC...
[INFO] Checking files: //home/cudb/monitoring/preventiveMaintenance/ \
/CUDB_50_201307231233.log //home/cudb/monitoring/preventiveMaintenance/ \
/CUDB_50_201307231231.log logfile versions:0.0.60/0.0.60
[ERROR] OS: network stat shows errors (Severity: Minor)
> CUDB50 PL0 eth1/statistics/rx_dropped 27 K packets
> CUDB50 PL1 eth1/statistics/rx_dropped 27 K packets
> CUDB50 DS1_0 eth1/statistics/rx_dropped 27 K packets
> CUDB50 DS1_1 eth1/statistics/rx_dropped 27 K packets

2.2.1.5   Examples of Use

2.2.2   cudbApplicationCounters

The cudbApplicationCounters command executes application counters defined by the counter configuration file. Refer to CUDB Application Counters, Reference [7] for more information.

2.2.2.1   Requisites

cudbOperator users have no access to run this command.

2.2.2.2   Syntax

cudbApplicationCounters -C | --Counter-file <counter_configuration_file> -U | --Unique <uniqueNumber> [-f | --file-config <configuration_file>] [-u | --user-facility <facility_level>] [-l | --licensing] [-D | --Debug] [-h | --help]

2.2.2.3   Command Options

The following command options can be used:

2.2.2.4   Output

The command output shows the command version and OAM library version when executed successfully:

cudbApplicationCounters Ver.(1.3.1) OAM Lib.Version (1.1.0)

2.2.2.5   Examples of Use

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the System Controller (SC) of the node where the PLDB master is located:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the cudbApplicationCounters command as follows:

    <CUDB_node_prompt> cudbApplicationCounters -C /cluster/home/cudb/oam/performanceMgmt/appCounters/config/<app_counter>.conf -U 1 -u LOCAL1

    The expected output must look similar to the below example:

    cudbApplicationCounters Ver.(1.3.1) OAM Lib.Version (1.1.0)

2.2.3   cudbApplyConfig

Attention!

The use of this command is restricted to installation only. To make configuration model changes persistent in regular operation and maintenance procedures, use the administrative operation applyConfig. Refer to the "applyConfig" section of CUDB Node Configuration Data Model Description, Reference [1] for more information on applyConfig.

The cudbApplyConfig command automatically makes any configuration model change persistent. If no successful result is returned by the command, the configuration updates do not take effect, even if they are stored in the model and are visible through new CUDB Configuration CLI sessions. Refer to CUDB Node Configuration Data Model Description, Reference [1] for more information on using the command.

2.2.3.1   Requisites

cudbOperator users have no access to run this command.

2.2.3.2   Syntax

cudbApplyConfig [-s|--scope <scope>] [-v|--verbose] [-h|--help]

2.2.3.3   Command Options

The following command options can be used:

2.2.3.4   Output

The command output is a log of the performed changes. The command result is provided at the end of the log.

2.2.3.5   Examples of Use

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC of the node where the PLDB master is located:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the cudbApplyConfig command as follows:

    <CUDB_node_prompt> cudbApplyConfig

The expected output must be similar to the below example:

INFO - Starting...
INFO - *************************** PREPARATION PHASE STARTS ***************************
INFO - *************************** PREPARATION PHASE STARTS ***************************
INFO - *************************** PREPARATION PHASE STARTS ***************************
INFO - *************************** PREPARATION PHASE STARTS ***************************

INFO - Waiting [30] seconds for CUDB OI to commit changes.

INFO - Checking if OI changes file exists. INFO - Done.
INFO - Reading OI changes file [/home/cudb/oam/configMgmt/commands/config/cudbOiImmChanges.txt]...

INFO - OI changes file successfully read, parsing changes...
INFO - Processing non info operations start.
INFO - Processing info operations start.
INFO - Processing info operations end.
INFO - OI changes file successfully parsed.
INFO - Done.

INFO - Retrieving current redundancy level.
INFO - Done.

INFO - Checking live hosts...

PING OAM1 (10.22.49.1) 56(84) bytes of data.

--- OAM1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.004/0.038/0.121/0.048 ms, ipg/ewma 0.235/0.084 ms
PING OAM2 (10.22.49.2) 56(84) bytes of data.

--- OAM2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3ms
rtt min/avg/max/mdev = 0.156/0.272/0.605/0.192 ms, ipg/ewma 1.274/0.458 ms
PING 10.22.49.3 (10.22.49.3) 56(84) bytes of data.

--- 10.22.49.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.099/0.184/0.379/0.115 ms, ipg/ewma 0.269/0.292 ms
PING 10.22.49.4 (10.22.49.4) 56(84) bytes of data.

--- 10.22.49.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.118/0.162/0.280/0.068 ms, ipg/ewma 0.224/0.228 ms
PING 10.22.49.5 (10.22.49.5) 56(84) bytes of data.

--- 10.22.49.5 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.128/0.149/0.209/0.037 ms, ipg/ewma 0.201/0.183 ms
PING 10.22.49.6 (10.22.49.6) 56(84) bytes of data.

--- 10.22.49.6 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.102/0.127/0.191/0.037 ms, ipg/ewma 0.191/0.162 ms
PING 10.22.49.7 (10.22.49.7) 56(84) bytes of data.

--- 10.22.49.7 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.101/0.173/0.300/0.076 ms, ipg/ewma 0.234/0.243 ms
PING 10.22.49.8 (10.22.49.8) 56(84) bytes of data.

--- 10.22.49.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.120/0.143/0.196/0.030 ms, ipg/ewma 0.175/0.172 ms
PING 10.22.49.9 (10.22.49.9) 56(84) bytes of data.

--- 10.22.49.9 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.100/0.129/0.209/0.047 ms, ipg/ewma 0.176/0.174 ms
PING 10.22.49.10 (10.22.49.10) 56(84) bytes of data.

--- 10.22.49.10 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.096/0.170/0.244/0.055 ms, ipg/ewma 0.184/0.186 ms
INFO - Hosts checked. Live control blades [2] - live PL blades [4] live DS blades [4].
INFO - Done.

INFO - Reading configuration file [/home/cudb/oam/configMgmt/commands/config/sqlSentences.cfg]...
INFO - Configuration file parsing started.
INFO - Configuration file parsing finished.
INFO - Generating SQL sentences...
INFO - SQL sentences generated.
INFO - Done.

INFO - Reading configuration file [/home/cudb/oam/configMgmt/commands/config/commands2Launch.cfg]...
INFO - Configuration file parsing started....
INFO - Configuration file parsing finished.
INFO - Generating commands to launch...
INFO - Commands to launch generated.
INFO - Reading files to detect...
INFO - Reading configuration file [/home/cudb/oam/configMgmt/commands/config/files2Detect.cfg]...
INFO - Configuration file parsing started....
INFO - Configuration file parsing finished.
INFO - Adding commands to execute...
INFO - Done.

INFO - Reading configuration file [/home/cudb/oam/configMgmt/commands/config/procs2Stop.cfg]...
INFO - Configuration file parsing started.
INFO - Configuration file parsing finished.
INFO - Building list of processes affected...
INFO - List of processes affected built.
INFO - Done.

INFO - *************************** PREPARATION PHASE ENDS *****************************
INFO - *************************** PREPARATION PHASE ENDS *****************************
INFO - *************************** PREPARATION PHASE ENDS *****************************
INFO - *************************** PREPARATION PHASE ENDS *****************************

INFO - *************************** EXECUTION PHASE STARTS *****************************
INFO - *************************** EXECUTION PHASE STARTS *****************************
INFO - *************************** EXECUTION PHASE STARTS *****************************
INFO - *************************** EXECUTION PHASE STARTS *****************************

INFO - SQL queries executed.
INFO - Done.

INFO - Done.

INFO - Done.

INFO - Launching commands...
INFO - Commands executed.
INFO - Done.

INFO - Executing processes restart procedure...
INFO - Processes restart procedure executed.
INFO - Done.


INFO - OI changes file moved to backup.
INFO - If it is needed to execute this command again for the same OI changes set rename the file [/home/cudb/oam/configMgmt/commands/config/cudbOiImmChanges.txt.bak] to [/home/cudb/oam/configMgmt/commands/config/cudbOiImmChanges.txt].

INFO - ***************************** EXECUTION PHASE ENDS *****************************
INFO - ***************************** EXECUTION PHASE ENDS *****************************
INFO - ***************************** EXECUTION PHASE ENDS *****************************
INFO - ***************************** EXECUTION PHASE ENDS *****************************

2.2.4   cudbCheckConsistency

The cudbCheckConsistency command checks if the slave Data Store (DS) Units hold approximately as many rows in their tables as the corresponding master DS Unit in the same DS Unit Group (DSG). The lightweight consistency check also applies to PLDB units. The check is considered failed if the row count difference of any Structured Query Language (SQL) table exceeds a configured percentage threshold. Tables with only a few rows can be skipped.

cudbCheckConsistency can be executed manually, but it also runs regularly as a cron job. When executed manually, the default configuration values can be overridden with command line options. When it finds a failure while running as a cron job, it raises an alarm on the node which hosts the failed slave DS Unit. For further information about alarms related to this command, refer to Storage Engine, Potential Data Inconsistency between Replicas Found in DS, Reference [8] and Storage Engine, Potential Data Inconsistency between Replicas Found in PLDB, Reference [9].

Note:  
If any changes are made in the configuration file on a CUDB node, then the exact same configuration changes must be performed on all the other CUDB nodes as well. This is needed because cudbCheckConsistency operates at the CUDB system level instead of the CUDB node level. After the configuration is updated in the whole system, the following command must be issued on every SC of every CUDB node:

/etc/init.d/cudbCheckConsistencySrv restart


2.2.4.1   Requisites

cudbOperator users have no access to run this command.

2.2.4.2   Syntax

2.2.4.3   Command Options

The following command options can be used:

2.2.4.4   Output

The following example output is displayed:

   [info] cudbCheckConsistency is running with options: '', MAXDIFF_LIMIT: 1.00%, CHECK_LIMIT: 100
   [info] Acquiring mastership information.
   [info] Checking consistency in slave DS units.
   [info]   Node42-DSG0 consistency: OK - row count difference 0.00%
   [info]   Node42-DSG1 consistency: OK - row count difference 0.00%
   [info]   Node42-DSG2 consistency: OK - row count difference 0.00%
   [info] Summary: slaves checked: 3 -> PASSED: 3, FAILED: 0, UNKNOWN: 0.

In this output, Node42-DSG0 refers to the PLDB.

2.2.4.5   Examples of Use

Check the following examples of executing the command:
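The following invocation is an illustrative sketch only, with limit values chosen for the example; it runs a manual check with a 2.00% row count tolerance and skips tables with fewer than 500 rows:

<CUDB_node_prompt> cudbCheckConsistency -m 2.00 -c 500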

2.2.4.6   cudbCheckConsistency Configuration File

The configuration file of the command is located in the following directory:

/home/cudb/monitoring/replicaConsistency/config/cudbCheckConsistency.conf

This file is directly included by a Bash shell script. The following parameters can be configured:

MAXDIFF_LIMIT=1.00

Maximum tolerable row count difference, in percent, between tables in the master DS Unit and the slave DS Unit. The difference is counted table by table, and if the difference is greater than the configured tolerance limit in any of the tables, the verdict is FAILED.

The value must be a floating point number with exactly two decimal digits, in the [ 0.00 ; 100.00 ] closed interval.

Default value: 1.00

Command line option: -m | --maxdiff-limit <LIMIT>

CHECK_LIMIT=100

If a table in the master DS Unit has fewer rows than this check limit, then the table is skipped.

The value must be an integer number in the [ 0 ; INT_MAX ) left-closed right-open interval, where INT_MAX is the constant defined in POSIX.1-2008 Base Specifications, Issue 7.

Default value: 100

Command line option: -c | --check-limit <LIMIT>

CRON_TIMESPEC='37 0 * * *'

Cron job time specification. If this value is changed, then the value must also be changed on every other CUDB node of the CUDB system, and the new value must be installed to cron on both SCs of every CUDB node. On a specific CUDB node, after saving the new value, the new settings can be installed to cron by issuing the following commands from one of the SCs:

ssh OAM1 '/etc/init.d/cudbCheckConsistencySrv restart'

ssh OAM2 '/etc/init.d/cudbCheckConsistencySrv restart'

The value must be either "disabled" or a valid cron job time specification.

Default value: '37 0 * * *'

Command line option: -
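Assuming the default values listed above, the configuration file is a plain Bash fragment; a minimal sketch of its contents is shown below:

MAXDIFF_LIMIT=1.00
CHECK_LIMIT=100
CRON_TIMESPEC='37 0 * * *'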

2.2.5   cudbCheckReplication

The cudbCheckReplication command checks if the active database cluster replication channels are functional in the CUDB system. cudbCheckReplication can be executed manually, but it also runs regularly as a cron job. When executed manually, the default configuration values can be overridden with command line options. When it finds a failure while running as a cron job, it raises an alarm on the node which hosts the failed slave DS Unit. For further information about alarms related to this command, refer to Storage Engine, Replication Stopped Working in DS, Reference [10], and Storage Engine, Replication Stopped Working in PLDB, Reference [11].

Note:  
If any changes are made in the configuration file on a CUDB node, then the exact same configuration changes must be performed on every other CUDB node as well. This is required because cudbCheckReplication operates on the CUDB system level instead of the CUDB node level. After the configuration is updated in the entire system, execute the below command on every SC of every CUDB node:

/etc/init.d/cudbCheckReplicationSrv restart


2.2.5.1   Requisites

cudbOperator users have no access to run this command.

2.2.5.2   Syntax

2.2.5.3   Command Options

The following command options can be used:

2.2.5.4   Output

The command can display the following example output:

   [info] cudbCheckReplication is running with options: '', BEHIND_LIMIT: 10
   [info] Acquiring mastership information.
   [info] Acquiring DSG status information.
   [info] Injecting verification data to master DS units.
   [info] Sleeping 10 seconds to wait for replication.
   [info] Checking replication in slave DS units.
   [info]   Node42-DSG0 replication: OK
   [info]   Node42-DSG1 replication: OK
   [info]   Node42-DSG2 replication: OK
   [info] Summary: channels checked: 3 -> PASSED: 3, FAILED: 0, UNKNOWN: 0.

2.2.5.5   Examples of Use

Use the command as shown below to check the replication channels with a wait limit of 5 seconds, and ensure that the command prints only basic progress information, and does not raise any alarm if a check fails:

<CUDB_node_prompt> cudbCheckReplication -b 5

2.2.5.6   cudbCheckReplication Configuration File

The configuration file of the command is located in the following directory:

/home/cudb/monitoring/replicaConsistency/config/cudbCheckReplication.conf

This file is directly included by a Bash shell script. The following parameters can be configured:

BEHIND_LIMIT=10

This setting assumes that any data injected into a master DS Unit is replicated to the slave DS Units within the defined number of seconds. If the verification data does not arrive within the number of seconds defined with BEHIND_LIMIT, the test verdict is FAILED. The value must be an integer in the [ 1 ; INT_MAX ) left-closed right-open interval, where INT_MAX is the constant defined in POSIX.1-2008 Base Specifications, Issue 7.

Default value: 10

Command line option: -b | --behind-limit <LIMIT>

CRON_TIMESPEC='7 0 * * *'

This setting is the cron job time specification. If this value is changed, then the value must also be changed in every other CUDB node of the CUDB system, and the new value must be installed to cron on both SCs of every CUDB node. On a specific CUDB node, after saving the new value, you can install the new settings to cron by issuing the following commands from one of the SCs:

ssh OAM1 '/etc/init.d/cudbCheckReplicationSrv restart'

ssh OAM2 '/etc/init.d/cudbCheckReplicationSrv restart'

The value must be either "disabled" or a valid cron job time specification.

Default value: '7 0 * * *'

Command line option: -

2.2.6   cudbCollectInfo

The cudbCollectInfo command is used to create a tarball of CUDB logs.

2.2.6.1   Requisites

cudbOperator users have no access to run this command.

Restrictions of the cudbCollectInfo command:

2.2.6.2   Syntax

cudbCollectInfo [-n | --node <node id>] [-d | --ds <dsg id>] [-a | --action <action name>] [-c | --no-compress] [-e | --no-encrypt] [-h | --help] [-x | --exit-on-error]

2.2.6.3   Command Options

The following command options can be used:

2.2.6.4   Output

The following list contains some example output messages:

Creating dir for node 1 (/local/tmp/cudb_collect_info_20120412-104337/1) ... OK 

 Waiting 1 to finish ... OK 


 CREATING ARCHIVE ... OK 
 Encrypting archive ... OK 
 Removing unencrypted archive ... OK 


 REMOVING RAW DATA ... OK 


 Fetch the file: /local/tmp/cudb_collect_info_20120412-104337.c

In case cudbCollectInfo is already running on a local node or on a remote node (from another terminal or from another node in the system), the following output messages are printed to the console:

cudbCollectInfo is already running on this node. 
cudbCollectInfo is already running on node <Node ID where the command is executing>.
cudbCollectInfo is already running for node <target node ID> on node 
<Node ID where the command is executing>.
Note:  
These last three outputs are displayed if cudbCollectInfo is already running somewhere in the CUDB system. If no other cudbCollectInfo process is running, this output is displayed because a previous cudbCollectInfo process finished with an unexpected error and its lock file was not removed from the CUDB node where it was executing. To unlock cudbCollectInfo, execute the following command on the CUDB nodes where the lock file is present:

rm -rf /cluster/tmp/cudbcollectinfo*

Do not delete this file while the command is running, to prevent unexpected behavior.


2.2.6.5   Examples of Use

Use the command as shown below to perform all log collection actions on Node 39:

<CUDB_node_prompt> cudbCollectInfo -n 39 -a all

Use the command as shown below to perform all log collection actions on all nodes of the system, disabling the GPG encryption of the tgz archive:

<CUDB_node_prompt> cudbCollectInfo -e

2.2.7   cudbConsistencyMgr

The cudbConsistencyMgr command manages consistency check tasks in the CUDB system. Refer to CUDB Consistency Check, Reference [12] for more information on how to use the command and perform consistency check tasks.

2.2.7.1   Requisites

The user permissions of executing cudbConsistencyMgr are as follows:

2.2.7.2   Syntax

2.2.7.3   Command Options

The following command options can be used:

Note:  
Either the -p | --pl or the -d | --dsg command option must be specified when executing the command.

2.2.7.4   Output

The output of the cudbConsistencyMgr command depends on the specified command options. Some examples are provided below.

Scheduling-Related Output

If the scheduling is successful, then cudbConsistencyMgr prints the task ID as shown below, which can be used to track the task:

Task UTC_2014-09-23-17-05-42_N242_U0000001955 is put into the pending task list and will be executed on node 242.

Task Listing-Related Output

The output of the command uses two lists: the PTL (pending task list) lists the scheduled tasks, while the RTL lists the currently running check tasks. Examples of both lists are shown below:

[Site 1]

        RTL:
                UTC_2014-09-09-11-00-10_N121_U0000000849
                        checkType=ms,source=S1-N121-D1,check=S2-N122-D1,maxReplicaLag=2000,alarmSeverityLimit=500,\
                        verboseMode=off,debugMode=off
        PTL:

[Site 2]

        RTL:
                UTC_2014-09-09-11-00-16_N122_U0000000793
                        checkType=ms,source=S2-N122-D2,check=S1-N121-D2,maxReplicaLag=2000,alarmSeverityLimit=500,\
                        verboseMode=off,debugMode=off
        PTL:
                UTC_2014-09-09-11-00-17_N122_U0000000794
                        checkType=ms,source=S2-N122-D2,check=S1-N121-D2,maxReplicaLag=2000,alarmSeverityLimit=500,\
                        verboseMode=off,debugMode=off
                UTC_2014-09-09-11-00-21_N122_U0000000795
                        checkType=ms,source=S2-N122-D3,check=S1-N121-D3,maxReplicaLag=2000,alarmSeverityLimit=500,\
                        verboseMode=off,debugMode=off
                UTC_2014-09-09-11-00-22_N122_U0000000796
                        checkType=ms,source=S2-N122-D3,check=S1-N121-D3,maxReplicaLag=2000,alarmSeverityLimit=500,\
                        verboseMode=off,debugMode=off
                UTC_2014-09-09-11-00-23_N122_U0000000797
                        checkType=ms,source=S2-N122-D3,check=S1-N121-D3,maxReplicaLag=2000,alarmSeverityLimit=500,\
                        verboseMode=off,debugMode=off
                UTC_2014-09-09-11-00-23_N122_U0000000798
                        checkType=ms,source=S2-N122-D3,check=S1-N121-D3,maxReplicaLag=2000,alarmSeverityLimit=500,\
                        verboseMode=off,debugMode=off
                UTC_2014-09-09-11-00-23_N122_U0000000799
                        checkType=ms,source=S2-N122-D3,check=S1-N121-D3,maxReplicaLag=2000,alarmSeverityLimit=500,\
                        verboseMode=off,debugMode=off

Task History-Related Output

The output in this case consists of log lines describing the following information:

The results are ordered by time, and are listed from the oldest to the newest, as shown below:

2014-09-23 15:53:49+02:00 SC_2_1 UTC_2014-09-23-13-53-48_N243_U0000000586: task found in PTL
checkType=ms,source=S2-N243-D1,check=S1-N242-D1,maxReplicaLag=2000,alarmSeverityLimit=500,verboseMode=off,debugMode=off

2014-09-23 15:53:49+02:00 SC_2_1 UTC_2014-09-23-13-53-48_N243_U0000000586: task execution started

2014-09-23 15:54:00+02:00 SC_2_1 UTC_2014-09-23-13-53-48_N243_U0000000586: task execution finished with result:
Successful completion, no difference found. Exit code: 0.

2.2.7.5   Examples of Use

Check the following examples of executing the command:

2.2.8   cudbDataBackup

The cudbDataBackup command executes a data backup in the complete CUDB system. Refer to CUDB Backup and Restore Procedures, Reference [13] for further details about when to use it.

2.2.8.1   Requisites

cudbOperator users have no access to run this command.

2.2.8.2   Syntax

2.2.8.3   Command Options

The following command options can be used:

2.2.8.4   Output

The command output is a log of the performed changes. The command result is provided at the end of the log.

2.2.8.5   Examples of Use

The following example procedure shows how to execute cudbDataBackup with the -S | --Slack-backup option:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the cudbDataBackup command as follows:

    <CUDB_node_prompt> cudbDataBackup -S 5

    The expected output must be similar to the below example:

    /home/cudb/systemDataBackup
    Listening for current PLDB and DSGs status reports (may take up to 2 minutes)
    Starting backup...
    Before calling pg_notification
    cudb_backup ver(1.3.4)
    BEGIN MANAGEMENT FOR LAST BACKUP Backup
    2013-10-31_15-09 finished successfully in :
    PLDB in CUDB node 27
    NDB node PL0
    NDB node PL1
    DS#2 in CUDB node 27
    NDB node DS2_0
    NDB node DS2_1
    DS#1 in CUDB node 29
    NDB node DS1_0
    NDB node DS1_1
    Attempting to de-block Provisioning Gateway. This may take up to a couple of minutes.

2.2.9   cudbDbDiskManage

The cudbDbDiskManage command manages the space of the Binary Large Object (BLOB) attributes stored on the disk storage system for certain CUDB object classes.

2.2.9.1   Requisites

cudbOperator users have no access to run this command.

2.2.9.2   Syntax

2.2.9.3   Command Options

The following command options can be used:

2.2.9.4   Output

The command reports OK in case of success, and an error in case of failure.

2.2.9.5   Examples of Use

The following example procedure shows how to allocate a set amount of space on the disk storage system for a BLOB object with cudbDbDiskManage:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the command as follows:

    <CUDB_node_prompt> cudbDbDiskManage -a <BLOB_object_name_stored_on_disk> <Allocated_disk_space_MB>

    The expected output of the above command must look similar to the below example:

    Obtaining SQL Servers Information ...
    NODE <DS_ID or PL_ID> : <type>
    Connection strings : <mysql ip:port> <mysql ip:port>
    Disk space assigned to the object class <BLOB_object_name_stored_on_disk> before increase : <x> Megabytes
    Disk space assigned to the object class <BLOB_object_name_stored_on_disk> after increase : <x> Megabytes

    Note:  
    The value of <x> in the above output must be greater than or equal to the value defined with <Allocated_disk_space_MB>.

2.2.10   cudbDsgMastershipChange

The cudbDsgMastershipChange command is used to move the master of a DSG to the selected node. It must be launched on the CUDB node that will hold the new master replica.

When executed, the command first checks the PLDB replication status on the node where the command is invoked. If the PLDB is unable to synchronize or replication status info is not available, the mastership change will be rejected. After this, the command checks if the new master replica has a longer delay than the time specified by the --time parameter: if it does, it rejects the mastership change. Then, for a period of four seconds, all write operations toward the current master are rejected to allow the new master replica to catch up with the current master. After this four-second period passes, the command checks if the new master is completely synchronized with the current master to avoid any potential data loss. If the check succeeds, the master of the specified DSG or PLDB is changed to the node.

In case Automatic Mastership Change (AMC) is active, the system periodically tries to move the master of each DSG to a DSG replica with higher priority, unless the master was intentionally moved to a degraded replica.

To avoid undesired master movements, disable the AMC feature. Refer to the "Automatic Mastership Change" section of CUDB High Availability, Reference [15] for more information.

2.2.10.1   Requisites

cudbOperator users have no access to run this command.

2.2.10.2   Syntax

2.2.10.3   Command Options

The following command options can be used:

2.2.10.4   Output

The possible outputs for the command are as follows:

2.2.10.5   Examples of Use

The following example procedure shows how to change the master of DSG 2 with cudbDsgMastershipChange:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the command as follows:

    <CUDB_node_prompt> cudbDsgMastershipChange -d 2

    The expected output must be similar to the below example:

    Processing DSG mastership change...
    cudbDsgMastershipChange: Success.
    DSG 2 master replica moved from node 80 to 81 successfully.

2.2.11   cudbDsgProvisioningManage

The cudbDsgProvisioningManage command is used to disable or enable a specific DSG during the provisioning of Distribution Entries (DEs). When using the command, consider the following:

2.2.11.1   Requisites

cudbOperator users have no access to run this command.

2.2.11.2   Syntax

2.2.11.3   Command Options

The following command options can be used:

2.2.11.4   Output

An example output of the command is shown below:

cudbDsgProvisioningManage: Failed.
Invalid syntax

cudbDsgProvisioningManage: Failed.
Unrecognized command line argument or arguments

cudbDsgProvisioningManage: Failed.
DSG %d is not valid

cudbDsgProvisioningManage: Failed.
Unexpected error.

cudbDsgProvisioningManage: Success.
DSG %d is already enabled.

cudbDsgProvisioningManage: Success.
DSG %d is already disabled.

cudbDsgProvisioningManage: Success.
DSG %d is enabled for provisioning.

cudbDsgProvisioningManage: Success.
DSG %d is disabled for provisioning.

2.2.11.5   Examples of Use

The following example procedure shows how to enable provisioning on DSG 1 with cudbDsgProvisioningManage:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the command as follows:

    <CUDB_node_prompt> cudbDsgProvisioningManage --enable 1

    The expected output must be similar to the below example:

    cudbDsgProvisioningManage: Success

2.2.12   cudbGetLogs

The cudbGetLogs command collects preventive maintenance logs for log analysis. For more information about this command, refer to CUDB Logchecker, Reference [6].

2.2.12.1   Requisites

cudbOperator users have no access to run this command.

2.2.12.2   Syntax

cudbGetLogs [-d | --debug] [-b | --bash-debug] [-c | --consistency-threshold <threshold>] [-f | --force-c-check] [-h | --help]

2.2.12.3   Command Options

The following command options can be used:

2.2.12.4   Output

An example output for the command is provided below:

Starting /opt/ericsson/cudb/OAM/bin/cudbGetLogs ...
Grepping logs and creating /home/cudb/monitoring/preventiveMaintenance/CUDB_107_201201241136.log ...
The log file is saved as : /home/cudb/monitoring/preventiveMaintenance/CUDB_107_201201241136.log

2.2.12.5   Examples of Use

2.2.13   cudbHaState

The cudbHaState command prints the cluster status.

2.2.13.1   Requisites

cudbOperator users have no access to run this command.

2.2.13.2   Syntax

cudbHaState

2.2.13.3   Command Options

Not applicable.

2.2.13.4   Output

The output shows the LDE and AMF cluster states, the CoreMW and COM states, and the SI and SU states. The output must be similar to the below example:

LOTC cluster uptime:
--------------------
Sat Apr 20 18:33:30 2013
LOTC cluster state:
-------------------
Node safNode=SC_2_1 joined cluster | Sat Apr 20 18:33:40 2013
Node safNode=SC_2_2 joined cluster | Sat Apr 20 18:33:30 2013
Node safNode=PL_2_3 joined cluster | Sat Apr 20 18:35:39 2013
Node safNode=PL_2_4 joined cluster | Sat Apr 20 18:35:43 2013
Node safNode=PL_2_5 joined cluster | Sat Apr 20 18:35:43 2013
Node safNode=PL_2_6 joined cluster | Sat Apr 20 18:35:43 2013
Node safNode=PL_2_7 joined cluster | Sat Apr 20 18:35:48 2013
Node safNode=PL_2_8 joined cluster | Sat Apr 20 18:35:48 2013
Node safNode=PL_2_9 joined cluster | Sat Apr 20 18:35:53 2013
Node safNode=PL_2_10 joined cluster | Sat Apr 20 18:35:48 2013
AMF cluster state:
------------------
saAmfNodeAdminState."safAmfNode=SC-1,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=SC-1,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=SC-2,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=SC-2,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-3,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-3,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-4,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-4,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-5,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-5,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-6,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-6,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-7,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-7,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-8,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-8,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-9,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-9,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-10,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-10,safAmfCluster=myAmfCluster": Enabled
CoreMW HA state:
----------------
CoreMW is assigned as ACTIVE in controller SC-2
CoreMW is assigned as STANDBY in controller SC-1
COM state:
----------
COM is assigned as ACTIVE in controller SC-2
COM is assigned as STANDBY in controller SC-1
SI HA state:
------------
saAmfSISUHAState."safSu=SC-2,safSg=2N,safApp=ERIC-CUDB_CUDBOI"."safSi=2N-1": active(1)
saAmfSISUHAState."safSu=SC-1,safSg=2N,safApp=ERIC-CUDB_KPICENTRAL"."safSi=2N-1": active(1)
saAmfSISUHAState."safSu=SC-2,safSg=DS1_2N,safApp=ERIC-CUDB_CS"."safSi=DS1_2N-1": active(1)
saAmfSISUHAState."safSu=SC-2,safSg=DS2_2N,safApp=ERIC-CUDB_CS"."safSi=DS2_2N-1": active(1)
saAmfSISUHAState."safSu=SC-2,safSg=PLDB_2N,safApp=ERIC-CUDB_CS"."safSi=PLDB_2N-1": active(1)
saAmfSISUHAState."safSu=SC-2,safSg=2N,safApp=ERIC-CUDB_LDAPFE_MONITOR"."safSi=2N-1": active(1)
saAmfSISUHAState."safSu=SC-1,safSg=2N,safApp=ERIC-CUDB_CUDBOI"."safSi=2N-1": standby(2)
saAmfSISUHAState."safSu=SC-1,safSg=2N,safApp=ERIC-CUDB_LDAPFE_MONITOR"."safSi=2N-1": standby(2)
saAmfSISUHAState."safSu=SC-1,safSg=2N,safApp=ERIC-CUDB_MSGSRV_MONITOR"."safSi=2N-1": active(1)
saAmfSISUHAState."safSu=SC-1,safSg=DS2_2N,safApp=ERIC-CUDB_CS"."safSi=DS2_2N-1": standby(2)
saAmfSISUHAState."safSu=SC-1,safSg=DS1_2N,safApp=ERIC-CUDB_CS"."safSi=DS1_2N-1": standby(2)
saAmfSISUHAState."safSu=SC-1,safSg=PLDB_2N,safApp=ERIC-CUDB_CS"."safSi=PLDB_2N-1": standby(2)
saAmfSISUHAState."safSu=PL-3,safSg=2N,safApp=ERIC-CUDB_SOAP_NOTIFIER"."safSi=2N-1": active(1)
saAmfSISUHAState."safSu=PL-4,safSg=2N,safApp=ERIC-CUDB_SOAP_NOTIFIER"."safSi=2N-1": standby(2)
SU States:
----------
Status OK

2.2.13.5   Examples of Use

<CUDB_node_prompt> cudbHaState

2.2.14   cudbLdapFeRestart

The cudbLdapFeRestart command is used to perform a rolling restart on the LDAP Front Ends (FEs).

By default, the LDAP FE processes in the node are restarted one by one. An LDAP FE process is not restarted until the previous one is fully operative again (waiting up to a maximum of 120 seconds). If the restarted LDAP FE process is not fully operative within 120 seconds, the restart of the LDAP FEs is aborted. This ensures that the LDAP connections are equally balanced between all LDAP FEs after executing the command.

Note:  
The current connections of the LDAP FEs are dropped when the processes are restarted.

2.2.14.1   Requisites

cudbOperator users have no access to run this command.

2.2.14.2   Syntax

cudbLdapFeRestart [-f | --force] [-p | --parallel [--no-prompt]] [-h | --help]

2.2.14.3   Command Options

The following command options can be used:

Caution!

If a forced restart is executed and there is an error in the LDAP FE configuration, all LDAP FEs will be down.

2.2.14.4   Output

The command output is a log of the command progress. The result of the command is provided at the end of the log.

2.2.14.5   Examples of Use

The following example procedure shows how to execute cudbLdapFeRestart:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the command as follows:

    <CUDB_node_prompt> cudbLdapFeRestart

    The expected output must be similar to the below example:

    Reading necessary data from /home/cudb/common/config/cudbSystem.xml
    Restarting slapd process in host 10.22.89.3
    Stopping slapd...OK
    Making sure cudbLdapFeMonitor is running (to start slapd)...OK
    Waiting for slapd to be ready...OK
    slapd ready after 24 seconds
    ------------------------------------
    Restarting slapd process in host 10.22.89.4
    Stopping slapd...OK
    Making sure cudbLdapFeMonitor is running (to start slapd)...OK
    Waiting for slapd to be ready...OK
    slapd ready after 56 seconds
    ------------------------------------
    Restarting slapd process in host 10.22.89.5
    Stopping slapd...OK
    Making sure cudbLdapFeMonitor is running (to start slapd)...OK
    Waiting for slapd to be ready...OK
    slapd ready after 10 seconds
    ------------------------------------
    Restarting slapd process in host 10.22.89.6
    Stopping slapd...OK
    Making sure cudbLdapFeMonitor is running (to start slapd)...OK
    Waiting for slapd to be ready...OK
    slapd ready after 12 seconds
    ------------------------------------
    Restarting slapd process in host 10.22.89.7
    Stopping slapd...OK
    Making sure cudbLdapFeMonitor is running (to start slapd)...OK
    Waiting for slapd to be ready...OK
    slapd ready after 12 seconds
    ------------------------------------
    Restarting slapd process in host 10.22.89.8
    Stopping slapd...OK
    Making sure cudbLdapFeMonitor is running (to start slapd)...OK
    Waiting for slapd to be ready...OK
    ready after 11 seconds
    ------------------------------------
    Restarting slapd process in host 10.22.89.9
    Stopping slapd...OK
    Making sure cudbLdapFeMonitor is running (to start slapd)...OK
    Waiting for slapd to be ready...OK
    slapd ready after 7 seconds
    ------------------------------------
    Restarting slapd process in host 10.22.89.10
    Stopping slapd...OK
    Making sure cudbLdapFeMonitor is running (to start slapd)...OK
    Waiting for slapd to be ready...OK
    slapd ready after 15 seconds
    ------------------------------------
    Restarting of slapd processes is finished

2.2.15   cudbManageBCServer

The cudbManageBCServer command manages the Blackboard Coordination (BC) servers in the CUDB node. Refer to CUDB High Availability, Reference [15] for further information about BC servers and the BC cluster.

2.2.15.1   Requisites

This command acts on the BC server running on the blade or Virtual Machine (VM) where it is executed.

2.2.15.2   Syntax

2.2.15.3   Command Options

The following command options can be used:

2.2.15.4   Output

When executed with the -checkMode option, the command returns an output similar to the below example:

cudbManageBCServer -checkMode
Actual Host=SC_2_1
Mode: follower

2.2.15.5   Examples of Use

<CUDB_node_prompt> cudbManageBCServer -checkMode

2.2.16   cudbManageLibDataDist

The cudbManageLibDataDist command checks if a specific CUDB node is using the default distribution policy or a custom one. Refer to CUDB System Administrator Guide, Reference [5] for further details about using the command.

2.2.16.1   Requisites

cudbOperator users have no access to run this command.

2.2.16.2   Syntax

cudbManageLibDataDist [-l | --load] [-0 | --unload] [-s | --status] [-h | --help]

2.2.16.3   Command Options

The following command options can be used:

2.2.16.4   Output

The command must return an output similar to the below example:

Customer distribution policy active

2.2.16.5   Examples of Use

The following example procedure shows how to check the status of the distribution policy with cudbManageLibDataDist -s:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the command as follows:

    <CUDB_node_prompt> cudbManageLibDataDist -s

    The expected output must be similar to the below example:

    CUDB default distribution policy active

2.2.17   cudbManageMsgSrvServer

The cudbManageMsgSrvServer command manages the Messaging Service (MsgSrv) servers in the CUDB node. Refer to CUDB High Availability, Reference [15] for further information about the MsgSrv servers.

2.2.17.1   Requisites

This command acts on the MsgSrv server running on the blade or VM where it is executed.

2.2.17.2   Syntax

2.2.17.3   Command Options

The following command options can be used:

2.2.17.4   Output

The result of the command depends on the option used. For example:

2.2.17.5   Examples of Use

<CUDB_node_prompt> cudbManageMsgSrvServer status

2.2.18   cudbManageStore

The cudbManageStore command executes administrative orders over clusters installed in the CUDB node. Refer to CUDB System Administrator Guide, Reference [5] for more information about using the command.

2.2.18.1   Requisites

The requisites of executing this command are as follows:

2.2.18.2   Syntax

2.2.18.3   Command Options

The following command options can be used:

Note:  
The -l | --location, -S | --Script-path, -n | --no-start-replication and -s | --restore-stored-procedures options are valid only for the restore order.

Attention!

Some DSG cluster manipulation orders, such as starting a stopped cluster or a cluster in maintenance status, may raise or clear several SNMP alarms for a short period of time. However, once the order is executed, the alarms reach a stable state that is coherent with the resulting system state.

2.2.18.4   Output

An example of command output is shown below:

cudbManageStore stores to process: pl.
Store pl is in ready mode.
cudbManageStore command successful.

2.2.18.5   Examples of Use

The following examples show how to execute cudbManageStore. Before all command executions, the following procedure must be performed:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

Some examples of using cudbManageStore are provided below:

2.2.19   cudbPmJobReload

Attention!

This command is deprecated. To restart the ESA Performance Management (PM) agent, execute /sbin/service esapma restart in both SCs instead.
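For illustration, assuming the OAM1 and OAM2 aliases used elsewhere in this document for the two SCs, the restart can be issued from one SC as follows:

ssh OAM1 '/sbin/service esapma restart'

ssh OAM2 '/sbin/service esapma restart'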

The cudbPmJobReload command restarts the PM agent to allow it to pick up a new configuration. Refer to CUDB System Administrator Guide, Reference [5] for more information on when to use it.

2.2.19.1   Requisites

The requisites of using the command are as follows:

2.2.19.2   Syntax

cudbPmJobReload [-t | --time <timeout>] [-D | --Debug] [-u | --user-facility <facility_level>] [-h | --help]

2.2.19.3   Command Options

The following command options can be used:

2.2.19.4   Output

When executed, the command must return an output similar to the below example:

Stopping PmAgent in node 10.22.27.10 ... OK
Waiting for ESA PmAgent to go off line.
ESA PmAgent has been successfully stopped.
Starting PmAgent in node 10.22.27.10 ... OK

2.2.19.5   Examples of Use

The following example shows how to execute cudbPmJobReload:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the command as follows:

    <CUDB_node_prompt> cudbPmJobReload

    The expected output must be similar to the below example:

    Stopping PmAgent in node 10.22.28.1 ... OK
    Waiting for ESA PmAgent to go off line.
    ESA PmAgent has been successfully stopped.
    Starting PmAgent in node 10.22.28.1 ... OK

2.2.20   cudbPrepareStore

The cudbPrepareStore command creates databases and tables for the PLDB or the specified DS Unit. Refer to CUDB System Administrator Guide, Reference [5] for further details on when to use it.

2.2.20.1   Requisites

cudbOperator users have no access to run this command.

2.2.20.2   Syntax

2.2.20.3   Command Options

The following command options can be used:

2.2.20.4   Output

An example output of the command is shown below:

Starting
Obtaining SQL Servers Information ...
Done
Skipping Step 1 ...
Executing Step 2 ...
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/cudb-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/identities-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/msc-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/csps-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/nph-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/auth-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ftest-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/avg-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/eps-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ims-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/sm-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ldap-indexes-ds.sql on 10.22.23.5:15010
..ok
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/cudb-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/identities-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/msc-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/csps-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/nph-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/auth-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ftest-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/avg-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/eps-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ims-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/sm-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ldap-indexes-ds.sql on 10.22.23.6:15010
..ok
Done
Finished

2.2.20.5   Examples of Use

The following example shows how to prepare the databases and tables on DS1 with cudbPrepareStore in application scope:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the command as follows:

    <CUDB_node_prompt> cudbPrepareStore --ds 1 -s appsrv

The expected output must be similar to the below example:

Starting
Obtaining SQL Servers Information ...
Done
Skipping Step 1 ...
Executing Step 2 ...
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/cudb-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/identities-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/msc-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/csps-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/nph-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/auth-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ftest-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/avg-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/eps-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ims-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/sm-ds.sql on 10.22.23.5:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ldap-indexes-ds.sql on 10.22.23.5:15010
..ok
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/cudb-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/identities-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/msc-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/csps-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/nph-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/auth-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ftest-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/avg-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/eps-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ims-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/sm-ds.sql on 10.22.23.6:15010
..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ldap-indexes-ds.sql on 10.22.23.6:15010
..ok
Done
Finished

2.2.21   cudbReallocate

The cudbReallocate command performs the Subscription Reallocation (also known as "reallocation") of the Distribution Entries (DEs) between DSGs. When executing the command, the complete DE subtree is moved. If one or more DSGs are disabled for provisioning of DEs, reallocation is not performed to those DSGs. If a DSG that is disabled for provisioning of DEs is specified as the reallocation destination, the reallocation is stopped. When the reallocation destination is not specified, the command selects one or more suitable DSGs that are not blocked for provisioning as reallocation targets. In both cases, reallocation can be forced with the optional force parameter.

Refer to CUDB Subscription Reallocation, Reference [18] for more information on when to use the command.

Use the reallocation command during low-traffic hours and run it repeatedly, with small chunks of subscribers (--list) or small percentages (--entriespercentage) at a time, to make sure that reallocating those small chunks of data fits into the low-traffic time window. A minimal example is sketched below.
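As a minimal sketch, with the DSG identifiers and the percentage chosen only for illustration, reallocating a small percentage of entries from one DSG to another could look as follows:

<CUDB_node_prompt> cudbReallocate -p 5 -s 1 -d 2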

2.2.21.1   Requisites

The requisites of using the command are as follows:

2.2.21.2   Syntax

2.2.21.3   Command Options

The following command options can be used:

2.2.21.4   Output

The following list contains some example output messages:

If the destination DSG is blocked for provisioning, an example output message is as follows:

2.2.21.5   Examples of Use

2.2.21.5.1   How to Check the Occupancy Percentage and Memory Level of All DSGs

The following example shows how to check the occupancy percentage and memory level of all DSGs in the system with the cudbReallocate command:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the command as follows:

    <CUDB_node_prompt> cudbReallocate --status

    The expected output must be similar to the below example:

    cudbReallocate: Success
    DSG1 occupancy: 40%, Above the warning level
    DSG2 occupancy: 40%, Below the stop level

    Note:  
    In case of error, the output file is generated with no moved entries.

2.2.21.5.2   How to Force Reallocation if One DSG is Blocked for Provisioning of DEs

The following example shows how to force reallocation if one DSG is blocked for provisioning of DEs and it is specified as reallocation destination:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the command as follows:

    <CUDB_node_prompt> cudbReallocate -p 20 -s 1 -d 255 -f

    The expected output must be similar to the below example:

    cudbReallocate: Failed.
    Destination DSG 255 is blocked for reallocation.

2.2.22   cudbReconciliationMgr

The cudbReconciliationMgr command is used to check, print, and modify the pending reconciliation task list. Refer to CUDB Data Storage Handling, Reference [19] for further details about when to use it.

2.2.22.1   Requisites

cudbOperator users have no access to run this command.

2.2.22.2   Syntax

2.2.22.3   Command Options

The following command options can be used:

2.2.22.4   Output

The command output is a log of the execution progress. The command result is provided at the end of the log.

2.2.22.5   Examples of Use

The following examples show how to execute cudbReconciliationMgr. Before all command executions, the following procedure must be performed:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

Some examples of using cudbReconciliationMgr are provided below:

2.2.23   cudbRemoteTrust

The cudbRemoteTrust command is used in scalability procedures during installation to establish new SSH trust relationships between the installed CUDB node and the rest of the CUDB nodes.

It is also used to disable the display of the legal warning banner for internal CUDB logins. Contact Ericsson personnel for more information.

2.2.23.1   Requisites

The requisites of using the command are as follows:

2.2.23.2   Syntax

cudbRemoteTrust [-l | --local] [-a | --all [-p | --persistent]] [-b | --banner] [-h | --help]

2.2.23.3   Command Options

The following command options can be used:

2.2.23.4   Output

Successful execution provides no output.

2.2.23.5   Examples of Use

The following example shows how to execute cudbRemoteTrust on every blade or VM of the current CUDB node:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the command as follows:

    <CUDB_node_prompt> cudbRemoteTrust -a
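
The legal warning banner for internal CUDB logins can be disabled with the -b | --banner option shown in the Syntax section. An illustrative invocation is shown below; as noted above, contact Ericsson personnel for more information before using this option:

<CUDB_node_prompt> cudbRemoteTrust --banner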

2.2.24   cudbResumeProvisioningNotification

The cudbResumeProvisioningNotification command is used to send a notification to the Provisioning Gateway (PG) to resume provisioning. Refer to Storage Engine, Backup Notification Failure To Provisioning Gateway, Reference [20] for further details about when to use it.

2.2.24.1   Requisites

cudbOperator users have no access to run this command.

2.2.24.2   Syntax

cudbResumeProvisioningNotification [-u | --user-facility <facility>] [-D |--debug] [-h |--help]

2.2.24.3   Command Options

The following command options can be used:

2.2.24.4   Output

The command output is a log of the changes performed. The command result is provided at the end of the log.

2.2.24.5   Examples of Use

The following example shows how to execute cudbResumeProvisioningNotification:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the command as follows:

    <CUDB_node_prompt> cudbResumeProvisioningNotification
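
If DEBUG-level logging is required, the -D | --debug option from the Syntax section can be added. An illustrative invocation is the following:

<CUDB_node_prompt> cudbResumeProvisioningNotification --debug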

2.2.25   cudbServiceContinuity

The cudbServiceContinuity command is used to change DSG mastership manually to a working site that has been left in a minority partition after a CUDB system split situation. The cudbServiceContinuity command maximizes service by allowing traffic operation with the resources available in the minority partition. New masters are elected for all DSGs configured in that site.

Note:  
PLDB mastership is not affected by the command.

Refer to CUDB High Availability, Reference [15] for further details about using the command.

2.2.25.1   Requisites

The requisites of using the command are as follows:

2.2.25.2   Syntax

cudbServiceContinuity [-h |--help]

2.2.25.3   Command Options

The following command option can be used:

2.2.25.4   Output

The command can return the following output messages:

cudbServiceContinuity
WARNING: This command will take all masters excluding PLDB in this site! Are you sure you want to continue (y/n)? y
The node /cudb/actions already existed
Set in path: /cudb/actions  DATA: serviceContinuity
Service Continuity successfully executed

cudbServiceContinuity
WARNING: This command will take all masters excluding PLDB in this site! Are you sure you want to continue (y/n)? n
No action executed

cudbServiceContinuity -h
===================================================
               USE of cudbserviceContinuity
===================================================
Format : cudbserviceContinuity [-h | --help]
Where:
               -h | --help            Shows command usage and help

cudbServiceContinuity
WARNING: This command will take all masters excluding PLDB in this site! Are you sure you want to continue (y/n)? y
SC can not be set in a majority (majority[1, 2, 3] AR[] NS[])
Command failed

cudbServiceContinuity
WARNING: This command will take all masters excluding PLDB in this site! Are you sure you want to continue (y/n)? y
SC can not be set in a evenSplit (evenSplit[2] AR[] NS[])
Command failed

Note:  
If the command fails for other reasons than the ones above, contact the next level of maintenance support.

2.2.25.5   Examples of Use

<CUDB_node_prompt> cudbServiceContinuity

2.2.26   cudbSwBackup

The cudbSwBackup command performs a software and configuration backup in the CUDB node. The command rotates software and configuration backup files automatically, and asks for confirmation when the oldest backup is about to be deleted. Refer to CUDB System Administrator Guide, Reference [5] for more information on using the command.

2.2.26.1   Requisites

The requisites of using the command are as follows:

2.2.26.2   Syntax

2.2.26.3   Command Options

The following command options can be used:

2.2.26.4   Output

The output of the command is a log of the performed actions. An example output is provided below:

Pack and compress all directories and files under /home/cudb/* except swbackup, systemDataBackup and automatedBackupStorage directories.

IMM data persisted
Obtaining SQL Servers Information ...
Creating tables backup into file /cluster/home/cudb/swbackup/newbackup-cudbSmpConfig.sql
Connecting to host 10.22.49.5:15000 , database: cudb_system_monitor
Let's tar the directory /cluster/home/cudb excluding backup directory itself /cluster/home/cudb/swbackup/

...

Let's execute the lde-brf command: cluster brf create -l newbackup -t system -m newbackup
The resulting file is saved in /cluster/storage/no-backup/newbackup.tar
Snapshot cleanup completed
newbackup/config.md5sum
newbackup/config.metadata
newbackup/config.tar.gz
newbackup/software.md5sum
newbackup/software.metadata
newbackup/software.tar.gz
Backup successfully created.
The backup files: newbackup, are located in directories:
/cluster/home/cudb/swbackup and
/cluster/storage/no-backup

2.2.26.5   Examples of Use

The following example shows how to create a new software and configuration backup (named newbackup) with cudbSwBackup:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the command as follows:

    <CUDB_node_prompt> cudbSwBackup -c newbackup

The expected output must be similar to the below example:

Let's execute the lde-brf command: cluster brf create -l newbackup -t system -m newbackup
The resulting files will be saved in /cluster/storage/no-backup/newbackup.tar
Snapshot cleanup completed
newbackup/config.md5sum
newbackup/config.metadata
newbackup/config.tar.gz
newbackup/software.md5sum
newbackup/software.metadata
newbackup/software.tar.gz
Backup successfully created.
The backup files: newbackup, are located in directories:
/cluster/home/cudb/swbackup and
/cluster/storage/no-backup

<CUDB_node_prompt> cudbSwBackup -i

CUDB SW Backups are stored in two directories:
/cluster/home/cudb/swbackup and
/cluster/storage/no-backup
The backup files are:
- One tar backup file
- One gz backup file
- One sql backup file (Only in nodes with PLDB)

The following example shows how to restore a previously created software and configuration backup (named newbackup) with cudbSwBackup:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the command as follows:

    <CUDB_node_prompt> cudbSwBackup --restore newbackup

The expected output must be similar to the below example:

WARNING: Restoring the CUDB software will replace previous software and configuration for the CUDB node. Are you sure you want to continue (y/n)?y
Check for ongoing or pending applyConfig...
Waiting for OIWorker to write to file and release lock.
Obtaining SQL Servers Information ...
Checking SQL connections...

2.2.27   cudbSwVersionCheck

The cudbSwVersionCheck command gathers information on the installed software packages from both the object model and LDE, and then prints the current installation state for each node in the LDE cluster. It also compares the current state to the official package list of the installed CUDB release, and shows the differences between the reference file and the currently installed packages.

The package information listing and the comparison of the installed packages can be requested separately with the use of the --packages and --compare options, respectively. If none of these options are used when executing the command, both actions are performed.

2.2.27.1   Requisites

cudbOperator users have no access to run this command.

2.2.27.2   Syntax

cudbSwVersionCheck [-p | --packages] [-c | --compare] [(-f | --reffile <REFFILE>) | (-d | --refdir <REFDIR>)] [-h | --help]

2.2.27.3   Command Options

The following command options can be used:

2.2.27.4   Output

An example output is provided below:

CUDB_67 SC_2_1# cudbSwVersionCheck
Checking SW on blades.......... ..........................................................

SW num |  2_1 |  2_2 ||  2_3 |  2_4 |  2_5 |  2_6 |  2_7 |  2_8 |  2_9 | 2_10 |
-------------------------------------------------------------------------------
     1 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
     2 |      |      ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
     3 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
     4 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
     5 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
     6 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
     7 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
     8 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
     9 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    10 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    11 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    12 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    13 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    14 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    15 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    16 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    17 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    18 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    19 |      |      ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    20 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    21 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    22 |      |      ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    23 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    24 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    25 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    26 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    27 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    28 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    29 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    30 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    31 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    32 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    33 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    34 |      |      ||  XO  |  XO  |      |      |      |      |      |      |
    35 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    36 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    37 |      |      ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    38 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    39 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    40 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    41 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    42 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    43 |      |      ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    44 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    45 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    46 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    47 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    48 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    49 |      |      ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    50 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    51 |      |  XO  ||      |      |      |      |      |      |      |      |
    52 |  XO  |  XO  ||  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |  XO  |
    53 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    54 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    55 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    56 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    57 |  XO  |  XO  ||      |      |      |      |      |      |      |      |
    58 |  XO  |  XO  ||      |      |      |      |      |      |      |      |

LEGEND:
        X: queried by the "cmw-repository-list --node <blade> <sw_string>" command
        O: queried by the "immlist <sw_string>" command

SDPs imported to the LOTC cluster:
----------------------------------

SW num |                                        SW string | Used |
------------------------------------------------------------------
     1 |                ERIC-COREMW_SC-CXP9017565_1-R5K01 |   XO |
     2 |            ERIC-LINUX_PAYLOAD-CXP9013152_3-R4F04 |   XO |
     3 |        ERIC-CUDB_SMBC_PROCESS-CXP9030366_6-R1A02 |   XO |
     4 |           ERIC-CUDB_BC_SERVER-CXP9030364_6-R1A01 |   XO |
     5 |           ERIC-CUDB_BC_CLIENT-CXP9030367_6-R1A02 |   XO |
     6 |      ERIC-CUDB_LDAPFE_MONITOR-CXP9015809_6-R1A10 |   XO |
     7 |       ERIC-CUDB_CBATMPPATCHES-CXP9030357_6-R1A10 |   XO |
     8 |           ERIC-COREMW_OPENSAF-CXP9017656_1-R5K01 |   XO |
     9 |         ERIC-CUDB_APPCOUNTERS-CXP9015318_6-R1A10 |   XO |
    10 |   ERIC-CUDB_BC_SERVER_MONITOR-CXP9030379_6-R1A01 |   XO |
    11 |            ERIC-COREMW_COMMON-CXP9017566_1-R5K01 |   XO |
    12 |                      ERIC-COM-CXP9017585_2-R6K01 |   XO |
    13 |                    ERIC-ComSa-CXP9017697_3-R2K01 |   XO |
    14 |                 ERIC-CUDB_ESA-CXP9015816_6-R1A09 |   XO |
    15 |             ERIC-CUDB_CURATOR-CXP9030365_6-R1A02 |   XO |
    16 |       ERIC-CUDB_BCCLUSTER_CMD-CXP9030380_6-R1A01 |   XO |
    17 |      ERIC-CUDB_BACKUP_RESTORE-CXP9015818_6-R1A10 |   XO |
    18 |          ERIC-CUDB_BACKUP_CMD-CXP9016194_6-R1A05 |   XO |
    19 |              ERIC-CUDB_LDAPFE-CXP9015319_6-R1A10 |   XO |
    20 |           ERIC-CUDB_KEEPALIVE-CXP9015322_6-R1A10 |   XO |
    21 |           ERIC-CUDB_CS_SC_CMD-CXP9030358_6-R1A10 |   XO |
    22 |           ERIC-CUDB_CS_PL_CMD-CXP9019599_6-R1A10 |   XO |
    23 |                  ERIC-CUDB_CS-CXP9015812_6-R1A10 |   XO |
    24 |      ERIC-CUDB_COUNTCOLLECTOR-CXP9015814_6-R1A07 |   XO |
    25 |         ERIC-CUDB_LIB_REALLOC-CXP9030372_6-R1A10 |   XO |
    26 |           ERIC-CUDB_LIB_BCCLI-CXP9030363_6-R1A01 |   XO |
    27 |                ERIC-CUDB_JAVA-CXP9015823_6-R1A04 |   XO |
    28 | ERIC-CUDB_MANAGE_BCSERVER_CMD-CXP9030381_6-R1A01 |   XO |
    29 |           ERIC-CUDB_LIB_BOOST-CXP9030227_6-R1A04 |   XO |
    30 |          ERIC-CUDB_LIB_ALARMS-CXP9030217_6-R1A10 |   XO |
    31 |           ERIC-CUDB_LIB_SMMCP-CXP9015821_6-R1A10 |   XO |
    32 |            ERIC-CUDB_LIB_LDAP-CXP9030233_6-R1A10 |   XO |
    33 |             ERIC-CUDB_LIB_AMF-CXP9017480_6-R1A10 |   XO |
    34 |     ERIC-CUDB_LDAP_SOAP_NOTIF-CXP9016181_6-R1A10 |   XO |
    35 |            ERIC-CUDB_LDAP_CMD-CXP9030234_6-R1A03 |   XO |
    36 |         ERIC-CUDB_REALLOC_CMD-CXP9030373_6-R1A10 |   XO |
    37 |          ERIC-CUDB_MYSQL_NDBD-CXP9030225_6-R1A02 |   XO |
    38 |        ERIC-CUDB_MYSQL_COMMON-CXP9030223_6-R1A10 |   XO |
    39 |          ERIC-CUDB_LIB_LOGGER-CXP9030219_6-R1A10 |   XO |
    40 |            ERIC-CUDB_LIB_CDXQ-CXP9030216_6-R1A10 |   XO |
    41 |       ERIC-CUDB_LIB_CONF_MGMT-CXP9030218_6-R1A10 |   XO |
    42 |          ERIC-CUDB_MYSQL_MGMD-CXP9030224_6-R1A02 |   XO |
    43 |         ERIC-CUDB_PL_PLATFORM-CXP9015820_6-R1A10 |   XO |
    44 |         ERIC-CUDB_LOADMONITOR-CXP9017650_6-R1A10 |   XO |
    45 |          ERIC-CUDB_LIB_XERCES-CXP9030220_6-R1A01 |   XO |
    46 |          ERIC-CUDB_LIB_ZKCCLI-CXP9030362_6-R1A01 |   XO |
    47 |            ERIC-CUDB_PDSH_CMD-CXP9030391_6-R1A01 |   XO |
    48 |                  ERIC-CUDB_OI-CXP9015813_6-R1A10 |   XO |
    49 |          ERIC-CUDB_MYSQL_SERV-CXP9030226_6-R1A02 |   XO |
    50 |      ERIC-CUDB_RECONCILIATION-CXP9015808_6-R1A10 |   XO |
    51 |         ERIC-CUDB_NODE_CONFIG-CXP9015320_6-R1A10 |   XO |
    52 |            ERIC-CUDB_SECURITY-CXP9016002_6-R1A10 |   XO |
    53 |           ERIC-CUDB_SYSHEALTH-CXP9030087_6-R1A10 |   XO |
    54 |         ERIC-CUDB_SC_PLATFORM-CXP9015817_6-R1A10 |   XO |
    55 |     ERIC-CUDB_SLES_SCREEN_CMD-CXP9030371_6-R1A01 |   XO |
    56 |            ERIC-LINUX_CONTROL-CXP9013151_3-R4F04 |   XO |
    57 |             ERIC-CUDB_SW_MGMT-CXP9015815_6-R1A10 |   XO |
    58 |           ERIC-CUDB_SM_CLIENT-CXP9015810_6-R1A10 |   XO |

SW Version of the CUDB Node:
----------------------------

  Reference: /home/coremw_appdata/incoming/cudb-install-temp/cudbReference
    Version: CUDB13B CXP9020214/6 R1A10
Differences:                                       reference <|> current sw version
                                                   --------------------------------
ERIC-CUDB_SM_PROCESS-CXP9015811_6-R1A10                       <

2.2.27.5   Examples of Use

Execute the command as follows to print the package installation status and compare the package installation state to the contents of the cudbReference file:

<CUDB_node_prompt> cudbSwVersionCheck --packages --compare
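
The two actions can also be requested separately, and the comparison can be run against a specific reference file with the -f | --reffile option. The following illustrative examples assume the reference file path shown in the Output section above:

<CUDB_node_prompt> cudbSwVersionCheck --packages
<CUDB_node_prompt> cudbSwVersionCheck --compare --reffile /home/coremw_appdata/incoming/cudb-install-temp/cudbReference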

2.2.28   cudbSystemDataBackupAndRestore

The cudbSystemDataBackupAndRestore command automates the system data backup and restore processes. In case of a kickstart command failure, the execution is not stopped. Furthermore, a retry mechanism exists that executes the kickstart command a maximum of two more times with a 5-second delay between attempts. The encountered errors are stored and displayed in a summary at the script exit. In case of error, the exit code of cudbSystemDataBackupAndRestore is the number of errors encountered during execution. The cudbSystemDataBackupAndRestore script exits when running kickstart_replication and leaves most of the slave DS replicas without replication started. Refer to CUDB Backup and Restore Procedures, Reference [13] for further details on when to use the command.

Note:  
Do not run the command cudbSystemDataBackupAndRestore while configuration changes are being applied.

Note:  
To avoid collision with the Self-Ordered Backup and Restore function, do not perform system data backup or restore if a Self-Ordered Backup and Restore process is running on any node of the system.

See Section 2.2.29 for further information about the status of the Self-Ordered Backup and Restore process.


2.2.28.1   Requisites

cudbOperator users have no access to run this command.

2.2.28.2   Syntax

2.2.28.3   Command Options

The following command options can be used:

2.2.28.4   Output

The following output examples are provided below:

CUDB_44 SC_2_2# cudbSystemDataBackupAndRestore -c

Checking /home/cudb/automatedBackupStorage on local node
--------------------------------


        Local node: ... EMPTY

Checking /home/cudb/systemDataBackup on nodes
--------------------------------


        44: ... EMPTY
        43: ... EMPTY

CREATE PART
--------------------------------


/home/cudb/systemDataBackup
Listening for current PLDB and DSGs status reports (may take upto 2 minutes)
Starting backup...
Before calling pg_notification 
BackupStart [INFO] - PS Notification trying stop notification was successfully sent.
PS Notification successfully sent to : 172.31.233.139
PS Notification failed to : NONE
PS not notified : NONE

cudb_backup     ver(1.3.4)

BEGIN MANAGEMENT FOR LAST BACKUP
Backup 2014-05-14_20-11 finished successfuly in :
PLDB in CUDB node 44
    NDB node PL0
    NDB node PL1
    NDB node PL2
    NDB node PL3
DSG#1 in CUDB node 43
    NDB node DS1_0
    NDB node DS1_1
DSG#255 in CUDB node 43
    NDB node DS2_0
    NDB node DS2_1
Attempting to de-block Provisioning Gateway. This may take up to a couple of minutes.

COLLECT PART
--------------------------------


        Copying backup piece from node 44 for dsg 0 for ndb node id 3 ... OK
        Copying backup piece from node 44 for dsg 0 for ndb node id 4 ... OK
        Copying backup piece from node 44 for dsg 0 for ndb node id 5 ... OK
        Copying backup piece from node 44 for dsg 0 for ndb node id 6 ... OK
        Copying backup piece from node 43 for dsg 1 for ndb node id 3 ... OK
        Copying backup piece from node 43 for dsg 1 for ndb node id 4 ... OK
        Copying backup piece from node 43 for dsg 255 for ndb node id 3 ... OK
        Copying backup piece from node 43 for dsg 255 for ndb node id 4 ... OK


Please copy /home/cudb/automatedBackupStorage to a safe location to be able to restore the backup pieces when needed!



ERROR SUMMARY
--------------------------------


None.
CUDB_210 SC_2_2# cudbSystemDataBackupAndRestore --restore /home/cudb/automatedBackupStorage

VALIDATE PART
--------------------------------


NODE: 210
        DSG: 0
                NDB: 3
                NDB: 4
                NDB: 5
                NDB: 6
        DSG: 1
                NDB: 3
                NDB: 4
        DSG: 255
                NDB: 3
                NDB: 4

DISTRIBUTE PART
--------------------------------



Creating /home/cudb/systemDataBackup on node 210

        Node 210 has DSG 0      -> copying ... id3-OK id4-OK id5-OK id6-OK
        Node 210 has DSG 1      -> copying ... id3-OK id4-OK
        Node 210 has DSG 255    -> copying ... id3-OK id4-OK
Creating /home/cudb/systemDataBackup on node 244

        Node 244 has DSG 0      -> copying ... id3-OK id4-OK id5-OK id6-OK
        Node 244 has DSG 1      -> copying ... id3-OK id4-OK
        Node 244 has DSG 255    -> copying ... id3-OK id4-OK
EXTRACT PART
--------------------------------


        Node 210 has DSG 0      -> extracting ... id3-OK id4-OK id5-OK id6-OK
        Node 210 has DSG 1      -> extracting ... id3-OK id4-OK
        Node 210 has DSG 255    -> extracting ... id3-OK id4-OK
        Node 244 has DSG 0      -> extracting ... id3-OK id4-OK id5-OK id6-OK
        Node 244 has DSG 1      -> extracting ... id3-OK id4-OK
        Node 244 has DSG 255    -> extracting ... id3-OK id4-OK

RESTORE PART
--------------------------------


Starting System Data Backup with command cudbDataRestore -B 2015-07-27_12-40 -L 2>&1 ...
cudbDataRestore     ver(1.3.6)

Trying for restore the backup from [2015-07-27_12-40] data files.

Error executing the restore command in PL/DS node in CUDBNode#244
Error executing the restore command in PL/DS node in CUDBNode210. Failed due parallel execution of cudbApplyConfig command.
Check logs and cudbApplyConfig.lock file in /home/cudb/oam/configMgmt/commands/config directory!
Restores process finished with error.



ERROR: unable to restore backup


ERROR SUMMARY
--------------------------------



ERROR: unable to restore backup
WARNING: System data restoration will execute for the complete CUDB system with the data from the system backup. 
Are you sure you want to continue (y/n)? y
ERROR SUMMARY
--------------------------------


Unable to kickstart pl on node 24
Unable to kickstart dsg 1 on node 24
Unable to kickstart dsg 2 on node 24

2.2.28.5   Examples of Use
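
The following illustrative examples reuse the invocations shown in the Output section above. The first checks the backup storage directories and creates and collects a new system data backup; the second restores a previously collected backup from the backup storage directory (the directory path is the one shown above and must point to the actual backup location):

<CUDB_node_prompt> cudbSystemDataBackupAndRestore -c
<CUDB_node_prompt> cudbSystemDataBackupAndRestore --restore /home/cudb/automatedBackupStorage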

2.2.29   cudbSystemStatus

The cudbSystemStatus command automatically prints the general system status and the local node information. The output contains information on the following components:

Note:  
If the cudbSystemStatus command is executed on a system with different hardware types, the command output will also contain information on the hardware types used in the system. This hardware information is listed below the CUDB software version output.

2.2.29.1   Requisites

This command can be executed by the cudbOperator user through the sudo command.

2.2.29.2   Syntax

cudbSystemStatus [-a | --alarms] [-s | --sm-status] [-c | --cluster-status] [-C | --new-cluster-status] [-m | --check-mysql] [-p | --check-cudbprocess] [-b | --bc-status] [-B | --new-bc-status] [-r | --replication-status] [-R | --new-replication-status] [-t | --hardware-type] [-v | --version] [-h | --help]

2.2.29.3   Command Options

The following command options can be used:

Note:  
If no options are used when executing the command, cudbSystemStatus runs as if the -B, -v, -s, -C, -R, -a, -m, and -p options had been used. If at least one command option is specified, the command collects only the information related to that option.

Also, if no options are used when executing the command on a system with different hardware types, cudbSystemStatus runs the -t | --hardware-type option in addition to the options listed above.
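
For example, to collect only the replication status information, a single option from the Syntax section can be specified (an illustrative invocation):

<CUDB_node_prompt> cudbSystemStatus --replication-status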


Note:  
In hybrid systems, where some DSGs consist of different hardware platforms, the memory usage values are recalculated for DSGs located on nodes with a higher configured system memory for the database cluster. The memory levels shown by the cudbSystemStatus command for those DSGs therefore differ from those reported by the database cluster client.

2.2.29.4   Output

The command shows information on the relevant system status. The BC cluster information shows the status of the BC clusters in all sites (if a node is disabled, the output shows Disabled instead of the status of the BC servers on that node), while the SM information shows the status of the SM leader elected in each site.

If, for any reason, the BC cluster does not have a majority of BC servers, which is required for it to work as a cluster, this is indicated in the output and no SM leader appears for the affected site.

2.2.29.5   Examples of Use

Perform the following steps to execute the command:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the command as follows:

    <CUDB_node_prompt> cudbSystemStatus

<CUDB_node_prompt> cudbSystemStatus

Execution date: Sat Aug 10 11:41:23 CEST 2013

CUDB Software Version:
!- CUDB DESIGN DISTRIBUTION: CUDB13B CXP9020214/6 R1B549

Checking BC clusters:

[Site 1]

	SM leader: Node 10 OAM1

	Node 10
		BC server in OAM1 ......... running
		BC server in OAM2 ......... running (Leader)
		BC server in PL2 ......... running

[Site 2]

	SM leader: Node 11 OAM1

	Node 11
		BC server in OAM1 ......... running
		BC server in OAM2 ......... running (Leader)
		BC server in PL2 ......... running

[Site 3]

	SM leader: Node 9 OAM2

	Node 9
		BC server in OAM1 ......... running
		BC server in OAM2 ......... running (Leader)
		BC server in PL2 ......... running

Checking System Monitor BC status in local node:

	SM-BC in OAM1 ......... running
	SM-BC in OAM2 ......... running


Checking Clusters status:
Node 9:
	PL Cluster (29%) .............................OK
	DSG1 Cluster (23%) ...........................OK
	DSG255 Cluster (23%) .........................OK
Node 10:
	PL Cluster (29%) .............................OK
	DSG1 Cluster (23%) ...........................OK
	DSG255 Cluster (23%) .........................OK
Node 11:
	PL Cluster (29%) .............................OK
	DSG1 Cluster (22%) ...........................OK
	DSG255 Cluster (22%) .........................OK

Checking NDB status:
	PL NDB's (2/2) ...............................OK
	DS1 NDB's (2/2) ..............................OK
	DS2 NDB's (2/2) ..............................OK

Checking Replication Channels in the System:
	Node    |  9  | 10  | 11  
	==========================
	PLDB ___|__S1_|__S1_|__M__
	DSG 1 __|__S1_|__S1_|__M__
	DSG 255 |__S1_|__S1_|__M__

Printing Detailed Replication Status for the Slave Replicas:
Node 9:
	 Replication in DSG0(Chan=1)   .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
	 Replication in DSG1(Chan=1)   .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
	 Replication in DSG255(Chan=1) .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
Node 10:
	 Replication in DSG0(Chan=1)   .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
	 Replication in DSG1(Chan=1)   .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
	 Replication in DSG255(Chan=1) .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
Node 11:
	 There are no Slave clusters

Printing Alarms...
[Aug 09 16:12:17]( Root Login Failed @172.80.122.10  )
[Aug 09 16:16:00]( Root Login Failed @10.22.10.199  )

Checking MySQL server connection:
	MySQL Master Servers connection ..............OK
	MySQL Slave Servers connection ...............OK
	MySQL Access Servers connection ..............OK

Checking Process:
OAMs....................
	Cluster Supervisor............................Running
	System Monitor BC.............................Running
	Reconciliation process........................Running in: OAM1
	Management Server Process (ndb_mgmd)..........Running
	KeepAlive process.............................Running
	Alarm Correlator..............................Running
	ESA...........................................Running
	LDAP counter..................................Running in: OAM1
	Log Handler process...........................Running
	KpiCentral process............................Running in: OAM1
	Messaging Service servers.....................Running
PLs................
	Storage Engine process (ndbd).................Running
	LDAP FE.......................................Running
	KeepAlive process.............................Running
	MySQL server process (Master).................Running
	MySQL server process (Slave)..................Running
	MySQL server process (Access).................Running
	CudbNotifications process.....................Running
	LDAP FE Monitor process.......................Running
DSs............................
	Storage Engine process (ndbd).................Running
	LDAP FE.......................................Running
	KeepAlive process.............................Running
	MySQL server process (Master).................Running
	MySQL server process (Slave)..................Running
	MySQL server process (Access).................Running
	LDAP FE Monitor process.......................Running
<CUDB_node_prompt> cudbSystemStatus
 Execution date: Sat Mar 12 14:27:48 CET 2016

 CUDB Software Version:
 !- CUDB DESIGN DISTRIBUTION: CUDB16A CXP9020214/9 R2A99

 Checking Hardware Type:
 This system is working on following hardware types: EBS_GEP3, EBS_GEP5.
 ...  
<CUDB_node_prompt> cudbSystemStatus -t
 Execution date: Sat Mar 12 14:43:28 CET 2016

 NodeId hwType
 49 EBS_GEP3
 82 EBS_GEP3
 154 EBS_GEP5
 157 EBS_GEP5

 DsgId isHybrid
 0 true
 11 false
 12 false
 13 false
 14 false
 15 false
 16 false
 21 false
 22 false
 23 false
 24 false
 25 false
 26 false
 27 false
 28 false
Checking Replication Channels in the System:
    Node    |  9  | 10  | 11 
    ==========================
    PLDB ___|__S1_|__S1_|__M__
    DSG 1 __|__S1_|_[S]_|__M__
    DSG 255 |__Xu_|__S1_|__M__

Printing Detailed Replication Status for the Slave Replicas:
Node 9:
     Replication in DSG0(Chan=1)   .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
     Replication in DSG1(Chan=1)   .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
     Replication in DSG255(Chan=1) .... Dsg is Unreachable
                                        Self-Ordered Backup and Restore is running: restoring backup
Node 10:
     Replication in DSG0(Chan=1)   .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
     Replication in DSG1(Chan=1)   .... Stopped - From Thu Apr 14 08:54:02 CEST 2016
                                        Self-Ordered Backup and Restore is running: transferring backup
     Replication in DSG255(Chan=1) .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
Node 11:
     There are no Slave clusters

A) One Unit Disabled (DSG3 on Node 41):

Checking Replication Channels in the System:
       Node    | 40  | 41  
       ====================
       PLDB ___|__S1_|__M_
       DSG 1 __|__S1_|__M__
       DSG 3 __|__M__|__Xu_

Printing Detailed Replication Status for the Slave Replicas:
Node 40:
        Replication in DSG0(Chan=1)   .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
        Replication in DSG1(Chan=1)   .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
Node 41: 
        Replication in DSG3           .... Stopped - Data Store is disabled

B) One Node Disabled (Node 40):

Checking Replication Channels in the System:
       Node    | 40  | 41  | 130 | 131 
       ================================
       PLDB ___|__Xu_|__S1_|__M__|__S1_
       DSG 1 __|__Xu_|_____|__M__|__S1_
       DSG 3 __|__Xu_|__M _|_____|_____
       DSG 255 |_____|__S1_|__M__|__S1_

Printing Detailed Replication Status for the Slave Replicas:
Node 40: Disabled
        Replication in DSG0           .... Stopped - Data Store is disabled
        Replication in DSG1           .... Stopped - Data Store is disabled
        Replication in DSG3           .... Stopped - Data Store is disabled
Node 41:
	Replication in DSG0(Chan=1)   .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
	Replication in DSG255(Chan=1) .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
Node 130:
        There are no Slave clusters
Node 131:
        Replication in DSG0(Chan=1)   .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
        Replication in DSG1(Chan=1)   .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0
        Replication in DSG255(Chan=1) .... Up -- Delay = 0.0 seconds,   no. of pending changes = 0

2.2.30   cudbTakeAllMasters

The cudbTakeAllMasters command is used to change mastership manually to a working site in case of emergency (such as a CUDB system split situation), when a surviving site is available in the CUDB system with reduced service. cudbTakeAllMasters maximizes the service in the surviving site, allowing normal operation with the resources available in this site.

Refer to CUDB High Availability, Reference [15] for further details about the use of this command.

Attention!

Execute this command only as a manual intervention in the special case that some sites are failing, and the system is in a split situation with reduced service.

2.2.30.1   Requisites

The requisites of using this command are as follows:

2.2.30.2   Syntax

cudbTakeAllMasters [-h |--help]

2.2.30.3   Command Options

The following command option can be used:

2.2.30.4   Output

The command can return the following output messages:

Note:  
If the command fails for other reasons than the ones above, contact the next level of maintenance support.

2.2.30.5   Examples of Use

<CUDB_node_prompt> cudbTakeAllMasters

2.2.31   cudbUnitDataBackupAndRestore

The cudbUnitDataBackupAndRestore command resolves data inconsistencies between replicas by creating a data backup on the master replica and restoring this data backup on the inconsistent slave replica. Refer to CUDB Backup and Restore Procedures, Reference [13] for further details about when to use it.

2.2.31.1   Requisites

The requisites of using the command are as follows:

2.2.31.2   Syntax

2.2.31.3   Command Options

The following command options can be used:

2.2.31.4   Output

An example output of the command is provided below:

CUDB_41 SC_2_1# cudbUnitDataBackupAndRestore -d 1 -n 42
WARNING: Restoring the DS cluster will replace previous contents. Are you sure you want to continue (y/n)? y 
Action will be executed

CREATE PART
--------------------------------


creating backup on node 41

cudbManageStore stores to process:   ds1 (in dsgroup1).

Starting Backup ...
Launching order Backup for ds1 in dsgroup 1.
Obtaining Mgm Information.
Trying backup on mgmt access 1, wait a moment ...
ndb_mgm 10.22.41.1 2372 -e "START BACKUP 999 WAIT COMPLETED"
..ok
BACKUP-999 renamed in PL_2_7 to /local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46
BACKUP-999 renamed in PL_2_8 to /local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46

Backup finished successfully for store ds1.
Stores where order backup was successfully completed:  ds1.
cudbManageStore command successful.


DIR CREATE PART
--------------------------------


creating backup directory /local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46 on node 42 on blade 10.22.42.7
creating backup directory /local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46 on node 42 on blade 10.22.42.8

COPY PART
--------------------------------


copying backup files from node 41 blade 10.22.41.7:/local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46 ⇒
to node 42 blade 10.22.42.7:/local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46 ⇒
copying backup files from node 41 blade 10.22.41.8:/local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46 ⇒
to node 42 blade 10.22.42.8:/local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46

RESTORE PART
--------------------------------


Restoring backup on node 42

cudbManageStore stores to process:   ds1 (in dsgroup1).

Launching restore order in CUDB Node 42 to store ds1 in dsgroup 1.
Starting restore in CUDB Node 42 for store ds1, ⇒
backup path /local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46,⇒
sql scripts path /home/cudb/storageEngine/config/schema/ds/internal/restoreTempSql.
Waiting for restore order(s) to be completed in CUDB Node 42 for stores :  ds1.
restore order finished successfully in CUDB Node 42 for store ds1.
restore order(s) completed in CUDB Node 42 for stores :  ds1.
Stores where order restore was successfully completed:  ds1.
Closing connections for all blades of DSUnitGroup 1.
No stored procedures were found to restore.
cudbManageStore command successful.

Performing cleanup
delete backup directory /local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46 on node 42 on blade 10.22.42.7
delete backup directory /local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46 on node 42 on blade 10.22.42.8
cudbUnitDataBackupAndRestore successfully ended

2.2.31.5   Examples of Use
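
An illustrative example, based on the invocation shown in the Output section above, creates a backup of DSG 1 from the master replica on the node where the command is executed and restores it on the inconsistent slave replica in CUDB node 42 (adjust the DSG and node identifiers to the actual system):

<CUDB_node_prompt> cudbUnitDataBackupAndRestore -d 1 -n 42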

2.2.32   cudbUpdateUserInfo

Attention!

This command is deprecated. To update the local node configuration with the latest changes of LDAP users, use the administrative operation updateUserInfo. Refer to CUDB Node Configuration Data Model Description, Reference [1] for more information on updateUserInfo.

The cudbUpdateUserInfo command updates the local node configuration with the latest changes of LDAP users in the CUDB node where the command is executed.

2.2.32.1   Requisites

The requisites of using the command are as follows:

2.2.32.2   Syntax

cudbUpdateUserInfo [-D | --Debug] [-h | --help] [-i | --internal]

2.2.32.3   Command Options

The following command options can be used:

2.2.32.4   Output

The output informs about the success or failure of executing the command. An example output for each scenario is shown below:

2.2.32.5   Examples of Use

Perform the following steps to execute the command:

  1. Connect to the CUDB node with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_IP_Address>

  2. Connect to the SC with the following command:

    <CUDB_node_prompt> ssh <admin_user>@<CUDB_Node_OAM_IP_Address>

  3. Execute the command as follows:

    <CUDB_node_prompt> cudbUpdateUserInfo

    The expected output must be similar to the below example:

    Success: User information successfully updated in node <nodeId>

2.3   Internal Commands

This section lists the internal commands of the CUDB system, available for Ericsson personnel only.

2.3.1   cudbAppCheckerManager

This command can be executed by Ericsson personnel only.

2.3.2   cudbBCServersRestart

This command can be executed by Ericsson personnel only.

2.3.3   cudbCopyAclsConfFile

This command can be executed by Ericsson personnel only.

2.3.4   cudbEvipConfigExtension

This command can be executed by Ericsson personnel only.

2.3.5   cudbEvipEncapsulator

This command can be executed by Ericsson personnel only.

2.3.6   cudbExecuteAllBlades

This command can be executed by Ericsson personnel only.

2.3.7   cudbFollowLdapfeLogs

This command can be executed by Ericsson personnel only.

2.3.8   cudbManageDsGroup

This command can be executed by Ericsson personnel only.

2.3.9   cudbManageNode

This command can be executed by Ericsson personnel only.

2.3.10   cudbManageSite

This command can be executed by Ericsson personnel only.

2.3.11   cudbMpstat

This command can be executed by Ericsson personnel only.

2.3.12   cudbOomConfigurator

The cudbOomConfigurator command is used only by a restricted process.

2.3.13   cudbParallelCommandRun

This command can be executed by Ericsson personnel only.

2.3.14   cudbRemoveNode

This command can be executed by Ericsson personnel only.

2.3.15   cudbSdpInfo

This command can be executed by Ericsson personnel only.

2.3.16   cudbSetDsgMaster

This command can be executed by Ericsson personnel only.

2.3.17   cudbSetPartitionStatus

This command can be executed by Ericsson personnel only.

2.3.18   cudbTpsStat

This command can be executed by Ericsson personnel only.


Glossary

For the terms, definitions, acronyms and abbreviations used in this document, refer to CUDB Glossary of Terms And Acronyms, Reference [21].


Reference List

CUDB Documents
[1] CUDB Node Configuration Data Model Description.
[2] CUDB Node Logging Events.
[3] CUDB Security and Privacy Management.
[4] CUDB Users and Passwords, 3/00651-HDA 104 03/10
[5] CUDB System Administrator Guide.
[6] CUDB Logchecker.
[7] CUDB Application Counters.
[8] Storage Engine, Potential Data Inconsistency between Replicas Found in DS.
[9] Storage Engine, Potential Data Inconsistency between Replicas Found in PLDB.
[10] Storage Engine, Replication Stopped Working in DS.
[11] Storage Engine, Replication Stopped Working in PLDB.
[12] CUDB Consistency Check.
[13] CUDB Backup and Restore Procedures.
[14] CUDB Application Schema Update.
[15] CUDB High Availability.
[16] Storage Engine, Unable to Synchronize Cluster in DS, Major.
[17] Storage Engine, Unable to Synchronize Cluster in PLDB, Major.
[18] CUDB Subscription Reallocation.
[19] CUDB Data Storage Handling.
[20] Storage Engine, Backup Notification Failure To Provisioning Gateway.
[21] CUDB Glossary of Terms And Acronyms.
Other Ericsson Documents
[22] LDE Management Guide.