- Introduction
- CUDB Commands
- CUDB Command
- Management Commands
- cudbAnalyser
- cudbApplicationCounters
- cudbApplyConfig
- cudbCheckConsistency
- cudbCheckReplication
- cudbCollectInfo
- cudbConsistencyMgr
- cudbDataBackup
- cudbDbDiskManage
- cudbDsgMastershipChange
- cudbDsgProvisioningManage
- cudbGetLogs
- cudbHaState
- cudbLdapFeRestart
- cudbManageBCServer
- cudbManageLibDataDist
- cudbManageMsgSrvServer
- cudbManageStore
- cudbPmJobReload
- cudbPrepareStore
- cudbReallocate
- cudbReconciliationMgr
- cudbRemoteTrust
- cudbResumeProvisioningNotification
- cudbServiceContinuity
- cudbSwBackup
- cudbSwVersionCheck
- cudbSystemDataBackupAndRestore
- cudbSystemStatus
- cudbTakeAllMasters
- cudbUnitDataBackupAndRestore
- cudbUpdateUserInfo
- Internal Commands
- cudbAppCheckerManager
- cudbBCServersRestart
- cudbCopyAclsConfFile
- cudbEvipConfigExtension
- cudbEvipEncapsulator
- cudbExecuteAllBlades
- cudbFollowLdapfeLogs
- cudbManageDsGroup
- cudbManageNode
- cudbManageSite
- cudbMpstat
- cudbOomConfigurator
- cudbParallelCommandRun
- cudbRemoveNode
- cudbSdpInfo
- cudbSetDsgMaster
- cudbSetPartitionStatus
- cudbTpsStat
- Reference List
1 Introduction
This document describes the Ericsson Centralized User Database (CUDB) commands and their options.
1.1 Document Purpose and Scope
This document provides a complete reference of the CUDB commands and command options, including their syntax, output, and examples of use. The following topics are out of the scope of this document:
Most of the commands provide the output by means of logging information. Refer to CUDB Node Logging Events for more details about logging events in CUDB.
1.3 Target Groups
1.4 Typographic Conventions
Typographic Conventions can be found in the following document:
2 CUDB Commands
The following sections list and describe all CUDB commands that can be used within a Secure Shell (SSH) session in the CUDB nodes belonging to a CUDB system. The commands are grouped into the following categories:
The commands in this document can be executed from any CUDB node, unless specific conditions or restrictions are specified in the Requisites section of the related command.
For each command, the following information is included:
The commands described in this document can be used by connecting to any CUDB node through an SSH interface and being authenticated by the system. The commands can be executed with the cudbOperator or cudbadmin users through the sudo Linux command. The available rights of these users are as follows:
By default, the location of the commands described in this document is set on the PATH system variable at installation time.
Several commands offer a debug option. This option activates the DEBUG log level; the DEBUG log messages are stored in the same location as the log messages belonging to other levels. The locations of the log messages for each component are listed in CUDB Node Logging Events.
| Note: |
The debugging option of the commands is to be used only when required by Ericsson personnel for troubleshooting purposes. This is because the amount of logging information can be very high during debugging, which can alter system working conditions. |
All commands include a help option as well, used to provide information on the command and its available options.
For further information on CUDB system administration procedures, refer to CUDB System Administrator Guide.
2.1 CUDB Command
2.2 Management Commands
Commands in this section can be executed both by operators and Ericsson personnel.
2.2.1 cudbAnalyser
The cudbAnalyser command performs a log analysis on the logs collected by cudbGetLogs. For more information about this command, refer to CUDB Logchecker.
cudbOperator users have no access to run this command.
The following command options can be used:
An example output of the command is shown below:
Started on ACTIVE SC...
[INFO] Checking files: //home/cudb/monitoring/preventiveMaintenance/ \
/CUDB_50_201307231233.log //home/cudb/monitoring/preventiveMaintenance/ \
/CUDB_50_201307231231.log
logfile versions:0.0.60/0.0.60
[ERROR] OS: network stat shows errors (Severity: Minor)
> CUDB50 PL0 eth1/statistics/rx_dropped 27 K packets
> CUDB50 PL1 eth1/statistics/rx_dropped 27 K packets
> CUDB50 DS1_0 eth1/statistics/rx_dropped 27 K packets
> CUDB50 DS1_1 eth1/statistics/rx_dropped 27 K packets
2.2.1.1 cudbAnalyser Examples of Use
2.2.2 cudbApplicationCounters
The cudbApplicationCounters command executes application counters defined by the counter configuration file. Refer to CUDB Application Counters for more information.
cudbApplicationCounters Requisites
cudbOperator users have no access to run this command.
cudbApplicationCounters Syntax
cudbApplicationCounters -C | --Counter-file <counter_configuration_file> -U | --Unique <uniqueNumber> [-f | --file-config <configuration_file>] [-u | --user-facility <facility_level>] [-l | --licensing] [-D | --Debug] [-h | --help]
cudbApplicationCounters Command Options
The following command options can be used:
cudbApplicationCounters Output
The command output shows the command version and OAM library version when executed successfully:
cudbApplicationCounters Ver.(1.3.1) OAM Lib.Version (1.1.0)
2.2.2.1 cudbApplicationCounters Examples of Use
Steps
2.2.3 cudbApplyConfig
The cudbApplyConfig command automatically makes any configuration model change persistent. If no successful result is returned by the command, the configuration updates do not take effect, even if they are stored in the model and are visible through new CUDB Configuration CLI sessions. Refer to CUDB Node Configuration Data Model Description for more information on using the command.
cudbOperator users have no access to run this command.
cudbApplyConfig [-s|--scope <scope>] [-v|--verbose] [-h|--help]
cudbApplyConfig Command Options
The following command options can be used:
The command output is a log of the performed changes. The command result is provided at the end of the log.
2.2.3.1 cudbApplyConfig Examples of Use
Steps
2.2.4 cudbCheckConsistency
The cudbCheckConsistency command checks if the slave Data Store (DS) Units hold approximately as many rows in their tables as the corresponding master DS Unit in the same DS Unit Group (DSG). The lightweight consistency check also covers the PLDB units. The check is considered failed if the difference in row counts of any Structured Query Language (SQL) table exceeds a certain value, given as a percentage. Tables with only a few rows can be skipped.
cudbCheckConsistency can be executed manually, but it also runs regularly as a cron job. When executed manually, the default configuration values can be overridden with command line options. When the command finds a failure while running as a cron job, it raises an alarm on the node that hosts the failed slave DS Unit. For further information about alarms related to this command, refer to Storage Engine, Potential Data Inconsistency between Replicas Found in DS and Storage Engine, Potential Data Inconsistency between Replicas Found in PLDB.
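The per-table verdict described above can be sketched as follows. This is a minimal illustration with hypothetical row counts and the documented default limits; the real command reads the counts from the SQL servers of the master and slave DS Units.

```shell
# Minimal sketch of the per-table consistency verdict; the row counts
# passed in below are hypothetical, the limits are the documented defaults.
MAXDIFF_LIMIT=1.00   # maximum tolerable difference, in percent
CHECK_LIMIT=100      # tables smaller than this on the master are skipped

check_table() {
  master=$1; slave=$2
  # Tables with only a few rows on the master are skipped
  if [ "$master" -lt "$CHECK_LIMIT" ]; then
    echo "SKIPPED"; return
  fi
  awk -v m="$master" -v s="$slave" -v lim="$MAXDIFF_LIMIT" 'BEGIN {
    diff = (m > s ? m - s : s - m) * 100.0 / m
    verdict = (diff > lim) ? "FAILED" : "OK"
    print verdict
  }'
}

check_table 10000 10000   # identical row counts: OK
check_table 10000 9800    # 2.00% difference exceeds 1.00%: FAILED
check_table 50 49         # master table below CHECK_LIMIT: SKIPPED
```

A slave passes only if every checked table stays within the limit, which matches the per-DSG OK/FAILED lines in the example output.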
| Note: |
If any changes are made in the configuration file on a CUDB node, then the exact same configuration changes must be performed on all the other CUDB nodes as well. This is needed because cudbCheckConsistency operates at the CUDB system level instead of the CUDB node level. After the configuration is updated in the whole system, the following command must be issued on every SC of every CUDB node: /etc/init.d/cudbCheckConsistencySrv restart |
cudbCheckConsistency Requisites
cudbOperator users have no access to run this command.
cudbCheckConsistency Command Options
The following command options can be used:
The command can display the following example output:
[info] cudbCheckConsistency is running with options: '', MAXDIFF_LIMIT: 1.00%, CHECK_LIMIT: 100
[info] Acquiring mastership information.
[info] Checking consistency in slave DS units.
[info] Node42-DSG0 consistency: OK - row count difference 0.00%
[info] Node42-DSG1 consistency: OK - row count difference 0.00%
[info] Node42-DSG2 consistency: OK - row count difference 0.00%
[info] Summary: slaves checked: 3 -> PASSED: 3, FAILED: 0, UNKNOWN: 0.
In this output, Node42-DSG0 refers to the PLDB.
cudbCheckConsistency Configuration File
The configuration file of the command is located in the following directory:
/home/cudb/monitoring/replicaConsistency/config/cudbCheckConsistency.conf
This file is directly included (sourced) by a Bash shell script, so its settings must be valid Bash assignments. The following settings are available:
MAXDIFF_LIMIT=1.00
Maximum tolerable row count difference, in percent, between tables in the master DS Unit and the slave DS Unit. The difference is counted table-by-table, and if the difference is greater than the configured tolerance limit in any of the tables, the verdict is FAILED.
The value must be a floating point number with exactly two decimal digits, in the [ 0.00 ; 100.00 ] closed interval.
Default value: 1.00
Command line option: -m | --maxdiff-limit <LIMIT>
CHECK_LIMIT=100
If a table in the master DS Unit has fewer rows than this check limit, then the table is skipped.
The value must be an integer number in the [ 0 ; INT_MAX ) left-closed right-open interval, where INT_MAX is the constant defined in POSIX.1-2008 Base Specifications, Issue 7.
Default value: 100
Command line option: -c | --check-limit <LIMIT>
CRON_TIMESPEC='37 0 * * *'
Cron job time specification. If this value is changed, then the value must also be changed on every other CUDB node of the CUDB system, and the new value must be installed to cron on both SCs of every CUDB node. On a specific CUDB node, after saving the new value, the new settings can be installed to cron by issuing the following commands on an SC:
ssh OAM1 '/etc/init.d/cudbCheckConsistencySrv restart'
ssh OAM2 '/etc/init.d/cudbCheckConsistencySrv restart'
The value must be either "disabled" or a valid cron job time specification.
Default value: '37 0 * * *'
Command line option: -
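Because the file is included by a Bash shell script, a complete configuration file can be sketched as follows, using the documented default values:

```shell
# Sketch of cudbCheckConsistency.conf with the documented default values;
# the file must consist of valid Bash assignments because it is sourced.
MAXDIFF_LIMIT=1.00          # maximum tolerable row count difference, percent
CHECK_LIMIT=100             # skip master tables with fewer rows than this
CRON_TIMESPEC='37 0 * * *'  # run daily at 00:37, or "disabled"
```

After editing, the same values must be applied on every CUDB node and the cudbCheckConsistencySrv service restarted, as described above.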
2.2.4.1 cudbCheckConsistency Examples of Use
Check the following examples of executing the command:
2.2.5 cudbCheckReplication
The cudbCheckReplication command checks if the active database cluster replication channels are functional in the CUDB system. cudbCheckReplication can be executed manually, but it also runs regularly as a cron job. When executed manually, the default configuration values can be overridden with command line options. When it finds a failure while running as a cron job, it raises an alarm on the node which hosts the failed slave DS Unit. For further information about alarms related to this command, refer to Storage Engine, Replication Stopped Working in DS, and Storage Engine, Replication Stopped Working in PLDB.
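The probe sequence (inject verification data, wait, check the slaves) can be sketched as follows. This is a runnable illustration in which a temporary file stands in for the replication channel and the helper names are hypothetical; the real command injects and looks up rows through SQL.

```shell
# Runnable sketch of the replication probe; a temporary file stands in
# for the replication channel, and the helper names are hypothetical.
BEHIND_LIMIT=1                                   # seconds to wait (default is 10)
channel=$(mktemp)
inject_to_master() { echo "$1" >> "$channel"; }  # stub for the SQL insert
found_on_slave()   { grep -q "$1" "$channel"; }  # stub for the SQL lookup

token="probe-$$"
inject_to_master "$token"
sleep "$BEHIND_LIMIT"                            # give replication time to catch up
if found_on_slave "$token"; then verdict=OK; else verdict=FAILED; fi
echo "replication: $verdict"
rm -f "$channel"
```

A channel passes only if the verification data arrives on the slave within BEHIND_LIMIT seconds, which matches the per-DSG OK lines in the example output below.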
| Note: |
If any changes are made in the configuration file on a CUDB node, then the exact same configuration changes must be performed on every other CUDB node as well. This is required because cudbCheckReplication operates on the CUDB system level instead of the CUDB node level. After the configuration is updated in the entire system, execute the following command on every SC of every CUDB node: /etc/init.d/cudbCheckReplicationSrv restart |
cudbCheckReplication Requisites
cudbOperator users have no access to run this command.
cudbCheckReplication Command Options
The following command options can be used:
The command can display the following example output:
[info] cudbCheckReplication is running with options: '', BEHIND_LIMIT: 10
[info] Acquiring mastership information.
[info] Acquiring DSG status information.
[info] Injecting verification data to master DS units.
[info] Sleeping 10 seconds to wait for replication.
[info] Checking replication in slave DS units.
[info] Node42-DSG0 replication: OK
[info] Node42-DSG1 replication: OK
[info] Node42-DSG2 replication: OK
[info] Summary: channels checked: 3 -> PASSED: 3, FAILED: 0, UNKNOWN: 0.
cudbCheckReplication Configuration File
The configuration file of the command is located in the following directory:
/home/cudb/monitoring/replicaConsistency/config/cudbCheckReplication.conf
This file is directly included (sourced) by a Bash shell script, so its settings must be valid Bash assignments. The following settings are available:
BEHIND_LIMIT=10
This setting assumes that any data injected to a master DS Unit is replicated to the slave DS Units within the defined number of seconds. If the verification data has not arrived after the number of seconds defined by BEHIND_LIMIT, the test verdict is FAILED. The value must be an integer number in the [ 1 ; INT_MAX ) left-closed right-open interval, where INT_MAX is the constant defined in POSIX.1-2008 Base Specifications, Issue 7.
Default value: 10
Command line option: -b | --behind-limit <LIMIT>
CRON_TIMESPEC='7 0 * * *'
This setting is the cron job time specification. If this value is changed, then the value must also be changed on every other CUDB node of the CUDB system, and the new value must be installed to cron on both SCs of every CUDB node. On a specific CUDB node, after saving the new value, the new settings can be installed to cron by issuing the following commands on an SC:
ssh OAM1 '/etc/init.d/cudbCheckReplicationSrv restart'
ssh OAM2 '/etc/init.d/cudbCheckReplicationSrv restart'
The value must be either "disabled" or a valid cron job time specification.
Default value: '7 0 * * *'
Command line option: -
2.2.5.1 cudbCheckReplication Examples of Use
Use the command as shown below to check the replication channels with a wait limit of 5 seconds, and ensure that the command prints only basic progress information, and does not raise any alarm if a check fails:
<CUDB_node_prompt> cudbCheckReplication -b 5
2.2.6 cudbCollectInfo
The cudbCollectInfo command is used to create a tarball of CUDB logs.
cudbOperator users have no access to run this command.
Restrictions of cudbCollectInfo command:
cudbCollectInfo [-n | --node <node id>] [-d | --ds <dsg id>] [-a | --action <action name>] [-c | --no-compress] [-e | --no-encrypt] [-h | --help] [-x | --exit-on-error]
cudbCollectInfo Command Options
The following command options can be used:
The following list contains some example output messages:
Creating dir for node 1 (/local/tmp/cudb_collect_info_20120412-104337/1) ... OK
Waiting 1 to finish ... OK
CREATING ARCHIVE ... OK
Encrypting archive ... OK
Removing unencrypted archive ... OK
REMOVING RAW DATA ... OK
Fetch the file: /local/tmp/cudb_collect_info_20120412-104337.c
If cudbCollectInfo is already running on the local node or on a remote node (started from another terminal or from another node in the system), the following output messages are printed to the console:
cudbCollectInfo is already running on this node.
cudbCollectInfo is already running on node <Node ID where the command is executing>.
cudbCollectInfo is already running for node <target node ID> on node <Node ID where the command is executing>.
| Note: |
These last three outputs are displayed if cudbCollectInfo is already running somewhere in the CUDB system. If no other cudbCollectInfo process is running, this output is displayed because a previous cudbCollectInfo process finished with an unexpected error and the lock file was not removed from the CUDB node where it was executing. To unlock cudbCollectInfo, execute the following command on the CUDB nodes where the lock file is: rm -rf /cluster/tmp/cudbcollectinfo*
Do not delete this file if the command is up and running, to prevent unexpected behavior. |
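The lock-file behavior described in the note can be sketched as follows. The lock directory here is a temporary stand-in for the documented /cluster/tmp location, and the collect function is a hypothetical simplification of the real command.

```shell
# Sketch of the lock-file convention; the directory is a stand-in for
# /cluster/tmp, and the refusal message mirrors the documented output.
lockdir=$(mktemp -d)
lock="$lockdir/cudbcollectinfo.lock"

collect() {
  if [ -e "$lock" ]; then
    echo "cudbCollectInfo is already running on this node."
    return 1
  fi
  touch "$lock"   # held for the duration of the collection
  echo "collecting logs ..."
  rm -f "$lock"   # removed on normal termination; a crash leaves it behind
}

collect                  # acquires the lock and succeeds
touch "$lock"            # simulate a stale lock left by a crashed run
collect || echo "refused; remove the stale lock first"
```

This is why a stale lock file must be removed manually after an abnormal termination, but never while a collection is actually running.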
2.2.6.1 cudbCollectInfo Examples of Use
Use the command as shown below to perform all log collection actions on Node 39:
<CUDB_node_prompt> cudbCollectInfo -n 39 -a all
Use the command as shown below to perform all log collection actions on all nodes of the system, disabling the GPG encryption of the tgz archive:
<CUDB_node_prompt> cudbCollectInfo -e
2.2.7 cudbConsistencyMgr
The cudbConsistencyMgr command manages consistency check tasks in the CUDB system. Refer to CUDB Consistency Check for more information on how to use the command and perform consistency check tasks.
The user permissions of executing cudbConsistencyMgr are as follows:
cudbConsistencyMgr Command Options
The following command options can be used:
| Note: |
Either the -p | --pl or the -d | --dsg command option must be specified when executing the command. |
The output of the cudbConsistencyMgr command depends on the specified command options. Some examples are provided below.
Scheduling-Related Output
Task UTC_2014-09-23-17-05-42_N242_U0000001955 is put into the pending task list and will be executed on node 242.
Task Listing-Related Output
[Site 1]
RTL:
UTC_2014-09-09-11-00-10_N121_U0000000849
checkType=ms,source=S1-N121-D1,check=S2-N122-D1,maxReplicaLag=2000,alarmSeverityLimit=500,\
verboseMode=off,debugMode=off
PTL:
[Site 2]
RTL:
UTC_2014-09-09-11-00-16_N122_U0000000793
checkType=ms,source=S2-N122-D2,check=S1-N121-D2,maxReplicaLag=2000,alarmSeverityLimit=500,\
verboseMode=off,debugMode=off
PTL:
UTC_2014-09-09-11-00-17_N122_U0000000794
checkType=ms,source=S2-N122-D2,check=S1-N121-D2,maxReplicaLag=2000,alarmSeverityLimit=500,\
verboseMode=off,debugMode=off
UTC_2014-09-09-11-00-21_N122_U0000000795
checkType=ms,source=S2-N122-D3,check=S1-N121-D3,maxReplicaLag=2000,alarmSeverityLimit=500,\
verboseMode=off,debugMode=off
UTC_2014-09-09-11-00-22_N122_U0000000796
checkType=ms,source=S2-N122-D3,check=S1-N121-D3,maxReplicaLag=2000,alarmSeverityLimit=500,\
verboseMode=off,debugMode=off
UTC_2014-09-09-11-00-23_N122_U0000000797
checkType=ms,source=S2-N122-D3,check=S1-N121-D3,maxReplicaLag=2000,alarmSeverityLimit=500,\
verboseMode=off,debugMode=off
UTC_2014-09-09-11-00-23_N122_U0000000798
checkType=ms,source=S2-N122-D3,check=S1-N121-D3,maxReplicaLag=2000,alarmSeverityLimit=500,\
verboseMode=off,debugMode=off
UTC_2014-09-09-11-00-23_N122_U0000000799
checkType=ms,source=S2-N122-D3,check=S1-N121-D3,maxReplicaLag=2000,alarmSeverityLimit=500,\
verboseMode=off,debugMode=off
Task History-Related Output
The output in this case consists of log lines describing the following information:
The results are ordered by time, and are listed from the oldest to the newest, as shown below:
2014-09-23 15:53:49+02:00 SC_2_1 UTC_2014-09-23-13-53-48_N243_U0000000586: task found in PTL checkType=ms,source=S2-N243-D1,check=S1-N242-D1,maxReplicaLag=2000,alarmSeverityLimit=500,verboseMode=off,debugMode=off
2014-09-23 15:53:49+02:00 SC_2_1 UTC_2014-09-23-13-53-48_N243_U0000000586: task execution started
2014-09-23 15:54:00+02:00 SC_2_1 UTC_2014-09-23-13-53-48_N243_U0000000586: task execution finished with result: Successful completion, no difference found. Exit code: 0.
2.2.7.1 cudbConsistencyMgr Examples of Use
Check the following examples of executing the command:
2.2.8 cudbDataBackup
The cudbDataBackup command executes a data backup in the complete CUDB system. Refer to CUDB Backup and Restore Procedures for further details about when to use it.
cudbOperator users have no access to run this command.
cudbDataBackup Command Options
The following command options can be used:
The command output is a log of the performed changes. The command result is provided at the end of the log.
2.2.8.1 cudbDataBackup Examples of Use
The following example procedure shows how to execute cudbDataBackup with the -S | --Slack-backup option:
Steps
2.2.9 cudbDbDiskManage
The cudbDbDiskManage command manages the space of the Binary Large Object (BLOB) attributes stored on the disk storage system for certain CUDB object classes.
cudbOperator users have no access to run this command.
cudbDbDiskManage Command Options
The following command options can be used:
The command reports OK in case of success, and error in case of failure.
2.2.9.1 cudbDbDiskManage Examples of Use
The following example procedure shows how to allocate a set amount of space on the disk storage system for a BLOB object with cudbDbDiskManage:
Steps
2.2.10 cudbDsgMastershipChange
The cudbDsgMastershipChange command is used to move the master of a DSG to the selected node. It must be launched on the CUDB node that will hold the new master replica.
When executed, the command first checks the PLDB replication status on the node where the command is invoked. If the PLDB is unable to synchronize or replication status info is not available, the mastership change will be rejected. After this, the command checks if the new master replica has a longer delay than the time specified by the --time parameter: if it does, it rejects the mastership change. Then, for a period of four seconds, all write operations toward the current master are rejected to allow the new master replica to catch up with the current master. After this four-second period passes, the command checks if the new master is completely synchronized with the current master to avoid any potential data loss. If the check succeeds, the master of the specified DSG or PLDB is changed to the node.
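The decision sequence above can be sketched as follows. Every helper function is a hypothetical stub, and the four-second write block is shortened in this sketch; the real command obtains the replication state from the database cluster.

```shell
# Sketch of the cudbDsgMastershipChange checks; every helper below is a
# hypothetical stub standing in for the real replication-state queries.
TIME_LIMIT=10                     # the value of the --time parameter, seconds
pldb_replication_ok() { true; }   # stub: PLDB replication status is available
replica_delay()       { echo 3; } # stub: current lag of the new master replica
fully_synchronized()  { true; }   # stub: replicas are equal after the write block

verdict=""
if ! pldb_replication_ok; then
  verdict="rejected: PLDB replication status not available"
elif [ "$(replica_delay)" -gt "$TIME_LIMIT" ]; then
  verdict="rejected: new master replica lags more than $TIME_LIMIT seconds"
else
  # Writes toward the current master are blocked for four seconds so the
  # new master replica can catch up (the sleep is shortened in this sketch).
  sleep 1
  if fully_synchronized; then
    verdict="mastership changed"
  else
    verdict="rejected: replicas not fully synchronized"
  fi
fi
echo "$verdict"
```

The ordering matters: the synchronization check runs only after the write block, so a mastership change is never committed while the new master could still miss writes.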
In case Automatic Mastership Change (AMC) is active, the system periodically tries to move the master of each DSG to a DSG replica with higher priority, unless the master was intentionally moved to a degraded replica.
To avoid undesired master movements, disable the AMC feature. Refer to CUDB High Availability for more information.
cudbDsgMastershipChange Requisites
cudbOperator users have no access to run this command.
cudbDsgMastershipChange Syntax
cudbDsgMastershipChange Command Options
The following command options can be used:
cudbDsgMastershipChange Output
The possible outputs for the command are as follows:
2.2.10.1 cudbDsgMastershipChange Examples of Use
The following example procedure shows how to change the master of DSG 2 with cudbDsgMastershipChange:
Steps
2.2.11 cudbDsgProvisioningManage
The cudbDsgProvisioningManage command is used to disable or enable a specific DSG during the provisioning of Distribution Entries (DEs). When using the command, consider the following:
cudbDsgProvisioningManage Requisites
cudbOperator users have no access to run this command.
cudbDsgProvisioningManage Syntax
cudbDsgProvisioningManage Command Options
The following command options can be used:
cudbDsgProvisioningManage Output
The possible output messages of the command are listed below:
cudbDsgProvisioningManage: Failed. Invalid syntax
cudbDsgProvisioningManage: Failed. Unrecognized command line argument or arguments
cudbDsgProvisioningManage: Failed. DSG %d is not valid
cudbDsgProvisioningManage: Failed. Unexpected error.
cudbDsgProvisioningManage: Success. DSG %d is already enabled.
cudbDsgProvisioningManage: Success. DSG %d is already disabled.
cudbDsgProvisioningManage: Success. DSG %d is enabled for provisioning.
cudbDsgProvisioningManage: Success. DSG %d is disabled for provisioning
2.2.11.1 cudbDsgProvisioningManage Examples of Use
The following example procedure shows how to enable provisioning on DSG 1 with cudbDsgProvisioningManage:
Steps
2.2.12 cudbGetLogs
The cudbGetLogs command collects preventive maintenance logs for log analysis. For more information about this command, refer to CUDB Logchecker.
cudbOperator users have no access to run this command.
cudbGetLogs [-d | --debug] [-b | --bash-debug] [-c | --consistency-threshold <threshold>] [-f | --force-c-check] [-h | --help]
The following command options can be used:
An example output for the command is provided below:
Starting /opt/ericsson/cudb/OAM/bin/cudbGetLogs ... Grepping logs and creating /home/cudb/monitoring/preventiveMaintenance/CUDB_107_201201241136.log ... The log file is saved as : /home/cudb/monitoring/preventiveMaintenance/CUDB_107_201201241136.log
2.2.12.1 cudbGetLogs Examples of Use
2.2.13 cudbHaState
The cudbHaState command prints the current cluster status.
cudbOperator users have no access to run this command.
cudbHaState
Not applicable.
The output shows the LDE and AMF cluster states, the CoreMW and COM states, and the SI and SU states. The output must be similar to the following example:
LOTC cluster uptime:
--------------------
Sat Apr 20 18:33:30 2013
LOTC cluster state:
-------------------
Node safNode=SC_2_1 joined cluster | Sat Apr 20 18:33:40 2013
Node safNode=SC_2_2 joined cluster | Sat Apr 20 18:33:30 2013
Node safNode=PL_2_3 joined cluster | Sat Apr 20 18:35:39 2013
Node safNode=PL_2_4 joined cluster | Sat Apr 20 18:35:43 2013
Node safNode=PL_2_5 joined cluster | Sat Apr 20 18:35:43 2013
Node safNode=PL_2_6 joined cluster | Sat Apr 20 18:35:43 2013
Node safNode=PL_2_7 joined cluster | Sat Apr 20 18:35:48 2013
Node safNode=PL_2_8 joined cluster | Sat Apr 20 18:35:48 2013
Node safNode=PL_2_9 joined cluster | Sat Apr 20 18:35:53 2013
Node safNode=PL_2_10 joined cluster | Sat Apr 20 18:35:48 2013
AMF cluster state:
------------------
saAmfNodeAdminState."safAmfNode=SC-1,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=SC-1,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=SC-2,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=SC-2,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-3,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-3,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-4,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-4,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-5,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-5,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-6,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-6,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-7,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-7,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-8,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-8,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-9,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-9,safAmfCluster=myAmfCluster": Enabled
saAmfNodeAdminState."safAmfNode=PL-10,safAmfCluster=myAmfCluster": Unlocked
saAmfNodeOperState."safAmfNode=PL-10,safAmfCluster=myAmfCluster": Enabled
CoreMW HA state:
----------------
CoreMW is assigned as ACTIVE in controller SC-2
CoreMW is assigned as STANDBY in controller SC-1
COM state:
----------
COM is assigned as ACTIVE in controller SC-2
COM is assigned as STANDBY in controller SC-1
SI HA state:
------------
saAmfSISUHAState."safSu=SC-2,safSg=2N,safApp=ERIC-CUDB_CUDBOI"."safSi=2N-1": active(1)
saAmfSISUHAState."safSu=SC-1,safSg=2N,safApp=ERIC-CUDB_KPICENTRAL"."safSi=2N-1": active(1)
saAmfSISUHAState."safSu=SC-2,safSg=DS1_2N,safApp=ERIC-CUDB_CS"."safSi=DS1_2N-1": active(1)
saAmfSISUHAState."safSu=SC-2,safSg=DS2_2N,safApp=ERIC-CUDB_CS"."safSi=DS2_2N-1": active(1)
saAmfSISUHAState."safSu=SC-2,safSg=PLDB_2N,safApp=ERIC-CUDB_CS"."safSi=PLDB_2N-1": active(1)
saAmfSISUHAState."safSu=SC-2,safSg=2N,safApp=ERIC-CUDB_LDAPFE_MONITOR"."safSi=2N-1": active(1)
saAmfSISUHAState."safSu=SC-1,safSg=2N,safApp=ERIC-CUDB_CUDBOI"."safSi=2N-1": standby(2)
saAmfSISUHAState."safSu=SC-1,safSg=2N,safApp=ERIC-CUDB_LDAPFE_MONITOR"."safSi=2N-1": standby(2)
saAmfSISUHAState."safSu=SC-1,safSg=2N,safApp=ERIC-CUDB_MSGSRV_MONITOR"."safSi=2N-1": active(1)
saAmfSISUHAState."safSu=SC-1,safSg=DS2_2N,safApp=ERIC-CUDB_CS"."safSi=DS2_2N-1": standby(2)
saAmfSISUHAState."safSu=SC-1,safSg=DS1_2N,safApp=ERIC-CUDB_CS"."safSi=DS1_2N-1": standby(2)
saAmfSISUHAState."safSu=SC-1,safSg=PLDB_2N,safApp=ERIC-CUDB_CS"."safSi=PLDB_2N-1": standby(2)
saAmfSISUHAState."safSu=PL-3,safSg=2N,safApp=ERIC-CUDB_SOAP_NOTIFIER"."safSi=2N-1": active(1)
saAmfSISUHAState."safSu=PL-4,safSg=2N,safApp=ERIC-CUDB_SOAP_NOTIFIER"."safSi=2N-1": standby(2)
SU States:
----------
Status OK
2.2.13.1 cudbHaState Examples of Use
<CUDB_node_prompt> cudbHaState
2.2.14 cudbLdapFeRestart
The cudbLdapFeRestart command is used to perform a rolling restart on the LDAP Front Ends (FEs).
By default, the LDAP FE processes in the node are restarted one by one: an LDAP FE process is not restarted until the previously restarted one is fully operative again, waiting up to a maximum of 120 seconds. If a restarted LDAP FE process is not fully operative within 120 seconds, the restart of the LDAP FEs is aborted. This ensures that the LDAP connections are equally balanced between all LDAP FEs after executing the command.
| Note: |
The current connections of the LDAP FEs are dropped when the processes are restarted. |
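The rolling-restart logic can be sketched as follows. The FE names and the restart and polling helpers are hypothetical stubs standing in for the real LDAP FE processes.

```shell
# Sketch of the rolling restart of the LDAP FEs; the FE names and the
# restart/poll helpers below are hypothetical stubs.
WAIT_LIMIT=120
restart_fe()   { :; }    # stub: restart a single LDAP FE process
fe_operative() { true; } # stub: poll whether the FE serves traffic again

aborted=0
for fe in ldapfe_1 ldapfe_2 ldapfe_3; do
  restart_fe "$fe"
  waited=0
  until fe_operative "$fe"; do
    sleep 1
    waited=$((waited + 1))
    if [ "$waited" -ge "$WAIT_LIMIT" ]; then
      echo "Abort: $fe not operative within $WAIT_LIMIT seconds"
      aborted=1
      break 2            # abort the whole rolling restart
    fi
  done
  echo "$fe restarted and fully operative"
done
[ "$aborted" -eq 0 ] && echo "All LDAP FEs restarted"
```

Restarting strictly one FE at a time is what keeps the remaining FEs available to absorb the dropped connections, so the load stays balanced.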
cudbOperator users have no access to run this command.
cudbLdapFeRestart [-f | --force] [-p | --parallel [--no-prompt]] [-h | --help]
cudbLdapFeRestart Command Options
The following command options can be used:
The command output is a log of the command progress. The result of the command is provided at the end of the log.
2.2.14.1 cudbLdapFeRestart Examples of Use
The following example procedure shows how to execute cudbLdapFeRestart:
Steps
2.2.15 cudbManageBCServer
The cudbManageBCServer command manages the Blackboard Coordination (BC) servers in the CUDB node. Refer to CUDB High Availability for further information about BC servers and the BC cluster.
This command acts on the BC server running on the blade or Virtual Machine (VM) where it is executed.
cudbManageBCServer Command Options
The following command options can be used:
An example output of the command is shown below:
cudbManageBCServer -checkMode
Actual Host=SC_2_1 Mode: follower
2.2.15.1 cudbManageBCServer Examples of Use
<CUDB_node_prompt> cudbManageBCServer -checkMode
2.2.16 cudbManageLibDataDist
The cudbManageLibDataDist command checks if a specific CUDB node is using the default distribution policy or a custom one. Refer to CUDB System Administrator Guide for further details about using the command.
cudbManageLibDataDist Requisites
cudbOperator users have no access to run this command.
cudbManageLibDataDist [-l | --load] [-u | --unload] [-s | --status] [-h | --help]
cudbManageLibDataDist Command Options
The following command options can be used:
The command must return an output similar to the following example:
Customer distribution policy active
2.2.16.1 cudbManageLibDataDist Examples of Use
The following example procedure shows how to check the status of the distribution policy with cudbManageLibDataDist -s:
Steps
2.2.17 cudbManageMsgSrvServer
The cudbManageMsgSrvServer command manages the Messaging Service servers (MsgSrv) in the CUDB node. Refer to CUDB High Availability for further information about BC servers and the BC cluster.
cudbManageMsgSrvServer Requisites
This command acts on the MsgSrv server running on the blade or VM where it is executed.
cudbManageMsgSrvServer Command Options
The following command options can be used:
The result of the command depends on the option used. For example:
2.2.17.1 cudbManageMsgSrvServer Examples of Use
<CUDB_node_prompt> cudbManageMsgSrvServer status
2.2.18 cudbManageStore
The cudbManageStore command executes administrative orders over clusters installed in the CUDB node. Refer to CUDB System Administrator Guide for more information about using the command.
The requisites of executing this command are as follows:
cudbManageStore Command Options
The following command options can be used:
| Note: |
The -l | --location, -S | --Script-path, -n | --no-start-replication and -s | --restore-stored-procedures options are valid only for the restore order. |
An example of command output is shown below:
cudbManageStore stores to process: pl.
Store pl is in ready mode.
cudbManageStore command successful.
2.2.18.1 cudbManageStore Examples of Use
The following examples show how to execute cudbManageStore. Before all command executions, the following procedure must be performed:
Steps
Example 2 Some examples of using cudbManageStore are provided below:
2.2.19 cudbPmJobReload
The cudbPmJobReload command restarts the PM agent to allow it to get new configuration. Refer to CUDB System Administrator Guide for more information on when to use it.
The requisites of using the command are as follows:
cudbPmJobReload [-t | --time <timeout>] [-D | --Debug] [-u | --user-facility <facility_level>] [-h | --help]
cudbPmJobReload Command Options
The following command options can be used:
When executed, the command must return an output similar to the following example:
Stopping PmAgent in node 10.22.27.10 ... OK Waiting for ESA PmAgent to go off line. ESA PmAgent has been successfully stopped. Starting PmAgent in node 10.22.27.10 ... OK
2.2.19.1 cudbPmJobReload Examples of Use
The following example shows how to execute cudbPmJobReload:
Steps
2.2.20 cudbPrepareStore
The cudbPrepareStore command creates databases and tables for the PLDB or the specified DS Unit. Refer to CUDB System Administrator Guide for further details on when to use it.
cudbOperator users have no access to run this command.
cudbPrepareStore Command Options
The following command options can be used:
An example output of the command is shown below:
Starting
Obtaining SQL Servers Information ... Done
Skipping Step 1 ...
Executing Step 2 ... ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/cudb-ds.sql on 10.22.23.5:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/identities-ds.sql on 10.22.23.5:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/msc-ds.sql on 10.22.23.5:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/csps-ds.sql on 10.22.23.5:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/nph-ds.sql on 10.22.23.5:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/auth-ds.sql on 10.22.23.5:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ftest-ds.sql on 10.22.23.5:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/avg-ds.sql on 10.22.23.5:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/eps-ds.sql on 10.22.23.5:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ims-ds.sql on 10.22.23.5:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/sm-ds.sql on 10.22.23.5:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ldap-indexes-ds.sql on 10.22.23.5:15010 ..ok ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/cudb-ds.sql on 10.22.23.6:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/identities-ds.sql on 10.22.23.6:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/msc-ds.sql on 10.22.23.6:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/csps-ds.sql on 10.22.23.6:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/nph-ds.sql on 10.22.23.6:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/auth-ds.sql on 10.22.23.6:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ftest-ds.sql on 10.22.23.6:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/avg-ds.sql on 10.22.23.6:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/eps-ds.sql on 10.22.23.6:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ims-ds.sql on 10.22.23.6:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/sm-ds.sql on 10.22.23.6:15010 ..ok
Applying /home/cudb/storageEngine/config/schema/ds/appSrv/ldap-indexes-ds.sql on 10.22.23.6:15010 ..ok
Done
Finished
2.2.20.1 cudbPrepareStore Examples of Use
The following example shows how to prepare the databases and tables on DS1 with cudbPrepareStore in application scope:
Steps
2.2.21 cudbReallocate
The cudbReallocate command performs Subscription Reallocation (also known as "reallocation") of Distribution Entries (DEs) between DSGs. When the command is executed, the complete DE subtree is moved. If one or more DSGs are disabled for provisioning of DEs, reallocation is not performed to those DSGs. If a DSG that is disabled for provisioning of DEs is specified as the reallocation destination, the reallocation is stopped. When no reallocation destination is specified, the command selects one or more suitable DSGs that are not blocked for provisioning as reallocation targets. In both cases, reallocation can be forced with an optional parameter.
Refer to CUDB Subscription Reallocation for more information on when to use the command.
Use the reallocation command during low-traffic hours and run it repeatedly, reallocating small chunks of subscribers (--list) or small percentages (--entriespercentage) at a time, so that reallocating each chunk of data fits into the low-traffic time window.
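The chunking approach above can be sketched as a shell loop. This is an illustrative dry run only: the subscriber identities, file paths, and chunk size are made up, and only the --list option name is taken from this document.

```shell
# Illustrative dry run: split a large subscriber list into small chunks and
# emit one cudbReallocate invocation per chunk. The identities, paths, and
# chunk size are made up; only the --list option is taken from this document.
LIST=$(mktemp)
seq 1 2500 | sed 's/^/subscriber-/' > "$LIST"   # placeholder subscriber list
CHUNK=1000                                      # subscribers per run (assumed)
OUTDIR=$(mktemp -d)
split -l "$CHUNK" "$LIST" "$OUTDIR/chunk."
count=0
for part in "$OUTDIR"/chunk.*; do
    echo "cudbReallocate --list $part"          # dry run: print, do not execute
    count=$((count+1))
done
echo "chunks prepared: $count"
```

In a real session, the echo would be replaced by the actual cudbReallocate invocation, executed chunk by chunk inside the low-traffic window.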
The requisites of using the command are as follows:
cudbReallocate Command Options
The following command options can be used:
The following list contains some example output messages:
If the destination DSG is blocked for provisioning, an example output message is:
2.2.21.1 cudbReallocate Examples of Use
2.2.21.1.1 How to Check the Occupancy Percentage and Memory Level of All DSGs
The following example shows how to check the occupancy percentage and memory level of all DSGs in the system with the cudbReallocate command:
Steps
2.2.22 cudbReconciliationMgr
The cudbReconciliationMgr command is used to check, print, and modify the pending reconciliation task list. Refer to CUDB Data Storage Handling for further details about when to use it.
cudbReconciliationMgr Requisites
cudbOperator users have no access to run this command.
cudbReconciliationMgr Command Options
The following command options can be used:
The command output is a log of the execution progress. The command result is provided at the end of the log.
2.2.22.1 cudbReconciliationMgr Examples of Use
The following examples show how to execute cudbReconciliationMgr. Before all command executions, the following procedure must be performed:
Steps
Some examples of using cudbReconciliationMgr are provided below:
2.2.23 cudbRemoteTrust
The cudbRemoteTrust command is used in scalability procedures during installation to establish new SSH trust relationships between the installed CUDB node and the rest of the CUDB nodes.
It is also used to disable the legal warning banner displayed for internal CUDB logins. Contact Ericsson personnel for more information.
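Conceptually, an SSH trust relationship of the kind cudbRemoteTrust automates consists of distributing a node's public key to its peers. The following is a generic local sketch, not the CUDB implementation; all paths are demo assumptions.

```shell
# Generic local sketch of establishing SSH trust (what cudbRemoteTrust
# automates between CUDB nodes): create a key pair and append the public key
# to the peer's authorized_keys. All paths are demo assumptions.
demo=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$demo/id_ed25519"
mkdir -p "$demo/peer/.ssh"
cat "$demo/id_ed25519.pub" >> "$demo/peer/.ssh/authorized_keys"
chmod 600 "$demo/peer/.ssh/authorized_keys"     # SSH requires strict permissions
echo "trusted keys on peer: $(wc -l < "$demo/peer/.ssh/authorized_keys")"
```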
The requisites of using the command are as follows:
cudbRemoteTrust [-l | --local] [-a | --all [-p | --persistent]] [-b | --banner] [-h | --help]
cudbRemoteTrust Command Options
The following command options can be used:
Successful execution provides no output.
2.2.23.1 cudbRemoteTrust Examples of Use
2.2.24 cudbResumeProvisioningNotification
The cudbResumeProvisioningNotification command is used to send a notification to the Provisioning Gateway (PG) to resume provisioning. Refer to Storage Engine, Backup Notification Failure To Provisioning Gateway for further details about when to use it.
cudbResumeProvisioningNotification Requisites
cudbOperator users have no access to run this command.
cudbResumeProvisioningNotification Syntax
cudbResumeProvisioningNotification [-u | --user-facility <facility>] [-D | --debug] [-h | --help]
cudbResumeProvisioningNotification Command Options
The following command options can be used:
cudbResumeProvisioningNotification Output
The command output is a log of the changes performed. The command result is provided at the end of the log.
2.2.24.1 cudbResumeProvisioningNotification Examples of Use
The following example shows how to execute cudbResumeProvisioningNotification:
Steps
2.2.25 cudbServiceContinuity
The cudbServiceContinuity command is used to manually change DSG mastership to working nodes that have been left in minority after a CUDB system split situation. The cudbServiceContinuity command maximizes service, allowing traffic operation with the resources available in the minority partition. New masters are elected for all DSGs configured in that minority partition.
| Note: |
PLDB mastership is not affected by the command. |
Refer to CUDB High Availability for further details about using the command.
cudbServiceContinuity Requisites
The requisites of using the command are as follows:
cudbServiceContinuity [-h | --help]
cudbServiceContinuity Command Options
The following command option can be used:
The command can return the following output messages:
| Note: |
If the command fails for reasons other than the ones above, contact the next level of maintenance support. |
2.2.25.1 cudbServiceContinuity Examples of Use
<CUDB_node_prompt> cudbServiceContinuity
2.2.26 cudbSwBackup
The cudbSwBackup command performs a software and configuration backup in the CUDB node. The command rotates software and configuration backup files automatically, and asks for confirmation when the oldest backup is about to be deleted. Refer to CUDB System Administrator Guide for more information on using the command.
The requisites of using the command are as follows:
The following command options can be used:
The output of the command is a log of the performed actions. An example output is provided below:
Pack and compress all directories and files under /home/cudb/* except swbackup, systemDataBackup and automatedBackupStorage directories.
IMM data persisted
Obtaining SQL Servers Information ...
Creating tables backup into file /cluster/home/cudb/swbackup/newbackup-cudbSmpConfig.sql
Connecting to host 10.22.49.5:15000 , database: cudb_system_monitor
Let's tar the directory /cluster/home/cudb excluding backup directory itself /cluster/home/cudb/swbackup/ ...
Let's execute the lde-brf command: cluster brf create -l newbackup -t system -m newbackup
The resulting file is saved in /cluster/storage/no-backup/newbackup.tar
Snapshot cleanup completed
newbackup/config.md5sum
newbackup/config.metadata
newbackup/config.tar.gz
newbackup/software.md5sum
newbackup/software.metadata
newbackup/software.tar.gz
Backup successfully created.
The backup files: newbackup, are located in directories: /cluster/home/cudb/swbackup and /cluster/storage/no-backup
2.2.26.1 cudbSwBackup Examples of Use
The following example shows how to create a new software and configuration backup (named newbackup) with cudbSwBackup:
Steps
The following example shows how to restore a previously created software and configuration backup (named newbackup) with cudbSwBackup:
2.2.27 cudbSwVersionCheck
The cudbSwVersionCheck command gathers information on the installed software packages from both the object model and LDE, and then prints the current installation state for each node in the LDE cluster. It also compares the current state to the official package list of the installed CUDB release, and shows the differences between the reference file and the currently installed packages.
The package information listing and the comparison of the installed packages can be requested separately with the use of the --packages and --compare options, respectively. If none of these options are used when executing the command, both actions are performed.
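The comparison performed by --compare can be illustrated with standard tools: diff a sorted reference package list against a sorted installed list and mark each difference with the direction used in the command output ("reference <|> current sw version"). The package names below are invented placeholders; the real command reads the cudbReference file of the installed CUDB release.

```shell
# Conceptual sketch of the --compare step: report packages present only in
# the reference list ("<") or only in the installed list (">").
ref=$(mktemp); cur=$(mktemp)
printf '%s\n' PKG-A-R1A01 PKG-B-R1A02 PKG-C-R1A03 | sort > "$ref"   # reference
printf '%s\n' PKG-A-R1A01 PKG-C-R1A03 PKG-D-R1A01 | sort > "$cur"   # installed
echo "Differences: reference <|> current sw version"
# comm -3 prints reference-only lines in column 1 and installed-only lines
# in column 2 (tab-indented); awk converts the columns to < / > markers.
diffout=$(comm -3 "$ref" "$cur" | awk -F'\t' '{ if ($1 == "") print "> " $2; else print "< " $1 }')
echo "$diffout"
```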
cudbOperator users have no access to run this command.
cudbSwVersionCheck [-p | --packages] [-c | --compare] [(-f | --reffile <REFFILE>) | (-d | --refdir <REFDIR>)] [-h | --help]
cudbSwVersionCheck Command Options
The following command options can be used:
An example output is provided below:
CUDB_67 SC_2_1# cudbSwVersionCheck
Checking SW on blades.......... ..........................................................
SW num | 2_1 | 2_2 || 2_3 | 2_4 | 2_5 | 2_6 | 2_7 | 2_8 | 2_9 | 2_10 |
-------------------------------------------------------------------------------
1 | XO | XO || | | | | | | | |
2 | | || XO | XO | XO | XO | XO | XO | XO | XO |
3 | XO | XO || | | | | | | | |
4 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
5 | XO | XO || | | | | | | | |
6 | XO | XO || | | | | | | | |
7 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
8 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
9 | XO | XO || | | | | | | | |
10 | XO | XO || | | | | | | | |
11 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
12 | XO | XO || | | | | | | | |
13 | XO | XO || | | | | | | | |
14 | XO | XO || | | | | | | | |
15 | XO | XO || | | | | | | | |
16 | XO | XO || | | | | | | | |
17 | XO | XO || | | | | | | | |
18 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
19 | | || XO | XO | XO | XO | XO | XO | XO | XO |
20 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
21 | XO | XO || | | | | | | | |
22 | | || XO | XO | XO | XO | XO | XO | XO | XO |
23 | XO | XO || | | | | | | | |
24 | XO | XO || | | | | | | | |
25 | XO | XO || | | | | | | | |
26 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
27 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
28 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
29 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
30 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
31 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
32 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
33 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
34 | | || XO | XO | | | | | | |
35 | XO | XO || | | | | | | | |
36 | XO | XO || | | | | | | | |
37 | | || XO | XO | XO | XO | XO | XO | XO | XO |
38 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
39 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
40 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
41 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
42 | XO | XO || | | | | | | | |
43 | | || XO | XO | XO | XO | XO | XO | XO | XO |
44 | XO | XO || | | | | | | | |
45 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
46 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
47 | XO | XO || | | | | | | | |
48 | XO | XO || | | | | | | | |
49 | | || XO | XO | XO | XO | XO | XO | XO | XO |
50 | XO | XO || | | | | | | | |
51 | | XO || | | | | | | | |
52 | XO | XO || XO | XO | XO | XO | XO | XO | XO | XO |
53 | XO | XO || | | | | | | | |
54 | XO | XO || | | | | | | | |
55 | XO | XO || | | | | | | | |
56 | XO | XO || | | | | | | | |
57 | XO | XO || | | | | | | | |
58 | XO | XO || | | | | | | | |
LEGEND:
X: queried by the "cmw-repository-list --node <blade> <sw_string>" command
O: queried by the "immlist <sw_string>" command
SDPs imported to the LOTC cluster:
----------------------------------
SW num | SW string | Used |
------------------------------------------------------------------
1 | ERIC-COREMW_SC-CXP9017565_1-R5K01 | XO |
2 | ERIC-LINUX_PAYLOAD-CXP9013152_3-R4F04 | XO |
3 | ERIC-CUDB_SMBC_PROCESS-CXP9030366_6-R1A02 | XO |
4 | ERIC-CUDB_BC_SERVER-CXP9030364_6-R1A01 | XO |
5 | ERIC-CUDB_BC_CLIENT-CXP9030367_6-R1A02 | XO |
6 | ERIC-CUDB_LDAPFE_MONITOR-CXP9015809_6-R1A10 | XO |
7 | ERIC-CUDB_CBATMPPATCHES-CXP9030357_6-R1A10 | XO |
8 | ERIC-COREMW_OPENSAF-CXP9017656_1-R5K01 | XO |
9 | ERIC-CUDB_APPCOUNTERS-CXP9015318_6-R1A10 | XO |
10 | ERIC-CUDB_BC_SERVER_MONITOR-CXP9030379_6-R1A01 | XO |
11 | ERIC-COREMW_COMMON-CXP9017566_1-R5K01 | XO |
12 | ERIC-COM-CXP9017585_2-R6K01 | XO |
13 | ERIC-ComSa-CXP9017697_3-R2K01 | XO |
14 | ERIC-CUDB_ESA-CXP9015816_6-R1A09 | XO |
15 | ERIC-CUDB_CURATOR-CXP9030365_6-R1A02 | XO |
16 | ERIC-CUDB_BCCLUSTER_CMD-CXP9030380_6-R1A01 | XO |
17 | ERIC-CUDB_BACKUP_RESTORE-CXP9015818_6-R1A10 | XO |
18 | ERIC-CUDB_BACKUP_CMD-CXP9016194_6-R1A05 | XO |
19 | ERIC-CUDB_LDAPFE-CXP9015319_6-R1A10 | XO |
20 | ERIC-CUDB_KEEPALIVE-CXP9015322_6-R1A10 | XO |
21 | ERIC-CUDB_CS_SC_CMD-CXP9030358_6-R1A10 | XO |
22 | ERIC-CUDB_CS_PL_CMD-CXP9019599_6-R1A10 | XO |
23 | ERIC-CUDB_CS-CXP9015812_6-R1A10 | XO |
24 | ERIC-CUDB_COUNTCOLLECTOR-CXP9015814_6-R1A07 | XO |
25 | ERIC-CUDB_LIB_REALLOC-CXP9030372_6-R1A10 | XO |
26 | ERIC-CUDB_LIB_BCCLI-CXP9030363_6-R1A01 | XO |
27 | ERIC-CUDB_JAVA-CXP9015823_6-R1A04 | XO |
28 | ERIC-CUDB_MANAGE_BCSERVER_CMD-CXP9030381_6-R1A01 | XO |
29 | ERIC-CUDB_LIB_BOOST-CXP9030227_6-R1A04 | XO |
30 | ERIC-CUDB_LIB_ALARMS-CXP9030217_6-R1A10 | XO |
31 | ERIC-CUDB_LIB_SMMCP-CXP9015821_6-R1A10 | XO |
32 | ERIC-CUDB_LIB_LDAP-CXP9030233_6-R1A10 | XO |
33 | ERIC-CUDB_LIB_AMF-CXP9017480_6-R1A10 | XO |
34 | ERIC-CUDB_LDAP_SOAP_NOTIF-CXP9016181_6-R1A10 | XO |
35 | ERIC-CUDB_LDAP_CMD-CXP9030234_6-R1A03 | XO |
36 | ERIC-CUDB_REALLOC_CMD-CXP9030373_6-R1A10 | XO |
37 | ERIC-CUDB_MYSQL_NDBD-CXP9030225_6-R1A02 | XO |
38 | ERIC-CUDB_MYSQL_COMMON-CXP9030223_6-R1A10 | XO |
39 | ERIC-CUDB_LIB_LOGGER-CXP9030219_6-R1A10 | XO |
40 | ERIC-CUDB_LIB_CDXQ-CXP9030216_6-R1A10 | XO |
41 | ERIC-CUDB_LIB_CONF_MGMT-CXP9030218_6-R1A10 | XO |
42 | ERIC-CUDB_MYSQL_MGMD-CXP9030224_6-R1A02 | XO |
43 | ERIC-CUDB_PL_PLATFORM-CXP9015820_6-R1A10 | XO |
44 | ERIC-CUDB_LOADMONITOR-CXP9017650_6-R1A10 | XO |
45 | ERIC-CUDB_LIB_XERCES-CXP9030220_6-R1A01 | XO |
46 | ERIC-CUDB_LIB_ZKCCLI-CXP9030362_6-R1A01 | XO |
47 | ERIC-CUDB_PDSH_CMD-CXP9030391_6-R1A01 | XO |
48 | ERIC-CUDB_OI-CXP9015813_6-R1A10 | XO |
49 | ERIC-CUDB_MYSQL_SERV-CXP9030226_6-R1A02 | XO |
50 | ERIC-CUDB_RECONCILIATION-CXP9015808_6-R1A10 | XO |
51 | ERIC-CUDB_NODE_CONFIG-CXP9015320_6-R1A10 | XO |
52 | ERIC-CUDB_SECURITY-CXP9016002_6-R1A10 | XO |
53 | ERIC-CUDB_SYSHEALTH-CXP9030087_6-R1A10 | XO |
54 | ERIC-CUDB_SC_PLATFORM-CXP9015817_6-R1A10 | XO |
55 | ERIC-CUDB_SLES_SCREEN_CMD-CXP9030371_6-R1A01 | XO |
56 | ERIC-LINUX_CONTROL-CXP9013151_3-R4F04 | XO |
57 | ERIC-CUDB_SW_MGMT-CXP9015815_6-R1A10 | XO |
58 | ERIC-CUDB_SM_CLIENT-CXP9015810_6-R1A10 | XO |
SW Version of the CUDB Node:
----------------------------
Reference: /home/coremw_appdata/incoming/cudb-install-temp/cudbReference
Version: CUDB13B CXP9020214/6 R1A10
Differences: reference <|> current sw version
--------------------------------
ERIC-CUDB_SM_PROCESS-CXP9015811_6-R1A10 <
2.2.27.1 cudbSwVersionCheck Examples of Use
Execute the command as follows to print the package installation status and compare the package installation state to the contents of the cudbReference file:
<CUDB_node_prompt> cudbSwVersionCheck --packages --compare
2.2.28 cudbSystemDataBackupAndRestore
The cudbSystemDataBackupAndRestore command automates the system data backup and restore processes. In case of a kickstart command failure, the execution is not stopped. Furthermore, a retry mechanism exists, which executes the kickstart command a maximum of 2 more times with a 5-second delay between attempts. The encountered errors are stored and displayed in a summary at script exit. In case of error, the exit code of cudbSystemDataBackupAndRestore is the number of errors encountered during execution. The cudbSystemDataBackupAndRestore script exits while kickstart_replication is running and leaves most of the slave DS replicas without replication started. Refer to CUDB Backup and Restore Procedures for further details on when to use the command.
| Note: |
Do not run the command cudbSystemDataBackupAndRestore while configuration changes are being applied. |
| Note: |
To avoid collision with the Self-Ordered Backup and Restore function, do not perform system data backup or restore if a Self-Ordered Backup and Restore process is running on any node of the system. See cudbSystemStatus for further information about the status of the Self-Ordered Backup and Restore process. |
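The retry-and-error-count behavior described above can be sketched as follows. The step functions are hypothetical stand-ins, not real CUDB calls, and the delay is shortened to 1 second here (the real script waits 5 seconds).

```shell
# Sketch of the retry-and-error-count behavior: each step is retried up to
# 2 more times, errors are summarized, and the error count becomes the
# exit status. step_ok/step_bad are hypothetical stand-ins for kickstart
# steps; the delay is 1 s here, while the real script waits 5 s.
RETRY_DELAY=1
errors=0
summary=""
run_with_retry() {
    step=$1
    attempt=0
    until "$step"; do
        attempt=$((attempt+1))
        if [ "$attempt" -gt 2 ]; then           # initial try + 2 retries done
            errors=$((errors+1))
            summary="$summary$step failed after 3 attempts\n"
            return 1
        fi
        sleep "$RETRY_DELAY"
    done
    return 0
}
step_ok()  { true; }                            # a step that succeeds
step_bad() { false; }                           # a step that keeps failing
run_with_retry step_ok
run_with_retry step_bad || true                 # execution continues on failure
printf "Summary of errors:\n$summary"
echo "error count (used as exit code): $errors"
```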
cudbSystemDataBackupAndRestore Requisites
cudbOperator users have no access to run this command.
cudbSystemDataBackupAndRestore Syntax
cudbSystemDataBackupAndRestore Command Options
The following command options can be used:
cudbSystemDataBackupAndRestore Output
Output examples are provided below:
2.2.29 cudbSystemStatus
The cudbSystemStatus command prints the general system status and the local node information. The output contains information on the following components:
| Note: |
If the cudbSystemStatus command is executed on a system with different hardware types, the command output also contains information on the hardware types used in the system. This hardware information is listed below the CUDB software version output. |
This command can be executed by the cudbOperator user using sudo.
cudbSystemStatus [-a | --alarms] [-s | --sm-status] [-c | --cluster-status] [-C | --new-cluster-status] [-m | --check-mysql] [-p | --check-cudbprocess] [-b | --bc-status] [-B | --new-bc-status] [-r | --replication-status] [-R | --new-replication-status] [-t | --hardware-type] [-v | --version] [-h | --help]
cudbSystemStatus Command Options
The following command options can be used:
| Note: |
If no options are used when executing the command, cudbSystemStatus runs as if the -B, -v, -s, -C, -R, -a, -m, and -p options had been used. If at least one command option is specified, the command collects information related only to that option. Also, if no options are used when executing the command on a system with different hardware types, cudbSystemStatus runs the -t | --hardware-type option in addition to the options listed above. |
| Note: |
In hybrid systems, where some DSGs consist of different hardware platforms, memory usage values are recalculated for DSGs located on nodes with a higher configured system memory for the database cluster. Memory levels shown by the cudbSystemStatus command for those DSGs differ from those reported by the database cluster client. |
The command shows information on the relevant system status. The BC cluster information shows the status of the BC clusters in all sites (unless the node is disabled, in which case the output is Disabled instead of the status of the BC servers on that node), while the SM information shows the status of the SM leader elected in every site.
If, for any reason, the BC cluster does not have a majority of BC servers (enough to work as a cluster), this is indicated in the output, and no SM leader appears on the affected site.
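The default-option behavior described in the note above can be sketched as a simple dispatch: with no arguments, act as if the default option set had been given; otherwise collect only what was requested. This is an illustration of the pattern, not the real script.

```shell
# Dispatch sketch for the default-option behavior: no arguments implies the
# default set -B -v -s -C -R -a -m -p; any explicit option restricts the
# collection to that option. Illustration only, not the real script.
args="$*"
if [ -z "$args" ]; then
    args="-B -v -s -C -R -a -m -p"              # implied defaults (from the note)
fi
selected=""
for opt in $args; do
    case $opt in
        -a) selected="$selected alarms";;
        -s) selected="$selected sm-status";;
        -C) selected="$selected cluster-status";;
        -m) selected="$selected check-mysql";;
        -p) selected="$selected check-cudbprocess";;
        -B) selected="$selected bc-status";;
        -R) selected="$selected replication-status";;
        -v) selected="$selected version";;
    esac
done
echo "collecting:$selected"
```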
2.2.29.1 cudbSystemStatus Examples of Use
Perform the following steps to execute the command:
Steps
Checking Replication Channels in the System:
Node | 40 | 41
====================
PLDB ___|__S1_|__M_
DSG 1 __|__S1_|__M__
DSG 3 __|__M__|__Xu_
Printing Detailed Replication Status for the Slave Replicas:
Node 40:
Replication in DSG0(Chan=1) .... Up -- Delay = 0.0 seconds, no. of pending changes = 0
Replication in DSG1(Chan=1) .... Up -- Delay = 0.0 seconds, no. of pending changes = 0
Node 41:
Replication in DSG3 .... Stopped - Data Store is disabled
B) One Node Disabled (Node 40):
Checking Replication Channels in the System:
Node | 40 | 41 | 130 | 131
================================
PLDB ___|__Xu_|__S1_|__M__|__S1_
DSG 1 __|__Xu_|_____|__M__|__S1_
DSG 3 __|__Xu_|__M__|_____|_____
DSG 255 |_____|__S1_|__M__|__S1_
Printing Detailed Replication Status for the Slave Replicas:
Node 40: Disabled
Replication in DSG0 .... Stopped - Data Store is disabled
Replication in DSG1 .... Stopped - Data Store is disabled
Replication in DSG3 .... Stopped - Data Store is disabled
Node 41:
Replication in DSG0(Chan=1) .... Up -- Delay = 0.0 seconds, no. of pending changes = 0
Replication in DSG255(Chan=1) .... Up -- Delay = 0.0 seconds, no. of pending changes = 0
Node 130:
There are no Slave clusters
Node 131:
Replication in DSG0(Chan=1) .... Up -- Delay = 0.0 seconds, no. of pending changes = 0
Replication in DSG1(Chan=1) .... Up -- Delay = 0.0 seconds, no. of pending changes = 0
Replication in DSG255(Chan=1) .... Up -- Delay = 0.0 seconds, no. of pending changes = 0
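For reference, the master node per store can be read off the channel table mechanically, assuming M marks the master replica in a node's column (an assumption from context). The following awk sketch parses a cleaned-up copy of example B; it is a reader-side aid, not part of the CUDB toolset.

```shell
# Extract the master node per store from a replication channel table,
# assuming "M" marks the master replica of each store.
table='Node    |  40 |  41 | 130 | 131
PLDB    |  Xu |  S1 |  M  |  S1
DSG 1   |  Xu |     |  M  |  S1
DSG 3   |  Xu |  M  |     |
DSG 255 |     |  S1 |  M  |  S1'
out=$(echo "$table" | awk -F'|' '
NR == 1 { for (i = 2; i <= NF; i++) { gsub(/ /, "", $i); node[i] = $i }; next }
{
    store = $1; sub(/ +$/, "", store)
    for (i = 2; i <= NF; i++) {
        cell = $i; gsub(/ /, "", cell)
        if (cell == "M") print store " master on node " node[i]
    }
}')
echo "$out"
```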
2.2.30 cudbTakeAllMasters
The cudbTakeAllMasters command is used to change mastership manually to a working partition in case of emergency (such as a CUDB system split situation), when a surviving partition is available in the CUDB system with reduced service. cudbTakeAllMasters maximizes the service in the surviving partition, allowing normal operation with the resources available in this partition.
Refer to CUDB High Availability for further details about the use of this command.
The requisites of using this command are as follows:
cudbTakeAllMasters [-h | --help]
cudbTakeAllMasters Command Options
The following command option can be used:
The command can return the following output messages:
| Note: |
If the command fails for reasons other than the ones above, contact the next level of maintenance support. |
2.2.30.1 cudbTakeAllMasters Examples of Use
<CUDB_node_prompt> cudbTakeAllMasters
2.2.31 cudbUnitDataBackupAndRestore
The cudbUnitDataBackupAndRestore command resolves data inconsistencies between replicas by creating a data backup on the master replica and restoring this data backup on the inconsistent slave replica. Refer to CUDB Backup and Restore Procedures for further details about when to use it.
cudbUnitDataBackupAndRestore Requisites
The requisites of using the command are as follows:
cudbUnitDataBackupAndRestore Syntax
cudbUnitDataBackupAndRestore Command Options
The following command options can be used:
cudbUnitDataBackupAndRestore Output
An example output of the command is provided below:
CUDB_41 SC_2_1# cudbUnitDataBackupAndRestore -d 1 -n 42
WARNING: Restoring the DS cluster will replace previous contents.
Are you sure you want to continue (y/n)? y
Action will be executed
CREATE PART
--------------------------------
creating backup on node 41
cudbManageStore stores to process: ds1 (in dsgroup1).
Starting Backup ...
Launching order Backup for ds1 in dsgroup 1.
Obtaining Mgm Information.
Trying backup on mgmt access 1, wait a moment ...
ndb_mgm 10.22.41.1 2372 -e "START BACKUP 999 WAIT COMPLETED" ..ok
BACKUP-999 renamed in PL_2_7 to /local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46
BACKUP-999 renamed in PL_2_8 to /local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46
Backup finished successfully for store ds1.
Stores where order backup was successfully completed: ds1.
cudbManageStore command successful.
DIR CREATE PART
--------------------------------
creating backup directory /local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46 on node 42 on blade 10.22.42.7
creating backup directory /local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46 on node 42 on blade 10.22.42.8
COPY PART
--------------------------------
copying backup files from node 41 blade 10.22.41.7:/local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46 to node 42 blade 10.22.42.7:/local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46
copying backup files from node 41 blade 10.22.41.8:/local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46 to node 42 blade 10.22.42.8:/local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46
RESTORE PART
--------------------------------
Restoring backup on node 42
cudbManageStore stores to process: ds1 (in dsgroup1).
Launching restore order in CUDB Node 42 to store ds1 in dsgroup 1.
Starting restore in CUDB Node 42 for store ds1, backup path /local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46, sql scripts path /home/cudb/storageEngine/config/schema/ds/internal/restoreTempSql.
Waiting for restore order(s) to be completed in CUDB Node 42 for stores : ds1.
restore order finished successfully in CUDB Node 42 for store ds1.
restore order(s) completed in CUDB Node 42 for stores : ds1.
Stores where order restore was successfully completed: ds1.
Closing connections for all blades of DSUnitGroup 1.
No stored procedures were found to restore.
cudbManageStore command successful.
Performing cleanup
delete backup directory /local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46 on node 42 on blade 10.22.42.7
delete backup directory /local/cudb/mysql/ndbd/backup/BACKUP/BACKUP-2016-04-11_15-46 on node 42 on blade 10.22.42.8
cudbUnitDataBackupAndRestore successfully ended
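The example output above walks through five phases. The following dry-run sketch outlines them with the node numbers and paths from the example; the real work is performed by cudbManageStore orders, not by these echo statements.

```shell
# Dry-run outline of the five phases shown in the example output. Values are
# taken from the example; the real work is done by cudbManageStore orders.
SRC=41; DST=42; STORE=ds1
BACKUP=BACKUP-2016-04-11_15-46
DIR=/local/cudb/mysql/ndbd/backup/BACKUP/$BACKUP
out=$(
    echo "CREATE:     back up $STORE on master node $SRC as $BACKUP"
    echo "DIR CREATE: create $DIR on the blades of node $DST"
    echo "COPY:       copy $DIR from node $SRC blades to node $DST blades"
    echo "RESTORE:    restore $BACKUP into $STORE on node $DST"
    echo "CLEANUP:    delete $DIR on the blades of node $DST"
)
echo "$out"
```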
2.2.32 cudbUpdateUserInfo
The cudbUpdateUserInfo command updates the local node configuration with the latest changes of LDAP users in the CUDB node where the command is executed.
The requisites of using the command are as follows:
cudbUpdateUserInfo [-D | --Debug] [-h | --help] [-i | --internal]
cudbUpdateUserInfo Command Options
The following command options can be used:
The output informs about the success or failure of executing the command. An example output for each scenario is shown below:
2.2.32.1 cudbUpdateUserInfo Examples of Use
Perform the following steps to execute the command:
Steps
2.3 Internal Commands
This section lists the internal commands of the CUDB system, available for Ericsson personnel only.
2.3.1 cudbAppCheckerManager
This command can be executed by Ericsson personnel only.
2.3.2 cudbBCServersRestart
This command can be executed by Ericsson personnel only.
2.3.3 cudbCopyAclsConfFile
This command can be executed by Ericsson personnel only.
2.3.4 cudbEvipConfigExtension
This command can be executed by Ericsson personnel only.
2.3.5 cudbEvipEncapsulator
This command can be executed by Ericsson personnel only.
2.3.6 cudbExecuteAllBlades
This command can be executed by Ericsson personnel only.
2.3.7 cudbFollowLdapfeLogs
This command can be executed by Ericsson personnel only.
2.3.8 cudbManageDsGroup
This command can be executed by Ericsson personnel only.
2.3.9 cudbManageNode
This command can be executed by Ericsson personnel only.
2.3.10 cudbManageSite
This command can be executed by Ericsson personnel only.
2.3.11 cudbMpstat
This command can be executed by Ericsson personnel only.
2.3.12 cudbOomConfigurator
The cudbOomConfigurator command is used only by a restricted process.
2.3.13 cudbParallelCommandRun
This command can be executed by Ericsson personnel only.
2.3.14 cudbRemoveNode
This command can be executed by Ericsson personnel only.
2.3.15 cudbSdpInfo
This command can be executed by Ericsson personnel only.
2.3.16 cudbSetDsgMaster
This command can be executed by Ericsson personnel only.
2.3.17 cudbSetPartitionStatus
This command can be executed by Ericsson personnel only, although option -ps is available for all system administrators and operators.
cudbSetPartitionStatus Command Options
The following command options can be used:
2.3.18 cudbTpsStat
This command can be executed by Ericsson personnel only.
Reference List
- CUDB Node Configuration Data Model Description
- CUDB Node Logging Events
- CUDB Security and Privacy Management
- CUDB Users and Passwords
- CUDB System Administrator Guide
- CUDB Logchecker
- CUDB Application Counters
- Storage Engine, Potential Data Inconsistency between Replicas Found in DS
- Storage Engine, Potential Data Inconsistency between Replicas Found in PLDB
- Storage Engine, Replication Stopped Working in DS
- Storage Engine, Replication Stopped Working in PLDB
- CUDB Consistency Check
- CUDB Backup and Restore Procedures
- CUDB Application Schema Update
- CUDB High Availability
- Storage Engine, Unable to Synchronize Cluster in DS, Major
- Storage Engine, Unable to Synchronize Cluster in PLDB, Major
- CUDB Subscription Reallocation
- CUDB Data Storage Handling
- Storage Engine, Backup Notification Failure To Provisioning Gateway
- CUDB Glossary of Terms And Acronyms