1 Introduction
This document describes the troubleshooting data to collect and enclose in a Customer Service Request (CSR) when a problem is experienced with the Ericsson Centralized User Database (CUDB).
1.2 Related Information
Definition and explanation of acronyms and terminology, trademark information, and typographic conventions can be found in the following documents:
1.3 Prerequisites
This section describes the prerequisites for performing the data collection procedure.
1.3.1 Documents
Before starting this procedure, ensure that the following information or documents are available:
1.3.2 Conditions
Before starting the data collection procedure, ensure that the following conditions are met:
The following sections describe how to establish the connections and fetch the required information listed above.
1.3.2.1 Naming Conventions
Several instructions in this document require the execution of a command manually. In such cases, the CUDB prompt is indicated as follows:
CUDB<node_id> SC_2_<x>#
In the above output, <node_id> represents the ID of the node to update, while SC_2_<x> represents the active System Controller (SC).
| Note: |
Use the cudbHaState command as shown below to check the active controller for COM:

CUDB<node_id> SC_2_<x># cudbHaState | grep COM | grep ACTIVE

The output of the command must look similar to the below example:

CUDB<node_id> SC_2_<x># cudbHaState | grep COM | grep ACTIVE
COM is assigned as ACTIVE in controller SC-2 |
1.3.2.2 Connecting to the Blade or VM of the CUDB Node
For CUDB systems deployed on native BSP 8100, connection to the GEP3, GEP5, or Generic Ericsson Processor version 7, Low Power (GEP7L) boards is available either through a serial connection or through the CUDB CLI. For CUDB systems deployed on a cloud infrastructure, connection to the VMs is available only through the CUDB CLI.
1.3.2.2.2 Connecting through the CUDB CLI
1.3.2.3 Obtaining Local and Global DS and DSG ID Allocation
Perform the following steps to obtain the local and global DS-DSG ID allocation:
Steps
Results
In this specific case, the local DS 1 belongs to the global DSG 1.
Refer to CUDB Subscription Reallocation for more information on DSG ID allocation.
1.3.2.4 Obtaining Global DSG ID for Channel Replication
Execute the cudbSystemStatus command from one of the SCs as follows to obtain the global DSG IDs:
CUDB<node_id> SC_2_<x># cudbSystemStatus -R
CUDB<node_id> SC_2_x# cudbSystemStatus -R
Execution date: Mon Mar 20 11:06:39 CET 2017
Checking Replication Channels in the System:
Node | 48 | 49
====================
PLDB ___|__M__|__S1_
DSG 1 __|__M__|__S1_
DSG 255 |__M__|__S2_
Printing Detailed Replication Status for the Slave Replicas:
Node 48:
There are no Slave clusters
Node 49:
Replication in DSG0(Chan=1) .... Up -- Delay = 0.0 seconds, no. of pending changes = 0
Replication in DSG1(Chan=1) .... Up -- Delay = 0.0 seconds, no. of pending changes = 0
Replication in DSG255(Chan=2) .... Up -- Delay = 0.0 seconds, no. of pending changes = 0
CUDB<node_id> SC_2_x# cudbSystemStatus -R
Execution date: Mon Mar 20 11:14:23 CET 2017
Checking Replication Channels in the System:
Node | 48 | 49
====================
[-E-] PLDB ___|__Xu_|__Xm_
[-E-] DSG 1 __|__Xu_|__M__
[-E-] DSG 255 |__Xu_|__M__
[-E-] [-E-]
Printing Detailed Replication Status for the Slave Replicas:
Node 48:
There are no Slave clusters
Node 49:
There are no Slave clusters
In this specific case, there is a problem with replication on node 48.
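When the cudbSystemStatus -R output is long, a quick filter helps spot the channels that are not reported as Up. The following is a minimal sketch that runs against a saved copy of the output; the sample file content below is an assumption that mirrors the examples in this section.

```shell
# Sketch: flag replication channels that are not reported as "Up" in a saved
# copy of the cudbSystemStatus -R output. The sample file is a stand-in
# mirroring the examples above.
set -eu
cat > /tmp/repl_status.txt <<'EOF'
Replication in DSG1(Chan=1) .... Up -- Delay = 0.0 seconds, no. of pending changes = 0
Replication in DSG255(Chan=2) .... Down -- Delay = 12.3 seconds, no. of pending changes = 42
EOF
# Keep only the channel lines, then drop the healthy ones.
grep 'Replication in' /tmp/repl_status.txt | grep -v ' Up --' || true
```

On a real SC, redirect the live command output to a file first (for example, cudbSystemStatus -R > /tmp/repl_status.txt) and run the same filter.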
2 Workflow
The workflow for collecting troubleshooting data is as follows:
Steps
- Collect mandatory data that is needed in connection with any problems experienced. See Mandatory Data for more information on collecting mandatory data.
- Collect other useful information if available within an acceptable amount of time and effort.
3 Mandatory Data
This section describes how to collect data that is mandatory for every type of problem related to CUDB.
The data described in this section must always be included in a CSR.
3.1 Data Collection
To collect the required data, perform the following procedure:
Steps
- Collect all application and platform logs with the cudbCollectInfo command.
- Make a copy of the log archive directory for later use. See Copying Log Archive Files for more information.
- Collect the SAF logs with the cmw-collect-info command.
- Collect additional core dump files. See Collecting Additional Core Dump Files for more information.
- Collect additional logs required for the CSR. See Collecting Additional Logs for more information.
- Collect all recent activities that can have an impact on the behavior of the CUDB system. See Describing Recent Activities for more information.
- Collect logs using the cudbGetLogs script. Refer to CUDB Logchecker for more information.
After This Task
The exact procedures to perform for the above points are described in the following sections in more detail.
3.1.1 Collecting Application and Platform Logs
The cudbCollectInfo script collects logs from the whole CUDB system, including the following:
Perform the following steps to collect the above logs:
Steps
Results
The output of the command must be similar to the below example:
CUDB_47 SC_2_1# cudbCollectInfo
Creating dir for node 47 (/local/tmp/cudb_collect_info_20171003-153852/47) ... OK
Creating dir for node 101 (/local/tmp/cudb_collect_info_20171003-153852/101) ... OK
Creating dir for node 105 (/local/tmp/cudb_collect_info_20171003-153852/105) ... OK
Creating dir for node 106 (/local/tmp/cudb_collect_info_20171003-153852/106) ... OK
Waiting 47 to finish ... OK
Waiting 101 to finish ... OK
Waiting 105 to finish ... OK
Waiting 106 to finish ... OK
CREATING ARCHIVE ... OK
Encrypting archive ... OK
Removing unencrypted archive ... OK
REMOVING RAW DATA ... OK
Fetch the file: /local/tmp/cudb_collect_info_20171003-153852.c
| Note: |
The output file has a .c extension if the default encryption is
enabled. If the default encryption is disabled, the file extension is
.tar.gz instead. The GNU Privacy Guard (GPG) encryption of the
tar.gz file can be disabled by executing the
cudbCollectInfo -e command. |
The size of the log files is generally between 100 MB and 900 MB. For example, in a three-node system with 8 DSGs, the size of the collected information is approximately 600 MB, which includes all collected logs from all three nodes.
For further information about the use of the command, execute it with the help switch (cudbCollectInfo -h), or refer to CUDB Node Commands and Parameters.
3.1.2 Copying Log Archive Files
Copy the log archive files to an external repository (for example, with the scp command), then send the relevant ones to support if requested. The log archive files are located on the SC in the following directory:
/local/cudb_logarchive/
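Before transferring the archive files off the node, it can help to bundle the directory into a single tarball. The following is a minimal sketch; the stand-in directory and output path below are assumptions so the sketch is runnable anywhere, and on a real SC you would point SRC at /local/cudb_logarchive and replace the commented scp target with your repository host.

```shell
# Sketch: bundle the log archive directory into one tarball before copying it
# to an external repository. The stand-in directory mirrors
# /local/cudb_logarchive/ for illustration only.
set -eu
SRC=/tmp/demo_cudb_logarchive            # on a real SC: /local/cudb_logarchive
mkdir -p "$SRC" && echo "sample" > "$SRC/archive_1.log"
OUT=/tmp/cudb_logarchive_bundle.tar.gz
tar -czf "$OUT" -C "$(dirname "$SRC")" "$(basename "$SRC")"
echo "created: $OUT"
# Transfer the bundle, for example:
#   scp "$OUT" user@repo-host:/path/to/external/repo/
```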
3.1.3 Collecting SAF Logs
Perform the following steps to collect the SAF logs:
| Note: |
To extract all logs related to the platform, send the command from an SC to generate a
compressed file in the current directory. |
Steps
3.1.4 Collecting Additional Core Dump Files
The cudbCollectInfo command only lists the core dumps; it does not collect them. Perform the following steps to collect additional dump files.
Steps
3.1.5 Collecting Additional Logs
Use the following commands to collect additional logs:
tar -czvf /cluster/home/commandlog.tar.gz /var/log/*/commandlog* /var/log/*/kernel*
tar -czvf /cluster/home/prmaint.tar.gz /home/cudb/monitoring/preventiveMaintenance/
cluster alarm -a -l --full
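The commands above can be combined into a single pass that stages everything in one directory for attachment to the CSR. The following is a sketch only: the demo path stands in for the real /var/log locations so the sketch is runnable anywhere, and the CUDB-specific steps are kept as comments.

```shell
# Sketch: stage the additional logs from the commands above in one directory.
# Demo paths stand in for the real node paths.
set -eu
DEST=/tmp/additional_logs_demo
mkdir -p "$DEST"
# Stand-in for /var/log/*/commandlog* and /var/log/*/kernel* on a real SC:
mkdir -p /tmp/demo_var_log
echo "cmd" > /tmp/demo_var_log/commandlog.1
tar -czf "$DEST/commandlog.tar.gz" -C /tmp demo_var_log
# On a real SC, also capture the preventive maintenance data and alarm list:
#   tar -czvf "$DEST/prmaint.tar.gz" /home/cudb/monitoring/preventiveMaintenance/
#   cluster alarm -a -l --full > "$DEST/alarms.txt"
echo "staged in $DEST"
```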
3.1.6 Describing Recent Activities
The problematic behavior of the CUDB system may be caused by activities performed in the system days or weeks before the problem occurred.
Therefore, check whether any of the activities listed below took place during the 2-4 weeks before the problem appeared. If so, obtain as much information, logs, and documentation as possible to facilitate troubleshooting.
Table 1 shows the activities that must be taken into consideration when collecting additional information, if applicable.
| Activity | CUDB Nodes / CUDB System Level | UDC Solution Level (FEs Connected to the CUDB System, Provisioning System, Other Nodes Connected) | IP Network (Site Routers, Switches, Backbone Connecting the CUDB Nodes)(1) | Other |
|---|---|---|---|---|
| Upgrade or Update | X | X | X | X |
| Configuration Change | X | X | X | X |
| Provisioning-related Activities | X | X | | |
| Schema Change | X | X | | |
| HW Replacement or Expansion | X | X | X | X |
| Change in the Amount of Subscribers in the System | X | X | | |
| Script or Tool Execution | X | X | X | X |
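Generic Linux commands can help reconstruct the recent activities listed in Table 1. The following is a hedged sketch using standard Linux tooling only, not CUDB-specific commands; the output file path is an assumption, and the 28-day window matches the 2-4 week guidance above.

```shell
# Sketch: gather generic hints about recent activity on a node and save them
# for attachment to the CSR. Standard Linux tooling only; adapt the paths to
# the deployment.
set -eu
OUT=/tmp/recent_activity_hints.txt
{
  echo "== Files under /etc modified in the last 28 days =="
  find /etc -type f -mtime -28 2>/dev/null | head -n 20
  echo "== Recent logins =="
  last -n 10 2>/dev/null || true
} > "$OUT"
echo "hints written to $OUT"
```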
4 Data Collection for Specific Problem Types
This section describes the data to be included in a CSR, depending on what type of problem is experienced.

Contents