User Guide 1/1553-CNL 121 695/10 Uen D

CUDB VNF Lifecycle Management

1 Introduction

This document describes system administration tasks performed in the Virtualized Network Function (VNF) Lifecycle Manager (VNF-LCM). The VNF-LCM provides a workflow execution environment and a web-based application for managing VNF lifecycle procedures.

The workflows are ordered sequences of steps for automating common use cases of the VNFs. A workflow provides a means to orchestrate simple and complex sequences of manual or automated tasks.

1.1 Purpose and Scope

This document covers the following workflow-based lifecycle management procedures:

  • Instantiate vCUDB system.

  • Terminate vCUDB system.

  • vCUDB Upgrade Preparation.

  • Upgrade vCUDB system.

    Note: Application Schema update is not included.

All manual preparation steps that must be executed by Ericsson personnel are out of the scope of this document. See Prerequisites for more information.

The Instantiate vCUDB system workflow is not applicable to vCUDB system expansions.

Note: A Virtualized CUDB node is referred to as a vCUDB VNF instance throughout the document.

1.2 Revision Information

Rev. A

Rev. B

Rev. C

Rev. D

Other than editorial changes, this document has been updated as follows:

1.3 Target Groups

This document is intended for the CUDB Operator.

For some of the actions described in this document, Ericsson personnel must be contacted in the following roles:

CUDB Administrator  

The CUDB Administrator prepares configuration files, performs the needed preparation steps, and handles certain troubleshooting tasks.

Cloud Administrator  

The Cloud Administrator is the cloud service provider who executes the required actions on the cloud infrastructure.

1.4 Typographic Conventions

Typographic conventions can be found in the following document:

1.5 Prerequisites

This section describes the prerequisites that must be fulfilled before executing any of the workflows.

The CUDB Administrator and/or Cloud Administrator must execute all initial steps included in the manual procedures, to provide the system preparation and configuration files needed to run the workflow-based lifecycle management procedures and, in the case of upgrade, to have all preparations from the upgrade documentation ready.

1.5.1 Hardware and Software

The following virtual and physical hardware and software are required:

  • Software delivery package (CUDB Workflow pack).

  • VNF-LCM release Media build version 3.5.3 or higher.

  • VNF-LCM up and running using Ericsson Network Management System (NMS), either Operations Support System for Radio and Core (OSS-RC) or Ericsson Network Manager (ENM).

    For example, in case of OSS-RC, to check the correct functioning of VNF-LCM, follow the steps defined in the Post Installation Verification section of VNF-LCM CEE/Openstack Installation Instructions, 1, in the OSS-RC documentation.

    Note: The Virtualized Infrastructure Manager (VIM) connection information in VNF-LCM framework has to be configured. This information is used by the workflows to connect to VIM and perform operations. Add as many VIMs as needed to VNF-LCM framework and add as many tenants as needed to the previously added VIM. Refer to the VNF-LCM CLI Admin section of VNF-Lifecycle Manager System Administration Guide, in the OSS-RC documentation.
  • VNF-LCM disk must be properly dimensioned and the available size must be sufficient to execute the workflows.

  • Any extra file not mentioned in this document must be stored in VNF-LCM under the /vnflcm-ext/ directory, to prevent the disk where the workflows are executed from becoming full.

  • Virtual infrastructure must be prepared for vCUDB deployment, according to the manual installation instruction.
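As a quick sanity check (a sketch, not part of the product procedures), the fill level of the VNF-LCM disk can be verified before running workflows; the 80% threshold is an assumption:

```shell
# Warn when the filesystem holding a given path exceeds a usage threshold.
check_usage() {
    # $1 = path, $2 = threshold in percent
    pct=$(df -P "$1" | awk 'NR==2 {sub(/%/, "", $5); print $5}')
    if [ "$pct" -ge "$2" ]; then
        echo "WARNING: $1 is ${pct}% full"
    else
        echo "OK: $1 is ${pct}% full"
    fi
}

check_usage / 80    # on the VNF-LCM VM: check_usage /vnflcm-ext 80
```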

1.5.2 Configuration Files

1.5.2.1 Instantiate Configuration Files

Contact CUDB Administrator to obtain the following files, after all initial steps included in the installation instruction for vCUDB are executed:

  • All schema files required

  • License files named license-key-file.xml

  • SQL files

  • sqlList.txt file

  • fixed-entries-pl.ldif file

  • heat_template file named main.yaml

  • environment_template file named env.yaml

  • scaling_template file named DS_scaling.yaml

  • Initial configuration model file named CudbOamModel_Instances_config_imm.xml

1.5.2.2 Upgrade Configuration Files

Contact CUDB Administrator to obtain the following files, after all preparations are ready from upgrade documentation:

  • License files named CUDB_LKF_<cudb_node_id>.xml and fingerprint files named lm_fingerprint_<cudb_node_id>, if the vCUDB software version is earlier than 1.10.

    Note: _<cudb_node_id> corresponds to the cudbLocalNodeId/cudbRemoteNodeId defined in the CUDB configuration model for each CUDB node of the system.
  • Upgrade package named CUDB_PACKAGE_x86_64-CXP.

  • preventive_WAs.tar, if available.

1.5.3 Other Requirements

Contact CUDB Administrator regarding the following aspects:

  • If vCUDBs are already installed, ensure that the property tags included in the stacks are "vCUDB". If not, update them as follows:

    Select the appropriate stack for updating and update the "vCUDB" tag, if needed.

    For example:
    source <openrc>
    heat stack-list
    heat stack-update -x --tags "vCUDB" <stack_name>
  • In the case of instantiate:
    • If the vCUDB system consists of more than 10 vCUDB VNFs, add multiple SITE_VIP IPs in a live vCUDB VNF.

    • When the instantiation is finished, change the image in the System Controllers.

  • In the case of upgrade:

    • If upgrade preparation steps fail during workflow execution.

    • If upgrade steps fail during workflow execution.

    • If the upgrade is not performed again after an automatic fallback is executed, the system must be fully restored to the previous CUDB software version.

    • If the vCUDB base software version is earlier than 1.12, to enable VM evacuation support in Cloud Execution Environment (CEE).

2 Onboard

This section describes how to prepare for workflow-based VNF operations using VNF-LCM. Performing this procedure is a prerequisite for lifecycle operations.

For more information on installing workflows, refer to the Workflow Bundle Administration section of VNF-Lifecycle Manager System Administration Guide document in the OSS-RC documentation.

Execute the following commands on the VNF-LCM Services Virtual Machine (VM):

Steps

  1. Connect to VNF-LCM:
    ssh cloud-user@<VNFLAF-services_ip>
  2. Copy the CUDB Workflow pack CUDB_VNFLCM_WORKFLOWS-CXP9040847.tar file into /home/cloud-user directory.
  3. Decompress the CUDB Workflow pack CUDB_VNFLCM_WORKFLOWS-CXP9040847.tar:
    [cloud-user@vnflaf-services ~]$ tar -xvf CUDB_VNFLCM_WORKFLOWS-CXP9040847.tar
  4. Install the CUDB Workflow pack:
    1. Switch to root user on vnflaf-services VM:
      [cloud-user@vnflaf-services ~]$ su - root
      [root@vnflaf-services ~]#
    2. Verify that the pack is not installed, by running the list command:
      rpm -qa | grep ERICvCUDB
    3. Uninstall the previous version, if there is one, and take the input data from the previous printout:
      # wfmgr bundle uninstall --package=<Name> --version=<Version>
    4. To install the Workflow pack, run the install command. The rpm file is located in the /home/cloud-user folder by default.
      # wfmgr bundle install --package=/home/cloud-user/<workflow_bundle_rpm_file>
      The expected output must be similar to the following example:

      Example

      ------------------------------------------------------------------------------------------------------------
      | package_name                             | pre_install | install | post_install | message                         |
      ------------------------------------------------------------------------------------------------------------
      | ERICvCUDB_CXP9035445-1.9.20-1.noarch.rpm | success     | success | success      | package installation successful |
      ------------------------------------------------------------------------------------------------------------
      
      For more information on the output of the command, go to /var/log/wfmgr-cli-log/logfile.log.
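Step 4.3 takes its --package and --version arguments from the rpm printout of step 4.2. Splitting such a line can be sketched as follows; the sample line is modeled on the example output above, and the exact version format expected by wfmgr is an assumption:

```shell
# Derive the wfmgr uninstall arguments from one line of 'rpm -qa' output.
RPM_LINE="ERICvCUDB_CXP9035445-1.9.20-1"   # sample line, taken from the install example
PKG=${RPM_LINE%%-*}          # everything before the first dash -> package name
VER=${RPM_LINE#"$PKG"-}      # the remainder after the package name -> version

echo "wfmgr bundle uninstall --package=$PKG --version=$VER"
```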

3 Procedures

This section describes how to perform LCM operations. VNF-LCM procedures use workflow instances.

Launch VNF-LCM from web browser:

http://<vnflaf-services_ip>/index.html#workflows

Figure 1 shows an example of the VNF Lifecycle Management view, where the workflows are shown.

Figure 1   Workflow Overview

3.1 Instantiate vCUDB System

This section describes how to instantiate a VNF using VNF-LCM.

This workflow is used to install a vCUDB node in a CUDB system.

Note: To obtain an installed vCUDB system replicating among its vCUDB nodes, this workflow must be executed several times, launching the executions consecutively without waiting for one VNF to finish before launching the next one (one workflow per VNF comprising the vCUDB system, running in parallel).

If a vCUDB system consists of more than 10 vCUDB VNFs, once the instantiations are finished, add multiple SITE_VIP IPs in a live CUDB node.

3.1.1 Instantiate vCUDB System Prerequisites

The following configuration files for one vCUDB system must be available:

  • Per vCUDB system:

    • CUDB and application schemas: <Schema1>, <Schema2>, ..., <SchemaN>

    • SQLs: <SQL1>, <SQL2>, ..., <SQLN>, sqlList.txt, and fixed-entries-pl.ldif

    Note: These files must be placed into the common_config directory.
  • Per vCUDB VNF:

    • env.yaml: Contains the required parameters. Use the default values supplied in environment_template or populate it with customized parameters, then rename the file to env.yaml.

    • main.yaml: The Heat Orchestration Template (HOT) file. Rename the heat_template file to main.yaml.

    • DS_scaling.yaml: The scale-out and scale-in template. Rename the scaling_template file to DS_scaling.yaml.

      Note: This file must be placed into the configurations/<VNF-CUDB-name> directory.
    • Rename the license file to license-key-file.xml.

    • Rename the initial configuration model file to CudbOamModel_Instances_config_imm.xml.

      Note: This file must be placed into the configurations/<VNF-CUDB-name>/cudb_config directory.

All the previous files must go under /vnflcm-ext/backups/workflows/cudbvnfd directory.

Note: All files must at least have permission to be read and executed by the jboss user. If that is not the case, change the ownership as follows:
cd /vnflcm-ext/backups/workflows/cudbvnfd
chown -R jboss_user:jboss *

The final directory structure is created manually as shown in Figure 2.

Figure 2   Final Structure Directory for Instantiate vCUDB System
Note: Different vCUDB systems can be defined. Select one during instantiation of a VNF. Moreover, one vCUDB system consists of one or several VNFs, that is, CUDB nodes.

All configuration files must be placed manually in the corresponding directories.
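As a sketch, the directory tree from Figure 2 can be created manually as follows. The base path here is a scratch location for illustration (on the VNF-LCM VM it is /vnflcm-ext/backups/workflows/cudbvnfd); the system name vCUDB_1 and the VNF name CUDB_node1 are example values, and the exact file placement follows Figure 2:

```shell
# Scratch base for illustration; on VNF-LCM use /vnflcm-ext/backups/workflows/cudbvnfd
BASE=/tmp/cudbvnfd-demo/vCUDB_1        # vCUDB_1: example vCUDB system name
VNF_NAME=CUDB_node1                    # example VNF (CUDB node) name

# common_config: schemas, SQL files, sqlList.txt, fixed-entries-pl.ldif
mkdir -p "$BASE/common_config"

# configurations/<VNF-CUDB-name>: env.yaml, main.yaml, DS_scaling.yaml
# configurations/<VNF-CUDB-name>/cudb_config: CudbOamModel_Instances_config_imm.xml
mkdir -p "$BASE/configurations/$VNF_NAME/cudb_config"
```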

3.1.2 Instantiate vCUDB VNF

3.1.2.1 Start a New Instance

Steps

  1. In the VNF-LCM Workflows screen, select Instantiate vCUDB VNF and click Start a New Instance.
  2. Fill out the Instance Name field, and click Submit.
  3. Select the newly-created workflow from the Instance with User Tasks panel, and click on the man icon.
  4. On the Workflow Instance screen, add VNF Name, select VNF descriptor ID to instantiate, and click Submit.
    The Select VNF descriptor ID field displays VNF configurations available for instantiation in the /vnflcm-ext/backups/workflows/cudbvnfd/ directory.
  5. On the Get Instance Configuration screen, select a VNF configuration to instantiate, and click Submit.
    The Select Configuration for the VNF instance field displays VNF configurations available for instantiation in the /vnflcm-ext/backups/workflows/cudbvnfd/vCUDB_1/configurations directory.
    Refresh the web page.
  6. On the Select VIM screen, select a VIM, and click Submit.
  7. On the Select Tenant screen, select a Tenant, and click Submit.

Result: On the Workflow Instance screen, click on Workflow Diagram and Workflow Log to see the progress.

Note: Refresh the web page from time to time.
When the instantiation is finished, contact the CUDB Administrator to change the image in the System Controllers.

3.1.2.2 Execute Steps

The workflow log shows the ongoing execution steps. The expected progress information output must be similar to the following example:

3.2 Terminate vCUDB System

This section describes how to terminate a VNF using VNF-LCM.

This workflow can be used to decommission a vCUDB system and free the resources by executing it consecutively on each VNF comprising the vCUDB system.

3.2.1 Terminate vCUDB VNF

Steps

  1. In the VNF-LCM Workflows screen, select Terminate vCUDB VNF, and click Start a New Instance.
  2. Fill out the Instance Name field, and click Submit.
  3. Select the newly-created workflow from the Active Instances panel and click on the man icon.
    Result: Traffic stops after the VNF is terminated. On the Workflow Instances screen, click on Workflow Diagram and Workflow Log to see the progress.
    Note: Refresh the web page.
  4. On the Select VIM screen, select a VIM, and click Submit.
  5. On the Select Tenant screen, select a Tenant, and click Submit.
  6. On the Workflow Instances screen, select the VNF to terminate, and click Submit.
    Forceful termination: If the VNF is forcefully terminated, all ongoing traffic is lost. This option must be confirmed on the next screen.
    Result: The VNF instance is terminated. On the Workflow Instances screen, click on Workflow Diagram and Workflow Log to see the progress.
    Note: Refresh the web page.

3.3 vCUDB Upgrade Preparation

This section describes how to execute the upgrade preparation for a vCUDB system using VNF-LCM. This workflow prepares the vCUDB system for the upgrade execution and runs different health checks to verify that the system is working properly before the upgrade. To execute the upgrade preparation phase on a vCUDB system, this workflow must be executed only once for the whole system, on a chosen VNF. The operations executed in this workflow are system-level operations.

Before starting the upgrade preparation workflow execution, see Appendix: General Considerations when Upgrading for more information about the upgrade procedure.

Note: The troubleshooting of the upgrade workflow is not within the scope of this document. In case of failure, please contact CUDB Administrator.

3.3.1 vCUDB Upgrade Preparation Prerequisites

The following configuration files for one vCUDB system must be available:

  • Per vCUDB system:

    • upgrade package named CUDB_PACKAGE_x86_64-CXP

    • preventive_WAs.tar, if available.

  • Per vCUDB node:

    • licenses file named CUDB_LKF_<cudb_node_id>.xml

      Example:

      CUDB_LKF_10.xml

    • fingerprint file named lm_fingerprint_<cudb_node_id>

      Example:

      lm_fingerprint_10

    Note: The license and fingerprint files are required if the CUDB system version is earlier than 1.10.

All the previous files must go under the /vnflcm-ext/backups/workflows/cudbvnfd/upgrade directory. The final directory structure is created manually as shown in Figure 3.

Figure 3   Final Structure Directory for vCUDB Upgrade Preparation

All configuration files must be placed manually in the corresponding directories.
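Before launching the workflow, the presence and naming of the per-node files can be verified with a sketch like the following. The scratch directory and the node ids 10 and 20 are example values; on the VNF-LCM VM, check /vnflcm-ext/backups/workflows/cudbvnfd/upgrade with the real cudbLocalNodeId/cudbRemoteNodeId values:

```shell
UPGRADE_DIR=/tmp/cudbvnfd-demo/upgrade   # scratch dir for illustration
NODE_IDS="10 20"                         # example node ids

# Demo setup: only the files for node 10 exist
mkdir -p "$UPGRADE_DIR"
touch "$UPGRADE_DIR/CUDB_LKF_10.xml" "$UPGRADE_DIR/lm_fingerprint_10"

missing=0
for id in $NODE_IDS; do
    for f in "CUDB_LKF_${id}.xml" "lm_fingerprint_${id}"; do
        if [ ! -f "$UPGRADE_DIR/$f" ]; then
            echo "missing: $UPGRADE_DIR/$f"
            missing=$((missing + 1))
        fi
    done
done
echo "$missing file(s) missing"   # here: 2 (node 20 has neither file)
```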

3.3.2 vCUDB Upgrade Preparation Steps

Steps

  1. In the VNF-LCM Workflows screen, select vCUDB Upgrade Preparation, and click Start a New Instance.
  2. Fill out the Instance Name field, and click Submit.
  3. Select the newly-created workflow from the Instance with User Tasks panel, and click on the man icon.
  4. On the Select VIM screen, select a VIM, and click Submit.
  5. On the Select Tenant screen, select a Tenant, and click Submit.
  6. On the Collect User Data for Upgrade Preparation screen, select the folder with the upgrade input files and the VNF instance, enter the root password for the selected VNF instance, and click Submit.
    The Select folder containing input files for upgrade of the vCUDB system field displays VNF configurations available for upgrade preparation in the /vnflcm-ext/backups/workflows/cudbvnfd/upgrade directory.
    The workflow log shows the executed procedures and the progress in percentage. The expected progress information output must be similar to the following example:
    The number of executed procedures and the total number of procedures in this phase depend on the upgrade path. The duration of each procedure is different. It takes more time for some procedures to be executed on all the nodes of the system.

3.4 vCUDB Upgrade System

This section describes how to execute the upgrade of the CUDB software for a vCUDB system using VNF-LCM. To upgrade a vCUDB system, this workflow must be executed once for every VNF in the system. The workflow executions must be performed one at a time. Some operations done in this workflow are system-wide.

Note: The troubleshooting of the upgrade workflow is not within the scope of this document. In case of failure, please contact CUDB Administrator.

3.4.1 vCUDB Upgrade System Prerequisites

Before executing the vCUDB upgrade workflow, the vCUDB Upgrade Preparation workflow must have been executed without any error, as described in vCUDB Upgrade Preparation. See also Appendix: General Considerations when Upgrading.

3.4.2 vCUDB Upgrade VNF Steps

Steps

  1. In the VNF-LCM Workflows screen, select vCUDB Upgrade, and click Start a New Instance.
  2. Fill out the Instance Name field, and click Submit.
  3. Select the newly-created workflow from the Instance with User Tasks panel, and click on the man icon.
  4. On the Select VIM screen, select a VIM, and click Submit.
  5. On the Select Tenant screen, select a Tenant, and click Submit.
  6. On the Collect User data for Upgrade screen, select the VNF instance, check Automatic Fallback to restore the previous CUDB software version in case the upgrade fails, enter the root password for the selected VNF instance, and click Submit.
    Note: If the Automatic Fallback option is selected and the upgrade fails, the fallback is executed without stopping. If the option is not selected and the upgrade fails, the execution stops and manual intervention is needed. Contact the CUDB Administrator.
  7. On the Collect user data for Orchestrator Node screen, insert the password for the specified VNF instance that is used as the orchestrator, from which the upgrade is started for the selected VNF, and click Submit.

After This Task

The workflow log shows the executed procedures and the progress in percentage. The expected progress information output must be similar to the following example:

The number of executed procedures and the total number of procedures in this phase depend on the upgrade path. The duration of each procedure is different. Some procedures take more time to execute, for example NI_PROC_IBU_WAIT_AIT and NI_PROC_RESTORE_DATA.

Do!

If the vCUDB base software version is earlier than 1.12, after the CUDB VNF deployed on CEE is upgraded, all VMs must be configured with the ha-offline High Availability (HA) policy, as stated in the Other Requirements section of CUDB Virtual Infrastructure Requirements. For more information, see Other Requirements.

4 Troubleshooting

If the workflow execution is unsuccessful, see the following options for more information on the cause of failure:

  • Workflow Log view. Check the progress information output on VNF-LCM.

  • Jboss Server log on VNF-LCM. During execution, log information is saved in this file:

    # tail -f /ericsson/3pp/jboss/standalone/log/server.log

    Contact Ericsson personnel if support is needed.

  • If the Instantiate VNF workflow failed during the CUDB installation, connect through either the sysmgmt or oam VIP, and check the automatedInstall.log file in SC_2_1:

    ssh <cudb_user>@<vip>
    less /home/coremw_appdata/incoming/cudb-install-temp/automatedInstall.log

  • If the vCUDB Upgrade Preparation workflow fails in a procedure, more information about the error can be found in the /var/tmp/upgradeMonitor.log file on VNF-LCM.
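For a first look at such log files, a small sketch like the following can help; the matched keywords and the sample log lines are assumptions for illustration:

```shell
# List the last error indications found in a log file.
scan_log() {
    grep -i -n -E 'error|fail' "$1" | tail -n 20
}

# Demo on a made-up sample log:
printf '%s\n' 'step 1 ok' 'step 2 FAILED: timeout' 'step 3 ok' > /tmp/demo.log
scan_log /tmp/demo.log    # → 2:step 2 FAILED: timeout
```

On VNF-LCM, the same function can be pointed at /var/tmp/upgradeMonitor.log or at the Jboss server.log mentioned above.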

For further help, contact the CUDB Administrator or Ericsson support, following the corresponding VNF-LCM Administration Guide or the Troubleshooting documentation for CUDB.

5 Appendix: General Considerations when Upgrading

5.1 General

During the CUDB upgrade:
  • The installed software is exactly the same as the software in a maiden-installed system. Therefore, the same behavior and performance are expected.

  • All previous configuration is preserved, except some hardening parameters.

The upgrade is aimed to run on standard CUDB environments. Any deviation, that is, any customization present in the target system, could affect the upgrade execution, so it is recommended to contact the next level of support in advance to analyze the case. Deviations can include:
  • Software Delivery Packages (SDPs) or Red Hat Package Manager (RPM) software files: for example, the cudbReference file

  • System configuration: administrators, sudoers, and so on

  • Network configuration: for example, QoS, VLAN IDs and subnets

In addition, verify the following conditions before the upgrade:

  • The upgrade must be performed during a low system traffic period (maintenance window) and scheduled according to the estimated times. Both the upgrade and fallback times must be considered during the planning activities.

  • In case the system memory usage is above 85% in any node, contact Ericsson personnel to evaluate a Data Store (DS) expansion procedure before the upgrade.

  • The system is up and running with no relevant alarms issued.

    Examples of alarms not relevant for upgrade:

    • Logchecker found minor error(s), Preventive Maintenance

    • Root Login Failed, Security

    • Fault retrieving subscriber statistics, Application Counters

    • Deleted data due to reconciliation, Storage Engine

    • Memory usage at Warning level

For information on the estimated times, consult with Ericsson personnel.


  • There is enough space in /cluster to store the Data, Software, and Configuration backups and the upgrade-related files. If not, free up storage space by removing existing backups or unneeded information; otherwise, external storage is needed:

    available space in /cluster > SW and Configuration backup (1 GB) + upgrade-created files (1 GB) + Data backup (250 MB (GEP3), 750 MB (GEP5), 25 MB (small vCUDB)) * number of ndb blades

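The /cluster space requirement can be turned into a quick calculation; the blade type (GEP5) and the blade count used here are assumptions for the example:

```shell
SW_CONF_BACKUP_MB=1024   # SW and Configuration backup (1 GB)
UPGRADE_FILES_MB=1024    # upgrade-created files (1 GB)
DATA_BACKUP_MB=750       # per ndb blade: 250 (GEP3), 750 (GEP5), 25 (small vCUDB)
NDB_BLADES=4             # example blade count

REQUIRED_MB=$((SW_CONF_BACKUP_MB + UPGRADE_FILES_MB + DATA_BACKUP_MB * NDB_BLADES))
echo "Required space in /cluster: ${REQUIRED_MB} MB"
# Compare against the available space reported by 'df -m /cluster' on the node.
```

For a GEP5 system with 4 ndb blades this gives 5048 MB; compare it against the available space in /cluster before starting the upgrade.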

The CUDB system upgrade can include changes in the interfaces at network level that may require further actions external to the CUDB nodes. Refer to the Interface section of CUDB 1.12 Network Impact Report.

  • When a node is being disabled in the system, provisioning traffic is stopped while the masters are moved away from the upgrading node. If there are no masters on the node, provisioning does not stop.

  • No system downtime is expected during the whole procedure.

  • Minor traffic loss and dropped connections can be experienced in the following cases:
    • While the CUDB nodes are disabled in the system.

    • During the LDAP server restarts performed in the node upgrade and fallback phases, needed due to the modification of the database table structure.

5.2 Recommendations

  • Stopping provisioning during the CUDB node upgrade process is not compulsory, but bulk provisioning must be avoided.

  • The upgrade or fallback of a CUDB node disables that node in the CUDB system. The system re-routes the traffic, so there is no loss of service, but the overall traffic processing capacity of the system decreases significantly.

  • During upgrade execution, mastership movements are expected. In some specific cases, to avoid unplanned mastership movements, move the masters manually.

5.3 Restrictions

  • No configuration changes are allowed until the system upgrade is completed.

  • Once the system is in a mixed state (that is, some nodes are upgraded, but not all of them), do not use the standard CUDB restore commands.

  • The current upgrade/fallback procedure does not support handling of faulty blades, such as ignoring or skipping them. If any blade is faulty, it must be replaced immediately before continuing with the upgrade or fallback.

  • No system task scheduling changes are allowed. Therefore, avoid the execution of any command modifying the crontab, for example: cudbConsistencyMgr.

  • The upgrade phases must not be executed in different nodes at the same time.

  • The cudbCheckReplication command can dump an internal error in the response printout when checking replication in slave DS units, which might lead to an erroneous interpretation of the command output or of the replication status:

    awk: fatal: can't open source file `/opt/ericsson/cudb/OAM/bin/cudbSystemStatus.awk' for reading (No such file or directory)
    [info] Node20-DSG0 replication: OK
    

    Therefore, the use of the cudbCheckReplication command is restricted while the system is in a mixed state. This restriction only applies if the base release is CUDB 1.2 or earlier.

  • Multiple LDAP requests are not allowed until the system is fully upgraded.

Reference List

CUDB Documents

ENM CPI Library References

  1. VNF-LCM Installation Instructions 1/1531-CNA 403 3313
  2. ENM Configuration System Administration Guide 1/1543-AOM 901 151-1

OSS-RC CPI Library References

  1. VNF-LCM CEE/Openstack Installation Instructions 1/153 72-APR 901 0578
  2. VNF-Lifecycle Manager System Administration Guide 1543-APR 901 0578