
Child pages
  • NCC Integration With Auto Provision VPX in CloudStack
 

Introduction

In CloudStack, a NetScaler device is managed directly via the Nitro API. CloudStack (CS) manages NetScaler to provide load balancing (LB), global server load balancing (GSLB), static NAT, and EIP/ELB (Elastic IP/Elastic LB) services to users.

Apart from the Nitro API, NetScaler has introduced NetScaler Control Center (NCC), a manager that can control multiple NetScaler devices from a single point. NCC manages the full life cycle of NS devices and the service configurations on them.

Purpose

CloudStack will leverage NCC for managing all NS devices, and will provision an NS VPX in the CS compute fleet when NCC requests deployment of a new VPX.

 

References

 

JIRA Tickets

CLOUDSTACK-8672

CLOUDSTACK-8673

 

Glossary

  • NCC: NetScaler Control Center
  • NS: NetScaler
  • CS: CloudStack

Use cases

  1. Cloud admin should be able to manage all types of NS devices using NCC.
  2. Cloud admin can choose which NS offering is used by selecting a Service Package from NCC.
  3. Admin can leverage service packages to use different versions of NetScaler devices.
  4. Admin can leverage service packages to use NetScaler devices in shared or dedicated mode.
  5. Admin can choose to provision an NS VPX in CS.
  6. Admin can choose to provision an NS VPX with different compute offerings.
  7. CS accounts and users should be able to provision these network offerings, which should in turn provision the NetScaler VPX/SDX instances without any admin intervention, including license application.
  8. CS accounts and users should be able to upgrade from one VPX network offering to another, which could be another VPX/SDX/MPX offering.
  9. CS accounts and users should be able to set up NetScaler VPX in HA pairing mode.

Benefits of Using NCC in CloudStack

Capacity pooling across all NetScaler infrastructure. NetScaler Control Center is designed to efficiently pool and manage capacity across all NetScaler appliances, including physical (MPX), virtual (VPX), and multi-tenant (SDX) form factors.

 

End-to-end automation across all NetScaler appliances. Using NetScaler Control Center, the complexity of provisioning and deploying ADC functions on a large pool of NetScaler appliances is completely hidden from both the cloud provider and the cloud tenant.

 

Guaranteed SLAs through service aware resource allocation. Cloud providers need to guarantee performance and availability SLAs to different cloud tenants. NetScaler Control Center provides granular control over ADC resource allocation policies, giving the provider flexibility in creating differentiated SLAs for cloud tenants based on their application’s needs. A simple and intuitive workflow to construct “service packages” for different tenant tiers simplifies the SLA creation process. Service packages can be defined with the following parameters and are customizable per tenant:

  • Appliance type. The target appliance type on which a logical NetScaler instance for the tenant is created.
  • Isolation type. Option to choose between fully dedicated instances and shared instances.
  • Resource allocation. The amount of CPU, memory, and SSL capacity to be allocated for each tenant’s dedicated instance.
  • Software versions. The specific version of NetScaler firmware for each tenant’s dedicated instance—allows for version and upgrade independence between tenants.

 

Functional requirements

  1. Support creation of LB rules on NetScaler devices (VPX or MPX).
  2. Support creation of LB health check policies on NetScaler devices.
  3. Support creation of LB stickiness policies on NetScaler devices.
  4. Support creation of AutoScale policies on NetScaler devices.
  5. Support creation of GSLB rules on NetScaler devices.
  6. Support metering of public IP usage on NetScaler devices.
  7. Support SSL termination (the SSL termination framework for LB on NS devices is implemented by the Apache CloudStack community). FS link: https://cwiki.apache.org/confluence/display/CLOUDSTACK/SSL+Termination+Support
  8. Support deploying guest networks with managed NetScaler devices (VPX, MPX, SDX) from NCC.
  9. Support auto-provisioning an NS VPX in CS only when the guest network requires a dedicated-mode NetScaler device; NCC will request CloudStack to deploy an NS VPX only if the network requires a dedicated NS VPX.

Assumptions

  1. NCC should provide a REST API to add NCC details to CS.
  2. NCC should understand CloudStack's request for discovering NCC capabilities.
  3. NCC's management network should be reachable from CS.
  4. There will be a single NCC for an entire CloudStack deployment.

Work Flow

Pre-setup

  1. Install and configure NCC.
  2. The NCC management IP should be reachable from CS.
  3. CS should be able to reach the NCC manager (the NCC manager is up and running).

CS Admin

Network Creation by CS admin

  1. While creating the network offering, choose Managed NetScaler from NCC as the service provider for LB.
  2. Choose a Service Package and the other services for the guest network, then create the network offering.

NCC User/Admin

Creating Service Package

  1. The NCC admin creates a Service Package in NCC.
  2. After creating the Service Package, the admin registers it in CS by calling the registerServicePackage API.

Registering NS VPX Template

  1. When a service package is created in NCC, an NS VPX image is associated with it.
  2. When the NCC admin registers the service package in CS, NCC uploads the associated VPX image to CS via the RegisterTemplate API, and this template is registered across zones.
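The cross-zone template registration can be pictured as a single registerTemplate call. All parameter values below are placeholders; `zoneid=-1` follows the CloudStack convention for registering a template in all zones.

```python
# Illustrative parameters NCC might send to CloudStack's registerTemplate
# API for the VPX image; all values here are placeholders.
register_template_params = {
    "command": "registerTemplate",
    "name": "NS-VPX-image",                   # placeholder template name
    "displaytext": "NetScaler VPX image for the service package",
    "format": "VHD",
    "hypervisor": "XenServer",
    "ostypeid": "os-type-uuid",               # hypothetical UUID
    "url": "http://ncc.example/ns-vpx.vhd",   # hypothetical image URL
    "zoneid": "-1",  # -1 registers the template across all zones
}
```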

Troubleshooting

  1. Can't see the service packages in the Create Network Offering wizard
    1. Either the NCC admin didn't create any service packages, or there is an issue with the API that lists service packages in CS.
  2. Verifying the LB rules created in a guest network
    1. Find the network ID, go to the NCC UI, find the device allocated to the network, and verify that the LB configuration can be seen on the device.
  3. Guest network creation failed on NetscalerElement
    1. NCC was not able to allocate a device matching the requirements of the service package to the network.
  4. LB rule creation failed
    1. Log in to NCC and check the NCC log for the LB rule creation.
  5. Auto-provisioning a VPX in CS failed
    1. The admin can check the reason for failure in the management server log.
    2. Possible reasons:
      1. Template not available to use.
      2. Insufficient capacity (compute, IP addresses, etc. not available).

 

Design

Communication Between NCC and CS:

Integration with NCC is via REST API with JSON payloads.

When a command has to be sent to NCC, NCCResource converts the Java command to a JSON payload and sends it as a REST request.

NCC returns its response in JSON; NCCResource converts the response to the associated Java Answer type and uses it.
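The marshalling described above can be sketched as follows. The envelope fields ("command", "params", "success", "details") are illustrative; the actual NCCResource wire format may differ.

```python
import json

def command_to_payload(cmd_name, params):
    """Serialize a CloudStack command as the JSON payload sent to NCC."""
    return json.dumps({"command": cmd_name, "params": params})

def payload_to_answer(raw_response):
    """Parse NCC's JSON response into a simple (success, details) answer."""
    body = json.loads(raw_response)
    return bool(body.get("success")), body.get("details", "")

# Example round trip with a hypothetical ImplementNetwork command.
payload = command_to_payload("ImplementNetworkCommand", {"networkid": "net-42"})
ok, details = payload_to_answer('{"success": true, "details": "device allocated"}')
```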

NCC lifecycle in CS

Admin registers CloudStack in NCC with the CS IP, API key, and secret key.

Admin registers the NCC manager in CloudStack with the NCC IP, username, and password.

When NCC is registered in CS, an NCCResource is created and configured with these details.

Admin can delete the NCC only if no guest network is using it.

NCC takes care of license management for the devices it manages. The service package carries the license specification provided by NCC for the device. Licensing differs based on throughput; there are different licenses, which NCC assigns to the device.

Network life cycle

Create Network Offering to Offer Services from NCC:

When creating a network offering, choose the services for which NetScaler is supported.

When NetScaler is selected, the registered service packages are shown to the user/admin to select from.

Enable the NetScaler provider in the physical network (Network Service Providers) and then create the guest network; otherwise guest network implementation will fail.

As the admin creates the service package in NCC, the title of the service package should describe its capabilities.

When the guest network is implemented with the network offering created above, an NCC device will be mapped to the network.

The NS device which NCC allocates to the guest network is abstracted from CloudStack; the admin won't get this info from CloudStack. The admin has to log in to NCC and check the network-to-device mapping.

Managing Guest Network with Pre-Provisioned devices (VPX/MPX) in NCC:

Discussion Notes:

 

The admin creates a guest network with a network offering that uses NS as the service provider and chooses an appropriate service package. When the guest network is to be implemented for the LB service, NetscalerElement is called. NetscalerElement checks the network offering; if a service package is present, it delegates the call to the registered NCC manager. The NCC manager tries to implement the network by reserving a pre-existing, already registered device in NCC. If NCC is able to allocate a device for the network, it returns true for the ImplementNetwork call; otherwise it returns false. On a true response, NetscalerElement returns true and the NetworkOrchestrator continues to implement the other network services; on a false response, the network implementation fails.
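The delegation path above can be sketched as a simple decision function; `allocate_device` is a hypothetical stand-in for the ImplementNetwork call made to the registered NCC manager.

```python
def implement_network(network_offering, ncc_client, network_id):
    """Sketch of the NetscalerElement decision described above."""
    # No service package on the offering: nothing to delegate to NCC.
    if not network_offering.get("servicepackageid"):
        return False
    # Delegate to NCC; True means a device was reserved for the network.
    return ncc_client.allocate_device(network_id)

class StubNccManager:
    """Stand-in NCC manager that always allocates a device."""
    def allocate_device(self, network_id):
        return True

result = implement_network({"servicepackageid": "sp-1"}, StubNccManager(), "net-42")
```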

Managing Guest Network with Auto Provisioning VPX devices on SDX managed in NCC:

Discussion Notes:

 

 

In the case of VPX auto-provisioning on SDX through NCC, when the implement call comes to NCC to allocate a device to the guest network, NCC auto-provisions an NS VPX on the SDX box configured in NCC.

Once the NS VPX is successfully provisioned and configured with network details, NCC returns true for the ImplementNetwork call; otherwise it returns false and the network implementation fails.

Managing Guest Network with Auto Provisioning VPX devices in CS and manage in NCC:

Work Flow:



For this case, NCC registers the NS VPX image with CloudStack by calling the RegisterTemplate API. When NCC has to serve the implement-network call, it sends a request to CloudStack to deploy a NetScaler VPX from the already registered NS VPX image. CS provisions the NS VPX on either a XenServer or VMware hypervisor. CS creates 3 NICs (management, public, private), reserves a management IP from the pod, and pushes it to be configured on the management NIC of the NS device.

Once the VPX is provisioned, CS marks the status of the service VM as Running. NCC keeps polling the deploy-VPX job in CS. Once the job completes successfully, NCC allocates the device to the guest network and returns true for ImplementNetwork. If the deploy-VPX job fails, NCC returns false, and CS fails the network implementation and shuts down the network. When CS deploys the VPX, it deploys without HA as of this writing; if the VPX goes down for any reason, the admin has to recover the VPX and get it running. Live migration of a VPX running in the compute fleet is supported only if the NS VPX image is supported on VMware/XenServer.
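NCC's polling of the asynchronous deploy-VPX job can be sketched like this; `poll_job` is a hypothetical callable that reports the CloudStack job state.

```python
import time

def wait_for_vpx_job(poll_job, job_id, interval=1.0, timeout=600.0):
    """Poll an async CloudStack job until it leaves the 'pending' state.

    Returns True only when the job reports 'succeeded'; a failure or a
    timeout means the network implementation should be failed.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = poll_job(job_id)
        if status != "pending":
            return status == "succeeded"
        time.sleep(interval)
    return False

# Simulated job that succeeds on the third poll.
states = iter(["pending", "pending", "succeeded"])
ok = wait_for_vpx_job(lambda job_id: next(states), "job-1", interval=0.0)
```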

Managing LB Rules:

When an admin/user creates an LB rule, NetscalerElement receives CreateLoadBalancerCmd and delegates the command to the registered NCC.

When NCC receives the command, it finds the device allocated to the network where the LB rule is being created.

NCC configures the LB rule on the device and returns true/false upon completion.

If the response is true, LB rule creation succeeded; otherwise it failed.

If LB rule creation fails, details of the failure are logged in the management server log. The admin can look at the log and find the issue. If the failure is at NCC, the admin can log in to NCC and check the details for the root cause.

Supported Zone/Network Types (ISOLATION Method: VLAN):

  1. Advanced zone isolated network (VLAN isolation)
    1. The remaining services should be prioritized (LB, health checks, autoscale).
    2. GSLB: low priority.
  2. Advanced zone VPC public tier (the tier where NetScaler is supported as an external LB)
  3. Advanced zone shared network (stretch goal)
  4. Basic zone EIP/ELB (stretch goal)
  5. NS (VPX, MPX, VPX on SDX) will be supported with VLAN configuration only.

HA for VPX/MPX/SDX managed by NCC

When NCC deploys a VPX, CCP won't enable HA when deploying it. HA for the services on the VPX will be taken care of by NCC. If NCC deploys a VPX and the VPX goes down in CCP for any reason, the admin has to take care of troubleshooting it.

HA will be offered by NCC for VPX/MPX/VPX-on-SDX in active-passive mode.

When NCC requests to deploy the VPX in CloudStack in HA mode, the HA node will be deployed in the same pod.

When deploying the HA node (NS VPX), deployNSVpx will have a param which identifies the first VPX. CloudStack will find the deployment of the first VPX and deploy the HA node in the same pod (but not on the same host).
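The same-pod, different-host placement rule can be sketched as a simple filter; the host and VPX dicts are illustrative stand-ins for CloudStack's deployment planner data.

```python
def pick_ha_host(hosts, first_vpx):
    """Pick a host for the HA node: same pod as the first VPX, different host."""
    for host in hosts:
        if host["pod_id"] == first_vpx["pod_id"] and host["id"] != first_vpx["host_id"]:
            return host["id"]
    return None  # no suitable host: HA node deployment fails

hosts = [
    {"id": "h1", "pod_id": "p1"},
    {"id": "h2", "pod_id": "p1"},
    {"id": "h3", "pod_id": "p2"},
]
chosen = pick_ha_host(hosts, {"host_id": "h1", "pod_id": "p1"})
```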

Out of Scope

  1. KVM, Hyper-V, and bare metal hypervisor-specific changes
  2. HA offering for the auto provisioned VPX

Known Issues

Usage

No impact.

Security

No impact.

API Changes

New Apis

registerNetscalerControlCenter - API to register the NCC

Parameter | Type | Optional/Required | Comment
ip | String | Required | IP of the NCC
username | String | Required | Username provided for the CloudStack account in NCC
password | String | Required | Password provided for the CloudStack account in NCC
description | String | Optional | Description of the NCC
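A registerNetscalerControlCenter call goes through the standard CloudStack API signing scheme (sorted parameters, lowercased query string, HMAC-SHA1 with the secret key, base64). The sketch below shows the signing only; parameter values and the `apikey` are placeholders, and the exact URL-encoding details may differ from the server's implementation.

```python
import base64, hashlib, hmac, urllib.parse

def sign_request(params, secret_key):
    """Compute a CloudStack-style API signature for a request."""
    # Sort parameters, build the query string, lowercase, then HMAC-SHA1.
    query = "&".join(
        f"{key}={urllib.parse.quote(str(value), safe='*')}"
        for key, value in sorted(params.items())
    )
    digest = hmac.new(secret_key.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

params = {
    "command": "registerNetscalerControlCenter",
    "ip": "10.10.10.10",        # placeholder NCC IP
    "username": "nccadmin",     # placeholder
    "password": "secret",       # placeholder
    "response": "json",
    "apikey": "ADMIN-API-KEY",  # placeholder
}
params["signature"] = sign_request(params, "ADMIN-SECRET-KEY")
```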

 

deleteNetscalerControlCenter - API to delete the NCC

Parameter | Type | Optional/Required | Comment
id | String | Required | ID of the NCC

 

listNetscalerControlCenter - API to list registered NCCs

 

registerNetscalerServicePackage - This new API will be used to register new service packages created by the admin in NCC

Parameter | Type | Optional/Required | Comment
name | String | Required | Name of the service package
description | String | Optional | Description of the service package

 

deleteNetscalerServicePackage: This API deletes a service package registered in CS

Parameter | Type | Optional/Required | Comment
id | Long | Required | ID of the service package

listNetscalerServicePackage: returns the list of service packages, or an empty list if there are none

 

reserveGuestIP

Parameter | Type | Optional/Required | Comment
networkid | String | Required | ID of the network

 

reservePodIP

Parameter | Type | Optional/Required | Comment
zoneid | String | Required | Zone ID
podid | String | Required | ID of the pod

 

createNetworkOffering API: a new param is added to the createNetworkOffering API.

Parameter | Type | Optional/Required | Comment
servicepackageid | String | Optional | ID of the service package; default value is empty/null
NSVPX Life Cycle Management APIs

deployNsVpx: This API will deploy the NS VPX in CloudStack. It takes the same params as the deployVirtualMachine API except the network details (more details about params to follow).

 

startNSVpx: This API will start the NS VPX, given the ID of the NS VPX by the admin.

Parameter | Type | Optional/Required | Comment
id | String | Required | ID of the NS VPX


stopNSVpx: This API will stop the NS VPX, given the ID of the NS VPX by the admin.

Parameter | Type | Optional/Required | Comment
id | String | Required | ID of the NS VPX VM


destroyNSVpx: This API will destroy the NS VPX, given its ID by the admin, only if the VPX is in the Stopped state. (A running VPX means it is providing service; the admin may choose to manually stop and destroy the VPX. When the VPX is not available, the admin should try to re-provision it as part of troubleshooting.)

Parameter | Type | Optional/Required | Comment
id | String | Required | ID of the NS VPX VM



listNSVpx: This API lists the VPX instances (VMs running on CloudStack-managed hypervisors) that were auto-provisioned by CloudStack on NCC's request.


DB Changes

New Tables:

Table Name:

netscaler_servicepackage (id, uuid, name, description)

external_netscaler_controlcenter (id, uuid, provider_name, host_id, username, password, ip)

netscaler_vpx(id, uuid, is_redundant,  redundant_state) (this might change during implementation)

Change of schema:

network_offering(servicepackage_id) (addition of a new column to the network_offering table)

Other Features affected by this feature 

 

Hypervisors Supported

For auto provisioning VPX in CS

    1. XenServer (a POC was done to pass NS IP details to the VPX while booting up, to set up the NSIP)
    2. VMware (no POC was done; a KB article is available on how to pass the info: http://support.citrix.com/article/CTX128250)

UI Flow

    1. Change in the Network Offering wizard to show the service packages when Managed NS is selected as the service provider
    2. New wizard to add the NCC manager
    3. Placeholder in the UI to show/list the registered NCC manager
    4. New tab/box to show auto-provisioned NS VPX in the CS fleet

Upgrade

A seamless upgrade is not part of this release.

The upgrade process will be documented separately, and there will be downtime for the guest networks.

A customer who wants to upgrade an existing Nitro-based implemented network to an NCC-based network offering has to do the following:

      • Add NS devices in NetScaler Control Center with the same interface information; public/private interface VLANs should be appropriately configured.
      • Upgrade the network offering of the network.

For upgrading a network which is using a dedicated instance on SDX:
    • Create a service package with the spec below and create a network offering with the SDX service package.
    • Upgrade the network offering of the existing network to the SDX-based network offering.

For upgrading a network which is using a shared instance on SDX:
      • Create instances manually on the SDX, or add the existing instances.
      • Create a service package and add these manually created instances.
      • Upgrade the network offering of the existing network to the SDX-based network offering.

Spec for SDX instance:

License: Standard
Memory: 2 GB
Throughput: 1000
Packets per second: 1000000
SSL cores: 0
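For reference, the spec above could be carried as a structured field in a service package definition; the key names below are illustrative, not NCC's actual schema.

```python
# The SDX instance spec above as a JSON-style dict; key names are
# illustrative, not NCC's actual schema.
sdx_instance_spec = {
    "license": "Standard",
    "memory_gb": 2,
    "throughput": 1000,
    "packets_per_second": 1000000,
    "ssl_cores": 0,
}
```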

APPENDIX

Command references:

List of Items to Document

