In CloudStack, a NetScaler device is managed directly via the Nitro API. CS manages NetScaler to provide LB (load balancing), GSLB (global server load balancing), Static NAT, and EIP/ELB (Elastic IP/Elastic Load Balancing) services to users.
Apart from the Nitro API, Citrix has introduced NetScaler Control Center (NCC), a manager that can control multiple NetScaler devices from a single point. NCC manages the life cycle and service configuration of all NS devices.
CloudStack will leverage NCC for managing all NS devices and will provision an NS VPX in the CS compute fleet when NCC requests deployment of a new VPX.
The Cloud Admin can choose the kind of NS offering to use via a Service Package from NCC.
Capacity pooling across all NetScaler infrastructure. NetScaler Control Center is designed to efficiently pool and manage capacity across all NetScaler appliances, including physical (MPX), virtual (VPX), and multi-tenant (SDX) form factors.
End-to-end automation across all NetScaler appliances. Using NetScaler Control Center, the complexity of provisioning and deploying ADC functions on a large pool of NetScaler appliances is completely hidden from both the cloud provider and the cloud tenant.
Guaranteed SLAs through service aware resource allocation. Cloud providers need to guarantee performance and availability SLAs to different cloud tenants. NetScaler Control Center provides granular control over ADC resource allocation policies, giving the provider flexibility in creating differentiated SLAs for cloud tenants based on their application’s needs. A simple and intuitive workflow to construct “service packages” for different tenant tiers simplifies the SLA creation process. Service packages can be defined with the following parameters and are customizable per tenant:
When the NCC Admin registers a Service Package in CS, NCC uploads the associated VPX image to CS via the RegisterTemplate API, and this template is available across all zones.
Integration with NCC is via a REST API with JSON payloads.
When a command has to be sent to NCC, the NCCResource converts the Java command to a JSON payload and sends the request as a REST call.
NCC sends its response in JSON; the NCCResource converts the response to the corresponding Java Answer type and uses it.
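The command-to-JSON conversion described above can be sketched as follows. This is a minimal, hypothetical illustration; the command class, field names, and JSON keys here are assumptions for illustration, not the actual CloudStack or NCC wire format.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NccPayloadSketch {

    // Stand-in for a CloudStack command, e.g. an "implement network" request.
    static class ImplementNetworkCommand {
        final String networkUuid;
        final String servicePackageUuid;
        ImplementNetworkCommand(String networkUuid, String servicePackageUuid) {
            this.networkUuid = networkUuid;
            this.servicePackageUuid = servicePackageUuid;
        }
    }

    // Convert the command into the JSON body an NCCResource-style adapter
    // would send as the REST payload to NCC.
    static String toJson(ImplementNetworkCommand cmd) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("network_id", cmd.networkUuid);
        fields.put("service_package_id", cmd.servicePackageUuid);
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : fields.entrySet()) {
            if (!first) sb.append(",");
            sb.append("\"").append(e.getKey()).append("\":\"")
              .append(e.getValue()).append("\"");
            first = false;
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        ImplementNetworkCommand cmd =
            new ImplementNetworkCommand("net-1234", "sp-gold");
        // Prints: {"network_id":"net-1234","service_package_id":"sp-gold"}
        System.out.println(toJson(cmd));
    }
}
```

A real NCCResource would also handle the reverse direction, parsing NCC's JSON response back into the matching Answer type.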
Admin registers CloudStack in NCC with CS IP, API key, and secret key
Admin registers NCC Manager with CloudStack with NCC IP, username, and password.
When NCC is registered in CS, an NCCResource is created and configured with these details.
The Admin can delete the NCC only if no guest network is using it.
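The register/delete life cycle above can be sketched as a small in-memory model. This is a hypothetical simplification: the class names, the in-memory map, and the guest-network counter are all illustrative stand-ins for the real management-server state.

```java
import java.util.HashMap;
import java.util.Map;

public class NccRegistrationSketch {

    // Stand-in for the NCCResource created when an NCC is registered.
    static class NccResource {
        final String ip, username, password;
        NccResource(String ip, String username, String password) {
            this.ip = ip; this.username = username; this.password = password;
        }
    }

    // ip -> configured resource, one per registered NCC (in-memory stand-in).
    static final Map<String, NccResource> REGISTERED = new HashMap<>();

    static NccResource register(String ip, String username, String password) {
        NccResource r = new NccResource(ip, username, password);
        REGISTERED.put(ip, r);
        return r;
    }

    // Deletion succeeds only when no guest network is using this NCC.
    static boolean delete(String ip, int guestNetworksUsingNcc) {
        if (guestNetworksUsingNcc > 0) return false;
        return REGISTERED.remove(ip) != null;
    }

    public static void main(String[] args) {
        register("10.0.0.5", "ccpadmin", "secret");
        System.out.println(delete("10.0.0.5", 2)); // false: networks still use it
        System.out.println(delete("10.0.0.5", 0)); // true
    }
}
```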
NCC takes care of license management for the devices it manages. The Service Package carries the license specification provided by NCC for the device. Licensing differs based on throughput; NCC manages the different licenses it assigns to devices.
When creating a network offering, choose the services for which NetScaler is supported.
When NetScaler is selected, the registered Service Packages are shown so the user/admin can select one.
Enable the NetScaler provider under the physical network's Network Service Providers before creating the guest network; otherwise the guest network implementation will fail.
When the Admin creates a service package in NCC, the title of the Service Package describes its capabilities.
When a guest network is implemented with the network offering created above, an NCC device is mapped to that network.
The NS device that NCC allocates to a guest network is abstracted away in CloudStack; the Admin cannot get this information from CloudStack and has to log in to NCC to check the network-to-device mapping.
The Admin creates a guest network with a network offering that uses NS as the service provider and chooses the appropriate Service Package. When the guest network is to be implemented for the LB service, the NetscalerElement is called. The NetScaler element checks the network offering; if a service package is present, it delegates the call to the registered NCCManager. The NCC Manager tries to implement the network by reserving a pre-existing or already registered device in NCC. If NCC is able to allocate a device for the network, it returns True for the implement-network call; otherwise it returns False. On a True response from NCC, the NetscalerElement returns True and the NetworkOrchestrator continues implementing the other network services; on a False response, the network implementation fails.
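The delegation decision above can be sketched as follows. This is an illustrative simplification, assuming a hypothetical NccManager interface; the real NetscalerElement and orchestrator interfaces are more involved.

```java
public class ImplementFlowSketch {

    // Hypothetical stand-in for the registered NCC manager.
    interface NccManager {
        // Returns true when NCC reserved (or provisioned) a device for the network.
        boolean implementNetwork(String networkUuid, String servicePackageUuid);
    }

    // NetscalerElement-style check: delegate to NCC only when the offering
    // carries a service package; the orchestrator continues only on true.
    static boolean implementViaNetscaler(String networkUuid,
                                         String servicePackageUuid,
                                         NccManager ncc) {
        if (servicePackageUuid == null) {
            // Offering has no service package: nothing to delegate to NCC.
            return false;
        }
        return ncc.implementNetwork(networkUuid, servicePackageUuid);
    }

    public static void main(String[] args) {
        NccManager alwaysAllocates = (net, sp) -> true;
        System.out.println(implementViaNetscaler("net-1", "sp-gold", alwaysAllocates)); // true
        System.out.println(implementViaNetscaler("net-1", null, alwaysAllocates));      // false
    }
}
```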
In the case of VPX auto-provisioning in SDX through NCC, when the implement call reaches NCC to allocate a device to the guest network, NCC auto-provisions an NS VPX on the SDX box configured in NCC.
Once the NS VPX is successfully provisioned and configured with the network details, NCC returns True for the implement-network call; otherwise it returns False and the network implementation fails.
For this case, NCC registers the NS VPX image with CloudStack by calling the RegisterTemplate API. When NCC has to implement the network, it sends a request to CloudStack to deploy a NetScaler VPX from the already registered NS VPX image. CS provisions the NS VPX on either a XenServer or VMware hypervisor, creates 3 NICs (management, public, private), reserves a management IP from the pod, and pushes it to be configured on the management NIC of the NS device.
Once the VPX is provisioned, CS marks the status of the service VM as Running. NCC keeps polling the deploy-VPX job in CS. Once the job completes successfully, NCC allocates the device to the guest network and returns True for the implement-network call. If the deploy-VPX job fails, NCC returns False, and CS fails the network implementation and shuts down the network. When CS deploys the VPX, it deploys without HA at the time of this writing; if the VPX goes down for any reason, the Admin has to recover it and get it running. Live migration of a VPX running in the compute fleet is supported only if the NS VPX image is supported on VMware/XenServer.
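The polling step above can be sketched as a simple loop over job-status polls. This is a hypothetical model: the enum values and iterator-based poll source are illustrative stand-ins for CloudStack's async job API.

```java
import java.util.Iterator;
import java.util.List;

public class VpxJobPollSketch {

    // Simplified states of the async deploy-VPX job in CS.
    enum JobStatus { PENDING, SUCCEEDED, FAILED }

    // Poll until the job leaves PENDING; NCC allocates the device and answers
    // the implement-network call with true only on success.
    static boolean waitForDeployVpx(Iterator<JobStatus> statusPolls) {
        while (statusPolls.hasNext()) {
            JobStatus s = statusPolls.next();
            if (s == JobStatus.SUCCEEDED) return true;  // device allocated
            if (s == JobStatus.FAILED) return false;    // network implementation fails
        }
        return false; // no terminal state observed
    }

    public static void main(String[] args) {
        List<JobStatus> run =
            List.of(JobStatus.PENDING, JobStatus.PENDING, JobStatus.SUCCEEDED);
        System.out.println(waitForDeployVpx(run.iterator())); // true
    }
}
```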
When the Admin/User creates an LB rule, the NetScaler element receives CreateLoadBalancerCmd and delegates the command to the registered NCC.
When NCC receives the command, it finds the device allocated to the network in which the LB rule is being created.
NCC configures the LB rule on the device and returns true/false upon completion.
If the response is true, the LB rule creation succeeded; otherwise it failed.
If LB rule creation fails, details of the failure are logged in the MS log.
The Admin can look at the log to find the issue. If the failure is at NCC, the Admin can log in to NCC and check the details for the root cause.
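The lookup-and-configure flow above can be sketched as follows. The in-memory network-to-device map and the println logging are illustrative stand-ins for NCC's allocation records and the MS log.

```java
import java.util.HashMap;
import java.util.Map;

public class LbRuleFlowSketch {

    // network uuid -> NCC-allocated device id (hypothetical in-memory stand-in).
    static final Map<String, String> NETWORK_TO_DEVICE = new HashMap<>();

    // Returns true when the rule was pushed to the device allocated for the network.
    static boolean createLbRule(String networkUuid, String ruleName) {
        String device = NETWORK_TO_DEVICE.get(networkUuid);
        if (device == null) {
            // In CS, failure details would land in the MS log for the admin.
            System.out.println("LB rule " + ruleName + " failed: no device for " + networkUuid);
            return false;
        }
        // NCC configures the rule on the device and reports the status.
        System.out.println("Configured " + ruleName + " on device " + device);
        return true;
    }

    public static void main(String[] args) {
        NETWORK_TO_DEVICE.put("net-1", "vpx-42");
        System.out.println(createLbRule("net-1", "lb-web")); // true
        System.out.println(createLbRule("net-2", "lb-db"));  // false
    }
}
```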
When NCC deploys a VPX, CCP does not enable HA for the deployment; HA for the services on the VPX is taken care of by NCC. If NCC deploys a VPX and the VPX goes down in CCP for any reason, the Admin has to troubleshoot it.
HA is offered by NCC for VPX/MPX/VPX-on-SDX via Active-Passive HA node pairs.
When NCC requests CloudStack to deploy the VPX in HA mode, the HA node is deployed in the same pod.
When deploying the (NS-VPX) HA node, deployNSVpx has a parameter that identifies the first VPX; CloudStack finds the deployment of the first VPX and deploys the HA node in the same pod (but not on the same host).
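The same-pod, different-host placement rule above can be sketched as a simple filter over candidate hosts. The Host class and candidate list are hypothetical simplifications of CloudStack's deployment planner.

```java
import java.util.List;

public class HaPlacementSketch {

    static class Host {
        final String id;
        final String podId;
        Host(String id, String podId) { this.id = id; this.podId = podId; }
    }

    // Pick a host in the same pod as the first VPX's host, excluding that host.
    static Host pickHaHost(Host firstVpxHost, List<Host> candidates) {
        for (Host h : candidates) {
            if (h.podId.equals(firstVpxHost.podId) && !h.id.equals(firstVpxHost.id)) {
                return h;
            }
        }
        return null; // no eligible host in the same pod
    }

    public static void main(String[] args) {
        Host first = new Host("host-1", "pod-A");
        List<Host> hosts = List.of(new Host("host-1", "pod-A"),
                                   new Host("host-2", "pod-B"),
                                   new Host("host-3", "pod-A"));
        Host ha = pickHaHost(first, hosts);
        System.out.println(ha == null ? "none" : ha.id); // host-3
    }
}
```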
registerNetscalerControlCenter - API to register the NCC
||Parameter||Type||Required||Description||
|ip|String|Required|IP of the NCC|
|username|String|Required|username of the CloudStack account in NCC|
|password|String|Required|password of the CloudStack account in NCC|
|description|String|Optional|description of the NCC|
deleteNetscalerControlCenter - API to delete the NCC
||Parameter||Type||Required||Description||
|id|String|Required|id of the NCC|
listNetscalerControlCenter - API to list registered NCCs
registerNetscalerServicePackage - This new API registers a service package created by the Admin in NCC
||Parameter||Type||Required||Description||
|name|String|Required|name of the service package|
|description|String|Optional|description of the service package|
deleteNetscalerServicePackage - API to delete a service package registered in CS
||Parameter||Type||Required||Description||
|id|Long|Required|id of the service package|
listNetscalerServicePackage - returns the list of service packages, or an empty list if there are none
||Parameter||Type||Required||Description||
|networkid|String|Required|id of the network|
|podid|String|Required|id of the pod|
createNetworkOffering API - a new parameter is added to the createNetworkOffering API
||Parameter||Type||Required||Description||
|servicepackageid|String|Optional|id of the service package; default value is empty/null|
deployNsVpx - this API deploys the NS VPX in CloudStack. It takes the same parameters as the deployVirtualMachine API except the network details (more details about the parameters will be added).
startNSVpx - this API starts the NS VPX given its id (admin only)
||Parameter||Type||Required||Description||
|id|String|Required|id of the NS VPX|
stopNSVpx - this API stops the NS VPX given its id (admin only)
||Parameter||Type||Required||Description||
|id|String|Required|id of the NS VPX VM|
destroyNSVpx - this API destroys the NS VPX given its id (admin only), and only if the NS VPX is in the Stopped state. (A running VPX is providing service; the Admin may choose to manually stop and destroy it. When the NS VPX is not available, the Admin should try to re-provision it as part of troubleshooting.)
||Parameter||Type||Required||Description||
|id|String|Required|id of the NS VPX VM|
This API lists the VPXs (VMs running on CloudStack-managed hypervisors) that were auto-provisioned by CloudStack at NCC's request.
netscaler_servicepackage (id, uuid, name, description)
external_netscaler_controlcenter (id, uuid, provider_name, host_id, username, password, ip)
netscaler_vpx(id, uuid, is_redundant, redundant_state) (this might change during implementation)
Change of schema:
network_offering(servicepackage_id) (addition of a new column to the network_offering table)
For auto provisioning VPX in CS
A seamless upgrade is not part of this release.
The upgrade process will be documented separately, and there will be downtime for the guest networks.
A customer who wants to upgrade an existing Nitro-based implemented network to an NCC-based network offering has to do the following.
- Packets per second: 1000000
- SSL Cores: 0