Follow the instructions below to deploy Apache Stratos on a preferred IaaS (e.g., Kubernetes, Amazon Elastic Compute Cloud (EC2), OpenStack or Google Compute Engine (GCE)) in a single JVM:
For testing purposes you can run your Stratos setup on the internal database (DB), which is the H2 DB; in that case, you do not need to set up an internal DB. However, in a production environment it is recommended to use an external RDBMS (e.g., MySQL). Stratos 4.1.0 requires the following external databases: the User database, the Governance database and the Config database. Therefore, create these DBs and configure Stratos as mentioned below before using them.
Stratos uses the Message Broker (MB) to handle the communication among all the components in a loosely coupled manner. Currently, Stratos uses Apache ActiveMQ; however, Stratos supports any Advanced Message Queuing Protocol (AMQP) Message Broker.
Follow the instructions below to run ActiveMQ in a separate host:
Download and unzip Apache ActiveMQ.
Start ActiveMQ.
./activemq start
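If you want to verify that the broker is accepting connections, you can check for its OpenWire listener; the port below assumes the stock ActiveMQ configuration (61616):
netstat -an | grep 61616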
Follow the instructions below to configure the embedded CEP:
Update the MB_HOSTNAME and MB_LISTEN_PORT with relevant values in the JMSOutputAdaptor.xml file, which is in the <STRATOS_HOME>/repository/deployment/server/outputeventadaptors directory, as follows:
<property name="java.naming.provider.url">tcp://MB_HOSTNAME:MB_LISTEN_PORT</property>
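For example, if ActiveMQ runs on the same host on its default port, the property would read as follows; the hostname and port here are illustrative values:
<property name="java.naming.provider.url">tcp://localhost:61616</property>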
Follow the instructions below to configure CEP with Stratos as an external component:
Enable thrift stats publishing in the thrift-client-config.xml file, which is in the <STRATOS_HOME>/repository/conf directory. Here you can set multiple CEP nodes for a High Availability (HA) setup.
<cep>
    <node id="node-01">
        <statsPublisherEnabled>true</statsPublisherEnabled>
        <username>admin</username>
        <password>admin</password>
        <ip>localhost</ip>
        <port>7611</port>
    </node>
    <!--<node id="node-02">
        <statsPublisherEnabled>true</statsPublisherEnabled>
        <username>admin</username>
        <password>admin</password>
        <ip>10.10.1.1</ip>
        <port>7714</port>
    </node>-->
</cep>
If you are configuring the external CEP in High Availability (HA) mode, create a CEP HA deployment cluster in fully-active-active mode. Note that it is recommended to set up CEP in HA mode.
Skip this step if you are setting up the external CEP as a single node.
For more information on CEP clustering, see the CEP clustering guide.
When following the steps in the CEP clustering guide, note that you need to configure all the CEP nodes in the cluster as mentioned in step 3, and only then carry out the preceding steps.
Download the Stratos CEP extension distribution and uncompress it. The extracted directory is referred to as <STRATOS_CEP_DISTRIBUTION>.
Copy the stream-manager-config.xml file from the <STRATOS_CEP_DISTRIBUTION>/wso2cep-3.1.0/streamdefinitions directory to the <CEP_HOME>/repository/conf directory.
Replace the content in the jndi.properties file, which is in the <CEP_HOME>/repository/conf directory, with the following configurations. Update the message-broker-ip and message-broker-port values.
connectionfactoryName=TopicConnectionFactory
java.naming.provider.url=tcp://[MB_IP]:[MB_Port]
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
# register some topics in JNDI using the form
# topic.[jndiName]=[physicalName]
topic.lb-stats=lb-stats
topic.instance-stats=instance-stats
topic.summarized-health-stats=summarized-health-stats
topic.topology=topology
topic.ping=ping
Add the following content to the siddhi.extension file, which is in the <CEP_HOME>/repository/conf/siddhi directory:
org.apache.stratos.cep.extension.GradientFinderWindowProcessor
org.apache.stratos.cep.extension.SecondDerivativeFinderWindowProcessor
org.apache.stratos.cep.extension.FaultHandlingWindowProcessor
org.apache.stratos.cep.extension.ConcatWindowProcessor
org.apache.stratos.cep.extension.MemeberRequestHandlingCapabilityWindowProcessor
org.apache.stratos.cep.extension.SystemTimeWindowProcessor
Copy the following JAR, which is in the <STRATOS_CEP_DISTRIBUTION>/wso2cep-3.1.0/lib directory, to the <CEP_HOME>/repository/components/lib directory:
org.apache.stratos.cep.310.extension-4.1.5.jar
Copy the following JARs, which are in the <STRATOS_CEP_DISTRIBUTION>/lib directory, to the <CEP_HOME>/repository/components/lib directory:
org.apache.stratos.messaging-4.1.x.jar
org.apache.stratos.common-4.1.x.jar
Download ActiveMQ 5.10.0 or the latest stable ActiveMQ TAR file from activemq.apache.org. The folder path of this file is referred to as <ACTIVEMQ_HOME>. Copy the following ActiveMQ client JARs from the <ACTIVEMQ_HOME>/lib directory to the <CEP_HOME>/repository/components/lib directory:
activemq-broker-5.10.0.jar
activemq-client-5.10.0.jar
geronimo-j2ee-management_1.1_spec-1.0.1.jar
geronimo-jms_1.1_spec-1.1.1.jar
hawtbuf-1.10.jar
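The copy can also be scripted; the following is a minimal sketch that assumes the ACTIVEMQ_HOME and CEP_HOME environment variables point to the respective directories:
for jar in activemq-broker-5.10.0.jar activemq-client-5.10.0.jar \
    geronimo-j2ee-management_1.1_spec-1.0.1.jar geronimo-jms_1.1_spec-1.1.1.jar hawtbuf-1.10.jar; do
  # copy each client JAR into the CEP components/lib directory
  cp "$ACTIVEMQ_HOME/lib/$jar" "$CEP_HOME/repository/components/lib/"
done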
Download the commons-lang3-3.4.jar and commons-logging-1.2.jar files from commons.apache.org. Copy the downloaded files to the <CEP_HOME>/repository/components/lib directory.
Copy the following event builders from the <STRATOS_CEP_DISTRIBUTION>/wso2cep-3.1.0/eventbuilders directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/eventbuilders directory:
HealthStatisticsEventBuilder.xml
LoadBalancerStatisticsEventBuilder.xml
Copy the following input event adaptor from the <STRATOS_CEP_DISTRIBUTION>/wso2cep-3.1.0/inputeventadaptors directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/inputeventadaptors directory:
DefaultWSO2EventInputAdaptor.xml
Copy the <STRATOS_CEP_DISTRIBUTION>/wso2cep-3.1.0/outputeventadaptors/JMSOutputAdaptor.xml file, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/outputeventadaptors directory.
Update the MB_HOSTNAME and MB_LISTEN_PORT with relevant values in the JMSOutputAdaptor.xml file, which you copied in the above step, as follows:
<property name="java.naming.provider.url">tcp://MB_HOSTNAME:MB_LISTEN_PORT</property>
Copy the following execution plans from the <STRATOS_CEP_DISTRIBUTION>/wso2cep-3.1.0/executionplans directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/executionplans directory:
AverageHeathRequest.xml
AverageInFlightRequestsFinder.xml
GradientOfHealthRequest.xml
GradientOfRequestsInFlightFinder.xml
SecondDerivativeOfHealthRequest.xml
SecondDerivativeOfRequestsInFlightFinder.xml
Update the siddhi.enable.distributed.processing property in all of the above CEP 3.1.0 execution plans from RedundantMode to false.
Copy the following event formatters from the <STRATOS_CEP_DISTRIBUTION>/wso2cep-3.1.0/eventformatters directory, which you downloaded in step 2.2, to the <CEP_HOME>/repository/deployment/server/eventformatters directory:
AverageInFlightRequestsEventFormatter.xml
AverageLoadAverageEventFormatter.xml
AverageMemoryConsumptionEventFormatter.xml
FaultMessageEventFormatter.xml
GradientInFlightRequestsEventFormatter.xml
GradientLoadAverageEventFormatter.xml
GradientMemoryConsumptionEventFormatter.xml
MemberAverageLoadAverageEventFormatter.xml
MemberAverageMemoryConsumptionEventFormatter.xml
MemberGradientLoadAverageEventFormatter.xml
MemberGradientMemoryConsumptionEventFormatter.xml
MemberSecondDerivativeLoadAverageEventFormatter.xml
MemberSecondDerivativeMemoryConsumptionEventFormatter.xml
SecondDerivativeInFlightRequestsEventFormatter.xml
SecondDerivativeLoadAverageEventFormatter.xml
SecondDerivativeMemoryConsumptionEventFormatter.xml
Add the CEP URLs as a payload parameter to the network partition.
If you are deploying Stratos on Kubernetes, then add the CEP URLs to the Kubernetes cluster.
Example:
{ "name": "payload_parameter.CEP_URLS", "value": "192.168.0.1:7712,192.168.0.2:7711" }
Update the following configuration and artifact files in the Complex Event Processor (CEP):
Set the port offset in the carbon.xml file, which is in the <CEP_HOME>/repository/conf/ directory, as follows:
<offset>4</offset>
Copy the stream-manager-config.xml file from the <STRATOS_SOURCE_HOME>/extensions/cep/artifacts/stream_definitions directory to the <CEP_HOME>/repository/conf directory, where <STRATOS_SOURCE_HOME> refers to the Apache Stratos source repository.
Replace the content in the jndi.properties file, which is in the <CEP_HOME>/repository/conf directory, with the following configurations. Update the message-broker-ip and message-broker-port values.
connectionfactoryName=TopicConnectionFactory
java.naming.provider.url=tcp://[MB_IP]:[MB_Port]
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
# register some topics in JNDI using the form
# topic.[jndiName]=[physicalName]
topic.lb-stats=lb-stats
topic.instance-stats=instance-stats
topic.summarized-health-stats=summarized-health-stats
topic.topology=topology
topic.ping=ping
Add the following content to the siddhi.extension file, which is in the <CEP_HOME>/repository/conf/siddhi directory:
org.apache.stratos.cep.extension.GradientFinderWindowProcessor
org.apache.stratos.cep.extension.SecondDerivativeFinderWindowProcessor
org.apache.stratos.cep.extension.FaultHandlingWindowProcessor
org.apache.stratos.cep.extension.ConcatWindowProcessor
org.apache.stratos.cep.extension.MemeberRequestHandlingCapabilityWindowProcessor
Build the Stratos CEP extension, which is in the <STRATOS_SOURCE_HOME>/extensions/cep/stratos-cep-extension directory. Thereafter, copy the org.apache.stratos.cep.extension-4.1.x.jar file, which can be found in the <STRATOS_SOURCE_HOME>/extensions/cep/stratos-cep-extension/target directory, to the <CEP_HOME>/repository/components/lib/ directory.
Download ActiveMQ 5.9.1 or the latest stable ActiveMQ TAR file from https://activemq.apache.org/download.html. The folder path of this file is referred to as <ACTIVEMQ_HOME>. Copy the following ActiveMQ client JARs from the <ACTIVEMQ_HOME>/lib directory to the <CEP_HOME>/repository/components/lib directory:
activemq-broker-5.9.1.jar
activemq-client-5.9.1.jar
geronimo-j2ee-management_1.1_spec-1.0.1.jar
geronimo-jms_1.1_spec-1.1.1.jar
hawtbuf-1.9.jar
Add the following JARs into the <CEP_HOME>/repository/components/dropins directory:
andes-client-0.13.wso2v8.1.jar
geronimo-jms_1.1_spec-1.1.0.wso2v1.jar
Download the commons-lang3-3.4.jar and commons-logging-1.2.jar files from commons.apache.org. Copy the downloaded files to the <CEP_HOME>/repository/components/lib directory.
Copy the following event builders from the <STRATOS_SOURCE_HOME>/extensions/cep/artifacts/eventbuilders directory to the <CEP_HOME>/repository/deployment/server/eventbuilders directory:
HealthStatisticsEventBuilder.xml
InstanceStatisticsEventBuilder.xml
LoadBalancerStatisticsEventBuilder.xml
Copy the following JARs into the <CEP_HOME>/repository/components/lib directory:
the org.apache.stratos.messaging-4.1.x-SNAPSHOT.jar file, which is in the <STRATOS_SOURCE_HOME>/components/org.apache.stratos.messaging/target directory
the org.apache.stratos.common-4.1.x-SNAPSHOT.jar file, which is in the <STRATOS_SOURCE_HOME>/components/org.apache.stratos.common/target directory
Copy the following input event adaptor from the <STRATOS_SOURCE_HOME>/extensions/cep/artifacts/inputeventadaptors/ directory to the <CEP_HOME>/repository/deployment/server/inputeventadaptors directory:
DefaultWSO2EventInputAdaptor.xml
Copy the following output event adaptors from the <STRATOS_SOURCE_HOME>/extensions/cep/artifacts/outputeventadaptors directory to the <CEP_HOME>/repository/deployment/server/outputeventadaptors directory:
DefaultWSO2EventOutputAdaptor.xml
JMSOutputAdaptor.xml
Update the MB_HOSTNAME and MB_LISTEN_PORT with relevant values in the JMSOutputAdaptor.xml file that was copied in the above step, as follows:
<property name="java.naming.provider.url">tcp://MB_HOSTNAME:MB_LISTEN_PORT</property>
Copy the following execution plans from the <STRATOS_SOURCE_HOME>/extensions/cep/artifacts/executionplans directory to the <CEP_HOME>/repository/deployment/server/executionplans directory:
AverageHeathRequest.xml
AverageInFlightRequestsFinder.xml
GradientOfHealthRequest.xml
GradientOfRequestsInFlightFinder.xml
SecondDerivativeOfHealthRequest.xml
SecondDerivativeOfRequestsInFlightFinder.xml
Copy the following event formatters from the <STRATOS_SOURCE_HOME>/extensions/cep/artifacts/eventformatters directory to the <CEP_HOME>/repository/deployment/server/eventformatters directory:
AverageInFlightRequestsEventFormatter.xml
AverageLoadAverageEventFormatter.xml
AverageMemoryConsumptionEventFormatter.xml
FaultMessageEventFormatter.xml
GradientInFlightRequestsEventFormatter.xml
GradientLoadAverageEventFormatter.xml
GradientMemoryConsumptionEventFormatter.xml
MemberAverageLoadAverageEventFormatter.xml
MemberAverageMemoryConsumptionEventFormatter.xml
MemberGradientLoadAverageEventFormatter.xml
MemberGradientMemoryConsumptionEventFormatter.xml
MemberSecondDerivativeLoadAverageEventFormatter.xml
MemberSecondDerivativeMemoryConsumptionEventFormatter.xml
SecondDerivativeInFlightRequestsEventFormatter.xml
SecondDerivativeLoadAverageEventFormatter.xml
SecondDerivativeMemoryConsumptionEventFormatter.xml
This step is only relevant to Apache Stratos 4.1.5 onwards.
Skip this step if you do not want to enable monitoring and metering in Stratos using WSO2 Data Analytics Server (DAS). Even though this step is optional, we recommend that you enable monitoring and metering.
Optionally, you can configure Stratos to work with WSO2 DAS, so that it handles the monitoring and metering aspects related to Stratos. If you want to use DAS with Stratos, download WSO2 DAS 3.0.0 and unzip the ZIP file before carrying out the steps below.
Use MySQL 5.6 and the 5.1.x MySQL Connector for Java when carrying out the following configurations.
Follow the instructions below to manually setup DAS with Stratos:
Enable thrift stats publishing with the DAS_HOSTNAME and DAS_TCP_PORT values in the thrift-client-config.xml file, which is in the <STRATOS_HOME>/repository/conf directory. If needed, you can set multiple DAS nodes for a High Availability (HA) setup.
<!-- Apache thrift client configuration for publishing statistics to WSO2 CEP and WSO2 DAS-->
<thriftClientConfiguration>
    . . .
        <das>
            <node id="node-01">
                <statsPublisherEnabled>false</statsPublisherEnabled>
                <username>admin</username>
                <password>admin</password>
                <ip>[DAS_HOSTNAME]</ip>
                <port>[DAS_TCP_PORT]</port>
            </node>
            <!--<node id="node-02">
                <statsPublisherEnabled>true</statsPublisherEnabled>
                <username>admin</username>
                <password>admin</password>
                <ip>localhost</ip>
                <port>7613</port>
            </node>-->
        </das>
    </config>
</thriftClientConfiguration>
Configure the Stratos metering dashboard URL with the DAS_HOSTNAME and DAS_PORTAL_PORT values in the <STRATOS_HOME>/repository/conf/cartridge-config.properties file as follows:
das.metering.dashboard.url=https://<DAS_HOSTNAME>:<DAS_PORTAL_PORT>/portal/dashboards/metering-dashboard
Configure the Stratos monitoring dashboard URL with the DAS_HOSTNAME and DAS_PORTAL_PORT values in the <STRATOS_HOME>/repository/conf/cartridge-config.properties file as follows:
das.monitoring.dashboard.url=https://<DAS_HOSTNAME>:<DAS_PORTAL_PORT>/portal/dashboards/monitoring-dashboard
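For example, assuming DAS runs on a host named das.example.com with the default portal port 9443 (verify the port against your DAS port offset; the hostname is illustrative), the two properties would read:
das.metering.dashboard.url=https://das.example.com:9443/portal/dashboards/metering-dashboard
das.monitoring.dashboard.url=https://das.example.com:9443/portal/dashboards/monitoring-dashboard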
Create the ANALYTICS_FS_DB, ANALYTICS_EVENT_STORE and ANALYTICS_PROCESSED_DATA_STORE databases in MySQL using the following MySQL scripts:
CREATE DATABASE ANALYTICS_FS_DB;
CREATE DATABASE ANALYTICS_EVENT_STORE;
CREATE DATABASE ANALYTICS_PROCESSED_DATA_STORE;
Configure the DAS analytics-datasources.xml file, which is in the <DAS_HOME>/repository/conf/datasources directory, as follows to create the ANALYTICS_FS_DB, ANALYTICS_EVENT_STORE and ANALYTICS_PROCESSED_DATA_STORE datasources.
<datasources-configuration>
    <providers>
        <provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
    </providers>
    <datasources>
        <datasource>
            <name>WSO2_ANALYTICS_FS_DB</name>
            <description>The datasource used for analytics file system</description>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_FS_DB</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                    <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
            </definition>
        </datasource>
        <datasource>
            <name>WSO2_ANALYTICS_EVENT_STORE_DB</name>
            <description>The datasource used for analytics record store</description>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_EVENT_STORE</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                    <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
            </definition>
        </datasource>
        <datasource>
            <name>WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</name>
            <description>The datasource used for analytics record store</description>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_PROCESSED_DATA_STORE</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                    <defaultAutoCommit>false</defaultAutoCommit>
                </configuration>
            </definition>
        </datasource>
    </datasources>
</datasources-configuration>
Set the analytics datasources created in the above step (WSO2_ANALYTICS_FS_DB, WSO2_ANALYTICS_EVENT_STORE_DB and WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB) in the DAS analytics-config.xml file, which is in the <DAS_HOME>/repository/conf/analytics directory.
<analytics-dataservice-configuration>
    <!-- The name of the primary record store -->
    <primaryRecordStore>EVENT_STORE</primaryRecordStore>
    <!-- The name of the index staging record store -->
    <indexStagingRecordStore>INDEX_STAGING_STORE</indexStagingRecordStore>
    <!-- Analytics File System - properties related to index storage implementation -->
    <analytics-file-system>
        <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsFileSystem</implementation>
        <properties>
            <!-- the data source name mentioned in data sources configuration -->
            <property name="datasource">WSO2_ANALYTICS_FS_DB</property>
            <property name="category">large_dataset_optimized</property>
        </properties>
    </analytics-file-system>
    <!-- Analytics Record Store - properties related to record storage implementation -->
    <analytics-record-store name="EVENT_STORE">
        <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
        <properties>
            <property name="datasource">WSO2_ANALYTICS_EVENT_STORE_DB</property>
            <property name="category">large_dataset_optimized</property>
        </properties>
    </analytics-record-store>
    <analytics-record-store name="INDEX_STAGING_STORE">
        <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
        <properties>
            <property name="datasource">WSO2_ANALYTICS_EVENT_STORE_DB</property>
            <property name="category">limited_dataset_optimized</property>
        </properties>
    </analytics-record-store>
    <analytics-record-store name="PROCESSED_DATA_STORE">
        <implementation>org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore</implementation>
        <properties>
            <property name="datasource">WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</property>
            <property name="category">large_dataset_optimized</property>
        </properties>
    </analytics-record-store>
    <!-- The data indexing analyzer implementation -->
    <analytics-lucene-analyzer>
        <implementation>org.apache.lucene.analysis.standard.StandardAnalyzer</implementation>
    </analytics-lucene-analyzer>
    <!-- The maximum number of threads used for indexing per node; -1 signals to auto detect the optimum value,
         where it would be equal to (number of CPU cores in the system - 1) -->
    <indexingThreadCount>-1</indexingThreadCount>
    <!-- The number of index shards; should be equal or higher to the number of indexing nodes that is going to be working,
         the ideal count being 'number of indexing nodes * [CPU cores used for indexing per node]' -->
    <shardCount>6</shardCount>
    <!-- Data purging related configuration -->
    <analytics-data-purging>
        <!-- The entry below indicates whether purging is enabled or not. If you want to enable data purging
             for a cluster, this property needs to be enabled in all nodes -->
        <purging-enable>false</purging-enable>
        <cron-expression>0 0 0 * * ?</cron-expression>
        <!-- Tables that need to be included in purging. Use a regex expression to specify the table names. -->
        <purge-include-tables>
            <table>.*</table>
            <!--<table>.*jmx.*</table>-->
        </purge-include-tables>
        <!-- All records inserted before the specified retention time will be eligible for purging -->
        <data-retention-days>365</data-retention-days>
    </analytics-data-purging>
    <!-- Receiver/Indexing flow-control configuration -->
    <analytics-receiver-indexing-flow-control enabled="true">
        <!-- maximum number of records that can be in the index staging area before receiving is throttled -->
        <recordReceivingHighThreshold>10000</recordReceivingHighThreshold>
        <!-- the limit on the number of records to be lower than, to reduce throttling -->
        <recordReceivingLowThreshold>5000</recordReceivingLowThreshold>
    </analytics-receiver-indexing-flow-control>
</analytics-dataservice-configuration>
Add the MySQL Java connector 5.1.x JAR file, which is supported by MySQL 5.6, to the <DAS_HOME>/repository/components/lib directory.
Download the DAS extension from the Stratos product page and uncompress the file. The extracted distribution is referred to as <STRATOS_DAS_DISTRIBUTION>.
Copy the org.apache.stratos.das.extension-4.1.5.jar file, which is in the <STRATOS_DAS_DISTRIBUTION>/lib directory, into the <DAS_HOME>/repository/components/lib directory.
Add the following Java class path into the spark-udf-config.xml file, which is in the <DAS_HOME>/repository/conf/analytics/spark directory:
<class-name>org.apache.stratos.das.extension.TimeUDF</class-name>
Add the Jaggery files, which are in the <STRATOS_DAS_DISTRIBUTION>/metering-dashboard/jaggery-files directory, into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/controllers/apis directory.
Manually create the MySQL databases and tables using the queries in the <STRATOS_DAS_DISTRIBUTION>/metering-dashboard/metering-mysqlscript.sql file:
CREATE DATABASE IF NOT EXISTS ANALYTICS_FS_DB;
CREATE DATABASE IF NOT EXISTS ANALYTICS_EVENT_STORE;
CREATE DATABASE IF NOT EXISTS ANALYTICS_PROCESSED_DATA_STORE;
CREATE TABLE ANALYTICS_PROCESSED_DATA_STORE.MEMBER_STATUS(Time long, ApplicationId VARCHAR(150), ClusterAlias VARCHAR(150), MemberId VARCHAR(150), MemberStatus VARCHAR(50));
CREATE TABLE ANALYTICS_PROCESSED_DATA_STORE.MEMBER_COUNT(Time long, ApplicationId VARCHAR(150), ClusterAlias VARCHAR(150), CreatedInstanceCount int, InitializedInstanceCount int, ActiveInstanceCount int, TerminatedInstanceCount int);
CREATE TABLE ANALYTICS_PROCESSED_DATA_STORE.MEMBER_INFORMATION(MemberId VARCHAR(150), InstanceType VARCHAR(150), ImageId VARCHAR(150), HostName VARCHAR(150), PrivateIPAddresses VARCHAR(150), PublicIPAddresses VARCHAR(150), Hypervisor VARCHAR(150), CPU VARCHAR(10), RAM VARCHAR(10), OSName VARCHAR(150), OSVersion VARCHAR(150));
Apply a WSO2 User Engagement Server (UES) patch to the DAS dashboard.
You need to do this to populate the metering dashboard.
Copy the ues-gadgets.js and ues-pubsub.js files from the <STRATOS_DAS_DISTRIBUTION>/metering-dashboard/ues-patch directory into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/js directory.
Copy the dashboard.jag file from the <STRATOS_DAS_DISTRIBUTION> directory into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/theme/templates directory.
Add the stratos-metering-service.car file, which is in the <STRATOS_DAS_DISTRIBUTION>/metering-dashboard directory, into the <DAS_HOME>/repository/deployment/server/carbonapps directory to generate the metering dashboard.
If the <DAS_HOME>/repository/deployment/server/carbonapps folder does not exist, create the folder before moving the CAR file.
You can navigate to the metering dashboard from the Stratos application topology view at the application or cluster level.
Add the Jaggery files, which are in the <STRATOS_DAS_DISTRIBUTION>/monitoring-dashboard/jaggery-files directory, into the <DAS_HOME>/repository/deployment/server/jaggeryapps/portal/controllers/apis directory.
Manually create the MySQL database and tables using the queries in the <STRATOS_DAS_DISTRIBUTION>/monitoring-dashboard/jaggery-files/monitoring-mysqlscript.sql file:
CREATE DATABASE IF NOT EXISTS ANALYTICS_FS_DB;
CREATE DATABASE IF NOT EXISTS ANALYTICS_EVENT_STORE;
CREATE DATABASE IF NOT EXISTS ANALYTICS_PROCESSED_DATA_STORE;
CREATE TABLE ANALYTICS_EVENT_STORE.AVERAGE_MEMORY_CONSUMPTION_STATS(Time long, ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
CREATE TABLE ANALYTICS_EVENT_STORE.MEMBER_AVERAGE_MEMORY_CONSUMPTION_STATS(Time long, MemberId VARCHAR(150), ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
CREATE TABLE ANALYTICS_EVENT_STORE.AVERAGE_LOAD_AVERAGE_STATS(Time long, ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
CREATE TABLE ANALYTICS_EVENT_STORE.MEMBER_AVERAGE_LOAD_AVERAGE_STATS(Time long, MemberId VARCHAR(150), ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), Value DOUBLE);
CREATE TABLE ANALYTICS_EVENT_STORE.AVERAGE_IN_FLIGHT_REQUESTS(Time long, ClusterId VARCHAR(150), ClusterInstanceId VARCHAR(150), NetworkPartitionId VARCHAR(150), COUNT DOUBLE);
CREATE TABLE ANALYTICS_EVENT_STORE.SCALING_DETAILS(Time VARCHAR(50), ScalingDecisionId VARCHAR(150), ClusterId VARCHAR(150), MinInstanceCount INT, MaxInstanceCount INT, RIFPredicted INT, RIFThreshold INT, RIFRequiredInstances INT, MCPredicted INT, MCThreshold INT, MCRequiredInstances INT, LAPredicted INT, LAThreshold INT, LARequiredInstances INT, RequiredInstanceCount INT, ActiveInstanceCount INT, AdditionalInstanceCount INT, ScalingReason VARCHAR(150));
Copy the CEP EventFormatter artifacts, which are in the <STRATOS_DAS_DISTRIBUTION>/wso2cep-<VERSION>/eventformatters directory, into the <CEP_HOME>/repository/deployment/server/eventformatters directory.
Copy the CEP OutputEventAdapter artifacts, which are in the <STRATOS_DAS_DISTRIBUTION>/wso2cep-<VERSION>/outputeventadaptors directory, into the <CEP_HOME>/repository/deployment/server/outputeventadaptors directory, and update the receiverURL and authenticatorURL with the DAS_HOSTNAME, DAS_TCP_PORT and DAS_SSL_PORT values as follows:
<outputEventAdaptor name="DefaultWSO2EventOutputAdaptor"
    statistics="disable" trace="disable" type="wso2event"
    xmlns="http://wso2.org/carbon/eventadaptormanager">
    <property name="username">admin</property>
    <property name="receiverURL">tcp://<DAS_HOSTNAME>:<DAS_TCP_PORT></property>
    <property name="password">admin</property>
    <property name="authenticatorURL">ssl://<DAS_HOSTNAME>:<DAS_SSL_PORT></property>
</outputEventAdaptor>
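For example, assuming a DAS node at das.example.com using the default thrift ports (7611 for TCP and 7711 for SSL; adjust for any port offset, and note that the hostname is illustrative), the two URL properties would read:
<property name="receiverURL">tcp://das.example.com:7611</property>
<property name="authenticatorURL">ssl://das.example.com:7711</property>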
Add the stratos-monitoring-service.car file, which is in the <STRATOS_DAS_DISTRIBUTION>/monitoring-dashboard directory, into the <DAS_HOME>/repository/deployment/server/carbonapps directory to generate the monitoring dashboard.
If the <DAS_HOME>/repository/deployment/server/carbonapps folder does not exist, create the folder before moving the CAR file.
After you have successfully configured DAS in a separate host, start the DAS server:
./wso2server.sh
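If you prefer to run DAS in the background so that it keeps running after the terminal session ends, the same script accepts the standard WSO2 Carbon start option:
./wso2server.sh start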
When using a VM setup or Kubernetes, you need to configure Stratos accurately before attempting to deploy an application on the PaaS.
Some steps are marked as optional because they are not applicable to all IaaSs. Therefore, only execute the instructions that correspond to the IaaS being used.
Ensure that the following prerequisites have been met based on your environment and IaaS.
Install the prerequisites listed below.
Oracle Java SE Development Kit (JDK)
Apache ActiveMQ
For more information on the prerequisites, see Prerequisites.
Download the Stratos binary distribution from Apache Download Mirrors and unzip it.
This step is only mandatory if you are using Kubernetes.
You can set up a Kubernetes cluster using one of the following approaches:
When working in a production environment, set up the Kubernetes cluster based on your environment requirements. For more information, see the Kubernetes documentation.
Prerequisites
Before starting, download and install the following prerequisites:
Follow the instructions below to setup Kubernetes with Vagrant:
Clone the following Vagrant Git repository. This folder is referred to as <VAGRANT_KUBERNETES_SETUP>.
git clone https://github.com/imesh/kubernetes-vagrant-setup.git
Disable DHCP server in VirtualBox:
VBoxManage dhcpserver remove --netname HostInterfaceNetworking-vboxnet0
Start a new Kubernetes cluster using the following command, which will start one master node and one minion:
run.sh
If more than one minion is needed, run the following command with the required number of instances, which you define using NUM_INSTANCES.
run.sh NUM_INSTANCES=2
If you need to specify the minion's memory and CPU, use the following command:
Example:
run.sh NUM_INSTANCES=2 NODE_MEM=4096 NODE_CPUS=2
Once the nodes are connected to the cluster and the state of the nodes changes to Ready, the Kubernetes cluster is ready for use.
Execute the following Kubernetes CLI commands and verify the cluster status:
kubectl get nodes
NAME           LABELS                                STATUS
172.17.8.102   kubernetes.io/hostname=172.17.8.102   Ready
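As an optional extra check, you can also ask kubectl where the master services are running; this is a standard kubectl command:
kubectl cluster-info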
Access the Kubernetes UI using the following URL: http://<HOST>:<HTTP_PORT>/ui
Example:
http://172.17.8.101:8080/ui
If you get a notification mentioning that the "kube-ui" endpoints cannot be found, execute the kube-ui-pod.sh script.
Follow the instructions below to create an elastic Kubernetes cluster with three worker nodes and a master on a Mac Operating System, which is running in EC2:
The Kubernetes cluster setup also includes the following:
Cluster bootstrapping using cloud-config
Cross container networking with flannel
Auto worker registration with kube-register
Kubernetes v1.0.1 official binaries
Install and configure Kubectl.
Kubectl is a client command line tool provided by the Kubernetes team. It helps monitor and manage Kubernetes clusters.
wget https://storage.googleapis.com/kubernetes-release/release/v1.0.1/bin/linux/amd64/kubectl
chmod +x kubectl
mv kubectl /usr/local/bin/
For more information, see installing and configuring Kubectl.
Install and configure the AWS Command Line Interface.
wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
sudo pip install awscli
If you encounter an issue, use the following command to resolve it:
sudo pip uninstall six
sudo pip install --upgrade python-heatclient
For more information, see the AWS Command Line Interface documentation.
Create the Kubernetes Security Group.
aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 30000-32767 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes
The port 8080 is not fixed. It will change based on the KUBERNETES_MASTER_PORT value you define in the Kubernetes cluster resource definition. You can configure the KUBERNETES_MASTER_PORT by defining it under the Kubernetes master property parameter.
Example:
{ "name": "KUBERNETES_MASTER_PORT", "value": "8080" }
Update the master.yaml cloud-config file. For more information, see the configuration details for master.yaml.
Update the node.yaml cloud-config file. For more information, see the configuration details for node.yaml.
Launch the master.
Replace <ami_image_id> with a suitable version of the CoreOS image for AWS. It is recommended to use the following CoreOS alpha channel AMI image ID: ami-f7a5fec7
Run the instance.
aws ec2 run-instances --image-id <ami_image_id> --key-name <keypair> \
    --region us-west-2 --security-groups kubernetes --instance-type m3.medium \
    --user-data file://master.yaml
Note the InstanceId of the master.
Gather the public and private IP addresses of the master node:
aws ec2 describe-instances --instance-ids <instance-id>
The output:
"Reservations": [ { "Instances": [ { "PublicDnsName": "ec2-54-68-97-117.us-west-2.compute.amazonaws.com", "RootDeviceType": "ebs", "State": { "Code": 16, "Name": "running" }, "PublicIpAddress": "54.68.97.117", "PrivateIpAddress": "172.31.9.9", }
Update the node.yaml cloud-config file. Replace all instances of <master-private-ip> in the node.yaml file with the private IP address of the master node.
Launch the three worker nodes.
Replace <ami_image_id> with a suitable version of the CoreOS image for AWS. It is recommended to use the same AMI image ID used by the master.
aws ec2 run-instances --count 3 --image-id <ami_image_id> --key-name <keypair> \
    --region us-west-2 --security-groups kubernetes --instance-type m3.medium \
    --user-data file://node.yaml
Configure the Kubectl SSH tunnel.
This command enables secure communication between the Kubectl client and the Kubernetes API.
ssh -i key-file -f -nNT -L 8080:127.0.0.1:8080 core@<master-public-ip>
List the worker nodes.
Once the worker instances are fully booted, the kube-register service running on the master node will automatically register them with the Kubernetes API server. This process takes several minutes.
kubectl get nodes
This step is only mandatory if you are deploying Stratos on a Virtual Machine (e.g., EC2, OpenStack, GCE).
Puppet is an open source configuration management utility. In Stratos, Puppet is used as the orchestration layer. Stratos does not keep any templates or configurations in Puppet; Puppet holds only the product distributions and acts as a file server, while the Configurator carries out the configuration at runtime.
Follow the instructions below to setup the Puppet Master.
Follow the instructions below to configure Puppet Master for Apache Stratos on Debian/Ubuntu 12.04.1 LTS based Linux distributions:
Switch to the root user:
sudo -i
Install Git:
apt-get install git
Clone the puppetinstall Git repository:
git clone https://github.com/thilinapiy/puppetinstall
Navigate to the puppetinstall folder using the following command:
cd puppetinstall
Execute the following command. When you execute this command, your system hostname will get modified.
./puppetinstall -m -d <PUPPETMASTER-DOMAIN> -s <PUPPET-MASTER-IP>
| Short code | Description |
|---|---|
| -m | Install Puppet Master on the system. |
| -d | Domain name of the environment. This will act as a prefix to all the servers of the domain. |
| -s | IP address of the Puppet Master server. This IP address will be added to the /etc/hosts file. |
For example:
./puppetinstall -m -d test.org
If requested, press enter. If you have successfully installed Puppet Master, the following message will appear:
"Installation completed successfully"
Run the hostname command to verify that your system hostname has been modified (e.g., puppet.test.org).
Verify your Puppet Master (v3) installation by running the following command in the puppetinstall folder:
ps -ef | grep puppet
The output will be as follows:
puppet    5324     1  0 14:59 ?      00:00:00 /usr/bin/ruby /usr/bin/puppet master --masterport=8140
root      5332  1071  0 15:05 pts/0  00:00:00 grep --color=auto puppet
If required, add "puppet.test.org" to the /etc/hosts file, and use puppet.test.org to change the hostname on Ubuntu 14.
Navigate to your home directory and clone the Apache Stratos source repository:
cd
git clone https://github.com/apache/stratos.git
Navigate to the stratos/tools/puppet3/ directory and list its content:
cd stratos/tools/puppet3/
ls
The output will be as follows:
auth.conf autosign.conf fileserver.conf manifests modules puppet.conf
Navigate to the /etc/puppet/ directory:
cd /etc/puppet/
List the content of the puppet folder:
ls
The output will be as follows:
auth.conf autosign.conf fileserver.conf manifests modules puppet.conf templates
Copy the content from the /root/stratos/tools/puppet3/manifests/ directory to the /etc/puppet/manifests/ directory.
For example:
cp -R /root/stratos/tools/puppet3/manifests/* manifests/
Copy the content from the /root/stratos/tools/puppet3/modules/ directory to the /etc/puppet/modules/ directory.
For example:
cp -R /root/stratos/tools/puppet3/modules/* modules/
Check the list of files in the /etc/puppet/manifests/ directory:
ls manifests/
The output should be as follows:
nodes.pp site.pp nodes
Check the list of files in the /etc/puppet/manifests/nodes directory:
ls manifests/nodes
The output should be as follows:
base.pp default.pp haproxy.pp lb.pp mysql.pp nodejs.pp php.pp ruby.pp tomcat.pp wordpress.pp
Check the list of files in the /etc/puppet/modules/ directory:
ls modules/
The output should be as follows:
agent java lb mysql nodejs php python_agent ruby tomcat wordpress
Change the $mb_url, $cep_port and $cep_ip values in the base.pp file according to your setup:
vi /etc/puppet/manifests/nodes/base.pp
#following directory is used to store binary packages
$local_package_dir = '/mnt/packs'
# Stratos message broker IP and port
$mb_url = 'tcp://127.0.0.1:1883'
$mb_type = 'activemq'
# Stratos CEP IP and port
$cep_ip = '10.4.128.10'
$cep_port = '7611'
# Stratos Cartridge Agent's trust store password
$truststore_password = 'wso2carbon'
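For example, if your message broker runs on 192.168.1.10 and CEP runs on 192.168.1.20 (illustrative IPs), only the IP portions of these values need to change; the port values shown in the file are kept as-is:
$mb_url = 'tcp://192.168.1.10:1883'
$cep_ip = '192.168.1.20'
$cep_port = '7611'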
Navigate to the /etc/puppet/ directory:
cd /etc/puppet/
Add your domain into the autosign.conf file and save the file. Verify the content of the autosign.conf file as follows:
cat autosign.conf
The output will be as follows:
*.test.org
Download a Java distribution and define the Java distribution in the /etc/puppet/manifests/ directory.
Create the files folder in the /etc/puppet/modules/java/ directory:
mkdir /etc/puppet/modules/java/files
Download a Java distribution (e.g., jdk-7u51-linux-x64.tar.gz) and copy it to the /etc/puppet/modules/java/files/ directory.
To get support for 32-bit, download the Java 32-bit distribution and change the $java_distribution parameter in the nodes.pp file accordingly.
Update the following two values in your /etc/puppet/manifests/nodes/base.pp file based on your Java distribution, where $java_distribution is the downloaded Java distribution name and $java_name is the name of the unzipped Java distribution.
$java_distribution = 'jdk-7u51-linux-x64.tar.gz'
$java_name = 'jdk1.7.0_51'
Build the Python cartridge agent.
Check out the Python cartridge agent source from the Apache Stratos remote repository to a folder of your choice.
git clone https://git-wip-us.apache.org/repos/asf/stratos.git <local-folder-name>
For example:
git clone https://git-wip-us.apache.org/repos/asf/stratos.git myLocalRepo
Navigate to the cloned folder:
cd <local-folder-name>
For example:
cd myLocalRepo
Use Maven to build the source distribution of the release.
mvn clean install
If Stratos has been built successfully, the deployable cartridge agent ZIP file named apache-stratos-python-cartridge-agent-<VERSION>-SNAPSHOT.zip (e.g., apache-stratos-python-cartridge-agent-4.1.x-SNAPSHOT.zip) can be found in the /products/python-cartridge-agent/target/ directory.
Copy the Python Cartridge Agent distribution (apache-stratos-python-cartridge-agent-4.1.x-SNAPSHOT.zip), which is in the <STRATOS_SOURCE_HOME>/products/python-cartridge-agent/target/ directory, to the /etc/puppet/modules/python_agent/files/ directory.
Copy the Apache Stratos Load Balancer distribution (apache-stratos-load-balancer-4.1.x-SNAPSHOT.zip), which is in the <STRATOS_SOURCE_HOME>/products/load-balancer/modules/distribution/target/ directory, to the /etc/puppet/modules/lb/files/ directory.
Download ActiveMQ 5.9.1 or the latest stable ActiveMQ TAR file from https://activemq.apache.org/download.html. The folder path of this file is referred to as <ACTIVEMQ_HOME>. Copy the following ActiveMQ client JARs from the <ACTIVEMQ_HOME>/lib/ directory to the /etc/puppet/modules/lb/files/activemq/ directory.
activemq-broker-5.9.1.jar
activemq-client-5.9.1.jar
geronimo-j2ee-management_1.1_spec-1.0.1.jar
geronimo-jms_1.1_spec-1.1.1.jar
hawtbuf-1.9.jar
Navigate to the /etc/puppet/modules/lb/files/activemq/ directory:
cd /etc/puppet/modules/lb/files/activemq
List the content of the folder:
ls
The output will be as follows:
activemq-broker-5.9.1.jar activemq-client-5.9.1.jar geronimo-j2ee-management_1.1_spec-1.0.1.jar geronimo-jms_1.1_spec-1.1.1.jar hawtbuf-1.9.jar
Update the values of the following parameters in the cartridge-config.properties file, which is in the <STRATOS_HOME>/repository/conf directory.
The values are as follows:
[PUPPET_IP] - The IP address of the running Puppet instance.
[PUPPET_HOST_NAME] - The host name of the running Puppet instance.
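As a hypothetical illustration, assuming the Puppet-related keys in cartridge-config.properties are named puppet.ip and puppet.hostname (verify the exact key names in your copy of the file), the updated entries might read:
puppet.ip=192.168.1.100
puppet.hostname=puppet.test.org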
This step is only mandatory if you are deploying Stratos on a Virtual Machine (e.g., EC2, OpenStack, GCE).
Create the cartridge base image based on the IaaS that you are using to run Stratos.
To follow this guide, you need an EC2 account. If you do not have an account, create an AWS account. For more information, see Sign Up for Amazon EC2. This account must be authorized to manage EC2 instances (including starting and stopping instances, and creating security groups and key pairs).
Before launching the instance, you need to create the right security group. This security group defines firewall rules for your instances, which are a list of ports that are used as part of the default Stratos deployment. These rules specify which incoming network traffic is delivered to your instance. All other traffic is ignored. The ports that should be defined are listed as the default ports.
Follow the instructions below to create the security group and configure it:
Select Custom TCP rule.
Click Add Rule and then click Apply Rule Changes.
Always apply rule changes, as your rule will not get saved unless the rule changes are applied.
Repeat steps 6 to 8 to add all the ports mentioned, as each port or port range has to be added as a separate rule.
Write down the names of your security groups if you wish to enter your user data in the wizard.
Save your private key in a safe place on your computer. Note down the location, because you will need the key pair to connect to your instance.
Follow the instructions below to create a key pair, download it and secure it:
Protect your key pair by executing the following command in your terminal.
By default, your PEM file will be unprotected. Use the following command to secure your PEM file, so that others will not have access to it:
chmod 0600 <path-to-the-private-key>
Follow the instructions below to spawn an instance on EC2:
Click Launch Instance.
Select Quick Launch Wizard.
Name your instance, for example, StratosCartridgeInstance.
Select More Amazon Machine Images and click on Continue.
Click Launch to start the EC2 instance.
Click Close.
This will redirect you to the instance page. It takes a short time for an instance to launch. The instance's status appears as pending while it is launching. After the instance is launched, its status changes to running.
Follow the steps given below to configure a base Image:
Start up a virtual machine (VM) instance using a preferred OS, on a preferred IaaS.
Install the Puppet agent.
If you are using Ubuntu 12, you will require the following Puppet repository to install the Puppet agent.
wget http://apt.puppetlabs.com/puppetlabs-release-precise.deb dpkg -i puppetlabs-release-precise.deb sudo apt-get update sudo apt-get install puppet
If you are using Ubuntu 14, you will require the following Puppet repository to install the Puppet agent.
wget http://apt.puppetlabs.com/puppetlabs-release-trusty.deb dpkg -i puppetlabs-release-trusty.deb sudo apt-get update sudo apt-get install puppet
If you are using CentOS, enable dependencies and the Puppet Labs repository on the agent node.
# rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm
# rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
# rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-5.noarch.rpm
Install and upgrade Puppet on the agent node.
# yum install puppet # puppet resource package puppet ensure=latest # /etc/init.d/puppet restart
For more information on installing the Puppet agent on CentOS, see installing Puppet on CentOS.
Open the puppet file, which is in the <PUPPET_AGENT>/etc/default directory, and configure it as follows:
START=yes
Add the following to the puppet.conf file, which is in the <PUPPET_AGENT>/etc/puppet directory:
[main] server=puppet.stratos.org
If you are unsure of the server name, use a dummy hostname. Stratos will update the above with the respective server name, when it starts running.
Stop the Puppet instance or instances that are running:
/etc/init.d/puppet stop
Execute the following command to identify the running puppet instances:
ps -ef | grep puppet
The following output will be given, if any Puppet instances are running.
Example:
root 1321 1 0 Sep09 ? 00:00:17 /usr/bin/ruby /usr/bin/puppet agent root 12149 12138 0 05:44 pts/0 00:00:00 grep --color=auto puppet
Copy the init.sh script into the <PUPPET_AGENT>/root/bin directory.
You can find the init.sh script for the respective IaaS here. The init.sh file differs based on the IaaS. If you wish to find the init.sh script for a different IaaS, go to init-scripts and navigate to the init-script/<IAAS>/<OS> path.
Update the /etc/rc.local file as follows:
/root/bin/init.sh > /tmp/puppet_log
exit 0
Execute the following commands:
rm -rf /var/lib/puppet/ssl/*
rm -rf /tmp/*
These commands clean up the base image so that Stratos can install the required certificates and payloads without errors, which would otherwise occur if a certificate or payload already exists in the base image.
Follow the instructions below to create a snapshot of the instance on EC2:
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
Make sure the appropriate Region is selected in the region selector of the navigation bar.
Click Instances in the navigation pane.
Fill in a unique image name and an optional description of the image (up to 255 characters), and click Create Image.
In Amazon EC2 instance store-backed AMIs, the image name replaces the manifest name (such as s3_bucket/something_of_your_choice.manifest.xml), which uniquely identifies each Amazon EC2 instance store-backed AMI.
Amazon EC2 powers down the instance, takes images of any volumes that were attached, creates and registers the AMI, and then reboots the instance.
Go to the AMIs page and view the AMI's status. While the new AMI is being created, its status is pending.
It takes a few minutes for the whole process to finish.
When the status changes to available, go to the Snapshots page and get the snapshot ID of the new snapshot that was created for the new AMI; this is used in the sample cartridge definition JSON file. Any instance you launch from the new AMI uses this snapshot for its root device volume. After you have finished creating the cartridge base image, make a note of the image ID, as you will need it later when creating a cartridge.
Follow the instructions below to spawn a configured instance of Debian/Ubuntu based Linux 12.04.1 LTS distributions on OpenStack:
Protect your key pair by executing the following command in your terminal.
By default, your PEM file will be unprotected. Use the following command to secure your PEM file so that others will not have access to it:
chmod 0600 <path to the private key>
Configure the base image by following the same steps given in the EC2 section above: start a VM instance using a preferred OS, install and configure the Puppet agent, stop any running Puppet instances, copy the init.sh script into the <PUPPET_AGENT>/root/bin directory, update the /etc/rc.local file, and clean up the base image.
Follow the instructions below to create a snapshot of the instance on OpenStack:
After you have finished creating the cartridge, make a note of the image ID you created for the cartridge, as you will need this when you use Stratos Manager to add a cartridge.
SSH to the spawned instance and make relevant changes to the base image (e.g., If you need a PHP cartridge, install PHP related libraries).
Configure the base image by following the same steps given in the EC2 section above: start a VM instance using a preferred OS, install and configure the Puppet agent, stop any running Puppet instances, copy the init.sh script into the <PUPPET_AGENT>/root/bin directory, update the /etc/rc.local file, and clean up the base image.
Set the auto-delete state of the root persistent disk to false as follows:
This is done to prevent the persistent disk from being automatically deleted when you terminate the instance.
Click on the name of the instance.
Edit the settings related to the instance.
Uncheck the Delete boot disk when instance is deleted option. This is done to ensure that all the data is not deleted when you terminate the instance.
Click Save.
If you wish to view details on the disk related to the instance, click Compute Engine and then click Disks.
Delete the instance.
Initially, you need to terminate the spawned instance using the root persistent disk to be able to create an image. When you are terminating the instance make sure that the persistent disk is not attached to any other virtual machines.
Create a new image as follows: click Disk and select the relevant disk name from the drop-down menu. You need to do this to create the image based on the persistent disk.
Mock IaaS is enabled by default. Therefore, if you are running Stratos on another IaaS, you need to disable the Mock IaaS.
Follow the instructions below to disable the Mock IaaS:
Open the <STRATOS_HOME>/repository/conf/mock-iaas.xml file and disable the Mock IaaS:
<mock-iaas enabled="false">
Navigate to the <STRATOS_HOME>/repository/deployment/server/webapps directory and delete the mock-iaas.war file.
When Stratos runs, the mock-iaas.war file is extracted and the mock-iaas folder is created. Therefore, if you have run Stratos previously, delete the mock-iaas folder as well.
This step is only applicable if you are using GCE. When working on GCE, carry out the following instructions:
This step is only mandatory if you are deploying Stratos on a Virtual Machine (e.g., EC2, OpenStack, GCE).
Follow the instructions given below to configure the Cloud Controller (CC):
Configure the IaaS provider details based on the IaaS.
You need to configure the details in the <STRATOS_HOME>/repository/conf/cloud-controller.xml file and comment out the IaaS provider details that are not being used.
Update the values of MB_IP and MB_PORT in the jndi.properties file, which is in the <STRATOS_HOME>/repository/conf directory.
The default value of message-broker-port is 61616.
The values are as follows:
MB_IP: The IP address used by ActiveMQ.
MB_PORT: The port used by ActiveMQ.
connectionfactoryName=TopicConnectionFactory
java.naming.provider.url=tcp://[MB_IP]:[MB_Port]
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
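For example, if ActiveMQ runs on a host with the illustrative IP 192.168.1.5 on its default port, the provider URL would read:
java.naming.provider.url=tcp://192.168.1.5:61616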
This step is only mandatory if you have set up the Message Broker (MB), in this case ActiveMQ, on a separate host.
If you have set up ActiveMQ, which is the Stratos Message Broker, on a separate host, you need to define the Message Broker IP so that the MB can communicate with Stratos.
Update the value of MB_IP in the JMSOutputAdaptor file, which is in the <STRATOS_HOME>/repository/deployment/server/outputeventadaptors directory, where [MB_IP] is the IP address used by ActiveMQ:
<property name="java.naming.provider.url">tcp://[MB_IP]:61616</property>
The way in which you need to start the Stratos server varies based on your settings as follows:
We recommend starting the Stratos server in background mode, so that the server does not shut down when the terminal session is closed.
If you want to use the internal database (H2) and the embedded CEP, start the Stratos server as follows:
sh <STRATOS_HOME>/bin/wso2server.sh start
If you want to use an external database, start the Stratos server with the -Dsetup option as follows.
This creates the database schemas using the scripts in the <STRATOS_HOME>/dbscripts directory.
sh <STRATOS_HOME>/bin/wso2server.sh start -Dsetup
If you want to use an external CEP, disable the embedded CEP when starting the Stratos server as follows:
sh <STRATOS_HOME>/bin/wso2server.sh start -Dprofile=cep-excluded
If you want to use an external database, together with an external CEP, start the Stratos server as follows:
This creates the database schemas using the scripts in the <STRATOS_HOME>/dbscripts directory.
sh <STRATOS_HOME>/bin/wso2server.sh start -Dsetup -Dprofile=cep-excluded
You can tail the log to verify that the Stratos server starts without any issues:
tail -f <STRATOS_HOME>/repository/logs/wso2carbon.log