Friday, July 5, 2013

Clustering WSO2 ESB - Part 2 - Distributed worker-manager setup with one manager and two workers

In the previous post I explained how to cluster two WSO2 ESB servers where one ESB node plays a dual role as both worker and manager. The setup described in this post is similar, but it uses three ESB servers: one server is dedicated purely to management while the other two act as dedicated workers.

Like the previous one, this setup can also be used as an Active/Passive setup, where the active/passive decision is made by the load balancer based on a predefined mechanism (an echo test or a heartbeat mechanism).

The overall setup contains three ESB servers (nodes): one server acts as the manager while the other two act as workers. The manager is used for deploying artifacts (CAR files and proxies) and managing the cluster.

The synchronization of artifacts between the servers is achieved through the SVN-based Deployment Synchronizer.
The overall view of the cluster is shown in the following figure.

[Figure: overall view of the cluster – one manager node and two worker nodes behind a load balancer, synchronizing artifacts via SVN]
Configuring the Manager

Download and extract the WSO2 ESB distribution (referred to as MANAGER_HOME).

axis2.xml configuration

Clustering must be enabled at the Axis2 level in order for the management node to communicate with the worker nodes.
Open MANAGER_HOME/repository/conf/axis2/axis2.xml and update the clustering configuration as follows:

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
<parameter name="membershipScheme">wka</parameter>

Specify the cluster domain as follows:

<parameter name="domain">wso2.esb.domain</parameter>

Uncomment the localMemberHost element in axis2.xml and specify the IP address (or host name) to be exposed to members of the cluster. This address can be an internal address used only for clustering.

<parameter name="localMemberHost">127.0.0.1</parameter>

Define the port through which other nodes will contact this member. If several instances are running on the same host, make sure to use ports that do not conflict.

<parameter name="localMemberPort">4001</parameter>

Comment out the static (well-known) members, because this node itself will be the well-known member (i.e., it does not need a list of members).
Add a new property "subDomain" under the properties section and set it to "mgt" to denote that this node belongs to the management subdomain.

<property name="subDomain" value="mgt"/> 
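
Putting these pieces together, the clustering section of the manager's axis2.xml would look roughly like the minimal sketch below. This assumes the stock Carbon layout where properties sit under a "properties" parameter; the remaining parameters in the shipped file can be left as they are.

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
   <parameter name="membershipScheme">wka</parameter>
   <parameter name="domain">wso2.esb.domain</parameter>
   <parameter name="localMemberHost">127.0.0.1</parameter>
   <parameter name="localMemberPort">4001</parameter>
   <parameter name="properties">
      <!-- this node belongs to the management subdomain -->
      <property name="subDomain" value="mgt"/>
   </parameter>
   <!-- static <members> list commented out: this node is itself the well-known member -->
</clustering>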

carbon.xml configuration

If multiple WSO2 Carbon-based products are running on the same host, the port offset in MANAGER_HOME/repository/conf/carbon.xml should be changed as follows to avoid port conflicts. Ignore this if you are using three separate hosts.

<Offset>1</Offset>

Configure the Deployment Synchronizer as follows. Make sure both AutoCommit and AutoCheckout are set to ‘true’ (the manager both commits artifacts to SVN and checks them out).

<DeploymentSynchronizer>
     <Enabled>true</Enabled>
     <AutoCommit>true</AutoCommit>
     <AutoCheckout>true</AutoCheckout>
     <RepositoryType>svn</RepositoryType>
     <SvnUrl>http://svnrepo.example/repos/esb</SvnUrl>
     <SvnUser>USERNAME</SvnUser>
     <SvnPassword>PASSWORD</SvnPassword>
     <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>

Start the ESB manager node

 ./bin/wso2server.sh

You can access the management console at https://localhost:9444/carbon/ on the manager node. (Use the correct port according to the port offset defined earlier; an offset of 1 shifts the default HTTPS port 9443 to 9444.)

Add a sample proxy through the console and check whether it gets committed to SVN correctly. The proxy will be committed to SvnUrl/-1234/synapse-configs/default/proxy-services/ if you created it as the admin (-1234 is the super-tenant ID).
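
To verify the commit from the command line (assuming the SvnUrl above and an SVN client available on the manager's host), you can list the proxy-services directory:

svn list http://svnrepo.example/repos/esb/-1234/synapse-configs/default/proxy-services/ --username USERNAME --password PASSWORD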

Configuring Worker Node 1

Download and extract the WSO2 ESB distribution (referred to as WORKER_HOME).

axis2.xml configuration

Clustering must be enabled at the Axis2 level in order for the management node to communicate with the worker nodes. Open WORKER_HOME/repository/conf/axis2/axis2.xml and update the clustering configuration as follows:

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
<parameter name="membershipScheme">wka</parameter>

Specify the cluster domain as follows:

<parameter name="domain">wso2.esb.domain</parameter>

Define the port through which other nodes will contact this member. If several instances are running on the same host, make sure to use ports that do not conflict.

<parameter name="localMemberPort">4002</parameter>

Add the manager's (node 1) IP address (or host name) and port as the well-known member:
  • hostName: the value of localMemberHost in the manager's axis2.xml
  • port: the value of localMemberPort in the manager's axis2.xml
Make sure this address can be reached from the worker's host. If you use a host name, map it to the manager's IP in the /etc/hosts file of the worker machine (see the example after the members element below).

<members>
   <member>
      <hostName>127.0.0.1</hostName>
      <port>4001</port>
   </member>
</members>
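
For a host-name-based setup, the mapping on the worker machine might look like the following (the IP address and host name are purely illustrative):

# /etc/hosts on the worker machine
192.168.1.10    esb-manager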


Add a new property "subDomain" under the properties section and set it to "worker" to denote that this node belongs to the worker subdomain.

<property name="subDomain" value="worker"/> 
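
As with the manager, the worker's clustering section would then look roughly like this minimal sketch (again assuming the "properties" parameter layout of the stock axis2.xml):

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
   <parameter name="membershipScheme">wka</parameter>
   <parameter name="domain">wso2.esb.domain</parameter>
   <parameter name="localMemberPort">4002</parameter>
   <parameter name="properties">
      <!-- this node belongs to the worker subdomain -->
      <property name="subDomain" value="worker"/>
   </parameter>
   <members>
      <member>
         <hostName>127.0.0.1</hostName>
         <port>4001</port>
      </member>
   </members>
</clustering>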

carbon.xml configuration

If multiple WSO2 Carbon-based products are running on the same host, the port offset in WORKER_HOME/repository/conf/carbon.xml should be changed as follows to avoid port conflicts. Ignore this if you are using three separate hosts.

<Offset>2</Offset>

Configure the Deployment Synchronizer as follows. Make sure AutoCommit is set to ‘false’ and AutoCheckout to ‘true’: workers only check artifacts out and must never commit.

<DeploymentSynchronizer>
     <Enabled>true</Enabled>
     <AutoCommit>false</AutoCommit>
     <AutoCheckout>true</AutoCheckout>
     <RepositoryType>svn</RepositoryType>
     <SvnUrl>http://svnrepo.example/repos/esb</SvnUrl>
     <SvnUser>USERNAME</SvnUser>
     <SvnPassword>PASSWORD</SvnPassword>
     <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer> 

Start the ESB worker node. The workerNode system property must be set to true when starting workers, so the worker should be started as follows (on Linux):

./bin/wso2server.sh -DworkerNode=true 

Configuring Worker Node 2

Follow the exact steps used to configure worker node 1, taking the following points into consideration:

  • Decide on a port offset value and update the Offset element in the WORKER_HOME/repository/conf/carbon.xml file. Ignore this if you are using separate hosts.
  • Change the localMemberPort value in WORKER_HOME/repository/conf/axis2/axis2.xml to a port that is not already in use (e.g., 4003).
Start the ESB worker node. As before, the workerNode system property must be set to true when starting workers in a cluster:


./bin/wso2server.sh -DworkerNode=true

(On Windows, use bin\wso2server.bat -DworkerNode=true.)

How to test the setup

  • Start the servers as described above.
  • Add a sample proxy and save it. Add a log mediator to the inSequence so that logs are printed in the workers' terminals (a minimal example proxy follows this list).
  • Observe the cluster messages (in the terminal or log) sent by the manager and received by the workers. The workers will then synchronize with SVN and deploy the proxy.
  • Send a request to the endpoint through the load balancer. The load balancer should point to the active node's endpoint. For example, for external clients, the endpoint of the proxy should be:
http://{Load_Balancer_Mapped_URL_for_worker}/services/{Sample_Proxy_Name}
  • In the active worker's logs you will see the proxy being invoked (you need the log mediator added in the second step to see this).
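
A minimal proxy of the kind described above might look like the following sketch. The proxy name and back-end address are illustrative; SimpleStockQuoteService is the stock WSO2 ESB sample service, assumed here to be running on port 9000.

<proxy xmlns="http://ws.apache.org/ns/synapse" name="SampleProxy" transports="http https">
   <target>
      <inSequence>
         <!-- log the full message so the invocation shows up in the worker's terminal -->
         <log level="full"/>
         <send>
            <endpoint>
               <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
            </endpoint>
         </send>
      </inSequence>
      <outSequence>
         <!-- return the back-end response to the client -->
         <send/>
      </outSequence>
   </target>
</proxy>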

Wednesday, July 3, 2013

Clustering WSO2 ESB - Part 1 - Distributed worker-manager setup with one node playing a dual role, as a manager as well as a worker

There are several ways to deploy WSO2 ESB (or any WSO2 server) as a cluster. You can find more about ESB clustering here. The deployments introduced in that document are fronted by WSO2 ELB (Elastic Load Balancer). In this blog post I will explain the minimum configuration required to set up WSO2 ESB in a worker/manager distributed setup without a WSO2 ELB in front, so that any load balancer can be used. This setup can also be used as an Active/Passive setup, where the active/passive decision is made by the load balancer based on a predefined mechanism (an echo test or a heartbeat mechanism).

In this setup one ESB node plays a dual role as both worker and manager. The overall setup contains two ESB servers (nodes): one server (node 1) acts as both manager and worker, while the other server (node 2) acts only as a worker. Node 1 will be used for deploying artifacts (CAR files and proxies) and managing the cluster.

The synchronization of artifacts between the servers is achieved through the SVN-based Deployment Synchronizer.
The overall view of the cluster is shown in the following figure.

[Figure: cluster overview – node 1 (worker/manager) and node 2 (worker) behind a load balancer, synchronizing artifacts via SVN]
Configuring the Manager/Worker Node (node 1)

Download and extract the WSO2 ESB distribution (referred to as MANAGER_HOME).

axis2.xml configuration

Clustering must be enabled at the Axis2 level in order for the management node to communicate with the worker nodes.
Open MANAGER_HOME/repository/conf/axis2/axis2.xml and update the clustering configuration as follows:

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
<parameter name="membershipScheme">wka</parameter>

Specify the cluster domain as follows:

<parameter name="domain">wso2.esb.domain</parameter>

Uncomment the localMemberHost element in axis2.xml and specify the IP address (or host name) to be exposed to members of the cluster. This address can be an internal address used only for clustering.

<parameter name="localMemberHost">127.0.0.1</parameter>

Define the port through which other nodes will contact this member. If several instances are running on the same host, make sure to use ports that do not conflict.

<parameter name="localMemberPort">4001</parameter>

Comment out the static (well-known) members, because this node itself will be the well-known member (i.e., it does not need a list of members).
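
Taken together, node 1's clustering section would look roughly like the minimal sketch below (illustrative values; the remaining parameters in the stock axis2.xml can be left as they are):

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
   <parameter name="membershipScheme">wka</parameter>
   <parameter name="domain">wso2.esb.domain</parameter>
   <parameter name="localMemberHost">127.0.0.1</parameter>
   <parameter name="localMemberPort">4001</parameter>
   <!-- static <members> list commented out: this node is itself the well-known member -->
</clustering>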

carbon.xml configuration

If multiple WSO2 Carbon-based products are running on the same host, the port offset in MANAGER_HOME/repository/conf/carbon.xml should be changed as follows to avoid port conflicts. Ignore this if you are using separate hosts.

<Offset>1</Offset>

Configure the Deployment Synchronizer as follows. Make sure both AutoCommit and AutoCheckout are set to ‘true’ (node 1 both commits artifacts to SVN and checks them out).

<DeploymentSynchronizer>
     <Enabled>true</Enabled>
     <AutoCommit>true</AutoCommit>
     <AutoCheckout>true</AutoCheckout>
     <RepositoryType>svn</RepositoryType>
     <SvnUrl>http://svnrepo.example/repos/esb</SvnUrl>
     <SvnUser>USERNAME</SvnUser>
     <SvnPassword>PASSWORD</SvnPassword>
     <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>

Start the ESB manager node (./bin/wso2server.sh).

You can access the management console at https://localhost:9444/carbon/ on the manager node. (Use the correct port according to the port offset defined earlier; an offset of 1 shifts the default HTTPS port 9443 to 9444.)

Add a sample proxy through the console and check whether it gets committed to SVN correctly. The proxy will be committed to SvnUrl/-1234/synapse-configs/default/proxy-services/ if you created it as the admin (-1234 is the super-tenant ID).

Configuring the Worker Node (node 2)

Download and extract the WSO2 ESB distribution (referred to as WORKER_HOME).

axis2.xml configuration

Clustering must be enabled at the Axis2 level in order for the management node to communicate with the worker nodes. Open WORKER_HOME/repository/conf/axis2/axis2.xml and update the clustering configuration as follows:

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
<parameter name="membershipScheme">wka</parameter>

Specify the cluster domain as follows:

<parameter name="domain">wso2.esb.domain</parameter>

Define the port through which other nodes will contact this member. If several instances are running on the same host, make sure to use ports that do not conflict.

<parameter name="localMemberPort">4002</parameter>

Add the manager's (node 1) IP address (or host name) and port as the well-known member:
  • hostName: the value of localMemberHost in the manager's axis2.xml
  • port: the value of localMemberPort in the manager's axis2.xml
Make sure this address can be reached from the worker's host. If you use a host name, map it to the manager's IP in the /etc/hosts file of the worker machine.

<members>
   <member>
      <hostName>127.0.0.1</hostName>
      <port>4001</port>
   </member>
</members>


carbon.xml configuration

If multiple WSO2 Carbon-based products are running on the same host, the port offset in WORKER_HOME/repository/conf/carbon.xml should be changed as follows to avoid port conflicts. Ignore this if you are using separate hosts.

<Offset>2</Offset>

Configure the Deployment Synchronizer as follows. Make sure AutoCommit is set to ‘false’ and AutoCheckout to ‘true’: workers only check artifacts out and must never commit.

<DeploymentSynchronizer>
     <Enabled>true</Enabled>
     <AutoCommit>false</AutoCommit>
     <AutoCheckout>true</AutoCheckout>
     <RepositoryType>svn</RepositoryType>
     <SvnUrl>http://svnrepo.example/repos/esb</SvnUrl>
     <SvnUser>USERNAME</SvnUser>
     <SvnPassword>PASSWORD</SvnPassword>
     <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer> 

Start the ESB worker node. The workerNode system property must be set to true when starting workers, so the worker should be started as follows (on Linux):

./bin/wso2server.sh -DworkerNode=true 

How to test the setup

  • Start the servers as described above.
  • Add a sample proxy and save it. Add a log mediator to the inSequence so that logs are printed in the worker's terminal.
  • Observe the cluster messages (in the terminal or log) sent by the manager and received by the worker. The worker will then synchronize with SVN and deploy the proxy.
  • Send a request to the endpoint through the load balancer. The load balancer should point to the active node's endpoint. For example, for external clients, the endpoint of the proxy should be as follows (a sample curl request is shown after this list):
http://{Load_Balancer_Mapped_URL_for_worker}/services/{Sample_Proxy_Name}
  • In the active node's logs you will see the proxy being invoked (you need the log mediator added in the second step to see this).
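
As an illustration, assuming the proxy is named SampleProxy, fronts the stock SimpleStockQuoteService sample, and the load balancer answers at the hypothetical host lb.example.com, a test request could be sent with curl:

curl -s http://lb.example.com/services/SampleProxy \
     -H 'Content-Type: text/xml;charset=UTF-8' \
     -H 'SOAPAction: urn:getQuote' \
     -d '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
           xmlns:ser="http://services.samples" xmlns:xsd="http://services.samples/xsd">
           <soapenv:Body>
             <ser:getQuote><ser:request><xsd:symbol>IBM</xsd:symbol></ser:request></ser:getQuote>
           </soapenv:Body>
         </soapenv:Envelope>'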