Sample 58: Static Load Balancing Between 3 Nodes

<definitions xmlns="http://ws.apache.org/ns/synapse">
    <sequence name="main" onError="errorHandler">
        <in>
            <send>
                <endpoint>
                    <loadbalance failover="true">
                        <member hostName="127.0.0.1" httpPort="9001" httpsPort="9005"/>
                        <member hostName="127.0.0.1" httpPort="9002" httpsPort="9006"/>
                        <member hostName="127.0.0.1" httpPort="9003" httpsPort="9007"/>
                    </loadbalance>
                </endpoint>
            </send>
            <drop/>
        </in>
        <out>
            <!-- Send the messages where they have been sent (i.e. implicit To EPR) -->
            <send/>
        </out>
    </sequence>
    <sequence name="errorHandler">
        <makefault response="true">
            <code xmlns:tns="http://www.w3.org/2003/05/soap-envelope" value="tns:Receiver"/>
            <reason value="COULDN'T SEND THE MESSAGE TO THE SERVER."/>
        </makefault>
        <send/>
    </sequence>
</definitions>

Objective

Demonstrate the ability of Synapse to act as a load balancer for a set of servers hosting stateless services. This sample is very similar to sample 52 but uses a different syntax style to configure the load balance endpoint.
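For comparison, the load balance endpoint above could also be expressed in the style used by sample 52, where each target is given as a child address endpoint rather than a member element. The sketch below is illustrative only; the service path (LBService1) and URIs are assumptions and should be adjusted to match the actual deployment.

<endpoint>
    <loadbalance>
        <!-- each child address endpoint plays the role of one "member" in the sample 58 style -->
        <endpoint>
            <address uri="http://127.0.0.1:9001/services/LBService1"/>
        </endpoint>
        <endpoint>
            <address uri="http://127.0.0.1:9002/services/LBService1"/>
        </endpoint>
        <endpoint>
            <address uri="http://127.0.0.1:9003/services/LBService1"/>
        </endpoint>
    </loadbalance>
</endpoint>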

Pre-requisites

  • Deploy the LoadbalanceFailoverService in the sample Axis2 server (go to samples/axis2Server/src/LoadbalanceFailoverService and run 'ant')
  • Start 3 instances of the Axis2 server on different ports as follows
    ./axis2server.sh -http 9001 -https 9005 -name MyServer1
    ./axis2server.sh -http 9002 -https 9006 -name MyServer2
    ./axis2server.sh -http 9003 -https 9007 -name MyServer3
  • Start Synapse using the configuration numbered 58 (repository/conf/sample/synapse_sample_58.xml)
    Unix/Linux: sh synapse.sh -sample 58
    Windows: synapse.bat -sample 58

Executing the Client

Invoke the sample client as follows

ant loadbalancefailover -Di=100

This will send 100 requests to the LoadbalanceFailoverService through Synapse. Synapse distributes the load among the three endpoints listed in the configuration in a round-robin manner. The LoadbalanceFailoverService appends the name of the server to each response, so the client can determine which server processed the message. If you examine the console output of the client, you can see that the requests are processed by the three servers as follows:

[java] Request: 1 ==> Response from server: MyServer1
[java] Request: 2 ==> Response from server: MyServer2
[java] Request: 3 ==> Response from server: MyServer3
[java] Request: 4 ==> Response from server: MyServer1
[java] Request: 5 ==> Response from server: MyServer2
[java] Request: 6 ==> Response from server: MyServer3
[java] Request: 7 ==> Response from server: MyServer1
...
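Round-robin is the default distribution algorithm for a load balance endpoint. If you want to state the algorithm explicitly (or plug in a custom implementation), the loadbalance element accepts an algorithm attribute naming the implementation class. The following is a minimal sketch; it assumes the RoundRobin class shipped with Synapse and otherwise mirrors the configuration of this sample.

<loadbalance algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin" failover="true">
    <!-- algorithm names the load balancing implementation class (assumed to be the default RoundRobin) -->
    <member hostName="127.0.0.1" httpPort="9001" httpsPort="9005"/>
    <member hostName="127.0.0.1" httpPort="9002" httpsPort="9006"/>
    <member hostName="127.0.0.1" httpPort="9003" httpsPort="9007"/>
</loadbalance>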

Now run the client without the -Di=100 parameter so that it sends requests indefinitely. While the client is running, shut down the server named MyServer1. You can then observe that requests are distributed only between MyServer2 and MyServer3. The console output before and after shutting down MyServer1 is shown below (MyServer1 was shut down after request 63):

...
[java] Request: 61 ==> Response from server: MyServer1
[java] Request: 62 ==> Response from server: MyServer2
[java] Request: 63 ==> Response from server: MyServer3
[java] Request: 64 ==> Response from server: MyServer2
[java] Request: 65 ==> Response from server: MyServer3
[java] Request: 66 ==> Response from server: MyServer2
[java] Request: 67 ==> Response from server: MyServer3
...

Now restart MyServer1. You can observe that requests are once again distributed among all three servers.
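The behaviour observed when MyServer1 was shut down comes from the failover="true" attribute on the loadbalance element: when a member cannot be reached, the message is retried on the next member, and the failed member is kept out of rotation until it becomes available again. As a point of comparison, the sketch below disables this, so a request routed to a failed member would simply fail (and trigger the errorHandler sequence) instead of being retried; this reading of the attribute is an assumption, not something demonstrated by this sample.

<loadbalance failover="false">
    <!-- failover disabled: no retry on the next member if a send fails (assumed semantics) -->
    <member hostName="127.0.0.1" httpPort="9001" httpsPort="9005"/>
    <member hostName="127.0.0.1" httpPort="9002" httpsPort="9006"/>
    <member hostName="127.0.0.1" httpPort="9003" httpsPort="9007"/>
</loadbalance>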
