MobileFirst Analytics - Quick & Dirty Clusters

Since you've read the MobileFirst Analytics - Planning for Production blog post, you're ready to get some practice setting up a cluster.

Before we start, be sure you've noted and understood from the planning blog post that ElasticSearch is the underlying cluster manager. You do not need to deploy your application servers in a cluster! In fact, you should avoid doing so, because the configuration of the MobileFirst Analytics server is quite often unique per node. If you clustered your application servers, the application server would share configuration information and the unique configuration you tried to apply would be overwritten. You don't want that.

Grab a few computers and let's go!

Scenario One: You have a VLAN

This is the recommended scenario! Put all of your MobileFirst Analytics servers in the same private local area network (LAN) or VLAN dedicated to the job of hosting the cluster nodes. That way you can safely use multicast zen discovery to dynamically add and remove nodes from the cluster without having to know all of the IP addresses of all the nodes up front. You can fit as many nodes in your cluster as your subnet mask allows.

In this scenario, there is only one required JNDI property:

Key        Value   Notes
multicast  true    equivalent to ElasticSearch's discovery.zen.ping.multicast.enabled

In WebSphere Liberty's server.xml file, the JNDI property looks like this:
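As a sketch (assuming the property is exposed under the `analytics/` JNDI prefix that MobileFirst uses on Liberty; verify the exact name for your product version):

```xml
<!-- server.xml: enable multicast zen discovery for the analytics cluster -->
<jndiEntry jndiName="analytics/multicast" value="true"/>
```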

That's it? Yeah, that's it!

Bring up a MobileFirst Analytics server, visit the analytics console (to trigger the underlying technology stack to initialize ElasticSearch), and click over to the Administration page. It will show you yellow status because no other node has been allocated to hold the replica shards. Your cluster has only one node, and is therefore lacking redundancy and is at risk of going down. Bring up a second server, visit the analytics console, and watch that status change to green!

Scenario Two: You're Starting With a Fixed Number of Nodes

In some cases, you don't have a private LAN or VLAN, or have nodes spanning data centers. In this case, you will be starting with a set of known hostnames or IP addresses, and perhaps sharing the LAN with other systems. You may want to avoid multicast in this case.

For this example, let's say we know the IP addresses of the four nodes that will make up our initial cluster. They are (example addresses, for illustration only):

10.0.0.1
10.0.0.2
10.0.0.3
10.0.0.4
We'll consider each node to be master-eligible and have multicast turned off, which just means we'll be using the default configuration from MobileFirst Analytics. Since multicast is off, ElasticSearch needs to be told about all the nodes initially making up the cluster. I also recommend explicitly setting the transport port so your firewall rules can be hardened. Here is the configuration:

Key            Value                                Notes
transportport  9600                                 equivalent to ElasticSearch's transport.tcp.port
masternodes    10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4  equivalent to ElasticSearch's discovery.zen.ping.unicast.hosts

It is safe for a given node to list itself in masternodes.
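Putting scenario two together in Liberty's server.xml, the configuration on each node might look like this (the `analytics/` JNDI prefix and the IP addresses are illustrative assumptions, not values from your environment):

```xml
<!-- server.xml on each node: fixed transport port plus the full host list -->
<jndiEntry jndiName="analytics/transportport" value="9600"/>
<!-- comma-separated list of every initial node; a node may include itself -->
<jndiEntry jndiName="analytics/masternodes" value="10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4"/>
```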

That's it? Yes! Use the same steps as in scenario one to bring up your servers, and check that administration tab.

Required Setting

While the above scenarios are functionally complete, they are not safe! You need to protect yourself from split brain (again, see the planning blog post). Basically, the master-eligible nodes in a cluster need to reach a quorum to vote on who will be the master node, especially in node-down situations where the node that went down was the master! The rule is that you should set discovery.zen.minimum_master_nodes to ((number_of_master_eligible_nodes / 2) + 1), using integer division. So in each scenario you need the following property, and to set it you need to know how many master-eligible nodes are in the cluster. By default, all MobileFirst Analytics servers are master-eligible, so in scenario two, with four nodes, (4 / 2) + 1 = 3:

Key                                 Value
discovery.zen.minimum_master_nodes  3

Since the configuration requires at least three master-eligible nodes to be available in your cluster before ElasticSearch will reply to queries, you'll need to bring up three analytics servers, visit each console to trigger the initialization, wait for ElasticSearch to establish a quorum and vote for a master, then you'll see green status.
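Assuming the key from the table above can be supplied as a JNDI entry like the other analytics properties (verify the exact JNDI name against your product documentation), the quorum setting for scenario two would look like:

```xml
<!-- server.xml: with four master-eligible nodes, quorum is (4 / 2) + 1 = 3 -->
<jndiEntry jndiName="analytics/discovery.zen.minimum_master_nodes" value="3"/>
```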

(Nearly) Required Settings

Remember all those times I said "yeah, that's it"? Well, about that. You are strongly recommended to set additional configuration. Since you know what shards are and how the shards setting affects performance (you did read the planning blog post, right?), you really should set that property. Also, consider setting nodename so each node can be easily identified in the logs and the Administration tab, and datapath so you know in which directory your data is stored.

Key       Value                Notes
shards    12                   you got this number by using the capacity calculator; equivalent to ElasticSearch's index.number_of_shards
replicas  2                    equivalent to ElasticSearch's index.number_of_replicas
nodename  shrute               set a unique name per node if you wish; equivalent to ElasticSearch's node.name
datapath  /mfp_data_goes_here  equivalent to ElasticSearch's path.data
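In server.xml, these recommended settings might look like the following (the `analytics/` JNDI prefix is assumed; the values are the examples from the table):

```xml
<!-- server.xml: sizing and identification settings -->
<jndiEntry jndiName="analytics/shards" value="12"/>   <!-- from the capacity calculator -->
<jndiEntry jndiName="analytics/replicas" value="2"/>
<jndiEntry jndiName="analytics/nodename" value="shrute"/>   <!-- unique per node -->
<jndiEntry jndiName="analytics/datapath" value="/mfp_data_goes_here"/>
```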

I would also recommend setting a few nodes to be master-eligible only and all the others to be data only. That way the master nodes are solely responsible for cluster management and balancing, and the data nodes are solely responsible for answering queries. It also means you never need to worry about updating discovery.zen.minimum_master_nodes, because the number of master-eligible nodes stays fixed.

For example, if you have an eight-node cluster, set four nodes with:

Key       Value   Notes
nodetype  master  equivalent to setting ElasticSearch's node.master to true and node.data to false

And the rest of the nodes with:

Key       Value  Notes
nodetype  data   equivalent to setting ElasticSearch's node.master to false and node.data to true

Then set discovery.zen.minimum_master_nodes to 3 and you won't ever need to update it because three is always a satisfactory quorum.
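For the eight-node example, the four master-eligible nodes might each carry (again assuming the `analytics/` JNDI prefix):

```xml
<!-- server.xml on the four master-eligible nodes -->
<jndiEntry jndiName="analytics/nodetype" value="master"/>
<!-- quorum over the four master-eligible nodes: (4 / 2) + 1 = 3 -->
<jndiEntry jndiName="analytics/discovery.zen.minimum_master_nodes" value="3"/>
```

The remaining nodes would set `value="data"` instead; adding or removing data nodes later never changes the quorum.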


We could get into much more complex scenarios, like a cluster that spans more than one data center where the latency between them exceeds ElasticSearch's ping timeout tolerance, or dedicated master nodes for failover or performance. You can even set up nodes that hold no data but simply offload some CPU burden when your cluster is extremely busy answering queries. That sounds like another good article!

Now you know how to quickly set up a MobileFirst Analytics cluster! Go forth and practice!

Last modified on May 01, 2016