In a standard setup, the XperienCentral application is installed on a single node. It is also possible to install XperienCentral on more than one node in order to create a distributed XperienCentral environment. The principal reasons for creating a clustered XperienCentral environment are:

  • Performance: The capacity of the node that generates pages for website visitors and that is used by the editors is insufficient to handle the generated load.
  • Security: For security reasons, the XperienCentral Edit environment is installed on a different node than the ones that generate pages for the website visitors.
  • Failover: To prevent the website from becoming unavailable when one node goes offline, multiple nodes can be configured to ensure that the website keeps running.

This topic describes how to set up a distributed XperienCentral environment. Distributed XperienCentral deployments can consist of a primary read/write node and one or more read-only nodes, or of two read/write nodes and one or more read-only nodes. An example of a clustered XperienCentral deployment is shown below:



In the above illustration, all the XperienCentral installations are identical except that the read/write node(s) have read and write access to the index, while the read-only nodes have read access only. The dashed arrow between the load balancer and the read/write node(s) represents the choice of using the read/write node(s) only for the Edit environment or also for generating pages for the frontend.

A distributed XperienCentral deployment can contain two or more read/write nodes, all of which have write access to the index and contain local files for the website, as well as several read-only nodes. To keep the files on all nodes synchronized, a file store mechanism (the File Distribution Service) is used. The File Distribution Service manages a central store for all files contained in the web roots of the websites in the clustered environment; it monitors the creation and deletion of files on a read/write node and then distributes the file to, or deletes it from, all the other nodes.





Distributed XperienCentral Setup in a Nutshell

The installation of XperienCentral on a node in a distributed setup is the same as installing XperienCentral in a non-distributed setup except for the following four differences:

  • Database configuration
  • Write access to the index
  • The sharing of static content
  • If you use the File Distribution Service to synchronize files between all the nodes in the cluster, set the maximum allowable file size in the Setup Tool.

Set up the read/write node in the cluster just as you would a standalone node. There are two differences in setting up a read/write node in a clustered environment compared to setting up a standalone node:

  • Modify clustering properties in the settings.xml file in order to define the cluster.
  • Set up the synchronization of static files between the read/write and read-only nodes.

The setup of each subsequent node is identical to setting up the (first) read/write node, except:

  • The database is already in place: actions related to creating the database do not need to be performed.
  • Assign each node in the cluster a unique identifier and set the read/write properties, either in the settings.xml file or via two Tomcat startup parameters (the cluster ID and the read/write property).
  • Check the firewall settings.
  • Start all the nodes in the cluster.


Configure the settings.xml Files

There are three settings that need to be changed/checked in the settings.xml file on each node in the clustered setup:

  • <activeProfiles>
  • <webmanager.clustering.id>
  • <webmanager.clustering.readonly>
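Taken together, these properties might look as follows for a read/write node in a MySQL-based cluster. The profile name and clustering ID shown here are examples, and the three settings are shown side by side for clarity; in settings.xml each appears in its proper section, as described below:

    <activeProfiles>
      <activeProfile>jcr-clustered-mysql</activeProfile>
    </activeProfiles>

    <webmanager.clustering.id>edit01</webmanager.clustering.id>
    <webmanager.clustering.readonly>false</webmanager.clustering.readonly>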


Clustering-specific Settings for the Read/Write Node

In the settings.xml of the read/write node, change/check the following properties:

  • The <activeProfiles> database parameter must be changed from “standalone” to “clustered”, for example,

    <activeProfile>jcr-clustered-mssql</activeProfile>

    instead of

    <activeProfile>jcr-standalone-mssql</activeProfile>.

    for an MS-SQL database. The same applies to MySQL and Oracle: change standalone to clustered (jcr-clustered-mysql and jcr-clustered-oracle).

  • Check the webmanager.cluster.syncDelay parameter (it should be set to 500 milliseconds).
  • Set the clustering ID of the node. Choose a unique identifier for each node in the cluster. It is good standard practice to use the node’s hostname as the clustering ID. For example:

     <webmanager.clustering.id>edit01</webmanager.clustering.id>

  • Be sure the read-only setting is set to false. This is the default setting, but check it anyway. For example:

     <webmanager.clustering.readonly>false</webmanager.clustering.readonly>


Complete the configuration of the node just like a standalone XperienCentral node by issuing the following command from a Command prompt:

mvn -s settings.xml -P configure-jcr-repository


Clustering-specific Settings for all Nodes

Once the read/write node is running properly, use its settings.xml as the basis for the other read/write node (if you have a dual read/write node environment) and for all the read-only node(s). The following properties in the settings.xml have to be changed or checked on each of these nodes:

  • The clustering.id parameter, which defines the clustering ID of each node. Choose a unique ID for each node. It is good standard practice to use the node’s hostname as the clustering ID. For example:

    <webmanager.clustering.id>www01</webmanager.clustering.id>

  • Check the webmanager.cluster.syncDelay parameter (it should be set to 500 milliseconds).
  • For each read/write node, make sure the clustering.readonly parameter is set to false:

    <webmanager.clustering.readonly>false</webmanager.clustering.readonly>

  • For each read-only node, set the clustering.readonly setting to true:

    <webmanager.clustering.readonly>true</webmanager.clustering.readonly>
  • For a dual read/write node configuration, set the clustering.filestore setting to true on all nodes. If you use a central storage location for the static content of your website and log files, set this to false:

    No central storage location used:
    <webmanager.clustering.filestore>true</webmanager.clustering.filestore>

    Central storage location used:
    <webmanager.clustering.filestore>false</webmanager.clustering.filestore>


Finish the configuration of the node just like a standalone XperienCentral node by issuing the following command from a command line prompt:

mvn -s settings.xml -P configure-jcr-repository





Make Static Content Available to all Nodes

When an editor places an image on a page within XperienCentral, this image is initially available only on the read/write node on which it was placed. Through the use of a file store mechanism (the File Distribution Service), XperienCentral synchronizes static content between all the read/write nodes and read-only nodes in the cluster.


If your clustered XperienCentral environment contains more than one read/write node, GX Software strongly recommends that you use the File Distribution Service.


In a single read/write node configuration, if you do not use the XperienCentral File Distribution Service to synchronize static content between the read/write and read-only node(s) in the cluster, a mechanism must be configured to make the static content (such as images) available to all read-only nodes in the cluster. Using Robocopy, Rsync, or another file synchronization tool, you must synchronize the following directory between the single read/write node and all read-only nodes in the cluster:


D:\GX-WebManager\configuration\wwwroot
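
As an illustration, a scheduled Robocopy task on the read/write node could mirror this directory to each read-only node. The destination share \\www01\GX-WebManager\configuration\wwwroot is an assumption for illustration; use the actual path on your read-only node:

    robocopy D:\GX-WebManager\configuration\wwwroot \\www01\GX-WebManager\configuration\wwwroot /MIR /R:3 /W:5

The /MIR option mirrors the directory tree, which also removes files on the destination that were deleted on the read/write node; /R and /W limit the number of retries and the wait between them.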






Check/Modify the Tomcat Parameters

The Tomcat servlet container on every node needs to know whether it should start up as a read/write node or a read-only node. This is defined in the Java runtime options of Tomcat. To change the startup parameters of Tomcat, click the “Monitor Tomcat” icon in the system tray. After activating the Monitor, a pop-up appears. Switch to the [Java] tab.

Check the line containing the webmanager.clustering.readonly property.

For a read/write node, set this to:


-Dwebmanager.clustering.readonly=false


For a read-only node, set this to:


-Dwebmanager.clustering.readonly=true


Assign a unique clustering ID to each node in the cluster:


-Dwebmanager.clustering.id=x


where “x” is the clustering identifier for the machine. If you want to use the XperienCentral File Distribution Service in a single read/write node configuration, add the following line to the XperienCentral-specific Tomcat parameters:


-Dwebmanager.clustering.filestore=true
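
Put together, the XperienCentral-specific Java options for a read-only node might look like this. The clustering ID www01 is an example; add the filestore parameter only where the File Distribution Service applies as described above:

    -Dwebmanager.clustering.id=www01
    -Dwebmanager.clustering.readonly=true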





Check the Database and Memory Settings in an Environment that uses the File Distribution Service

The XperienCentral File Distribution Service synchronizes files between the nodes in the cluster and cleans up files that have been deleted. The File Distribution Service uses the Jackrabbit DataStore supported by your relational database. For this reason you must ensure that your database is able to store a binary large object (BLOB) greater than or equal to the size of the largest file that exists or can exist in the web root. Check the following settings for the supported databases:

MySQL

The setting max_allowed_packet should be set to a value higher than the largest file that exists or can exist in the web root directory. Additionally, the XperienCentral node and the MySQL node must be able to handle the largest file on the file system: because MySQL stores the complete file in memory, out of memory errors can occur when handling large files. For more information, see the MySQL documentation for the max_allowed_packet property.
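
For example, to allow files up to 256 MB (an illustrative value; size it to the largest file in your web root), max_allowed_packet can be raised in the MySQL server configuration file:

    [mysqld]
    max_allowed_packet=256M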

MSSQL

For MSSQL version 2008 (10.0.5) and higher, the maximum file size that has been successfully tested is up to and including 2 GB. For more information, contact your GX Software consultant.

Oracle

The maximum file size that has been successfully tested is up to and including 4 GB. For more information, contact your GX consultant.





Set the File Distribution Service Maximum File Size

If you use the XperienCentral File Distribution Service to synchronize files between the nodes in a clustered environment, check and/or modify, in the Setup Tool, the maximum allowed file size for your database based on the settings described above. Follow these steps:

  1. Log in to the XperienCentral Setup Tool.
  2. Click [General].
  3. Locate the section file_distribution:



  4. If you want to change the default maximum allowed file size, enter a new value in the “Value” text box (in MB).

  5. Scroll down to the bottom of the page and click [Save Changes].


Synchronizing Uncontrolled Files

Uncontrolled files are files created by custom plugins. In order for these files to be distributed to all nodes in the cluster, you must add a Tomcat parameter or add a setting to the settings.xml file to enable the synchronization of uncontrolled files.



GX recommends that you do not create uncontrolled files in a clustered environment. If you do create uncontrolled files in a clustered environment, it is best to store them in the index. To enable the synchronization of uncontrolled files, add the following Tomcat parameter:

-Dwebmanager.clustering.scanner.interval=x

where “x” is the number of milliseconds between scans.


Or, to enable the synchronization of uncontrolled files in the settings.xml file, add the following line to the “Clustering Properties” section:

<webmanager.clustering.scanner.interval>x</webmanager.clustering.scanner.interval>

where “x” is the number of milliseconds between scans. The default value is 60000 ms (60 seconds). A value that is too high causes a delay in the synchronization of the files. A value that is too low can cause performance issues when the number of files and/or the file sizes are large.





Check the Firewall Settings

In many configurations, firewalls are placed between the nodes. Below is an overview of the connections necessary to ensure the proper functioning of XperienCentral in a clustered environment:


From | To | Protocol | Description
---- | -- | -------- | -----------
read/write + read-only node(s) | database | DB protocol | The connection between the read/write and read-only node(s) and the database server for performing queries.
read/write node | read-only node | XperienCentral File Distribution Service or a file synchronization mechanism such as Robocopy or Rsync | To synchronize files between all nodes in the clustered environment.
Internet | read-only node | HTTP | To handle page requests from the frontend.
Intranet + VPN | read/write node | HTTP | Access to the editors' read/write node(s).





Check the Cluster Lock Mechanism Setting

In a cluster of multiple read/write nodes, each read/write node has a lock on a particular task to prevent multiple nodes from interfering with each other, for example modifying the same index entry at the same time. Each read/write node regularly updates a special lock timestamp in the database related to the tasks it is currently performing. When a read/write node detects a lock on a task it wants to perform, it compares the lock timestamp with the current time to determine whether it is “stale”. If the timestamp is stale, the read/write node then waits for a set amount of time and then rechecks the lock timestamp. If the timestamp is still stale, the read/write node removes the lock from the task at which time another read/write node can put a lock on the task.

To define the time interval for the timestamp check, open the Setup Tool and locate the setting stale_cluster_lock_retry_time on the “General configuration” tab. The default time interval is 60 seconds. Modify the time interval to suit the conditions of your clustered deployment. In general, the setting should be higher than the longest time that your database is normally unavailable. GX recommends a time interval between 30 and 60 seconds.




Start all Nodes in the Cluster (Dual Read/Write Node Configuration)

Start XperienCentral on one of the read/write nodes. XperienCentral is completely started when the following message appears in the Tomcat log:

26-jan-2010 13:05:35 nl.gx.webmanager.startup.impl.Startup start
INFO: XperienCentral started successfully in x ms

After the first read/write node has started successfully, it begins the one-time process of uploading the files in the upload and upload_mm directories to the file store. Depending on the number and size of files on the website, this process can take some time.


It is not possible to start the other read/write node until this process is completed. After the file store has been populated with all files in the upload and upload_mm directories, the following message appears in the log:

INFO: Finished scanning the web roots for files for the File Distribution Service

Depending on how your log levels are set, you may not see this message.


It is now possible to start the second read/write node.


When the second read/write node starts, it will begin the process of synchronizing files from the central store. Until that process is complete, it will not be available. Once the synchronization is complete, start the read-only node(s).





Start all Nodes in the Cluster (Single Read/Write Node Configuration)

Start XperienCentral on the read/write node. Be sure to wait until XperienCentral has completely started on the read/write node before starting the read-only node(s). XperienCentral is completely started when the following message appears in the Tomcat log:

[date] [time] nl.gx.webmanager.startup.impl.Startup start
INFO: XperienCentral started successfully in x ms

If you are using the XperienCentral File Distribution Service to synchronize files between the nodes, the File Distribution Service begins the process of uploading the files in the upload and upload_mm directories to the file store. Depending on the number and size of files on the website, this process can take some time.


It is not possible to start the read-only node(s) until this process is completed.


After the file store has been populated with all files in the upload and upload_mm directories, the following message will appear in the log. Depending on how your log levels are set, you may not see this message:

INFO: Finished scanning the web roots for files for the File Distribution Service

You can now start the read-only node(s).

