Setup/Configuration

This page describes how to set up the cluster manager, and which configuration options are available.

To configure an OpenCms cluster you need two or more OpenCms instances installed on separate machines, all accessing the same database. To achieve this, install each instance individually, distinguishing between the setup of the first instance and that of the others:

Install the first machine as if it were a standalone installation, including the OCEE module; alternatively, you can import the module into a previously installed OpenCms instance. Then edit the configuration files, located under the ${OPENCMS_HOME}/WEB-INF/config/ folder, as follows:

Add the cluster configuration class to the opencms.xml configuration file, under the <configuration> node, as follows:

<configuration>
    ...
    <config class="org.opencms.ocee.cluster.CmsClusterConfiguration"/>
</configuration>

Change the memory monitor class in the opencms-system.xml configuration file as follows:

...
<memorymonitor class="org.opencms.ocee.cluster.CmsClusterMemoryMonitor">
...

Create a new ocee-cluster.xml configuration file, following the instructions in the next section.

Install the other instances keeping in mind the following instructions:

  • Before installing, but after unzipping the WAR file:
    Delete all files under your ${OPENCMS_HOME}/WEB-INF/packages/modules/ folder. Alternatively, you can deselect all modules in the module selection dialog.
  • Then begin to install normally, and on the database setup dialog provide the same database connection data as for the first instance, but disable all options related to creating or dropping databases and tables (the exact options vary depending on the selected database). Ignore the warning on the next screen.
  • Skip the empty module selection dialog.
  • Finish the setup, but be aware that the installation will not work properly until
    the servlet container has been restarted.
  • Copy the opencms-modules.xml and ocee-cluster.xml files from a previously installed instance to this one (see the example commands after this list). Keep in mind that every time a module is added, removed or modified, you have to distribute the opencms-modules.xml file to all servers.
  • Apply the same changes to the opencms.xml and opencms-system.xml files as described for the first instance above.
  • Copy all files from the WEB-INF/lib folder of the previously installed instance to the new one, replacing any older versions.
  • Restart the servlet container.
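
To illustrate the copy steps above, the commands on the new instance might look like this on a Unix-like system (the host name first-server, the remote path and the scp user are placeholders; adapt them to your installation):

scp opencms@first-server:/path/to/first-instance/WEB-INF/config/opencms-modules.xml ${OPENCMS_HOME}/WEB-INF/config/
scp opencms@first-server:/path/to/first-instance/WEB-INF/config/ocee-cluster.xml ${OPENCMS_HOME}/WEB-INF/config/
scp "opencms@first-server:/path/to/first-instance/WEB-INF/lib/*.jar" ${OPENCMS_HOME}/WEB-INF/lib/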

After installing all OpenCms instances, you should synchronize the exported resources by using the “Update Export Points” tool of the Administration View described in section Cluster Manager Administration.

The Cluster Manager configuration is maintained in the ${OPENCMS_HOME}/WEB-INF/config/ocee-cluster.xml XML file. This section describes the configuration options for this file. Be aware that the cluster configuration has to be the same on all OpenCms instances in the cluster.

  • Element: /opencms/cluster/cluster-name
    Description: The name of the cluster. If you have multiple OpenCms clusters running in the same LAN using UDP communication (see below), the cluster name must be different for each cluster.
  • Element: /opencms/cluster/communication-config
    Description: The cluster communication method. Either tcp, udp or custom. See the section "Cluster communication methods" for more information.
  • Element: /opencms/cluster/passphrase
    Description: A passphrase shared by all cluster servers, used to secure cluster event communication.
    Default Value: This is the secret cluster event passphrase!
  • Element: /opencms/cluster/eventhandler
    Description: The fully qualified name of the class that handles cluster events.
    Default Value: org.opencms.ocee.cluster.CmsClusterEventHandler
  • Element: /opencms/cluster/eventstore
    Description: Optional configuration of cluster event store and error/warning mail settings.
  • Element: /opencms/cluster/servers
    Description: A list of all configured servers in the cluster. See next section.
  • Element: /opencms/cluster/wp-server
    Description: The name of the workplace server; this is the only server that is able to
    forward events into the cluster.

Within a cluster, a single server may be unreachable for a short period of time due to maintenance or network glitches. To ensure that such a server still processes all relevant events, the other servers can store the event data for a certain amount of time and retry sending it once the server becomes accessible again. If the event store is enabled, a scheduled job periodically checks for inaccessible servers and sends them the stored events. A server that cannot be reached after the configured number of retries is marked as dead and will not receive any events until the cluster is reinitialized.

If configured, the cluster module sends warning emails when a server was inaccessible for a short period of time but could be reached again later.

If configured, the cluster module sends error emails when a server could not be reached after the configured number of retries.

The cluster event store configuration has the following structure:

  • Element: /opencms/cluster/eventstore/enabled
    Description: Boolean value that enables/disables the event store.
    Default Value: false
  • Element: /opencms/cluster/eventstore/connection-retries
    Description: The number of connection retries before an inaccessible server is marked
    as dead.
    Default Value: 3
  • Element: /opencms/cluster/eventstore/mailsettings/warnings-enabled
    Description: Boolean value that enables/disables the warning emails.
    Default Value: false
  • Element: /opencms/cluster/eventstore/mailsettings/errors-enabled
    Description: Boolean value that enables/disables the error emails.
    Default Value: false
  • Element: /opencms/cluster/eventstore/mailsettings/recipients
    Description: A comma separated list of email addresses the warning and error emails
    will be sent to.

Every server in the cluster has to be configured in the configuration file, under a separate /opencms/cluster/servers/server node with the following structure:

  • Element: /opencms/cluster/servers/server/name
    Description: A descriptive name for the server, e.g. the domain name. It must match the "server.name" property in the opencms.properties file of that server.
  • Element: /opencms/cluster/servers/server/host
    Description: The IP address of the machine. Can also be a host name, but it will be resolved to an IP address only once, at startup time.
  • Element: /opencms/cluster/servers/server/event-source
    Description: Boolean value that allows non-workplace servers to forward events.
    Default Value: false (in most cases the only meaningful value)
  • Element: /opencms/cluster/servers/server/delay
    Description: Number of seconds this server should delay the processing of cache relevant events.
    Default Value: 0
  • Element: /opencms/cluster/servers/server/ugc-server
    Description: Boolean flag ('true' or 'false') indicating whether this server is used for user generated content (OpenCms 9.5 or later) and thus should send the events relevant to user generated content (resource created/modified/deleted, project published).
    Default Value: false
  • Element: /opencms/cluster/servers/server/replication-group
    Description: Optional string that can be configured to improve the cluster performance when also using the OCEE Replication feature, especially in large clusters. Normally, events are sent to all cluster servers after replication finishes, but when this option is set, events are only forwarded to cluster servers with the same replication group as that of the cluster server selected for replication. Cluster servers should be assigned the same replication group if they access the same database.
  • Element: /opencms/cluster/servers/server/user-events
    Description: Specifies which user modification events should be broadcast to the given cluster server. Valid values are [all, additionalinfo, coredata, none].
    Default Value: all
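
For illustration, a server entry that uses the optional settings described above might look like this (the name, host and group values are examples only, not recommendations):

            <server>
                <name>opencms3</name>
                <host>ugc.mycompany.com</host>
                <!-- delay processing of cache relevant events by 5 seconds -->
                <delay>5</delay>
                <!-- this server handles user generated content and sends the related events -->
                <ugc-server>true</ugc-server>
                <!-- only forward post-replication events to servers in the same group -->
                <replication-group>group1</replication-group>
                <!-- only broadcast core user data changes to this server -->
                <user-events>coredata</user-events>
            </server>

Remember that the <name> value has to match the server.name property in that server's opencms.properties file, e.g. server.name=opencms3.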

Sample configuration

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE opencms SYSTEM "http://www.alkacon.com/dtd/6.0/ocee-cluster.dtd">

<opencms>
    <cluster>
        <cluster-name>OCEE cluster</cluster-name>
        <communication-config>udp</communication-config>
        <passphrase>This is the secret cluster passphrase!</passphrase>
        <eventstore>
            <enabled>true</enabled>
            <connection-retries>3</connection-retries>
            <mailsettings>
                <warnings-enabled>true</warnings-enabled>
                <errors-enabled>true</errors-enabled>
                <recipients>your.address@company.com</recipients>
            </mailsettings>
        </eventstore>
        <servers>
            <server>
                <name>opencms1</name>
                <host>workplace.mycompany.com</host>
                <event-source>true</event-source>
            </server>
            <server>
                <name>opencms2</name>
                <host>live.mycompany.com</host>
            </server>
        </servers>
        <wp-server>opencms1</wp-server>
    </cluster>
</opencms> 

The cluster manager uses the JGroups library for cluster network communication. Depending on the <communication-config> setting from the configuration, OpenCms can either use a predefined JGroups configuration for TCP or UDP, or a custom JGroups configuration file. The values are:

  • tcp: Cluster members will contact each other over TCP. The IP address given in the <host> setting for each server is used to address that server. If a host name is given instead of an IP address, OpenCms resolves it to an IP address once at startup and continues using that IP address. The port number used for communication is 7800.
  • udp: Cluster members will contact each other using UDP multicast. For this setting, the multicast address 228.8.8.8 and UDP port 45588 are used. If you use this setting, make sure that the cluster servers are in the same network and can reach each other via multicast.
  • custom: A custom JGroups protocol stack XML configuration will be loaded from ${OPENCMS_HOME}/WEB-INF/config/ocee-cluster-communication.xml. This is an advanced option that should only be used if the predefined TCP and UDP configurations do not fit your setup. See the JGroups manual for further information.
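
If you choose the custom option, the ocee-cluster-communication.xml file must contain a JGroups protocol stack. The following is only a rough sketch of what such a file could look like, loosely based on the stock JGroups 4.x TCP stack; the protocols, hosts and parameters shown are illustrative and should be adjusted with the help of the JGroups manual:

<config xmlns="urn:org:jgroups">
    <!-- transport: plain TCP on port 7800, like the predefined tcp setting -->
    <TCP bind_port="7800"/>
    <!-- discovery: static list of cluster members (hosts taken from the sample configuration above) -->
    <TCPPING initial_hosts="workplace.mycompany.com[7800],live.mycompany.com[7800]"/>
    <MERGE3/>
    <FD_SOCK/>
    <FD_ALL/>
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK2 use_mcast_xmit="false"/>
    <UNICAST3/>
    <pbcast.STABLE/>
    <pbcast.GMS/>
    <MFC/>
    <FRAG2/>
</config>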

It is possible to test the OCEE cluster communication layer from the command line independently from OpenCms and without a servlet container.

To run the test class, copy the org.opencms.ocee.base.jar and the jgroups-4.0.13.Final.jar from the OCEE distribution's lib/ folder into a folder on each server of the cluster.  Start the test class on each by navigating into the folder with the JARs and running:

java -cp org.opencms.ocee.base.jar:jgroups-4.0.13.Final.jar org.opencms.ocee.cluster.CmsClusterCommunicationTester "[SERVER_NAME]" "udp"

This uses the UDP configuration type, i.e. messaging via UDP multicast, which is suitable for clusters where all nodes are located in the same LAN.

In case the cluster nodes are not located in the same LAN or the UDP configuration does not work, using the TCP configuration type is advised.
Start the test class like this:

java -cp org.opencms.ocee.base.jar:jgroups-4.0.13.Final.jar org.opencms.ocee.cluster.CmsClusterCommunicationTester "[SERVER_NAME]" "tcp" "[LOCAL_HOST_NAME]" "[REMOTE_HOST_NAME]" "[REMOTE_HOST_NAME]" ...

For the TCP configuration to work, the cluster nodes need to be able to access port 7800 on the other cluster nodes.
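
As a quick, optional connectivity check, you can probe port 7800 from one node to another with a standard tool such as netcat (the host name below is taken from the sample configuration; substitute your own):

nc -vz live.mycompany.com 7800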

In case the provided JGroups configuration variants do not work for you, or you have special requirements, you can use a custom configuration file. Place the custom configuration file, named "ocee-cluster-communication.xml", in the same folder as the JARs on each server and run the test with

java -cp org.opencms.ocee.base.jar:jgroups-4.0.13.Final.jar org.opencms.ocee.cluster.CmsClusterCommunicationTester "[SERVER_NAME]" "custom"

Once the test class is running on all involved servers, you should see output like:

View changed:
Server 'server one' using configuration type 'tcp'.
Server 'server two' using configuration type 'tcp'.
...

You can type in a message, which should show up on all connected servers like this:

Message from server one:
Hello World!

Type 'quit' to exit the test.