Commit ef18c2ef authored by Tom Evans, committed by tevans

OF-205: Updated Hazelcast clustering plugin to version 2.4.1

git-svn-id: http://svn.igniterealtime.org/svn/repos/openfire/trunk@13378 b35dd754-fafc-0310-a699-88a17e54d16e
parent 384b403a
@@ -26,7 +26,7 @@
border-bottom : 1px #ccc solid;
padding-bottom : 2px;
}
TT {
font-family : courier new;
font-weight : bold;
@@ -44,6 +44,12 @@
Hazelcast Clustering Plugin Changelog
</h1>
+<p><b>1.0.1</b> -- December 14, 2012</p>
+<ul>
+<li>Upgraded Hazelcast to version 2.4.1.</li>
+</ul>
<p><b>1.0.0</b> -- September 22, 2012</p>
<ul>
@@ -95,7 +95,7 @@
This defines a distributed map with a local (near cache) component,
suitable for stable caches having frequent reads and relatively
few updates. The cluster-wide limit for items in the map is
-10000, with up to 1000 items available in the local cache. Items
+100000, with up to 1000 items available in the local cache. Items
in the distributed map will be evicted after an hour of idle time,
and items in the local cache(s) will be evicted after 10 minutes
of idle time.
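Pulled together, a map of that shape would look roughly like this in Hazelcast 2.x syntax (a sketch only; the map name is a placeholder, and the values are the ones described above):

    <map name="Example Near-Cached Map">
        <!-- cluster-wide cap of 100000 entries; least-recently-used
             entries are evicted once the cap is reached -->
        <max-size policy="cluster_wide_map_size">100000</max-size>
        <eviction-policy>LRU</eviction-policy>
        <!-- entries idle for an hour leave the distributed map -->
        <max-idle-seconds>3600</max-idle-seconds>
        <near-cache>
            <!-- up to 1000 entries held locally, dropped after
                 10 minutes of idle time -->
            <max-size>1000</max-size>
            <max-idle-seconds>600</max-idle-seconds>
            <eviction-policy>LRU</eviction-policy>
            <invalidate-on-change>true</invalidate-on-change>
        </near-cache>
    </map>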
@@ -149,7 +149,7 @@
Any integer between 0 and Integer.MAX_VALUE. 0 means
Integer.MAX_VALUE. Default is 0.
-->
-<max-size policy="cluster_wide_map_size">10000</max-size>
+<max-size policy="cluster_wide_map_size">100000</max-size>
<!--
When max. size is reached, specified percentage of
the map will be evicted. Any integer between 0 and 100.
@@ -295,14 +295,6 @@
<async-backup-count>5</async-backup-count>
<read-backup-data>true</read-backup-data>
</map>
<map name="Published Items">
<backup-count>1</backup-count>
<async-backup-count>5</async-backup-count>
<read-backup-data>true</read-backup-data>
<max-size>10000</max-size>
<time-to-live-seconds>900</time-to-live-seconds>
<eviction-policy>LRU</eviction-policy>
</map>
<!--
Partitioned Openfire caches; entries copied to a single backup node and
replicated as needed in each node using near-cache configuration.
@@ -317,7 +309,7 @@
<max-size policy="cluster_wide_map_size">100000</max-size>
<eviction-percentage>10</eviction-percentage>
<near-cache>
-<max-size>10000</max-size>
+<max-size>1000</max-size>
<max-idle-seconds>1800</max-idle-seconds>
<eviction-policy>LRU</eviction-policy>
<invalidate-on-change>true</invalidate-on-change>
@@ -328,11 +320,11 @@
<read-backup-data>true</read-backup-data>
<max-idle-seconds>1800</max-idle-seconds>
<eviction-policy>LRU</eviction-policy>
-<max-size policy="cluster_wide_map_size">10000</max-size>
+<max-size policy="cluster_wide_map_size">100000</max-size>
<eviction-percentage>10</eviction-percentage>
<near-cache>
<max-size>1000</max-size>
-<max-idle-seconds>600</max-idle-seconds>
+<max-idle-seconds>300</max-idle-seconds>
<eviction-policy>LRU</eviction-policy>
<invalidate-on-change>true</invalidate-on-change>
</near-cache>
@@ -342,7 +334,7 @@
<read-backup-data>true</read-backup-data>
<max-idle-seconds>1800</max-idle-seconds>
<eviction-policy>LRU</eviction-policy>
-<max-size policy="cluster_wide_map_size">10000</max-size>
+<max-size policy="cluster_wide_map_size">100000</max-size>
<eviction-percentage>10</eviction-percentage>
<near-cache>
<max-size>1000</max-size>
@@ -355,12 +355,16 @@
<backup-count>1</backup-count>
<read-backup-data>true</read-backup-data>
<max-idle-seconds>1800</max-idle-seconds>
</map>
<map name="Published Items">
<backup-count>1</backup-count>
<read-backup-data>true</read-backup-data>
<max-size>100000</max-size>
<time-to-live-seconds>900</time-to-live-seconds>
<eviction-policy>LRU</eviction-policy>
<max-size policy="cluster_wide_map_size">10000</max-size>
<eviction-percentage>10</eviction-percentage>
<near-cache>
<max-size>1000</max-size>
-<max-idle-seconds>600</max-idle-seconds>
+<max-idle-seconds>60</max-idle-seconds>
<eviction-policy>LRU</eviction-policy>
<invalidate-on-change>true</invalidate-on-change>
</near-cache>
After you have licensed and downloaded Coherence EE from Oracle, place
the following jar files in this folder:
coherence.jar
coherence-work.jar
To build the clustering plugin, issue the following command from
the Openfire (source) /build/ folder:
$OPENFIRE_SRC/build> ant -Dplugin=clustering plugin
Also note that due to classpath loading order, it may be necessary to
either remove the coherence-cache-config.xml file from the Coherence
runtime JAR, or rename the plugin-clustering.jar file to force it to
load before coherence.jar (e.g. "clustering-plugin.jar" or similar).
In order to run Oracle Coherence in production mode, you will need to
secure licensing for the Enterprise Edition (EE) of Coherence. While
clustered caching for Openfire is available in the Standard Edition (SE),
per the Oracle Fusion licensing docs the InvocationService (which is
used by Openfire to distribute tasks among the cluster members) is only
available in EE or Grid Edition (GE).
Note that by default Coherence runs as GE in development mode. You can change
these settings by overriding the following Java system properties via
/etc/sysconfig/openfire (RPM) or openfired.vmoptions (Windows):
-Dtangosol.coherence.edition=EE
-Dtangosol.coherence.mode=prod
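On an RPM install this is typically a single line in /etc/sysconfig/openfire;
a sketch, assuming the stock init script passes OPENFIRE_OPTS through to the JVM:

    OPENFIRE_OPTS="-Dtangosol.coherence.edition=EE -Dtangosol.coherence.mode=prod"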
The current Coherence release is version 3.7.1.
\ No newline at end of file
@@ -5,7 +5,7 @@
<name>${plugin.name}</name>
<description>${plugin.description}</description>
<author>Tom Evans</author>
-<version>1.0.0</version>
-<date>09/10/2012</date>
+<version>1.0.1</version>
+<date>12/14/2012</date>
<minServerVersion>3.7.2</minServerVersion>
</plugin>
@@ -57,11 +57,11 @@ servers together in a cluster. By running Openfire in a cluster, you can
distribute the connection load among several servers, while also providing
failover in the event that one of your servers fails. This plugin is a
drop-in replacement for the original Openfire clustering plugin, using the
open source <a href="http://www.hazelcast.com">Hazelcast</a> data distribution
framework in lieu of an expensive proprietary third-party product.
</p>
<p>
-The current Hazelcast release is version 2.3.1.
+The current Hazelcast release is version 2.4.1.
</p>
<h2>Installation</h2>
<p>
@@ -73,19 +73,19 @@ remove the clustering plugin before installing Hazelcast into your Openfire inst
<p>
To create an Openfire cluster, you will need at least two separate Openfire servers,
and each server must have the Hazelcast plugin installed. By default, the servers
will discover each other by exchanging UDP (multicast) packets via a configurable
IP address and port, but other initialization options are available if your network
does not support multicast communication (see "Configuration" below).
</p>
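For example, where multicast is blocked, Hazelcast 2.x accepts a static member
list in the join section of its XML configuration; a minimal sketch (the member
addresses are placeholders):

    <join>
        <!-- disable the default multicast discovery -->
        <multicast enabled="false"/>
        <!-- enumerate the cluster members explicitly -->
        <tcp-ip enabled="true">
            <member>192.168.1.10</member>
            <member>192.168.1.11</member>
        </tcp-ip>
    </join>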
<p>
In addition, you will need some form of load balancer to distribute the connection
load among the members of your Openfire cluster. There are several commercial and
open source alternatives for this, including the Apache web server (httpd) plus
<a href="http://httpd.apache.org/docs/current/mod/mod_proxy_balancer.html">mod_proxy_balancer</a>
(if you are using the HTTP/BOSH Openfire connector). Some popular solutions include the
<a href="http://www.f5.com/products/big-ip/big-ip-local-traffic-manager/overview/">F5 LTM</a>
(commercial) and <a href="http://haproxy.1wt.eu/">HAProxy</a> (open source), among
<a href="http://en.wikipedia.org/wiki/Load_balancing_%28computing%29">many others</a>.
<a href="http://en.wikipedia.org/wiki/Load_balancing_%28computing%29">many others</a>.
</p>
<h2>Configuration</h2>
<p>
@@ -110,7 +110,7 @@ directory, or in the classpath of your own custom plugin.</li>
<p>
The Hazelcast plugin uses the <a href="http://www.hazelcast.com/docs/2.3/manual/single_html/#Config">
XML configuration builder</a> to initialize the cluster from the XML file described above.
By default the cluster members will attempt to discover each other via multicast at the
following location:
<ul>
<li>IP Address: 224.2.2.3</li>
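In the plugin's Hazelcast XML configuration this corresponds roughly to the
following join section (a sketch; 54327 is Hazelcast's stock multicast port
and is assumed here):

    <join>
        <multicast enabled="true">
            <multicast-group>224.2.2.3</multicast-group>
            <multicast-port>54327</multicast-port>
        </multicast>
    </join>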