#
# README.txt
#
# Copyright 2001-2007 by Oracle. All rights reserved.
#
# Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
#
# This software is the confidential and proprietary information of
# Oracle Corporation. You shall not disclose such confidential and
# proprietary information and shall use it only in accordance with the
# terms of the license agreement you entered into with Oracle.
#
# This notice may not be removed or altered.
#

README
======

Synchronized-site Coherence*Extend Example

Contents
========

    * Overview
    * Prerequisites
    * Build Instructions
    * Running the Example

Overview
========

This example demonstrates how to use the features of Coherence*Extend to
allow one cluster to maintain a synchronized subset of one or more caches in
a remote cluster. Use cases for this capability include hot/hot disaster
recovery strategies and read-write access to a locally cached replica of
remote data.

Assume that there are two sites, one in Boston and the other in London, which
are connected via a WAN. Each site is part of a separate Coherence cluster
(i.e. Coherence unicast and multicast UDP traffic cannot be sent over the
WAN). Each cluster runs two distributed cache services, one that manages
"local" data and one that caches "remote" data. Storage-enabled members
running the "local" cache service work in concert to manage all data local to
the site, whereas storage-enabled members running the "remote" cache service
work in concert to maintain a distributed near cache of remote data. Storage-
disabled clients in each site can access either "local" or "remote" data.

All access to remote data is performed via a Coherence*Extend backing map.
Since Coherence*Extend fully supports the ObservableMap interface, the near
caches of remote data are kept in sync with the "master" copies maintained by
the remote cluster. Local data can be accessed and updated at "cluster-local"
speed. Once cached, remote data can be accessed at "cluster-local" speed.
Initial access and update of remote data (initiated by either site) are the
only operations that must traverse the WAN.
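As a rough illustration, a Coherence*Extend client connection to a remote
cluster is normally declared with a <remote-cache-scheme> element in the
cache configuration file. The host name and port below are placeholders, and
the configuration shipped with this example (which additionally wires the
remote cache in as a backing map of the "remote" cache service) may differ:

```xml
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <!-- caches named london-* are fetched from the remote cluster -->
      <cache-name>london-*</cache-name>
      <scheme-name>extend-remote</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <remote-cache-scheme>
      <scheme-name>extend-remote</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <!-- placeholder address of a proxy node in the remote site -->
              <address>london.example.com</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>
```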

If the WAN link ever fails, each site is still able to both read and write
its copy of the "master" data set. While disconnected, the Coherence*Extend
backing map maintains a delta map of all changes made to the replica. When
the WAN comes back up, a customizable distributed reconciliation policy is
automatically executed to resolve local changes against the "master" copy.
The default reconciliation policy simply resynchronizes the replica with the
master, but more advanced policies that take advantage of the delta map can
be implemented.
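The delta-map idea can be sketched in plain Java. The class below is purely
illustrative (it is not the example's actual implementation and uses no
Coherence APIs): while disconnected it records each local put or remove in a
delta map, and on reconnect it applies the default policy described above,
resynchronizing the replica from the master and discarding the delta:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative sketch of a disconnected-mode delta map. A REMOVED marker
 * records keys deleted while offline, so a smarter reconciliation policy
 * could distinguish removals from updates.
 */
public class DeltaMapSketch {
    private static final Object REMOVED = new Object();

    private final Map<Object, Object> replica = new HashMap<Object, Object>();
    private final Map<Object, Object> delta   = new HashMap<Object, Object>();
    private boolean connected = true;

    public void disconnect() { connected = false; }

    public void put(Object key, Object value) {
        replica.put(key, value);
        if (!connected) {
            delta.put(key, value);   // remember the offline change
        }
    }

    public void remove(Object key) {
        replica.remove(key);
        if (!connected) {
            delta.put(key, REMOVED); // remember the offline removal
        }
    }

    /**
     * Default reconciliation policy: discard local changes and
     * resynchronize the replica from the master copy.
     */
    public void reconnectAndResync(Map<Object, Object> master) {
        connected = true;
        replica.clear();
        replica.putAll(master);
        delta.clear();
    }

    public Map<Object, Object> getDelta()   { return delta; }
    public Map<Object, Object> getReplica() { return replica; }

    public static void main(String[] args) {
        DeltaMapSketch cache = new DeltaMapSketch();
        cache.put("k1", "v1");   // connected: nothing recorded in the delta
        cache.disconnect();
        cache.put("k2", "v2");   // disconnected: both changes are recorded
        cache.remove("k1");
        System.out.println("delta entries while offline: "
                + cache.getDelta().size());

        Map<Object, Object> master = new HashMap<Object, Object>();
        master.put("k1", "v1-master");
        cache.reconnectAndResync(master);
        System.out.println("replica after resync: " + cache.getReplica());
    }
}
```

A more advanced policy would walk the delta map on reconnect, replaying
offline puts and removes against the master instead of discarding them.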

Prerequisites
=============

  To build the example, you must have the following software installed:

    * J2SE SDK 1.4 or later  (http://java.sun.com/)
    * Apache Ant             (http://ant.apache.org/)
    * Oracle Coherence 3.3   (http://www.oracle.com/technology/products/coherence/index.html)

Build Instructions
==================

  * Update bin/set-env.sh to reflect your system environment.

  * Open a shell and execute the following command in the bin directory:

      ./ant build

  * To completely remove all build artifacts from your filesystem, run:

      ./ant clean

Running the Example
===================

  * Start the Boston cluster by executing the following scripts:

      (1) ./start-server boston
      (2) ./start-client boston

  * Start the London cluster by executing the following scripts:

      (1) ./start-server london
      (2) ./start-client london

  * To access local data in the Boston cluster, run the following command:

      Map (?): cache boston-test

    To access London data in the Boston cluster, run the following command:

      Map (?): cache london-test

  * To access local data in the London cluster, run the following command:

      Map (?): cache london-test

    To access Boston data in the London cluster, run the following command:

      Map (?): cache boston-test