This chapter provides instructions for configuring Coherence*Extend. The instructions provide basic setup and do not represent a complete configuration reference. In addition, refer to the platform-specific parts of this guide for additional configuration instructions. For a complete Java example that also includes configuration and setup, see Chapter 4, "Building Your First Extend Client."
This chapter includes the following sections:
Coherence*Extend requires configuration on both the client side and the cluster side. On the cluster side, extend proxy services are set up to accept client requests. Proxy services provide access to cache service instances and invocation service instances that run on the cluster. On the client side, remote cache services and remote invocation services are configured and are used by clients to access cluster data through the extend proxy service. Extend clients and extend proxy services communicate using TCP/IP.
Extend proxy services are configured in a cache configuration deployment descriptor. This deployment descriptor is often referred to as the cluster-side cache configuration file. It is the same cache configuration file that is used to set up caches on the cluster. Extend clients are also configured using a cache configuration deployment descriptor. This deployment descriptor is deployed with the client and is often referred to as the client-side cache configuration file. See Oracle Coherence Developer's Guide for detailed information about the cache configuration deployment descriptor.
A Coherence cluster must include an extend proxy service to accept extend client connections and must include a cache that is used by clients to retrieve and store data. Both the extend proxy service and caches are configured in the cluster's cache configuration deployment descriptor. Extend proxy services and caches are started as part of a cache server (DefaultCacheServer) process.
The following topics are included in this section:
The extend proxy service (ProxyService) is a cluster service that allows extend clients to access a Coherence cluster using TCP/IP. A proxy service includes proxies for two types of cluster services: the CacheService cluster service, which is used by clients to access caches, and the InvocationService cluster service, which is used by clients to execute Invocable objects on the cluster.
The following topics are included in this section:
Extend proxy services are configured within a <caching-schemes> node using the <proxy-scheme> element. The <proxy-scheme> element has a <tcp-acceptor> child element that includes the address (IP or DNS name) and port that an extend proxy service listens to for TCP/IP client communication. See the "proxy-scheme" element reference in the Oracle Coherence Developer's Guide for a complete list and description of all <proxy-scheme> subelements.
Example 3-1 defines a proxy service named ExtendTcpProxyService that is set up to listen for client requests on a TCP/IP ServerSocket that is bound to 192.168.1.5 and port 9099. Both the cache and invocation cluster service proxies are enabled for client requests. In addition, the <autostart> element is set to true so that the service starts automatically when a cluster node starts.
Example 3-1 Extend Proxy Service Configuration
...
<caching-schemes>
<proxy-scheme>
<service-name>ExtendTcpProxyService</service-name>
<acceptor-config>
<tcp-acceptor>
<local-address>
<address>192.168.1.5</address>
<port>9099</port>
</local-address>
</tcp-acceptor>
</acceptor-config>
<proxy-config>
<cache-service-proxy>
<enabled>true</enabled>
</cache-service-proxy>
<invocation-service-proxy>
<enabled>true</enabled>
</invocation-service-proxy>
</proxy-config>
<autostart>true</autostart>
</proxy-scheme>
</caching-schemes>
...
Note: For clarity, the above example explicitly enables the cache and invocation cluster service proxies. However, both proxies are enabled by default and do not require a <cache-service-proxy> or <invocation-service-proxy> element to be included in the proxy scheme definition.
Multiple extend proxy service instances can be defined in order to support an expected number of client connections and to support fault tolerance and load balancing. Client connections are automatically balanced across proxy service instances. The algorithm used to balance connections depends on the load balancing strategy that is configured. See "Load Balancing Connections", for more information on load balancing.
To define multiple proxy service instances, include a proxy service definition in multiple cache servers and use the same service name for each proxy service. Proxy services that share the same service name are considered peers.
The following examples define two instances of the ExtendTcpProxyService proxy service that are set up to listen for client requests on TCP/IP ServerSockets bound to 192.168.1.5 and 192.168.1.6, respectively, on port 9099. The proxy service definition is included in each cache server's respective cache configuration file within the <proxy-scheme> element.
On cache server 1:
...
<caching-schemes>
<proxy-scheme>
<service-name>ExtendTcpProxyService</service-name>
<acceptor-config>
<tcp-acceptor>
<local-address>
<address>192.168.1.5</address>
<port>9099</port>
</local-address>
</tcp-acceptor>
</acceptor-config>
<autostart>true</autostart>
</proxy-scheme>
</caching-schemes>
...
On cache server 2:
...
<caching-schemes>
<proxy-scheme>
<service-name>ExtendTcpProxyService</service-name>
<acceptor-config>
<tcp-acceptor>
<local-address>
<address>192.168.1.6</address>
<port>9099</port>
</local-address>
</tcp-acceptor>
</acceptor-config>
<autostart>true</autostart>
</proxy-scheme>
</caching-schemes>
...
Multiple extend proxy services can be defined in order to provide different applications with their own proxies. Extend clients for a particular application can be directed toward specific proxies to provide a more predictable environment.
The following example defines two extend proxy services. ExtendTcpProxyService1 is set up to listen for client requests on a TCP/IP ServerSocket that is bound to 192.168.1.5 and port 9099. ExtendTcpProxyService2 is set up to listen for client requests on a TCP/IP ServerSocket that is bound to 192.168.1.5 and port 9098.
...
<caching-schemes>
<proxy-scheme>
<service-name>ExtendTcpProxyService1</service-name>
<acceptor-config>
<tcp-acceptor>
<local-address>
<address>192.168.1.5</address>
<port>9099</port>
</local-address>
</tcp-acceptor>
</acceptor-config>
<autostart>true</autostart>
</proxy-scheme>
<proxy-scheme>
<service-name>ExtendTcpProxyService2</service-name>
<acceptor-config>
<tcp-acceptor>
<local-address>
<address>192.168.1.5</address>
<port>9098</port>
</local-address>
</tcp-acceptor>
</acceptor-config>
<autostart>true</autostart>
</proxy-scheme>
</caching-schemes>
...
The cache service and invocation service proxies can be disabled within an extend proxy service definition. Both of these proxies are enabled by default and can be explicitly disabled if a client does not require a service.
Cluster service proxies are disabled by setting the <enabled> element to false within the <cache-service-proxy> and <invocation-service-proxy> elements, respectively.
The following example disables the invocation service proxy so that extend clients cannot execute Invocable objects within the cluster:
<proxy-scheme>
...
<proxy-config>
<invocation-service-proxy>
<enabled>false</enabled>
</invocation-service-proxy>
</proxy-config>
...
</proxy-scheme>
Likewise, the following example disables the cache service proxy to restrict extend clients from accessing caches within the cluster:
<proxy-scheme>
...
<proxy-config>
<cache-service-proxy>
<enabled>false</enabled>
</cache-service-proxy>
</proxy-config>
...
</proxy-scheme>
By default, extend clients are allowed to both read and write data to proxied NamedCache instances. The <read-only> element can be specified within a <cache-service-proxy> element to prohibit extend clients from modifying cached content on the cluster. For example:
<proxy-scheme>
...
<proxy-config>
<cache-service-proxy>
<read-only>true</read-only>
</cache-service-proxy>
</proxy-config>
...
</proxy-scheme>
By default, extend clients are not allowed to acquire NamedCache locks. The <lock-enabled> element can be specified within a <cache-service-proxy> element to allow extend clients to perform locking. For example:
<proxy-scheme>
...
<proxy-config>
<cache-service-proxy>
<lock-enabled>true</lock-enabled>
</cache-service-proxy>
</proxy-config>
...
</proxy-scheme>
If client-side locking is enabled and a client application uses the NamedCache.lock() and unlock() methods, it is important that a member-based (rather than thread-based) locking strategy is configured when using a partitioned or replicated cache. The locking strategy is configured using the <lease-granularity> element when defining cluster-side caches. A granularity value of thread (the default setting) means that locks are held by the thread that obtained them and can be released only by that thread. A granularity value of member means that locks are held by a cluster node and can be released by any thread running on the cluster node that obtained the lock. Because the extend proxy clustered service uses a pool of threads to execute client requests concurrently, it cannot guarantee that the same thread executes subsequent requests from the same extend client.
The following example demonstrates setting the lease granularity to member for a partitioned cache:
...
<distributed-scheme>
<scheme-name>dist-default</scheme-name>
<lease-granularity>member</lease-granularity>
<backing-map-scheme>
<local-scheme/>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
...
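With this configuration in place, an extend client can use explicit locking. The following is a minimal sketch; it assumes a remote cache named dist-extend and a proxy scheme in which <lock-enabled> is set to true:
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class ExtendLockExample {
    public static void main(String[] args) {
        // Assumes the client-side cache configuration defines dist-extend
        NamedCache cache = CacheFactory.getCache("dist-extend");

        // Attempt to acquire an explicit lock on a key, waiting indefinitely (-1)
        if (cache.lock("key", -1)) {
            try {
                cache.put("key", "value");
            } finally {
                cache.unlock("key");
            }
        }

        CacheFactory.shutdown();
    }
}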
Extend clients read and write data to a cache on the cluster. Any of the cache types can store client data. For extend clients, the cache on the cluster must have the same name as the cache that is being used on the client side; see "Defining a Remote Cache". For more information on defining caches, see "Using Caches" in the Oracle Coherence Developer's Guide.
The following example defines a partitioned cache named dist-extend.
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
coherence-cache-config.xsd">
<caching-scheme-mapping>
<cache-mapping>
<cache-name>dist-extend</cache-name>
<scheme-name>dist-default</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<distributed-scheme>
<scheme-name>dist-default</scheme-name>
<backing-map-scheme>
<local-scheme/>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
</caching-schemes>
</cache-config>
Extend clients use the remote cache service and the remote invocation service to interact with a Coherence cluster. The services must be configured to connect to extend proxy services that run on the cluster. Both remote cache services and remote invocation services are configured in a cache configuration deployment descriptor that must be found on the classpath when an extend-based client application starts.
The following topics are included in this section:
A remote cache is a specialized cache service that routes cache operations to a cache on the cluster. The remote cache and the cache on the cluster must have the same name. Extend clients use the NamedCache interface as normal to get an instance of the cache. At run time, the cache operations are not executed locally but instead are sent using TCP/IP to an extend proxy service on the cluster. The fact that the cache operations are delegated to a cache on the cluster is transparent to the extend client.
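For example, a client might obtain and use the remote cache as follows. This is a minimal sketch; it assumes a remote cache named dist-extend configured as shown in Example 3-2 below:
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class RemoteCacheClient {
    public static void main(String[] args) {
        // Obtaining the cache starts the remote cache service, which opens a
        // TCP/IP connection to the extend proxy defined in the configuration
        NamedCache cache = CacheFactory.getCache("dist-extend");

        // These operations are delegated to the dist-extend cache on the cluster
        cache.put("key", "value");
        System.out.println(cache.get("key"));

        CacheFactory.shutdown();
    }
}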
A remote cache is defined within a <caching-schemes> node using the <remote-cache-scheme> element. A <tcp-initiator> element is used to define the address (IP or DNS name) and port of the extend proxy service on the cluster to which the client connects. See the "remote-cache-scheme" element reference in the Oracle Coherence Developer's Guide for a complete list and description of all <remote-cache-scheme> subelements.
Example 3-2 defines a remote cache named dist-extend that connects to an extend proxy service that is listening on address 192.168.1.5 and port 9099. To use this remote cache, there must be a cache defined on the cluster that is also named dist-extend. See "Defining Caches for Use By Extend Clients" for more information on defining caches on the cluster.
Example 3-2 Remote Cache Definition
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
coherence-cache-config.xsd">
<caching-scheme-mapping>
<cache-mapping>
<cache-name>dist-extend</cache-name>
<scheme-name>extend-dist</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<remote-cache-scheme>
<scheme-name>extend-dist</scheme-name>
<service-name>ExtendTcpCacheService</service-name>
<initiator-config>
<tcp-initiator>
<remote-addresses>
<socket-address>
<address>192.168.1.5</address>
<port>9099</port>
</socket-address>
</remote-addresses>
<connect-timeout>10s</connect-timeout>
</tcp-initiator>
<outgoing-message-handler>
<request-timeout>5s</request-timeout>
</outgoing-message-handler>
</initiator-config>
</remote-cache-scheme>
</caching-schemes>
</cache-config>
Extend clients typically use remote caches as part of a near cache. In such scenarios, a local cache is used as a front cache and the remote cache is used as the back cache. For platform-specific instructions, see "Defining a Near Cache for C++ Clients" and "Defining a Near Cache for .NET Clients".
The following example creates a near cache that uses a local cache and a remote cache.
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
coherence-cache-config.xsd">
<caching-scheme-mapping>
<cache-mapping>
<cache-name>dist-extend-near</cache-name>
<scheme-name>extend-near</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<near-scheme>
<scheme-name>extend-near</scheme-name>
<front-scheme>
<local-scheme>
<high-units>1000</high-units>
</local-scheme>
</front-scheme>
<back-scheme>
<remote-cache-scheme>
<scheme-ref>extend-dist</scheme-ref>
</remote-cache-scheme>
</back-scheme>
<invalidation-strategy>all</invalidation-strategy>
</near-scheme>
<remote-cache-scheme>
<scheme-name>extend-dist</scheme-name>
<service-name>ExtendTcpCacheService</service-name>
<initiator-config>
<tcp-initiator>
<remote-addresses>
<socket-address>
<address>localhost</address>
<port>9099</port>
</socket-address>
</remote-addresses>
<connect-timeout>10s</connect-timeout>
</tcp-initiator>
<outgoing-message-handler>
<request-timeout>5s</request-timeout>
</outgoing-message-handler>
</initiator-config>
</remote-cache-scheme>
</caching-schemes>
</cache-config>
A remote invocation scheme defines an invocation service that is used by clients to execute tasks on the remote Coherence cluster. Extend clients use the InvocationService interface as normal. At run time, a TCP/IP connection is made to an extend proxy service and an InvocationService implementation is returned that executes synchronous Invocable tasks within the remote cluster JVM to which the client is connected.
Remote invocation schemes are defined within a <caching-schemes> node using the <remote-invocation-scheme> element. A <tcp-initiator> element is used to define the address (IP or DNS name) and port of the extend proxy service on the cluster to which the client connects. See the "remote-invocation-scheme" element reference in the Oracle Coherence Developer's Guide for a complete list and description of all <remote-invocation-scheme> subelements.
Example 3-3 defines a remote invocation scheme that is called ExtendTcpInvocationService and connects to an extend proxy service that is listening on address 192.168.1.5 and port 9099.
Example 3-3 Remote Invocation Scheme Definition
...
<caching-schemes>
<remote-invocation-scheme>
<scheme-name>extend-invocation</scheme-name>
<service-name>ExtendTcpInvocationService</service-name>
<initiator-config>
<tcp-initiator>
<remote-addresses>
<socket-address>
<address>192.168.1.5</address>
<port>9099</port>
</socket-address>
</remote-addresses>
<connect-timeout>10s</connect-timeout>
</tcp-initiator>
<outgoing-message-handler>
<request-timeout>5s</request-timeout>
</outgoing-message-handler>
</initiator-config>
</remote-invocation-scheme>
</caching-schemes>
...
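A client could then use the service to execute a task in the cluster. The following is a minimal sketch; EchoTask is a hypothetical class used for illustration and must also be available on the cluster (and registered for serialization, for example in a POF configuration, if POF is used):
import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.InvocationService;
import java.util.Map;

public class RemoteInvocationClient {
    // Hypothetical task for illustration; it must also be on the cluster-side classpath
    public static class EchoTask extends AbstractInvocable {
        public void run() {
            setResult("executed on " + CacheFactory.getCluster().getLocalMember());
        }
    }

    public static void main(String[] args) {
        InvocationService service = (InvocationService)
                CacheFactory.getService("ExtendTcpInvocationService");

        // For extend clients, the task runs on the proxy JVM to which the
        // client is connected; pass null for the member set
        Map results = service.query(new EchoTask(), null);
        System.out.println(results);

        CacheFactory.shutdown();
    }
}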
Remote cache schemes and remote invocation schemes can include multiple extend proxy service addresses to ensure a client can always connect to the cluster. The algorithm used to balance connections depends on the load balancing strategy that is configured. See "Load Balancing Connections", for more information on load balancing.
To configure multiple addresses, add additional <socket-address> child elements within the <tcp-initiator> element of a <remote-cache-scheme> or <remote-invocation-scheme> node as required. The following example defines two extend proxy addresses for a remote cache scheme. See "Defining Multiple Proxy Service Instances", for instructions on setting up multiple proxy addresses.
...
<caching-schemes>
<remote-cache-scheme>
<scheme-name>extend-dist</scheme-name>
<service-name>ExtendTcpCacheService</service-name>
<initiator-config>
<tcp-initiator>
<remote-addresses>
<socket-address>
<address>192.168.1.5</address>
<port>9099</port>
</socket-address>
<socket-address>
<address>192.168.1.6</address>
<port>9099</port>
</socket-address>
</remote-addresses>
</tcp-initiator>
</initiator-config>
</remote-cache-scheme>
</caching-schemes>
...
When a Coherence*Extend service detects that the connection between the client and cluster has been severed (for example, due to a network, software, or hardware failure), the Coherence*Extend client service implementation (that is, CacheService or InvocationService) dispatches a MemberEvent.MEMBER_LEFT event to all registered MemberListeners and the service is stopped. For cases where the application calls CacheFactory.shutdown(), the service implementation dispatches a MemberEvent.MEMBER_LEAVING event followed by a MemberEvent.MEMBER_LEFT event. In both cases, if the client application subsequently attempts to use the service, the service automatically restarts itself and attempts to reconnect to the cluster. If the connection is successful, the service dispatches a MemberEvent.MEMBER_JOINED event; otherwise, an irrecoverable error exception is thrown to the client application.
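A client can observe these events by registering a MemberListener with the remote service. The following is a minimal sketch that assumes a remote cache named dist-extend:
import com.tangosol.net.CacheFactory;
import com.tangosol.net.MemberEvent;
import com.tangosol.net.MemberListener;
import com.tangosol.net.NamedCache;

public class DisconnectListenerExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("dist-extend");

        // Register a listener with the underlying remote cache service to be
        // notified when the extend connection is dropped or re-established
        cache.getCacheService().addMemberListener(new MemberListener() {
            public void memberJoined(MemberEvent evt) {
                System.out.println("Connected to the cluster: " + evt);
            }
            public void memberLeaving(MemberEvent evt) {
                System.out.println("Disconnecting from the cluster: " + evt);
            }
            public void memberLeft(MemberEvent evt) {
                System.out.println("Connection to the cluster lost: " + evt);
            }
        });
    }
}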
A Coherence*Extend service has several mechanisms for detecting dropped connections. Some are inherent to the underlying TCP/IP protocol, whereas others are implemented by the service itself. The latter mechanisms are configured within the <outgoing-message-handler> element.
The <request-timeout> element is the primary mechanism used to detect dropped connections. When a service sends a request to the remote cluster and does not receive a response within the request timeout interval, the service assumes that the connection has been dropped.
WARNING:
If a <request-timeout> value is not specified, a Coherence*Extend service uses an infinite request timeout. In general, this is not a recommended configuration, as it could result in an unresponsive application. For most use cases, specify a reasonable finite request timeout.