This appendix provides a reference of the elements that can be used in a cache configuration deployment descriptor and includes a brief overview of the descriptor. See Chapter 11, "Configuring Caches," for details on how to configure caches and complete usage instructions.
The cache configuration deployment descriptor specifies the various types of caches that can be used within a cluster. The name and location of the descriptor is specified in the operational deployment descriptor and defaults to coherence-cache-config.xml. A sample configuration descriptor (packaged in coherence.jar) is used unless a custom coherence-cache-config.xml file is found within the application's classpath. It is recommended that all nodes within a cluster use identical cache configuration descriptors.
The cache configuration deployment descriptor is defined in the cache-config.dtd file, which is located in the root of the coherence.jar library. The descriptor should begin with the following DOCTYPE declaration:
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
The root element of the configuration descriptor is the <cache-config> element. All caches are defined within the root element.
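Putting the pieces described in this appendix together, a minimal descriptor has the following shape (the cache name pattern and scheme name below are illustrative, not required values):

```xml
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <!-- map every cache name to an example scheme -->
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>example-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <!-- define the scheme referenced above -->
  <caching-schemes>
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
```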
Note:
When deploying Coherence into environments where the default character set is EBCDIC rather than ASCII, make sure that this descriptor file is in ASCII format and is deployed into its runtime environment in binary format.
The following table lists all non-terminal elements that can be used within a cache configuration deployment descriptor.
Table B-1 Cache Configuration Elements
Used in: proxy-scheme
The acceptor-config element specifies the configuration information for a TCP/IP connection acceptor. The connection acceptor is used by a proxy service to enable Coherence*Extend clients to connect to the cluster and use the services offered by the cluster without having to join the cluster.
Table B-2 describes the elements you can define within the acceptor-config element.
Table B-2 acceptor-config Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <connection-limit> | Optional | The maximum number of simultaneous connections allowed by this connection acceptor. Valid values are positive integers and zero. A value of zero implies no limit. Default value is zero. |
| <outgoing-message-handler> | Optional | Specifies the configuration information used by the connection acceptor to detect dropped client-to-cluster connections. |
| <serializer> | Optional | Specifies the class configuration information for a com.tangosol.io.Serializer implementation used to serialize and deserialize user types. See the example following this table. |
| <tcp-acceptor> | Optional | Specifies the configuration information for a connection acceptor that enables Coherence*Extend clients to connect to the cluster over TCP/IP. |
| <use-filters> | Optional | Contains the list of filter names to be used by this connection acceptor. See the example following this table. |

For example, the following serializer configuration uses POF for serialization:

<serializer>
  <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
  <init-params>
    <init-param>
      <param-type>string</param-type>
      <param-value>my-pof-types.xml</param-value>
    </init-param>
  </init-params>
</serializer>

The following use-filters configuration specifies the gzip filter for this connection acceptor:

<use-filters>
  <filter-name>gzip</filter-name>
</use-filters>
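As a complete sketch, an acceptor-config is typically nested within a proxy-scheme. The service name, address, port, and connection limit below are illustrative values:

```xml
<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <!-- address and port the acceptor listens on -->
      <local-address>
        <address>localhost</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
    <!-- cap the number of simultaneous extend client connections -->
    <connection-limit>100</connection-limit>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>
```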
Used in: tcp-acceptor, remote-addresses
Contains the configuration information for an address factory that implements the com.tangosol.net.AddressProvider interface.
Table B-3 describes the subelements you can define within the address-provider element.
Table B-3 address-provider Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <class-name> | Optional | Specifies the fully qualified name of a class that implements the com.tangosol.net.AddressProvider interface. This element cannot be used together with the <class-factory-name> element. |
| <class-factory-name> | Optional | Specifies the fully qualified name of a factory class for creating address provider instances. The instances must implement the com.tangosol.net.AddressProvider interface. This element cannot be used together with the <class-name> element. |
| <method-name> | Optional | Specifies the name of a static factory method on the factory class which will perform object instantiation. |
| <init-params> | Optional | Specifies initialization parameters which are accessible by implementations that support the com.tangosol.run.xml.XmlConfigurable interface. |
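For example, an address provider can be configured with a custom class; com.example.MyAddressProvider below is a hypothetical class name standing in for any implementation of com.tangosol.net.AddressProvider:

```xml
<tcp-acceptor>
  <address-provider>
    <!-- hypothetical class; must implement com.tangosol.net.AddressProvider -->
    <class-name>com.example.MyAddressProvider</class-name>
  </address-provider>
</tcp-acceptor>
```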
Used in: external-scheme, paged-external-scheme.
The async-store-manager element adds asynchronous write capabilities to other store manager implementations. Supported store managers include:
custom-store-manager—allows definition of custom implementations of store managers
bdb-store-manager—uses Berkeley Database JE to implement an on-disk cache
lh-file-manager—uses a Coherence LH on-disk database cache
nio-file-manager—uses NIO to implement a memory-mapped file based cache
nio-memory-manager—uses NIO to implement an off-JVM-heap, in-memory cache
This store manager is implemented by the com.tangosol.io.AsyncBinaryStoreManager class.
Table B-4 describes the subelements you can define within the async-store-manager element.
Table B-4 async-store-manager Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <async-limit> | Optional | Specifies the maximum number of bytes that will be queued to be written asynchronously. Setting the value to zero does not disable the asynchronous writes; instead, it indicates that the implementation default for the maximum number of bytes is used. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K\|k\|M\|m]?[B\|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied: K or k for kilo, M or m for mega. If the value does not contain a factor, a factor of one is assumed. Valid values are any positive memory sizes and zero. Default value is 4MB. |
| <bdb-store-manager> | Optional | Configures the external cache to use Berkeley Database JE on-disk databases for cache storage. |
| <class-name> | Optional | Specifies a custom implementation of the async-store-manager. Any custom implementation must extend the com.tangosol.io.AsyncBinaryStoreManager class. |
| <custom-store-manager> | Optional | Configures the external cache to use a custom storage manager implementation. |
| <init-params> | Optional | Specifies initialization parameters, for use in custom async-store-manager implementations that support the com.tangosol.run.xml.XmlConfigurable interface. |
| <lh-file-manager> | Optional | Configures the external cache to use a Coherence LH on-disk database for cache storage. |
| <nio-file-manager> | Optional | Configures the external cache to use a memory-mapped file for cache storage. |
| <nio-memory-manager> | Optional | Configures the external cache to use an off-JVM-heap memory region for cache storage. |
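For example, an async-store-manager can wrap a Berkeley Database JE store manager so that writes to disk happen asynchronously. The scheme name, limit, and directory below are illustrative:

```xml
<external-scheme>
  <scheme-name>example-async-bdb</scheme-name>
  <async-store-manager>
    <!-- queue at most 8MB of pending asynchronous writes -->
    <async-limit>8M</async-limit>
    <bdb-store-manager>
      <directory>/tmp/bdb</directory>
    </bdb-store-manager>
  </async-store-manager>
</external-scheme>
```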
Used in: tcp-acceptor.
This element contains the collection of IP addresses of TCP/IP initiator hosts that are allowed to connect to the cluster using a TCP/IP acceptor. If this collection is empty, no constraints are imposed. Any number of host-address and host-range elements may be specified.
Table B-5 describes the subelements you can define within the authorized-hosts element.
Table B-5 authorized-hosts Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <host-address> | Optional | Specifies an IP address or hostname. If any are specified, only hosts with specified host-addresses or within the specified host-ranges will be allowed to join the cluster. |
| <host-range> | Optional | Specifies a range of IP addresses. If any are specified, only hosts with specified host-addresses or within the specified host-ranges will be allowed to join the cluster. |
| <host-filter> | Optional | Specifies class configuration information for a com.tangosol.util.Filter implementation that is used by the TCP/IP acceptor to determine whether to accept a particular client connection. |
The content override attributes xml-override and id can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document. See "Element Attributes".
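The following sketch restricts a TCP/IP acceptor to one explicit address and one address range; the addresses, and the from-address/to-address subelement names for the range, are illustrative assumptions:

```xml
<tcp-acceptor>
  <authorized-hosts>
    <!-- a single permitted initiator host -->
    <host-address>192.168.0.10</host-address>
    <!-- a permitted range of initiator hosts (assumed subelement names) -->
    <host-range>
      <from-address>192.168.0.20</from-address>
      <to-address>192.168.0.30</to-address>
    </host-range>
  </authorized-hosts>
</tcp-acceptor>
```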
Used in: distributed-scheme, optimistic-scheme, replicated-scheme
Specifies what type of cache will be used within the cache server to store the entries.
When using an overflow-based backing map, it is important that the corresponding backup-storage be configured for overflow (potentially using the same scheme as the backing-map). See "Partitioned Cache with Overflow" for an example configuration.
Note:
The partitioned subelement is used if and only if the parent element is the distributed-scheme. Table B-6 describes the subelements you can define within the backing-map-scheme element:
Table B-6 backing-map-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <partitioned> | Optional | Specifies whether the backing map itself is partitioned. It is respected only within a distributed-scheme. See Chapter 12, "Implementing Storage and Backing Maps." |
| <class-scheme> | Optional | Class schemes provide a mechanism for instantiating an arbitrary Java object for use by other schemes. The scheme which contains this element dictates what class or interface(s) must be extended. |
| <external-scheme> | Optional | External schemes define caches which are not JVM heap based, allowing for greater storage capacity. |
| <local-scheme> | Optional | Local cache schemes define in-memory "local" caches. Local caches are generally nested within other cache schemes, for instance as the front-tier of a near scheme. |
| <paged-external-scheme> | Optional | As with external-scheme, paged-external-scheme defines caches which are not JVM heap based, allowing for greater storage capacity. |
| <overflow-scheme> | Optional | The overflow-scheme defines a two-tier cache scheme where entries evicted from a size-limited front-tier overflow and are stored in a much larger back-tier cache. |
| <read-write-backing-map-scheme> | Optional | The read-write-backing-map-scheme defines a backing map which provides a size-limited cache of a persistent store. |
| <versioned-backing-map-scheme> | Optional | The versioned-backing-map-scheme is a backing map scheme which uses object versioning to determine what updates need to be written to the persistent store. |
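Drawing the overflow note above together with the table, the following sketch gives a distributed cache an overflow-based backing map (scheme name and the 10000-unit front-tier limit are illustrative):

```xml
<distributed-scheme>
  <scheme-name>example-overflow-backed</scheme-name>
  <backing-map-scheme>
    <overflow-scheme>
      <!-- fast, size-limited in-memory front tier -->
      <front-scheme>
        <local-scheme>
          <high-units>10000</high-units>
        </local-scheme>
      </front-scheme>
      <!-- larger disk-backed back tier -->
      <back-scheme>
        <external-scheme>
          <nio-file-manager/>
        </external-scheme>
      </back-scheme>
    </overflow-scheme>
  </backing-map-scheme>
</distributed-scheme>
```

When an overflow-based backing map is used, remember to configure the corresponding backup-storage for overflow as well (see "Partitioned Cache with Overflow").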
Used in: distributed-scheme.
The backup-storage element specifies the type and configuration of backup storage for a partitioned cache.
The following table describes the elements you can define within the backup-storage element.
Table B-7 backup-storage Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <type> | Required | Specifies the type of the storage used to hold the backup data. Legal values are: on-heap, off-heap, file-mapped, custom, and scheme. Default value is the value specified in the tangosol-coherence.xml operational descriptor. |
| <class-name> | Optional | Only applicable with the custom type. Specifies a class name for the custom storage implementation. If the class supports the com.tangosol.run.xml.XmlConfigurable interface, it is configured with the corresponding XML after instantiation. |
| <directory> | Optional | Only applicable with the file-mapped type. Specifies the path name for the directory that the disk persistence manager (com.tangosol.util.nio.MappedStoreManager) uses as the root to store files in. |
| <initial-size> | Optional | Only applicable with the off-heap and file-mapped types. Specifies the initial buffer size in bytes. The value of this element must be in the following format: [\d]+[[.][\d]]?[K\|k\|M\|m\|G\|g]?[B\|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied: K or k for kilo, M or m for mega, G or g for giga. If the value does not contain a factor, a factor of mega is assumed. Legal values are positive integers between 1 and Integer.MAX_VALUE. Default value is 1MB. |
| <maximum-size> | Optional | Only applicable with the off-heap and file-mapped types. Specifies the maximum buffer size in bytes. The value of this element must be in the following format: [\d]+[[.][\d]]?[K\|k\|M\|m\|G\|g]?[B\|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied: K or k for kilo, M or m for mega, G or g for giga. If the value does not contain a factor, a factor of mega is assumed. Legal values are positive integers between 1 and Integer.MAX_VALUE. Default value is 1024MB. |
| <scheme-name> | Optional | Only applicable with the scheme type. Specifies a scheme name for the ConfigurableCacheFactory. |
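For example, a distributed scheme can keep its single backup copy off-heap; the scheme name and buffer sizes below are illustrative:

```xml
<distributed-scheme>
  <scheme-name>example-offheap-backup</scheme-name>
  <backup-count>1</backup-count>
  <backup-storage>
    <!-- hold backup partitions in an off-heap NIO buffer -->
    <type>off-heap</type>
    <initial-size>1MB</initial-size>
    <maximum-size>100MB</maximum-size>
  </backup-storage>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
</distributed-scheme>
```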
Used in: external-scheme, paged-external-scheme, async-store-manager.
Note:
Berkeley Database JE Java class libraries are required to use a bdb-store-manager; see the Berkeley Database JE product page for additional information.
http://www.oracle.com/technology/documentation/berkeley-db/je/index.html
The BDB store manager is used to define external caches which will use Berkeley Database JE on disk embedded databases for storage. See the examples of Berkeley-based store configurations in "Persistent Cache on Disk" and "In-memory Cache with Disk Based Overflow".
This store manager is implemented by the com.tangosol.io.bdb.BerkeleyDBBinaryStoreManager class, and produces BinaryStore objects implemented by the com.tangosol.io.bdb.BerkeleyDBBinaryStore class.
Table B-8 describes the elements you can define within the bdb-store-manager element.
Table B-8 bdb-store-manager Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <class-name> | Optional | Specifies a custom implementation of the Berkeley Database BinaryStoreManager. Any custom implementation must extend the com.tangosol.io.bdb.BerkeleyDBBinaryStoreManager class. |
| <directory> | Optional | Specifies the path name to the root directory where the Berkeley Database JE store manager will store files. If not specified or specified with a non-existent directory, a temporary directory in the default location will be used. |
| <init-params> | Optional | Specifies additional Berkeley DB configuration settings; see the Berkeley DB configuration instructions for details. Also used to specify initialization parameters, for use in custom implementations that support the com.tangosol.run.xml.XmlConfigurable interface. |
| <store-name> | Optional | Specifies the name for a database table that the Berkeley Database JE store manager will use to store data in. Specifying this parameter will cause the store manager to use non-temporary (persistent) database instances. When specifying this property, it is recommended to use the {cache-name} macro. |
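A persistent Berkeley-backed external cache can therefore be sketched as follows, using the {cache-name} macro so each cache gets its own database table (the scheme name and directory are illustrative):

```xml
<external-scheme>
  <scheme-name>example-bdb-persistent</scheme-name>
  <bdb-store-manager>
    <directory>/data/coherence/bdb</directory>
    <!-- {cache-name} expands to the name of each cache using this scheme -->
    <store-name>{cache-name}</store-name>
  </bdb-store-manager>
</external-scheme>
```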
Used in: operation-bundling.
The bundle-config element specifies the bundling strategy configuration for one or more bundle-able operations.
Table B-9 describes the subelements you can define within the bundle-config element.
Table B-9 bundle-config Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <auto-adjust> | Optional | Specifies whether the auto adjustment of the preferred-size value (based on the run-time statistics) is allowed. Valid values are true and false. Default value is false. |
| <delay-millis> | Optional | Specifies the maximum amount of time in milliseconds that individual execution requests are allowed to be deferred for the purpose of "bundling" them together and passing into a corresponding bulk operation. If the preferred-size threshold is reached before the specified delay, the bundle is processed immediately. Valid values are positive numbers. Default value is 1. |
| <operation-name> | Required | Specifies the operation name for which calls performed concurrently on multiple threads will be "bundled" into a functionally analogous "bulk" operation that takes a collection of arguments instead of a single one. Valid values depend on the bundle configuration context. For the <cachestore-scheme> context, valid operations are load, store, and erase. For the <distributed-scheme> and <remote-cache-scheme> contexts, valid operations are get, put, and remove. In all cases there is a pseudo operation named all, referring to all valid operations. Default value is all. |
| <preferred-size> | Optional | Specifies the bundle size threshold. When a bundle size reaches this value, the corresponding "bulk" operation will be invoked immediately. This value is measured in context-specific units. Valid values are zero (disabled bundling) or positive values. Default value is zero. |
| <thread-threshold> | Optional | Specifies the minimum number of threads that must be concurrently executing individual (non-bundled) requests for the bundler to switch from a pass-through to a bundling mode. Valid values are positive numbers. Default value is 4. |
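For example, a cache store can bundle concurrent store operations into bulk writes; the class name com.example.MyCacheStore and the threshold values below are illustrative:

```xml
<cachestore-scheme>
  <class-scheme>
    <!-- hypothetical class; must implement com.tangosol.net.cache.CacheStore -->
    <class-name>com.example.MyCacheStore</class-name>
  </class-scheme>
  <operation-bundling>
    <bundle-config>
      <operation-name>store</operation-name>
      <!-- flush a bundle once 100 entries have accumulated -->
      <preferred-size>100</preferred-size>
      <delay-millis>5</delay-millis>
      <auto-adjust>true</auto-adjust>
    </bundle-config>
  </operation-bundling>
</cachestore-scheme>
```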
Root Element
The cache-config element is the root element of the cache configuration descriptor, coherence-cache-config.xml. For more information on this document, see "Cache Configuration Deployment Descriptor".
At a high level, a cache configuration consists of cache schemes and cache scheme mappings. Cache schemes describe a type of cache, for instance a database backed, distributed cache. Cache mappings define what scheme to use for a given cache name.
Table B-10 describes the subelements you can define within the cache-config element.
Table B-10 cache-config Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <caching-scheme-mapping> | Required | Specifies the caching-scheme that will be used for caches, based on the cache's name. |
| <caching-schemes> | Required | Defines the available caching-schemes for use in the cluster. |
| <defaults> | Optional | Defines factory-wide default settings. |
Used in: caching-scheme-mapping
Each cache-mapping element specifies the caching-schemes which are to be used for a given cache name or pattern.
Table B-11 describes the subelements you can define within the cache-mapping element.
Table B-11 cache-mapping Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <cache-name> | Required | Specifies a cache name or name pattern. The name is unique within a cache factory. The following cache name patterns are supported: an exact cache name (for example, MyCache); a prefix pattern ending in a wildcard (for example, My*), which matches any cache name starting with that prefix; and the wildcard * alone, which matches any cache name. The patterns get matched in the order of specificity (the more specific definition is selected whenever possible). For example, if both the MyCache and My* patterns are specified, a cache named MyCache is configured by the scheme mapped to the MyCache pattern. |
| <scheme-name> | Required | Contains the caching scheme name. The name is unique within a configuration file. Caching schemes are configured in the caching-schemes element. |
| <init-params> | Optional | Allows specifying replaceable cache scheme parameters. During cache scheme parsing, any occurrence of a replaceable parameter in the format {parameter-name} is replaced with the corresponding parameter value. See the example following this table. |

For example:

<cache-mapping>
  <cache-name>My*</cache-name>
  <scheme-name>my-scheme</scheme-name>
  <init-params>
    <init-param>
      <param-name>cache-loader</param-name>
      <param-value>com.acme.MyCacheLoader</param-value>
    </init-param>
    <init-param>
      <param-name>size-limit</param-name>
      <param-value>1000</param-value>
    </init-param>
  </init-params>
</cache-mapping>

For any cache name matching My*, any occurrence of the literal {cache-loader} in the my-scheme scheme definition is replaced with com.acme.MyCacheLoader, and any occurrence of {size-limit} is replaced with 1000.
Used in: proxy-config
The cache-service-proxy element contains the configuration information for a cache service proxy that is managed by a proxy service.
Table B-12 describes the elements you can define within the cache-service-proxy element.
Table B-12 cache-service-proxy Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <enabled> | Optional | Specifies whether the cache service proxy is enabled. If disabled, clients will not be able to access any proxied caches. Legal values are true and false. Default value is true. |
| <lock-enabled> | Optional | Specifies whether lock requests from remote clients are permitted on a proxied cache. Legal values are true and false. Default value is false. |
| <read-only> | Optional | Specifies whether requests from remote clients that update a cache are prohibited on a proxied cache. Legal values are true and false. Default value is false. |
| <class-name> | Optional | Specifies the fully qualified name of a class that implements the com.tangosol.net.CacheService interface and acts as an interceptor between a client and a proxied cache service. |
| <init-params> | Optional | Contains initialization parameters for the CacheService implementation. |
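Combining these settings, the following sketch exposes caches through a proxy in read-only mode with locking disallowed (service name, address, and port are illustrative):

```xml
<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>localhost</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <proxy-config>
    <cache-service-proxy>
      <!-- clients may read but not lock or update proxied caches -->
      <lock-enabled>false</lock-enabled>
      <read-only>true</read-only>
    </cache-service-proxy>
  </proxy-config>
  <autostart>true</autostart>
</proxy-scheme>
```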
Used in: local-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme.
Cache store schemes define a mechanism for connecting a cache to a back-end data store. The cache store scheme may use any class implementing either the com.tangosol.net.cache.CacheStore or com.tangosol.net.cache.CacheLoader interface, where the former offers read-write capabilities and the latter is read-only. Custom implementations of these interfaces may be produced to connect Coherence to various data stores. See "Cache of a Database" for an example of using a cachestore-scheme.
Table B-13 describes the elements you can define within the cachestore-scheme element.
Table B-13 cachestore-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
| <scheme-ref> | Optional | Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance". |
| <class-scheme> | Optional | Specifies the implementation of the cache store. The specified class must implement either the com.tangosol.net.cache.CacheStore or the com.tangosol.net.cache.CacheLoader interface. |
| <remote-cache-scheme> | Optional | Configures the cachestore-scheme to use Coherence*Extend as its cache store implementation. |
| <operation-bundling> | Optional | Specifies the configuration information for a bundling strategy. |
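In practice a cachestore-scheme is typically nested inside a read-write-backing-map-scheme; the class name com.example.DatabaseCacheStore below is a hypothetical CacheStore implementation, and passing {cache-name} as a constructor parameter is an illustrative convention:

```xml
<read-write-backing-map-scheme>
  <!-- in-memory cache of the persistent store -->
  <internal-cache-scheme>
    <local-scheme/>
  </internal-cache-scheme>
  <cachestore-scheme>
    <class-scheme>
      <!-- hypothetical class; must implement CacheStore or CacheLoader -->
      <class-name>com.example.DatabaseCacheStore</class-name>
      <init-params>
        <init-param>
          <param-type>java.lang.String</param-type>
          <param-value>{cache-name}</param-value>
        </init-param>
      </init-params>
    </class-scheme>
  </cachestore-scheme>
</read-write-backing-map-scheme>
```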
Used in: cache-config
Defines mappings between cache names, or name patterns, and caching-schemes. For instance, you may define that caches whose names start with accounts- will use a distributed caching scheme (distributed-scheme), while caches whose names start with rates- will use a replicated caching scheme (replicated-scheme).
Table B-14 describes the subelement you can define within the caching-scheme-mapping element.
Table B-14 caching-scheme-mapping Subelement
| Element | Required/Optional | Description |
|---|---|---|
| <cache-mapping> | Required | Contains a single binding between a cache name and the caching scheme this cache will use. |
Used in: cache-config
The caching-schemes element defines a series of cache scheme elements. Each cache scheme defines a type of cache, for instance a database backed partitioned cache, or a local cache with an LRU eviction policy. Scheme types are bound to actual caches using mappings (see caching-scheme-mapping).
Table B-15 describes the different types of schemes you can define within the caching-schemes element.
Table B-15 caching-schemes Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <local-scheme> | Optional | Defines a cache scheme which provides on-heap cache storage. |
| <external-scheme> | Optional | Defines a cache scheme which provides off-heap cache storage, for instance on disk. |
| <paged-external-scheme> | Optional | Defines a cache scheme which provides off-heap cache storage that is size-limited by using time-based paging. |
| <distributed-scheme> | Optional | Defines a cache scheme where storage of cache entries is partitioned across the cluster nodes. |
| <transactional-scheme> | Optional | Defines a cache scheme where storage of cache entries is partitioned across the cluster nodes with transactional guarantees. |
| <replicated-scheme> | Optional | Defines a cache scheme where each cache entry is stored on all cluster nodes. |
| <optimistic-scheme> | Optional | Defines a replicated cache scheme which uses optimistic rather than pessimistic locking. |
| <near-scheme> | Optional | Defines a two-tier cache scheme which consists of a fast local front-tier cache of a much larger back-tier cache. |
| <versioned-near-scheme> | Optional | Defines a near scheme which uses object versioning to ensure coherence between the front and back tiers. |
| <overflow-scheme> | Optional | Defines a two-tier cache scheme where entries evicted from a size-limited front-tier overflow and are stored in a much larger back-tier cache. |
| <invocation-scheme> | Optional | Defines an invocation service which can be used for performing custom operations in parallel across cluster nodes. |
| <read-write-backing-map-scheme> | Optional | Defines a backing map scheme which provides a cache of a persistent store. |
| <versioned-backing-map-scheme> | Optional | Defines a backing map scheme which uses object versioning to determine what updates need to be written to the persistent store. |
| <remote-cache-scheme> | Optional | Defines a cache scheme that enables caches to be accessed from outside a Coherence cluster by using Coherence*Extend. |
| <class-scheme> | Optional | Defines a cache scheme using a custom cache implementation. Any custom implementation must implement the java.util.Map interface. |
| <disk-scheme> | Optional | Note: As of Coherence 3.0, the disk-scheme configuration element has been deprecated and replaced by the external-scheme and paged-external-scheme configuration elements. |
Used in: caching-schemes, local-scheme, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme, cachestore-scheme, listener, eviction-policy, unit-calculator.
Class schemes provide a mechanism for instantiating an arbitrary Java object for use by other schemes. The scheme which contains this element will dictate what class or interface(s) must be extended. See "Cache of a Database" for an example of using a class-scheme.
The class-scheme may be configured to either instantiate objects directly by using their class-name, or indirectly by using a class-factory-name and method-name. The class-scheme must be configured with either a class-name or class-factory-name and method-name.
Table B-16 describes the elements you can define within the class-scheme element.
Table B-16 class-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
| <scheme-ref> | Optional | Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
| <class-name> | Optional | Contains a fully specified Java class name to instantiate. This class must extend an appropriate implementation class as dictated by the containing scheme and must declare the exact same set of public constructors as the superclass. |
| <class-factory-name> | Optional | Specifies a fully specified name of a Java class that will be used as a factory for object instantiation. |
| <method-name> | Optional | Specifies the name of a static factory method on the factory class which will perform object instantiation. |
| <init-params> | Optional | Specifies initialization parameters which are accessible by implementations that support the com.tangosol.run.xml.XmlConfigurable interface. |
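For example, the factory variant can be sketched as follows; com.example.MapFactory and its createMap method are hypothetical names standing in for any static factory that returns an object of the type required by the containing scheme:

```xml
<class-scheme>
  <!-- hypothetical factory class and static method -->
  <class-factory-name>com.example.MapFactory</class-factory-name>
  <method-name>createMap</method-name>
  <init-params>
    <init-param>
      <param-type>int</param-type>
      <param-value>1000</param-value>
    </init-param>
  </init-params>
</class-scheme>
```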
Used in: external-scheme, paged-external-scheme, async-store-manager.
Used to create and configure custom implementations of a store manager for use in external caches.
Table B-17 describes the elements you can define within the custom-store-manager element.
Table B-17 custom-store-manager Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <class-name> | Required | Specifies the implementation of the store manager. The specified class must implement the com.tangosol.io.BinaryStoreManager interface. |
| <init-params> | Optional | Specifies initialization parameters, for use in custom store manager implementations that support the com.tangosol.run.xml.XmlConfigurable interface. |
Used in: cache-config
The defaults element defines factory wide default settings. This feature enables global configuration of serializers and socket providers used by all services which have not explicitly defined these settings.
Table B-18 describes the elements you can define within the defaults element.
Table B-18 defaults Subelements
| Element | Required/Optional | Description |
|---|---|---|
| <serializer> | Optional | Specifies either the class configuration information for a com.tangosol.io.Serializer implementation used to serialize and deserialize user types, or a reference to a serializer configuration that is defined in the operational configuration file. |
| <socket-provider> | Optional | Specifies either the configuration information for a socket provider, or a reference to a pre-defined socket provider configuration within the operational configuration file, for example: <socket-provider>ssl</socket-provider>. This setting only specifies the default socket provider for Coherence*Extend services. The TCMP socket provider is specified within the <unicast-listener> element in the operational configuration. |
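For instance, the following defaults fragment (placed as a child of the cache-config root) applies POF serialization and SSL sockets to all services that do not configure these settings themselves; the pof and ssl names are assumed to reference pre-defined configurations in the operational configuration file:

```xml
<defaults>
  <!-- both names assumed to be pre-defined in the operational configuration -->
  <serializer>pof</serializer>
  <socket-provider>ssl</socket-provider>
</defaults>
```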
Note:
As of Coherence 3.0, the disk-scheme configuration element has been deprecated and replaced by the <external-scheme> and <paged-external-scheme> configuration elements.
Used in: caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme, versioned-backing-map-scheme
The distributed-scheme defines caches where the storage for entries is partitioned across cluster nodes. See "Distributed Cache" for a more detailed description of partitioned caches. See "Partitioned Cache" for examples of various distributed-scheme configurations.
Partitioned caches support cluster wide key-based locking so that data can be modified in a cluster without encountering the classic missing update problem. Note that any operation made without holding an explicit lock is still atomic but there is no guarantee that the value stored in the cache does not change between atomic operations.
The partitioned cache service supports the concept of cluster nodes which do not contribute to the overall storage of the cluster. Nodes which are not storage enabled (see <local-storage> subelement) are considered "cache clients".
The cache entries are evenly segmented into several logical partitions (see <partition-count> subelement), and each storage-enabled (see <local-storage> subelement) cluster node running the specified partitioned service (see <service-name> subelement) will be responsible for maintaining a fair share of these partitions.
By default, the specific set of entries assigned to each partition is transparent to the application. In some cases it may be advantageous to keep certain related entries within the same cluster node. A key-associator (see <key-associator> subelement) may be used to indicate related entries; the partitioned cache service will ensure that associated entries reside on the same partition, and thus on the same cluster node. Alternatively, key association may be specified from within the application code by using keys which implement the com.tangosol.net.cache.KeyAssociation interface.
Storage for the cache is specified by using the backing-map-scheme (see <backing-map-scheme> subelement). For instance a partitioned cache which uses a local-scheme for its backing map will result in cache entries being stored in-memory on the storage enabled cluster nodes.
For the purposes of failover, a configured number of backups (see <backup-count> subelement) of the cache may be maintained in backup-storage (see <backup-storage> subelement) across the cluster nodes. Each backup is also divided into partitions, and when possible a backup partition will not reside on the same physical machine as the primary partition. If a cluster node abruptly leaves the cluster, responsibility for its partitions will automatically be reassigned to the existing backups, and new backups of those partitions will be created (on remote nodes) to maintain the configured backup count.
When a node joins or leaves the cluster, a background redistribution of partitions occurs to ensure that all cluster nodes manage a fair-share of the total number of partitions. The amount of bandwidth consumed by the background transfer of partitions is governed by the transfer-threshold (see <transfer-threshold> subelement).
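Drawing the preceding subelements together, a representative distributed-scheme can be sketched as follows (the scheme name, service name, and sizing values are illustrative, not recommended settings):

```xml
<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <!-- prime partition count; one backup copy per partition -->
  <partition-count>257</partition-count>
  <backup-count>1</backup-count>
  <thread-count>4</thread-count>
  <backing-map-scheme>
    <!-- entries are stored in-memory on storage-enabled nodes -->
    <local-scheme>
      <high-units>10000</high-units>
    </local-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```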
Table B-19 describes the elements you can define within the distributed-scheme element.
Table B-19 distributed-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
< |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
< |
Optional |
Specifies the name of the service which will manage caches created from this scheme. Services are configured in the |
|
Optional |
Specifies either: the class configuration information for a |
|
|
< |
Optional |
Specifies an implementation of a |
|
Optional |
Specifies what type of cache will be used within the cache server to store the entries. Legal schemes are: Note that when using an off-heap backing map it is important that the corresponding < |
|
|
< |
Optional |
Specifies the number of partitions that a partitioned (distributed) cache will be "chopped up" into. Each member running the partitioned cache service that has the local-storage ( The number of partitions should be a prime number and sufficiently large such that a given partition is expected to be no larger than 50MB in size. The following are good defaults based on service storage sizes:
service storage partition-count
_______________ ______________
100M 257
1G 509
10G 2039
50G 4093
100G 8191
A list of first 1,000 primes can be found at Valid values are positive integers. The default value is |
|
Optional |
Specifies a class that will be responsible for providing associations between keys and allowing associated keys to reside on the same partition. This implementation must have a zero-parameter public constructor. |
|
|
Optional |
Specifies a class that implements the |
|
|
Optional |
Specifies a class that implements the |
|
|
< |
Optional |
Specifies the number of members of the partitioned cache service that hold the backup data for each unit of storage in the cache. Value of 0 means that in the case of abnormal termination, some portion of the data in the cache will be lost. Value of N means that if up to N cluster nodes terminate immediately, the cache data will be preserved. To maintain the partitioned cache of size M, the total memory usage in the cluster does not depend on the number of cluster nodes and will be in the order of M*(N+1). Recommended values are 0 or 1. Default value is the |
|
< |
Optional |
Specifies the number of members of the partitioned cache service that will hold the backup data for each unit of storage in the cache that does not require write-behind, that is, data that is not vulnerable to being lost even if the entire cluster were shut down. Specifically, if a unit of storage is marked as requiring write-behind, then it will be backed up on the number of members specified by the This value should be set to 0 or this setting should not be specified at all. The rationale is that since this data is being backed up to another data store, no in-memory backup is required, other than the data temporarily queued on the write-behind queue to be written. The value of 0 means that when write-behind has occurred, the backup copies of that data will be discarded. However, until write-behind occurs, the data will be backed up in accordance with the Recommended value is 0 or this element should be omitted altogether. |
|
Optional |
Specifies the type and configuration for the partitioned cache backup storage. |
|
|
< |
Optional |
Specifies the number of daemon threads used by the partitioned cache service. If zero, all relevant tasks are performed on the service thread. Legal values are positive integers or zero. Default value is the |
|
< |
Optional |
Specifies the lease ownership granularity. Available since release 2.3.Legal values are:
A value of thread means that locks are held by the thread that obtained them and can only be released by that thread. A value of member means that locks are held by a cluster node and any thread running on the cluster node that obtained the lock can release it. Default value is the |
|
< |
Optional |
Specifies the threshold for the primary buckets distribution in kilobytes. When a new node joins the partitioned cache service or when a member of the service leaves, the remaining nodes redistribute bucket ownership. During this process, the existing data is re-balanced along with the ownership information. This parameter indicates a preferred message size for data transfer communications. Setting this value lower will make the distribution process take longer, but will reduce network bandwidth utilization during this activity. Legal values are integers greater than zero. Default value is the |
|
< |
Optional |
Specifies whether a cluster node will contribute storage to the cluster, that is, maintain partitions. When disabled, the node is considered a cache client. Normally this value should be left unspecified within the configuration file and instead set on a per-process basis using the tangosol.coherence.distributed.localstorage system property. This allows cache clients and servers to use the same configuration descriptor. Legal values are |
|
< |
Optional |
The |
|
< |
Optional |
Specifies the amount of time in milliseconds that a task can execute before it is considered "hung". Note: a posted task that has not yet started is never considered hung. This attribute is applied only if the thread pool is used (the |
|
< |
Optional |
Specifies the timeout value in milliseconds for requests executing on the service worker threads. This attribute is applied only if the thread pool is used (the |
|
< |
Optional |
Specifies the maximum amount of time a client will wait for a response before abandoning the original request. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes the following:
Legal values are positive integers or zero (indicating no default timeout). Default value is the value specified in the |
|
|
Optional |
Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the element is not specified for a given service, the default guardian timeout (as specified by the The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. |
|
|
Optional |
Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. Legal values are:
Default value is |
|
|
Optional |
Specifies the configuration information for a class that implements the The |
|
Optional |
Specifies the configuration information for a bundling strategy. |
|
|
Optional |
Specifies quorum policy settings for the partitioned cache service. |
Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme
External schemes define caches which are not JVM heap based, allowing for greater storage capacity. See "Local Caches (accessible from a single JVM)" for examples of various external cache configurations.
This scheme is implemented by:
com.tangosol.net.cache.SerializationMap—for unlimited size caches
com.tangosol.net.cache.SerializationCache—for size limited caches
The implementation type is chosen based on the following rule:
if the <high-units> subelement is specified and not zero then SerializationCache is used;
otherwise SerializationMap is used.
External schemes use a pluggable store manager to store and retrieve binary key value pairs. Supported store managers include:
a wrapper providing asynchronous write capabilities for other store manager implementations
allows definition of custom implementations of store managers
uses Berkeley Database JE to implement an on disk cache
uses a Coherence LH on disk database cache
uses NIO to implement memory-mapped file based cache
uses NIO to implement an off JVM heap, in-memory cache
The cache may be configured as size-limited, which means that when it reaches its maximum allowable size (that is, the <high-units> subelement) it prunes itself.
Note:
Eviction against disk-based caches can be expensive; consider using a paged-external-scheme for such cases.
External schemes support automatic expiration of entries based on the age of the value, as configured by the <expiry-delay> subelement.
Persistence (long-term storage)
External caches are generally used for temporary storage of large data sets, for example as the back-tier of an overflow-scheme. Certain implementations do however support persistence for non-clustered caches, see the <store-name> subelement of bdb-store-manager and the <manager-filename> subelement of lh-file-manager for details. Clustered persistence should be configured by using a read-write-backing-map-scheme on a distributed-scheme.
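For illustration, the following fragment sketches a size-unlimited external scheme backed by a Berkeley Database JE store manager (the scheme name and directory are illustrative; because no <high-units> value is set, SerializationMap is used):
<external-scheme>
<scheme-name>example-bdb</scheme-name>
<bdb-store-manager>
<directory>/tmp/coherence-bdb</directory>
</bdb-store-manager>
</external-scheme>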
Table B-20 describes the elements you can define within the external-scheme element.
Table B-20 external-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
< |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information |
|
< |
Optional |
Specifies a custom implementation of the external cache. Any custom implementation must extend one of the following classes:
and declare the exact same set of public constructors as the superclass. |
|
< |
Optional |
Specifies initialization parameters, for use in custom external cache implementations which implement the |
|
< |
Optional |
Specifies an implementation of a |
|
< |
Optional |
Used to limit the size of the cache. Contains the maximum number of units that can be placed in the cache before pruning occurs. An entry is the unit of measurement. When this limit is exceeded, the cache will begin the pruning process, evicting the least recently used entries until the number of units is brought below this limit. The scheme's |
|
< |
Optional |
Specifies the type of unit calculator to use. A unit calculator is used to determine the cost (in "units") of a given object. Legal values are:
This element is used only if the |
|
< |
Optional |
The unit-factor element specifies the factor by which the units, low-units and high-units properties are adjusted. Using a BINARY unit calculator, for example, the factor of 1048576 could be used to count megabytes instead of bytes. Note: This element was introduced only to avoid changing the type of the units, low-units and high-units properties from 32-bit values to 64-bit values, and is used only if the high-units element is set to a positive number. It is expected that this element will be dropped in a future release. Valid values are positive integers. Default value is |
|
< |
Optional |
Specifies the amount of time since the last update that entries are kept by the cache before being expired. Entries that have expired are not accessible and are evicted the next time a client accesses the cache. The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of seconds is assumed. A value of zero implies no expiry. The default value is Note: The expiry delay parameter ( |
|
Optional |
Configures the external cache to use an asynchronous storage manager wrapper for any other storage manager. See "Pluggable Storage Manager" |
|
|
Optional |
Configures the external cache to use a custom storage manager implementation. |
|
|
Optional |
Configures the external cache to use Berkeley Database JE on disk databases for cache storage. |
|
|
Optional |
Configures the external cache to use a Coherence LH on disk database for cache storage. |
|
|
Optional |
Configures the external cache to use a memory-mapped file for cache storage. |
|
|
Optional |
Configures the external cache to use an off JVM heap, memory region for cache storage. |
Used in: ssl.
The <identity-manager> element contains the configuration information for initializing a javax.net.ssl.KeyManager instance.
The identity manager is responsible for managing the key material which is used to authenticate the local connection to its peer. If no key material is available, the connection is unable to present authentication credentials.
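For example, an identity manager is typically configured with a key store containing the node's certificate and private key (the key store file name and passwords below are placeholders):
<identity-manager>
<key-store>
<url>file:server.jks</url>
<password>keystore-password</password>
</key-store>
<password>private-key-password</password>
</identity-manager>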
Table B-21 describes the elements you can define within the identity-manager element.
Table B-21 identity-manager Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
|
Optional |
Specifies the algorithm used by the identity manager. The default value is |
|
|
Optional |
Specifies the configuration for a security provider instance. |
|
Optional |
Specifies the configuration for a key store implementation. |
|
|
|
Required |
Specifies the private key password. |
Used in: remote-cache-scheme, remote-invocation-scheme.
The initiator-config element specifies the configuration information for a TCP/IP connection initiator. A connection initiator allows a Coherence*Extend client to connect to a cluster (by using a connection acceptor) and use the clustered services offered by the cluster without having to first join the cluster.
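For example, the following sketch configures a TCP/IP connection initiator that connects to a proxy service listening on 192.168.0.2:9099 (the address, port, and timeout are illustrative):
<initiator-config>
<tcp-initiator>
<remote-addresses>
<socket-address>
<address>192.168.0.2</address>
<port>9099</port>
</socket-address>
</remote-addresses>
</tcp-initiator>
<outgoing-message-handler>
<request-timeout>5s</request-timeout>
</outgoing-message-handler>
</initiator-config>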
Table B-22 describes the elements you can define within the initiator-config element.
Table B-22 initiator-config Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
Optional |
Specifies the configuration information used by the connection initiator to detect dropped client-to-cluster connections. |
|
|
Optional |
Specifies either: the class configuration information for a |
|
|
Optional |
Specifies the configuration information for a connection initiator that connects to the cluster over TCP/IP. |
|
|
< |
Optional |
Contains the list of filter names to be used by this connection initiator. In the following example, specifying
<use-filters>
<filter-name>gzip</filter-name>
</use-filters>
|
Used in: init-params.
Defines an individual initialization parameter.
Table B-23 describes the elements you can define within the init-param element.
Table B-23 init-param Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Contains the name of the initialization parameter. For example:
<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
<init-param>
<param-name>sTableName</param-name>
<param-value>EmployeeTable</param-value>
</init-param>
<init-param>
<param-name>iMaxSize</param-name>
<param-value>2000</param-value>
</init-param>
</init-params>
|
|
< |
Optional |
Contains the Java type of the initialization parameter. The following standard types are supported:
For example:
<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>EmployeeTable</param-value>
</init-param>
<init-param>
<param-type>int</param-type>
<param-value>2000</param-value>
</init-param>
</init-params>
|
|
< |
Optional |
Contains the value of the initialization parameter. The value is in the format specific to the Java type of the parameter. |
Used in: class-scheme, cache-mapping.
Defines a series of initialization parameters as name-value pairs. See "Partitioned Cache of a Database" for an example of using init-params.
Table B-24 describes the elements you can define within the init-params element.
Table B-24 init-params Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
Optional |
Defines an individual initialization parameter. |
Used in: serializer, socket-provider, service-failure-policy
The <instance> element contains the configuration of an implementation class or class factory that is used to plug in custom functionality.
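For example, a custom serializer implementation could be plugged in either directly by class name or through a static factory method (the class and method names are hypothetical):
<instance>
<class-name>com.mycompany.io.CustomSerializer</class-name>
</instance>
Or, using a factory:
<instance>
<class-factory-name>com.mycompany.io.SerializerFactory</class-factory-name>
<method-name>createSerializer</method-name>
</instance>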
Table B-25 describes the elements you can define within the instance element.
Table B-25 instance Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
|
Optional |
Specifies the fully qualified name of an implementation class. This element cannot be used together with the |
|
|
Optional |
Specifies the fully qualified name of a factory class for creating implementation class instances. This element cannot be used together with the |
|
< |
Optional |
Specifies the name of a static factory method on the factory class which will perform object instantiation. |
|
Optional |
Contains class initialization parameters for the implementation class. |
Used in: caching-schemes.
Defines an Invocation Service. The invocation service may be used to perform custom operations in parallel on any number of cluster nodes. See the com.tangosol.net.InvocationService API for additional details.
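For example, the following sketch defines an invocation service that starts automatically and uses a pool of daemon threads (the scheme and service names are illustrative):
<invocation-scheme>
<scheme-name>example-invocation</scheme-name>
<service-name>InvocationService</service-name>
<thread-count>5</thread-count>
<autostart>true</autostart>
</invocation-scheme>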
The following table describes the elements you can define within the invocation-scheme element.
Table B-26 invocation-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
< |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
< |
Optional |
Specifies the name of the service which will manage invocations from this scheme. |
|
Optional |
Specifies either: the class configuration information for a |
|
|
< |
Optional |
Specifies the number of daemon threads used by the invocation service. If zero, all relevant tasks are performed on the service thread. Legal values are positive integers or zero. Default value is the |
|
< |
Optional |
The |
|
|
Optional |
Specifies the amount of time in milliseconds that a task can execute before it is considered "hung". Note: a posted task that has not yet started is never considered hung. This attribute is applied only if the thread pool is used (the |
|
<task-timeout> |
Optional |
Specifies the default timeout value in milliseconds for tasks that can be timed-out (for example, implement the |
|
|
Optional |
Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the element is not specified for a given service, the default guardian timeout (as specified by the The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. |
|
|
Optional |
Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. Legal values are:
Default value is |
|
|
Optional |
Specifies the configuration information for a class that implements the The |
|
<request-timeout> |
Optional |
Specifies the default timeout value in milliseconds for requests that can time-out (for example, implement the (1) the time it takes to deliver the request to an executing node (server); (2) the interval between the time the task is received and placed into a service queue until the execution starts; (3) the task execution time; (4) the time it takes to deliver a result back to the client. Legal values are positive integers or zero (indicating no default timeout). Default value is the |
Used in: proxy-config
The invocation-service-proxy element contains the configuration information for an invocation service proxy managed by a proxy service.
Table B-27 describes the elements you can define within the invocation-service-proxy element.
Table B-27 invocation-service-proxy Subelement
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies whether the invocation service proxy is enabled. If disabled, clients will not be able to execute |
|
< |
Optional |
Specifies the fully qualified name of a class that implements the |
|
Optional |
Contains initialization parameters for the |
Note:
Coherence*Extend-JMS support has been deprecated.
Used in: acceptor-config.
The jms-acceptor element specifies the configuration information for a connection acceptor that accepts connections from Coherence*Extend clients over JMS. For additional details and example configurations see Oracle Coherence Client Guide.
Table B-28 describes the elements you can define within the jms-acceptor element.
Table B-28 jms-acceptor Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Required |
Specifies the JNDI name of the JMS |
|
< |
Required |
Specifies the JNDI name of the JMS Queue used by the connection acceptor. |
Note:
Coherence*Extend-JMS support has been deprecated.
Used in: initiator-config.
The jms-initiator element specifies the configuration information for a connection initiator that enables Coherence*Extend clients to connect to a remote cluster by using JMS. For additional details and example configurations see Oracle Coherence Client Guide.
The following table describes the elements you can define within the jms-initiator element.
Table B-29 jms-initiator Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Required |
Specifies the JNDI name of the JMS |
|
< |
Required |
Specifies the JNDI name of the JMS Queue used by the connection initiator. |
|
< |
Optional |
Specifies the maximum amount of time to wait while establishing a connection with a connection acceptor. The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. Default value is an infinite timeout. |
Used in: distributed-scheme
Specifies an implementation of a com.tangosol.net.partition.KeyAssociator which will be used to determine associations between keys, allowing related keys to reside on the same partition.
Alternatively the cache's keys may manage the association by implementing the com.tangosol.net.cache.KeyAssociation interface.
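For example, a custom key associator is configured within a distributed scheme as follows (the class name is hypothetical):
<distributed-scheme>
<scheme-name>example-distributed</scheme-name>
<key-associator>
<class-name>com.mycompany.util.OrderKeyAssociator</class-name>
</key-associator>
<autostart>true</autostart>
</distributed-scheme>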
Table B-30 describes the elements you can define within the key-associator element.
Table B-30 key-associator Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Required |
The name of a class that implements the |
Used in: distributed-scheme
Specifies an implementation of a com.tangosol.net.partition.KeyPartitioningStrategy which will be used to determine the partition in which a key will reside.
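For example (the class name is hypothetical):
<key-partitioning>
<class-name>com.mycompany.util.CustomPartitioningStrategy</class-name>
</key-partitioning>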
Table B-31 describes the elements you can define within the key-partitioning element.
Table B-31 key-partitioning Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Required |
The name of a class that implements the |
Used in: identity-manager, trust-manager.
The key-store element specifies the configuration for a key store implementation to use when implementing SSL. The key store implementation is an instance of the java.security.KeyStore class.
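For example, a JKS key store on the file system can be referenced as follows (the URL and password are placeholders):
<key-store>
<url>file:server.jks</url>
<password>keystore-password</password>
<type>JKS</type>
</key-store>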
Table B-32 describes the elements you can define within the key-store element.
Table B-32 key-store Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
|
Required |
Specifies the Uniform Resource Locator (URL) to a key store. |
|
|
Optional |
Specifies the password for the key store. |
|
|
Optional |
Specifies the type of a |
Used in: external-scheme, paged-external-scheme, async-store-manager.
Configures a store manager which will use a Coherence LH on disk embedded database for storage. See "Persistent Cache on Disk" and "In-memory Cache with Disk Based Overflow" for examples of LH-based store configurations.
Implemented by the com.tangosol.io.lh.LHBinaryStoreManager class. The BinaryStore objects created by this class are instances of the com.tangosol.io.lh.LHBinaryStore class.
Table B-33 describes the elements you can define within the lh-file-manager element.
Table B-33 lh-file-manager Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies a custom implementation of the LH |
|
Optional |
Specifies initialization parameters, for use in custom LH file manager implementations which implement the |
|
|
< |
Optional |
Specifies the path name for the root directory that the LH file manager will use to store files in. If not specified or specifies a non-existent directory, a temporary file in the default location will be used. |
|
< |
Optional |
Specifies the name for a non-temporary (persistent) file that the LH file manager will use to store data in. Specifying this parameter will cause the lh-file-manager to use non-temporary database instances. Use this parameter only for local caches that are backed by a cache loader from a non-temporary file: this allows the local cache to be pre-populated from the disk file on startup. When specified it is recommended that it use the |
Used in: local-scheme, external-scheme, paged-external-scheme, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme.
The listener element specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on a cache.
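For example, a listener is typically declared by wrapping the implementation class in a class-scheme (the class name is hypothetical):
<listener>
<class-scheme>
<class-name>com.mycompany.cache.CacheEventLogger</class-name>
</class-scheme>
</listener>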
The following table describes the elements you can define within the listener element.
Table B-34 listener Subelement
| Element | Required/Optional | Description |
|---|---|---|
|
Required |
Specifies the full class name of the listener implementation to use. The specified class must implement the |
Used in: tcp-acceptor, tcp-initiator
The local-address element specifies the local address (IP or DNS name) and port to which a TCP/IP socket is bound.
The local-address element is used within a TCP/IP acceptor definition to specify the address and port on which the TCP/IP server socket (opened by the connection acceptor) is bound. The socket is used by the proxy service to accept connections from Coherence*Extend clients. The following example binds the server socket to 192.168.0.2:9099.
<local-address>
<address>192.168.0.2</address>
<port>9099</port>
</local-address>
The local-address element is used within a TCP/IP initiator definition to specify the local address and port on which the TCP/IP client socket (opened by the connection initiator) is bound. The socket is used by remote services to connect to a proxy service on the cluster. The following example binds the client socket to 192.168.0.1 on port 9099:
<local-address>
<address>192.168.0.1</address>
<port>9099</port>
</local-address>
Table B-35 describes the subelements you can define within the local-address element.
Table B-35 local-address Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the address (IP or DNS name) on which a TCP/IP socket listens and publishes. |
|
< |
Optional |
Specifies the port on which a TCP/IP socket listens and publishes. Legal values are from 1 to 65535. When used in the context of a TCP/IP server (that is, for a TCP acceptor), the port child element is required. |
Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme, backing-map-scheme
Local cache schemes define in-memory "local" caches. Local caches are generally nested within other cache schemes, for instance as the front-tier of a near-scheme. See "Local Cache of a Partitioned Cache (Near cache)" for examples of various local cache configurations.
Local caches are implemented by the com.tangosol.net.cache.LocalCache class.
A local cache may be backed by an external cache store (see "cachestore-scheme"). Cache misses will read-through to the back end store to retrieve the data. If a writable store is provided, cache writes will propagate to the cache store as well. For optimizing read/write access against a cache store, see the "read-write-backing-map-scheme".
The cache may be configured as size-limited, which means that when it reaches its maximum allowable size (see the <high-units> subelement) it prunes itself back to a specified smaller size (see the <low-units> subelement), choosing which entries to evict according to its eviction-policy (see the <eviction-policy> subelement). The entries and size limitations are measured in terms of units as calculated by the scheme's unit-calculator (see the <unit-calculator> subelement).
The local cache supports automatic expiration of entries based on the age of the value (see the <expiry-delay> subelement).
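For example, the following sketch defines a size-limited local cache that prunes from 1000 units down to 750 using an LRU policy and expires entries after one hour (the scheme name and all values are illustrative):
<local-scheme>
<scheme-name>example-local</scheme-name>
<eviction-policy>LRU</eviction-policy>
<high-units>1000</high-units>
<low-units>750</low-units>
<expiry-delay>1h</expiry-delay>
</local-scheme>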
Table B-36 describes the elements you can define within the local-scheme element.
Table B-36 local-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
< |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
< |
Optional |
Specifies the name of the service which will manage caches created from this scheme. Services are configured from within the |
|
< |
Optional |
Specifies a custom implementation of the local cache. Any custom implementation must extend the |
|
Optional |
Specifies initialization parameters, for use in custom local cache implementations which implement the |
|
|
< |
Optional |
Specifies the type of eviction policy to use. Legal values are:
Default value is |
|
< |
Optional |
Used to limit the size of the cache. Contains the maximum number of units that can be placed in the cache before pruning occurs. An entry is the unit of measurement, unless it is overridden by an alternate unit-calculator (see |
|
< |
Optional |
Contains the lowest number of units that a cache will be pruned down to when pruning takes place. A pruning will not necessarily result in a cache containing this number of units; however, a pruning will never result in a cache containing fewer than this number of units. An entry is the unit of measurement, unless it is overridden by an alternate unit-calculator (see |
|
< |
Optional |
Specifies the type of unit calculator to use. A unit calculator is used to determine the cost (in "units") of a given object. Legal values are:
This element is used only if the |
|
< |
Optional |
The unit-factor element specifies the factor by which the units, low-units and high-units properties are adjusted. Using a BINARY unit calculator, for example, the factor of 1048576 could be used to count megabytes instead of bytes. Note: This element was introduced only to avoid changing the type of the units, low-units and high-units properties from 32-bit values to 64-bit values, and is used only if the high-units element is set to a positive number. It is expected that this element will be dropped in a future release. Valid values are positive integers. Default value is |
|
< |
Optional |
Specifies the amount of time since the last update that entries are kept by the cache before being expired. Entries that have expired are not accessible and are evicted the next time a client accesses the cache. Any attempt to read an expired entry will result in a reloading of the entry from the configured cache store (see < The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of seconds is assumed. A value of zero implies no expiry. The default value is Note: The expiry delay parameter ( |
|
Optional |
Specifies the store which is being cached. If unspecified the cached data will only reside in memory, and only reflect operations performed on the cache itself. |
|
|
|
Optional |
Specifies whether or not a cache will pre-load data from its |
|
< |
Optional |
Specifies an implementation of a |
Used in: caching-schemes.
The near-scheme defines a two-tier cache consisting of a front-tier (see <front-scheme> subelement) which caches a subset of a back-tier cache (see <back-scheme> subelement). The front-tier is generally a fast, size limited cache, while the back-tier is slower, but much higher capacity. A typical deployment might use a local-scheme for the front-tier, and a distributed-scheme for the back-tier. The result is that a portion of a large partitioned cache will be cached locally in-memory allowing for very fast read access. See "Near Cache" for a more detailed description of near caches, and "Local Cache of a Partitioned Cache (Near cache)" for an example of near cache configurations.
The near scheme is implemented by the com.tangosol.net.cache.NearCache class.
Specifying an invalidation-strategy (see <invalidation-strategy> subelement) defines a strategy that is used to keep the front tier of the near cache in sync with the back tier. Depending on that strategy, a near cache is configured to listen to certain events occurring on the back tier and automatically update (or invalidate) the front portion of the near cache.
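Putting the pieces together, a complete near scheme might look as follows (the scheme names and sizes are illustrative; default-distributed is assumed to be defined elsewhere in the descriptor):
<near-scheme>
<scheme-name>example-near</scheme-name>
<front-scheme>
<local-scheme>
<eviction-policy>HYBRID</eviction-policy>
<high-units>1000</high-units>
</local-scheme>
</front-scheme>
<back-scheme>
<distributed-scheme>
<scheme-ref>default-distributed</scheme-ref>
</distributed-scheme>
</back-scheme>
<invalidation-strategy>present</invalidation-strategy>
<autostart>true</autostart>
</near-scheme>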
Table B-37 describes the elements you can define within the near-scheme element.
Table B-37 near-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
< |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information |
|
< |
Optional |
Specifies a custom implementation of the near cache. Any custom implementation must extend the |
|
Optional |
Specifies initialization parameters for custom near cache implementations which implement the |
|
|
< |
Optional |
Specifies an implementation of a |
|
< |
Required |
Specifies the The eviction policy of the front-scheme defines which entries will be cached locally. For example:
<front-scheme>
<local-scheme>
<eviction-policy>HYBRID</eviction-policy>
<high-units>1000</high-units>
</local-scheme>
</front-scheme>
|
|
< |
Required |
Specifies the For example:
<back-scheme>
<distributed-scheme>
<scheme-ref>default-distributed</scheme-ref>
</distributed-scheme>
</back-scheme>
|
|
< |
Optional |
Specifies the strategy used to keep the front-tier in sync with the back-tier. See
Default value is |
|
< |
Optional |
The autostart element is intended to be used by cache servers (that is, |
Used in: external-scheme, paged-external-scheme, async-store-manager.
Configures an external store which uses a memory-mapped file for storage.
This store manager is implemented by the com.tangosol.io.nio.MappedStoreManager class. The BinaryStore objects created by this class are instances of the com.tangosol.io.nio.BinaryMapStore.
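For example, an external scheme backed by a memory-mapped file can be sketched as follows (the scheme name, sizes, and directory are illustrative):
<external-scheme>
<scheme-name>example-nio-file</scheme-name>
<nio-file-manager>
<initial-size>8MB</initial-size>
<maximum-size>512MB</maximum-size>
<directory>/tmp/coherence-nio</directory>
</nio-file-manager>
</external-scheme>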
Table B-38 describes the elements you can define within the nio-file-manager element.
Table B-38 nio-file-manager Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies a custom implementation of the store manager. Any custom implementation must extend the com.tangosol.io.nio.MappedStoreManager class and declare the exact same set of public constructors. |
|
Optional |
Specifies initialization parameters, for use in custom nio-file-manager implementations which implement the |
|
|
< |
Optional |
Specifies the initial buffer size in megabytes. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m|G|g]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:
If the value does not contain a factor, a factor of mega is assumed. Legal values are positive integers between 1 and |
|
< |
Optional |
Specifies the maximum buffer size in bytes. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m|G|g]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:
If the value does not contain a factor, a factor of mega is assumed. Legal values are positive integers between 1 and |
|
< |
Optional |
Specifies the path name for the root directory that the manager will use to store files in. If not specified or specifies a non-existent directory, a temporary file in the default location will be used. |
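Drawing on the subelements above, a minimal nio-file-manager configuration might look as follows. This is a sketch; the scheme name and buffer sizes are illustrative, not defaults:

```xml
<external-scheme>
  <scheme-name>example-nio-file</scheme-name>
  <nio-file-manager>
    <!-- start with a 1 MB buffer, growing to at most 10 MB -->
    <initial-size>1M</initial-size>
    <maximum-size>10M</maximum-size>
    <!-- omitting the root directory uses a temporary file in the default location -->
  </nio-file-manager>
</external-scheme>
```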
Used in: external-scheme, paged-external-scheme, async-store-manager.
Configures a store-manager which uses an off-heap memory region (outside the JVM heap) for storage, which means that it does not affect the Java heap size or the related JVM garbage-collection performance that can be responsible for application pauses. See "NIO In-memory Cache" for an example of an NIO cache configuration.
Note:
JVMs require the use of a command line parameter if the total size of the NIO buffers will be greater than 64MB. For example: -XX:MaxDirectMemorySize=512M

This store manager is implemented by the com.tangosol.io.nio.DirectStoreManager class. The BinaryStore objects created by this class are instances of the com.tangosol.io.nio.BinaryMapStore.
Table B-39 describes the elements you can define within the nio-memory-manager element.
Table B-39 nio-memory-manager Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies a custom implementation of the local cache. Any custom implementation must extend the |
|
Optional |
Specifies initialization parameters, for use in custom |
|
|
< |
Optional |
Specifies the initial buffer size in bytes. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m|G|g]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:
If the value does not contain a factor, a factor of mega is assumed. Legal values are positive integers between 1 and |
|
< |
Optional |
Specifies the maximum buffer size in bytes. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m|G|g]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:
If the value does not contain a factor, a factor of mega is assumed. Legal values are positive integers between 1 and |
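As a sketch, an external-scheme backed by an nio-memory-manager could be configured as follows. The sizes are illustrative; remember the -XX:MaxDirectMemorySize note above when the total exceeds 64MB:

```xml
<external-scheme>
  <scheme-name>example-nio-memory</scheme-name>
  <nio-memory-manager>
    <!-- off-heap NIO buffer: starts at 1 MB, capped at 100 MB -->
    <initial-size>1M</initial-size>
    <maximum-size>100M</maximum-size>
  </nio-memory-manager>
</external-scheme>
```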
Used in: cachestore-scheme, distributed-scheme, remote-cache-scheme.
The operation-bundling element specifies the configuration information for a particular bundling strategy.
Bundling is the process of coalescing multiple individual operations into "bundles". It can be beneficial when:
there is a continuous stream of operations on multiple threads in parallel;
individual operations have relatively high latency (network or database-related); and
there are functionally analogous "bulk" operations that take a collection of arguments instead of a single one without causing the latency to grow linearly (as a function of the collection size).
Note:
As with any bundling algorithm, there is a natural trade-off between resource utilization and average request latency. Depending on the application's usage pattern, enabling this feature may either help or hurt overall application performance.

See com.tangosol.net.cache.AbstractBundler for additional implementation details.
Table B-40 describes the subelement for the operation-bundling element.
Table B-40 operation-bundling Subelement
| Element | Required/Optional | Description |
|---|---|---|
|
Required |
Describes one or more bundle-able operations. |
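For example, a cachestore-scheme might enable bundling of store operations with a configuration along these lines. The subelement names shown inside bundle-config (operation-name, preferred-size) follow the com.tangosol.net.cache.AbstractBundler conventions, and the values are illustrative:

```xml
<operation-bundling>
  <bundle-config>
    <!-- bundle CacheStore "store" operations -->
    <operation-name>store</operation-name>
    <!-- preferred number of entries per bundle (illustrative value) -->
    <preferred-size>100</preferred-size>
  </bundle-config>
</operation-bundling>
```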
Used in: caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme
The optimistic scheme defines a cache which fully replicates all of its data to all cluster nodes that run the service (see <service-name> subelement). See "Optimistic Cache" for a more detailed description of optimistic caches.
Unlike the replicated-scheme and distributed-scheme caches, optimistic caches do not support concurrency control (locking). Individual operations against entries are atomic but there is no guarantee that the value stored in the cache does not change between atomic operations. The lack of concurrency control allows optimistic caches to support very fast write operations.
Storage for the cache is specified by using the backing-map-scheme (see <backing-map-scheme> subelement). For instance, an optimistic cache which uses a local-scheme for its backing map results in cache entries being stored in-memory.
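A minimal optimistic-scheme definition, assuming an in-memory local-scheme backing map, might look like this (the scheme and service names are illustrative):

```xml
<optimistic-scheme>
  <scheme-name>example-optimistic</scheme-name>
  <service-name>OptimisticCache</service-name>
  <backing-map-scheme>
    <!-- in-memory storage; must not use a read-through pattern -->
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</optimistic-scheme>
```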
Table B-41 describes the elements you can define within the optimistic-scheme element.
Table B-41 optimistic-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
<scheme-ref> |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
<service-name> |
Optional |
Specifies the name of the service which will manage caches created from this scheme. Services are configured from within the |
|
Optional |
Specifies either: the class configuration information for a |
|
|
< |
Optional |
Specifies an implementation of a |
|
< |
Optional |
Specifies what type of cache will be used within the cache server to store the entries. Legal values are: To ensure cache coherence, the backing-map of an optimistic cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the |
|
|
Optional |
Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the element is not specified for a given service, the default guardian timeout (as specified by the The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. |
|
|
Optional |
Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. Legal values are:
Default value is |
|
|
Optional |
Specifies the configuration information for a class that implements the The |
|
< |
Optional |
The |
Used in: acceptor-config, initiator-config.
The outgoing-message-handler specifies the configuration information used to detect dropped client-to-cluster connections. For connection initiators and acceptors that use connectionless protocols, this information is necessary to detect and release resources allocated to dropped connections. Connection-oriented initiators and acceptors can also use this information as an additional mechanism to detect dropped connections.
Table B-42 describes the elements you can define within the outgoing-message-handler element.
Table B-42 outgoing-message-handler Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the interval between ping requests. A ping request is used to ensure the integrity of a connection. The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. A value of zero disables ping requests. The default value is zero. |
|
< |
Optional |
Specifies the maximum amount of time to wait for a response to a ping request before declaring the underlying connection unusable. The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. The default value is the value of the |
|
< |
Optional |
Specifies the maximum amount of time to wait for a response message before declaring the underlying connection unusable. The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. The default value is an infinite timeout. |
Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme.
The overflow-scheme defines a two-tier cache consisting of a fast, size-limited front-tier and a slower but much higher-capacity back-tier cache. When the size-limited front tier fills up, evicted entries are transparently moved to the back tier. In the event of a cache miss, entries may move from the back tier to the front tier. A typical deployment might use a local-scheme for the front tier and an external-scheme for the back tier, allowing for fast local caches with capacities larger than the JVM heap would allow. In such a deployment, the local-scheme element's high-units and eviction-policy settings control the transfer (eviction) of entries from the front cache to the back cache.
Note:
Relying on overflow for normal cache storage is not recommended. It should only be used to help avoid eviction-related data loss in the case where the storage requirements temporarily exceed the configured capacity. In general, the overflow's on-disk storage should remain empty.

The overflow-scheme is implemented by either com.tangosol.net.cache.OverflowMap or com.tangosol.net.cache.SimpleOverflowMap; see expiry-enabled for details.
Overflow supports automatic expiration of entries based on the age of the value, as configured by the <expiry-delay> subelement.
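Putting the pieces together, a complete overflow-scheme with a size-limited in-memory front tier and an on-disk back tier might be sketched as follows (the scheme name and expiry settings are illustrative):

```xml
<overflow-scheme>
  <scheme-name>example-overflow</scheme-name>
  <front-scheme>
    <local-scheme>
      <!-- eviction from the front tier moves entries to the back tier -->
      <eviction-policy>HYBRID</eviction-policy>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <external-scheme>
      <!-- Coherence LH on-disk database for overflow storage -->
      <lh-file-manager/>
    </external-scheme>
  </back-scheme>
  <!-- optional: expire entries one hour after the last update -->
  <expiry-enabled>true</expiry-enabled>
  <expiry-delay>1h</expiry-delay>
</overflow-scheme>
```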
Table B-43 describes the elements you can define within the overflow-scheme element.
Table B-43 overflow-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
|
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
< |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
< |
Optional |
Specifies a custom implementation of the overflow cache. Any custom implementation must extend either the |
|
Optional |
Specifies initialization parameters, for use in custom overflow cache implementations which implement the |
|
|
< |
Required |
Specifies the cache-scheme to use in creating the front-tier cache. The eviction policy of the front-scheme defines which items are kept in the front tier versus the back tier. For example:
<front-scheme>
<local-scheme>
<eviction-policy>HYBRID</eviction-policy>
<high-units>1000</high-units>
</local-scheme>
</front-scheme>
|
|
< |
Required |
Specifies the cache-scheme to use in creating the back-tier cache. Legal values are: For example:
<back-scheme>
<external-scheme>
<lh-file-manager/>
</external-scheme>
</back-scheme>
|
|
< |
Optional |
Specifies a cache-scheme for maintaining information on cache misses. For caches which are not expiry-enabled (see |
|
< |
Optional |
Turns on support for automatically-expiring data, as provided by the |
|
< |
Optional |
Specifies the amount of time since the last update that entries are kept by the cache before being expired. Entries that have expired are not accessible and are evicted the next time a client accesses the cache. The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of seconds is assumed. A value of zero implies no expiry. The default value is Note: The expiry delay parameter ( |
|
< |
Optional |
Specifies an implementation of a |
Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme
As with external-scheme, paged-external-schemes define caches which are not JVM heap based, allowing for greater storage capacity. The paged-external-scheme optimizes LRU eviction by using a paging approach (see <paging> subelement). See Chapter 14, "Serialization Paged Cache," for a detailed description of the paged cache functionality.
This scheme is implemented by the com.tangosol.net.cache.SerializationPagedCache class.
Cache entries are maintained over a series of pages, where each page is a separate com.tangosol.io.BinaryStore, obtained from the configured storage manager (see "Pluggable Storage Manager"). When a page is created it is considered to be the "current" page, and all write operations are performed against this page. On a configured interval (see <page-duration> subelement), the current page is closed and a new current page is created. Read operations for a given key are performed against the last page in which the key was stored. When the number of pages exceeds a configured maximum (see <page-limit> subelement), the oldest page is destroyed and those items which were not updated since the page was closed are evicted. For example, configuring a cache with a duration of ten minutes per page and a maximum of six pages results in entries being cached for at most an hour. Paging improves performance by avoiding individual delete operations against the storage manager as cache entries are removed or evicted. Instead, the cache simply releases its references to those entries and relies on the eventual destruction of an entire page to free the associated storage of all page entries in a single stroke.
External schemes use a pluggable store manager to create and destroy pages, and to access entries within those pages. Supported store-managers include:
async-store-manager—a wrapper providing asynchronous write capabilities for other store-manager implementations
custom-store-manager—allows definition of custom implementations of store-managers
bdb-store-manager—uses Berkeley Database JE to implement an on disk cache
lh-file-manager—uses a Coherence LH on disk database cache
nio-file-manager—uses NIO to implement a memory-mapped file based cache
nio-memory-manager—uses NIO to implement an off-heap, in-memory cache
Persistence (long-term storage)
Paged external caches are used for temporary storage of large data sets, for example as the back tier of an overflow-scheme. These caches are not suitable for long-term storage (persistence) and do not survive beyond the life of the JVM. Clustered persistence should be configured by using a read-write-backing-map-scheme on a distributed-scheme. If a non-clustered persistent cache is needed, refer to "Persistence (long-term storage)".
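The ten-minutes-per-page, six-page example described above might be expressed as follows, here assuming a Berkeley Database JE store manager (the scheme name is illustrative):

```xml
<paged-external-scheme>
  <scheme-name>example-paged</scheme-name>
  <!-- on-disk page storage via Berkeley Database JE -->
  <bdb-store-manager/>
  <!-- six active pages of ten minutes each: entries live at most an hour -->
  <page-limit>6</page-limit>
  <page-duration>10m</page-duration>
</paged-external-scheme>
```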
Table B-44 describes the elements you can define within the paged-external-scheme element.
Table B-44 paged-external-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
< |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
< |
Optional |
Specifies a custom implementation of the external paged cache. Any custom implementation must extend the com.tangosol.net.cache.SerializationPagedCache class and declare the exact same set of public constructors. |
|
Optional |
Specifies initialization parameters, for use in custom external paged cache implementations which implement the |
|
|
<listener> |
Optional |
Specifies an implementation of a |
|
< |
Required |
Specifies the maximum number of active pages for the paged cache. Legal values are positive integers between 2 and 3600. |
|
< |
Optional |
Specifies the length of time, in seconds, that a page in the paged cache is current. The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of seconds is assumed. Legal values are between 5 and 604800 seconds (one week) and zero (no expiry). Default value is zero. |
|
Optional |
Configures the paged external cache to use an asynchronous storage manager wrapper for any other storage manager. See "Pluggable Storage Manager" for more information. |
|
|
Optional |
Configures the paged external cache to use a custom storage manager implementation. |
|
|
Optional |
Configures the paged external cache to use Berkeley Database JE on disk databases for cache storage. |
|
|
Optional |
Configures the paged external cache to use a Coherence LH on disk database for cache storage. |
|
|
Optional |
Configures the paged external cache to use a memory-mapped file for cache storage. |
|
|
Optional |
Configures the paged external cache to use an off-heap memory region (outside the JVM heap) for cache storage. |
Used in: distributed-scheme
Specifies an implementation of a com.tangosol.net.partition.PartitionListener interface, which allows receiving partition distribution events.
Table B-45 describes the elements you can define within the partition-listener element.
Table B-45 partition-listener Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Required |
The name of a class that implements the |
Used in: backing-map-scheme (within a distributed-scheme only)
The partitioned element specifies whether the enclosed backing map is a PartitionAwareBackingMap. (This element is respected only when the backing-map-scheme is a child of a distributed-scheme.) If set to true, the specific scheme contained in the backing-map-scheme element is used to configure a backing map for each individual partition of the PartitionAwareBackingMap; otherwise, it is used for the entire backing map itself.

The concrete implementations of the PartitionAwareBackingMap interface are:
com.tangosol.net.partition.ObservableSplittingBackingCache
com.tangosol.net.partition.PartitionSplittingBackingCache
com.tangosol.net.partition.ReadWriteSplittingBackingMap
Valid values are true or false. Default value is false.
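For example, to configure a distributed scheme whose backing map is partition-aware, the partitioned element is placed inside the backing-map-scheme (the scheme name is illustrative):

```xml
<distributed-scheme>
  <scheme-name>example-partitioned</scheme-name>
  <backing-map-scheme>
    <!-- create one backing map instance per partition -->
    <partitioned>true</partitioned>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```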
Used in: distributed-scheme
The partitioned-quorum-policy-scheme element contains quorum policy settings for the partitioned cache service.
Table B-46 describes the elements you can define within the partitioned-quorum-policy-scheme element.
Table B-46 partitioned-quorum-policy-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
|
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
|
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
|
Optional |
Specifies the minimum number of ownership-enabled members of a partitioned service that must be present in order to perform partition distribution. The value must be a non-negative integer. |
|
|
Optional |
Specifies the minimum number of ownership-enabled members of a partitioned service that must be present in order to restore lost primary partitions from backup. The value must be a non-negative integer. |
|
|
Optional |
Specifies the minimum number of storage members of a cache service that must be present in order to process "read" requests. A "read" request is any request that does not mutate the state or contents of a cache. The value must be a non-negative integer. |
|
|
Optional |
Specifies the minimum number of storage members of a cache service that must be present in order to process "write" requests. A "write" request is any request that may mutate the state or contents of a cache. The value must be a non-negative integer. |
|
|
Optional |
Specifies a class that provides custom quorum policies. This element cannot be used together with the default quorum elements or the The class must implement the |
|
|
Optional |
Specifies a factory class for creating custom action policy instances. This element cannot be used together with the default quorum elements or the This element is used together with the |
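As a sketch, the default quorum thresholds might be declared like this. The element names shown (distribution-quorum, restore-quorum, read-quorum, write-quorum) correspond to the thresholds described above; the values are illustrative:

```xml
<partitioned-quorum-policy-scheme>
  <!-- members required before partitions are redistributed -->
  <distribution-quorum>4</distribution-quorum>
  <!-- members required before lost primaries are restored from backup -->
  <restore-quorum>3</restore-quorum>
  <!-- members required to service read and write requests -->
  <read-quorum>3</read-quorum>
  <write-quorum>5</write-quorum>
</partitioned-quorum-policy-scheme>
```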
Used in: ssl, identity-manager, trust-manager.
The provider element contains the configuration information for a security provider that extends the java.security.Provider class.
Table B-47 describes the subelements you can define within the provider element.
Table B-47 provider Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
|
Optional |
Specifies the name of a security provider that extends the The class name can be entered using either this element or by using the |
|
|
Optional |
Specifies the name of a security provider that extends the This element cannot be used together with the |
|
|
Optional |
Specifies a factory class for creating This element cannot be used together with the This element can be used together with the |
|
< |
Optional |
Specifies the name of a static factory method on the factory class which will perform object instantiation. |
|
Optional |
Contains class initialization parameters for the provider implementation. This element cannot be used together with the |
Used in: proxy-scheme.
The proxy-config element specifies the configuration information for the clustered service proxies managed by a proxy service. A service proxy is an intermediary between a remote client (connected to the cluster by using a connection acceptor) and a clustered service used by the remote client.
Table B-48 describes the elements you can define within the proxy-config element.
Table B-48 proxy-config Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
Optional |
Specifies the configuration information for a cache service proxy managed by the proxy service. |
|
|
Optional |
Specifies the configuration information for an invocation service proxy managed by the proxy service. |
Used in: caching-schemes.
The proxy-scheme element contains the configuration information for a clustered service that allows Coherence*Extend clients to connect to the cluster and use clustered services without having to join the cluster.
Table B-49 describes the subelements you can define within the proxy-scheme element.
Table B-49 proxy-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
< |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
< |
Optional |
Specifies the name of the service. |
|
< |
Optional |
Specifies the amount of time in milliseconds that a task can execute before it is considered "hung". Note: a posted task that has not yet started is never considered hung. This attribute is applied only if the thread pool is used (the |
|
< |
Optional |
Specifies the timeout value in milliseconds for requests executing on the service worker threads. This attribute is applied only if the thread pool is used (the |
|
< |
Optional |
Specifies the maximum amount of time a client will wait for a response before abandoning the original request. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes the following:
Legal values are positive integers or zero (indicating no default timeout). Default value is the value specified in the |
|
< |
Optional |
Specifies the number of daemon threads used by the service. If zero, all relevant tasks are performed on the service thread. Legal values are positive integers or zero. Default value is the value specified in the |
|
Required |
Contains the configuration of the connection acceptor used by the service to accept connections from Coherence*Extend clients and to allow them to use the services offered by the cluster without having to join the cluster. |
|
|
Optional |
Contains the configuration of the clustered service proxies managed by this service. |
|
|
< |
Optional |
The |
|
|
Optional |
Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the element is not specified for a given service, the default guardian timeout (as specified by the The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. |
|
|
Optional |
Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. Legal values are:
Default value is |
|
|
Optional |
Specifies the configuration information for a class that implements the The |
|
Optional |
Specifies quorum policy settings for the Proxy service. |
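A typical proxy-scheme ties these pieces together: a service name, a worker thread pool, and an acceptor-config with a TCP/IP acceptor. The address, port, and thread count below are illustrative:

```xml
<proxy-scheme>
  <scheme-name>example-proxy</scheme-name>
  <service-name>ExtendTcpProxyService</service-name>
  <!-- worker threads for handling client requests -->
  <thread-count>5</thread-count>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>192.168.0.2</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>
```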
Used in: proxy-scheme
The proxy-quorum-policy-scheme element contains quorum policy settings for the Proxy service.
Table B-50 describes the subelements you can define within the proxy-quorum-policy-scheme element.
Table B-50 proxy-quorum-policy-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
|
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
|
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
|
Optional |
Specifies the minimum number of members of a proxy service that must be present in order to allow client connections. The value must be a non-negative integer. |
|
|
Optional |
Specifies a class that provides custom quorum policies. This element cannot be used together with the The class must implement the |
|
|
Optional |
Specifies a factory class for creating custom action policy instances. This element cannot be used together with the This element is used together with the |
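For instance, to refuse client connections until at least three proxy service members are running, the connect quorum threshold described above could be set as follows (the connect-quorum element name corresponds to that threshold; the value is illustrative):

```xml
<proxy-quorum-policy-scheme>
  <!-- require three proxy members before accepting client connections -->
  <connect-quorum>3</connect-quorum>
</proxy-quorum-policy-scheme>
```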
Used in: caching-schemes, distributed-scheme.
The read-write-backing-map-scheme defines a backing map which provides a size limited cache of a persistent store. See Chapter 13, "Read-Through, Write-Through, Write-Behind, and Refresh-Ahead Caching" for more details.
The read-write-backing-map-scheme is implemented by the com.tangosol.net.cache.ReadWriteBackingMap class.
A read-write backing map maintains a cache backed by an external persistent cache store (see <cachestore-scheme> subelement); cache misses read through to the back-end store to retrieve the data. If a writable store is provided, cache writes also propagate to the cache store.
When enabled (see <refresh-ahead-factor> subelement), the cache watches for recently accessed entries which are about to expire and asynchronously reloads them from the cache store. This insulates the application from potentially slow reads against the cache store as items periodically expire.
When enabled (see <write-delay> subelement), the cache delays writes to the back-end cache store. This allows the writes to be batched (see <write-batch-factor> subelement) into more efficient update blocks, which occur asynchronously from the client thread.
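Putting read-through and write-behind together, a sketch of a read-write-backing-map-scheme might look like this. Note that com.example.MyCacheStore is a hypothetical CacheStore implementation, and the sizes and delays are illustrative:

```xml
<read-write-backing-map-scheme>
  <scheme-name>example-rwbm</scheme-name>
  <internal-cache-scheme>
    <!-- size-limited in-memory cache in front of the store -->
    <local-scheme>
      <high-units>10000</high-units>
    </local-scheme>
  </internal-cache-scheme>
  <cachestore-scheme>
    <class-scheme>
      <!-- hypothetical CacheStore implementation -->
      <class-name>com.example.MyCacheStore</class-name>
    </class-scheme>
  </cachestore-scheme>
  <!-- defer writes by ten seconds to enable write-behind batching -->
  <write-delay>10s</write-delay>
</read-write-backing-map-scheme>
```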
Table B-51 describes the elements you can define within the read-write-backing-map-scheme element.
Table B-51 read-write-backing-map-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
< |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
< |
Optional |
Specifies a custom implementation of the read write backing map. Any custom implementation must extend the |
|
Optional |
Specifies initialization parameters, for use in custom read write backing map implementations which implement the |
|
|
< |
Optional |
Specifies an implementation of a |
|
Optional |
Specifies the store to cache. If unspecified the cached data will only reside within the internal cache (see |
|
|
< |
Optional |
Specifies the timeout interval to use for [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. If |
|
Required |
Specifies a cache-scheme which will be used to cache entries. Legal values are: |
|
|
< |
Optional |
Specifies a cache-scheme for maintaining information on cache misses. The miss-cache is used to track keys which were not found in the cache store. The knowledge that a key is not in the cache store allows some operations to perform faster, as they can avoid querying the potentially slow cache store. A size-limited scheme may be used to control how many misses are cached. If unspecified, no cache-miss data will be maintained. Legal values are: |
|
< |
Optional |
Specifies if the cache is read only. If |
|
<write-delay> |
Optional |
Specifies the time interval by which a write-behind queue defers asynchronous writes to the cache store. The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of seconds is assumed. If zero, synchronous writes to the cachestore (without queuing) will take place, otherwise the writes will be asynchronous and deferred by specified time interval after the last update to the value in the cache. Default is zero. |
|
< |
Optional |
The write-batch-factor element is used in calculating the effective write-behind delay: D' = (1.0 - F)*D, where D = write-delay interval and F = write-batch-factor. Conceptually, the write-behind thread uses the following logic when performing a batched update:
This element is only applicable if asynchronous writes are enabled (that is, the value of the write-delay element is greater than zero) and the |
|
< |
Optional |
Specifies the size of the write-behind queue at which additional actions could be taken. This value controls the frequency of the corresponding log messages. For example, a value of |
|
< |
Optional |
The refresh-ahead-factor element is used to calculate the "soft-expiration" time for cache entries. Soft-expiration is the point in time before the actual expiration after which any access request for an entry will schedule an asynchronous load request for the entry. This attribute is only applicable if the internal cache is a |
|
< |
Optional |
Specifies whether exceptions caught during synchronous cachestore operations are rethrown to the calling thread (possibly over the network to a remote member). If the value of this element is false, an exception caught during a synchronous cachestore operation is logged locally and the internal cache is updated. If the value is true, the exception is rethrown to the calling thread and the internal cache is not changed. If the operation was called within a transactional context, this would have the effect of rolling back the current transaction. Legal values are |
Used in: tcp-initiator
The remote-addresses element contains the address (IP or DNS name) and port of one or more TCP/IP acceptors. The TCP/IP initiator uses this information to establish a connection with a proxy service on a remote cluster. TCP/IP acceptors are configured within the proxy-scheme element. The TCP/IP initiator attempts to connect to the addresses in a random order until either the list is exhausted or a TCP/IP connection is established. See Oracle Coherence Client Guide for additional details and example configurations.
The following example configuration instructs the initiator to connect to 192.168.0.2:9099 and 192.168.0.3:9099 in a random order:
<remote-addresses>
<socket-address>
<address>192.168.0.2</address>
<port>9099</port>
</socket-address>
<socket-address>
<address>192.168.0.3</address>
<port>9099</port>
</socket-address>
</remote-addresses>
Table B-52 describes the elements you can define within the remote-addresses element.
Table B-52 remote-addresses Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
Optional |
Specifies the address (IP or DNS name) and port on which a TCP/IP acceptor is listening. Multiple |
|
|
Optional |
Contains the configuration for a A |
Used in: cachestore-scheme, caching-schemes, near-scheme.
The remote-cache-scheme element contains the configuration information necessary to use a clustered cache from outside the cluster by using Coherence*Extend.
Table B-53 describes the elements you can define within the remote-cache-scheme element.
Table B-53 remote-cache-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
< |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
< |
Optional |
Specifies the name of the service which will manage caches created from this scheme. |
|
Optional |
Specifies the configuration information for a bundling strategy. |
|
|
Required |
Contains the configuration of the connection initiator used by the service to establish a connection with the cluster. |
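As a sketch of how these subelements fit together (the scheme name, service name, address, and port below are illustrative placeholders, not values from this document), a minimal remote-cache-scheme definition might look like:

```xml
<remote-cache-scheme>
  <!-- hypothetical scheme and service names -->
  <scheme-name>extend-remote</scheme-name>
  <service-name>ExtendTcpCacheService</service-name>
  <!-- required: how the client connects to the cluster -->
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>192.168.0.2</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>
```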
Used in: caching-schemes
The remote-invocation-scheme element contains the configuration information necessary to execute tasks within the context of a cluster without having to first join the cluster. This scheme uses Coherence*Extend to connect to the cluster.
Table B-54 describes the elements you can define within the remote-invocation-scheme element.
Table B-54 remote-invocation-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
< |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
< |
Optional |
Specifies the name of the service. |
|
Required |
Contains the configuration of the connection initiator used by the service to establish a connection with the cluster. |
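The structure mirrors remote-cache-scheme: an optional name pair plus the required initiator-config. The following sketch uses placeholder names and addresses:

```xml
<remote-invocation-scheme>
  <!-- hypothetical scheme and service names -->
  <scheme-name>extend-invocation</scheme-name>
  <service-name>ExtendTcpInvocationService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>192.168.0.2</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-invocation-scheme>
```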
Used in: caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme, versioned-backing-map-scheme
The replicated scheme defines caches which fully replicate all their cache entries on each cluster node running the specified service. See "Replicated Cache" for a more detailed description of replicated caches.
Replicated caches support cluster-wide key-based locking so that data can be modified in a cluster without encountering the classic missing update problem. Note that any operation made without holding an explicit lock is still atomic, but there is no guarantee that the value stored in the cache does not change between atomic operations.
Storage for the cache is specified by using the backing-map scheme (see <backing-map> subelement). For instance, a replicated cache which uses a local-scheme for its backing map results in cache entries being stored in memory.
Table B-55 describes the elements you can define within the replicated-scheme element.
Table B-55 replicated-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
< |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
< |
Optional |
Specifies the name of the service which will manage caches created from this scheme. Services are configured from within the |
|
Optional |
Specifies either: the class configuration information for a |
|
|
< |
Optional |
Specifies an implementation of a |
|
< |
Optional |
Specifies what type of cache will be used within the cache server to store the entries. Legal values are: To ensure cache coherence, the backing-map of a replicated cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the |
|
< |
Optional |
Specifies the duration of the standard lease in milliseconds. When a lease has aged past this number of milliseconds, the lock will automatically be released. Set this value to zero to specify a lease that never expires. The purpose of this setting is to avoid deadlocks or blocks caused by stuck threads; the value should be set higher than the longest expected lock duration (for example, higher than a transaction timeout). It is also recommended to set this value higher than |
|
< |
Optional |
Specifies the lease ownership granularity. Available since release 2.3. Legal values are:
A value of thread means that locks are held by a thread that obtained them and can only be released by that thread. A value of member means that locks are held by a cluster node and any thread running on the cluster node that obtained the lock can release it. Default value is the |
|
< |
Optional |
Specifies whether the lease issues should be transferred to the most recent lock holders. Legal values are true or false. Default value is the |
|
|
Optional |
Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the element is not specified for a given service, the default guardian timeout (as specified by the The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. |
|
|
Optional |
Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. Legal values are:
Default value is |
|
|
Optional |
Specifies the configuration information for a class that implements the The |
|
< |
Optional |
The |
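A minimal replicated-scheme definition, using a local-scheme backing map so that entries are stored in memory, might be sketched as follows (the scheme name is a placeholder; ReplicatedCache is the conventional service name for this service type):

```xml
<replicated-scheme>
  <!-- hypothetical scheme name -->
  <scheme-name>example-replicated</scheme-name>
  <service-name>ReplicatedCache</service-name>
  <!-- in-memory storage for the replicated entries -->
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</replicated-scheme>
```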
Used in: acceptor-config, defaults, distributed-scheme, initiator-config, invocation-scheme, optimistic-scheme, replicated-scheme, transactional-scheme.
The serializer element contains the class configuration information for a com.tangosol.io.Serializer implementation.
The serializer element accepts either a literal or a full configuration. The preferred approach is to reference a full configuration which is defined in the operational configuration file. By default, the operational configuration file contains two serializer class definitions: one for Java (default) and one for POF. See "serializers".
The following example demonstrates referring to the POF serializer definition that is in the operational configuration file:
... <serializer>pof</serializer> ...
The following example demonstrates a full serializer class configuration:
...
<serializer>
<instance>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
<init-params>
<init-param>
<param-type>String</param-type>
<param-value>my-pof-config.xml</param-value>
</init-param>
</init-params>
</instance>
</serializer>
...
Table B-56 describes the elements you can define within the serializer element.
Table B-56 serializer Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
|
Optional |
Contains the class configuration information for a |
Used in: remote-addresses
The socket-address element specifies the address and port on which a TCP/IP acceptor is listening.
Table B-57 describes the subelements you can define within the socket-address element.
Table B-57 socket-address Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Required |
Specifies the IP address (IP or DNS name) on which a TCP/IP acceptor socket is listening. |
|
< |
Required |
Specifies the port on which a TCP/IP acceptor socket is listening. Legal values are from 1 to 65535. |
Used in: tcp-acceptor, tcp-initiator, defaults.
The <socket-provider> element contains the configuration information for a socket and channel factory that implements the com.tangosol.net.SocketProvider interface. The socket providers configured within the <tcp-acceptor> and <tcp-initiator> elements are for use with Coherence*Extend. Socket providers for TCMP are configured in an operational override file within the <unicast-listener> element.
The <socket-provider> element accepts either a literal or a full configuration. The preferred approach is to reference a configuration which is defined in the operational configuration file. See "socket-providers".
Out-of-box, the operational configuration file contains two socket provider configurations: system (default) and ssl. Additional socket providers can be defined in an operational override file as required. Socket provider configurations are referred to using their id attribute name. The following example refers to the pre-defined SSL socket provider configuration:
<socket-provider>ssl</socket-provider>
The preconfigured system property override is tangosol.coherence.socketprovider. See Appendix C, "Command Line Overrides" for more information.
Table B-58 describes the subelements you can define within the socket-provider element.
Table B-58 socket-provider Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
|
Optional |
Specifies a socket provider that produces instances of the JVM's default socket and channel implementations. |
|
|
Optional |
Specifies a socket provider that produces socket and channel implementations which utilize SSL. |
|
|
Optional |
Contains the class configuration information for a |
Used in: socket-provider.
The <ssl> element contains the configuration information for a socket provider that produces socket and channel implementations which utilize SSL.
Table B-59 describes the elements you can define within the ssl element.
Table B-59 ssl Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
|
Optional |
Specifies the name of the protocol used by the socket and channel implementations produced by the SSL socket provider. The default value is |
|
|
Optional |
Specifies the configuration for a security provider instance. |
|
|
Optional |
Specifies the configuration information for an implementation of the A |
|
Optional |
Specifies the configuration information for initializing an identity manager instance. |
|
|
Optional |
Specifies the configuration information for initializing a trust manager instance. |
|
|
|
Optional |
Specifies the configuration information for an implementation of the A |
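Pulling the ssl subelements together, a socket provider configured for SSL with both an identity manager and a trust manager might be sketched as follows. The key store URLs, passwords, and algorithm below are illustrative placeholders:

```xml
<socket-provider>
  <ssl>
    <protocol>TLS</protocol>
    <!-- credentials this process presents to peers (placeholder store) -->
    <identity-manager>
      <key-store>
        <url>file:server.jks</url>
        <password>password</password>
      </key-store>
      <password>password</password>
    </identity-manager>
    <!-- peers this process is willing to trust (placeholder store) -->
    <trust-manager>
      <key-store>
        <url>file:trust.jks</url>
        <password>password</password>
      </key-store>
    </trust-manager>
  </ssl>
</socket-provider>
```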
Used in: acceptor-config.
The tcp-acceptor element specifies the configuration information for a connection acceptor that accepts connections from Coherence*Extend clients over TCP/IP. See Oracle Coherence Client Guide for additional details and example configurations.
Table B-60 describes the elements you can define within the tcp-acceptor element.
Table B-60 tcp-acceptor Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
Optional |
Specifies the local address (IP or DNS name) and port on which the TCP/IP server socket (opened by the connection acceptor) is bound. |
|
|
Optional |
Contains the configuration for a A |
|
|
Optional |
Specifies the configuration information for a socket and channel factory that implements the |
|
|
|
Optional |
Specifies whether or not a TCP/IP socket can be bound to an in-use or recently used address. This setting is deprecated because the resulting behavior is significantly different across operating system implementations. The JVM will, in general, select a reasonable default which is safe for the target operating system. Valid values are |
|
< |
Optional |
Indicates whether keep alive ( |
|
< |
Optional |
Indicates whether TCP delay (Nagle's algorithm) is enabled on a TCP/IP socket. Valid values are |
|
< |
Optional |
Configures the size of the underlying TCP/IP socket network receive buffer. Increasing the receive buffer size can increase the performance of network I/O for high-volume connections, while decreasing it can help reduce the backlog of incoming data. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m|G|g]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:
If the value does not contain a factor, a factor of one is assumed. Default value is O/S dependent. |
|
< |
Optional |
Configures the size of the underlying TCP/IP socket network send buffer. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m|G|g]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:
If the value does not contain a factor, a factor of one is assumed. Default value is O/S dependent. |
|
< |
Optional |
Configures the size of the TCP/IP server socket backlog queue. Valid values are positive integers. Default value is O/S dependent. |
|
< |
Optional |
Enables [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. Linger is disabled by default. |
|
Optional |
A collection of IP addresses of TCP/IP initiator hosts that are allowed to connect to this TCP/IP acceptor. |
|
|
|
Optional |
Specifies whether or not the suspect protocol is enabled in order to detect and close rogue Coherence*Extend client connections. The suspect protocol is enabled by default. Valid values are |
|
|
Optional |
Specifies the outgoing connection backlog (in bytes) after which the corresponding client connection is marked as suspect. A suspect client connection is then monitored until it is no longer suspect or it is closed in order to protect the proxy server from running out of memory. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m|G|g|T|t]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:
If the value does not contain a factor, a factor of one is assumed. Default value is 10000000. |
|
|
Optional |
Specifies the outgoing connection backlog (in messages) after which the corresponding client connection is marked as suspect. A suspect client connection is then monitored until it is no longer suspect or it is closed in order to protect the proxy server from running out of memory. Default value is |
|
|
Optional |
Specifies the outgoing connection backlog (in bytes) at which point a suspect client connection is no longer considered to be suspect. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m|G|g|T|t]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:
If the value does not contain a factor, a factor of one is assumed. Default value is 2000000. |
|
|
Optional |
Specifies the outgoing connection backlog (in messages) at which point a suspect client connection is no longer considered to be suspect. Default value is |
|
|
Optional |
Specifies the outgoing connection backlog (in bytes) at which point the corresponding client connection must be closed in order to protect the proxy server from running out of memory. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m|G|g|T|t]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:
If the value does not contain a factor, a factor of one is assumed. Default value is 100000000. |
|
|
Optional |
Specifies the outgoing connection backlog (in messages) at which point the corresponding client connection must be closed in order to protect the proxy server from running out of memory. Default value is |
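Within a proxy-scheme's acceptor-config, a tcp-acceptor commonly specifies only the local address to listen on, plus any socket tuning options. The address and port below are placeholders:

```xml
<acceptor-config>
  <tcp-acceptor>
    <!-- address and port on which the proxy listens (placeholders) -->
    <local-address>
      <address>192.168.0.2</address>
      <port>9099</port>
    </local-address>
  </tcp-acceptor>
</acceptor-config>
```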
Used in: initiator-config.
The tcp-initiator element specifies the configuration information for a connection initiator that enables Coherence*Extend clients to connect to a remote cluster by using TCP/IP. See Oracle Coherence Client Guide for additional details and example configurations.
Table B-61 describes the elements you can define within the tcp-initiator element.
Table B-61 tcp-initiator Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
Optional |
Specifies the local address (IP or DNS name) and port on which the TCP/IP client socket (opened by the TCP/IP initiator) is bound. |
|
|
Required |
Contains the address of one or more TCP/IP connection acceptors. The TCP/IP connection initiator uses this information to establish a TCP/IP connection with a remote cluster. |
|
|
Optional |
Specifies the configuration information for a socket and channel factory that implements the |
|
|
|
Optional |
Specifies whether or not a TCP/IP socket can be bound to an in-use or recently used address. This setting is deprecated because the resulting behavior is significantly different across operating system implementations. The JVM will, in general, select a reasonable default which is safe for the target operating system. Valid values are |
|
< |
Optional |
Indicates whether keep alive ( |
|
< |
Optional |
Indicates whether TCP delay (Nagle's algorithm) is enabled on a TCP/IP socket. Valid values are |
|
< |
Optional |
Configures the size of the underlying TCP/IP socket network receive buffer. Increasing the receive buffer size can increase the performance of network I/O for high-volume connections, while decreasing it can help reduce the backlog of incoming data. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m|G|g]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:
If the value does not contain a factor, a factor of one is assumed. Default value is O/S dependent. |
|
< |
Optional |
Configures the size of the underlying TCP/IP socket network send buffer. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m|G|g]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:
If the value does not contain a factor, a factor of one is assumed. Default value is O/S dependent. |
|
< |
Optional |
Specifies the maximum amount of time to wait while establishing a connection with a connection acceptor. The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. Default value is an infinite timeout. |
|
< |
Optional |
Enables [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. Linger is disabled by default. |
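A tcp-initiator pairs the required remote-addresses list with optional socket tuning such as a connect timeout. The address, port, and timeout below are placeholders:

```xml
<initiator-config>
  <tcp-initiator>
    <!-- acceptor(s) to connect to (placeholder address and port) -->
    <remote-addresses>
      <socket-address>
        <address>192.168.0.3</address>
        <port>9099</port>
      </socket-address>
    </remote-addresses>
    <!-- give up on a connection attempt after 10 seconds -->
    <connect-timeout>10s</connect-timeout>
  </tcp-initiator>
</initiator-config>
```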
Used in: caching-schemes, version-persistent-scheme, version-transient-scheme.
The transactional-scheme element defines a transactional cache, which is a specialized distributed cache that provides transactional guarantees. Multiple transactional-scheme elements may be defined to support different configurations. Applications use transactional caches in one of three ways:
Applications use the CacheFactory.getCache() method to get an instance of a transactional cache. In this case, there are implicit transactional guarantees when performing cache operations. However, default transaction behavior cannot be changed.
Applications explicitly use the Transaction Framework API to create a Connection instance that uses a transactional cache. In this case, cache operations are performed within a transaction and the application has full control to change default transaction behavior as required.
Java EE applications use the Coherence Resource Adapter to create a Transaction Framework API Connection instance that uses a transactional cache. In this case, cache operations are performed within a transaction that can participate as part of a distributed (global) transaction. Applications can change some default transaction behavior.
Table B-62 describes the elements you can define within the transactional-scheme element.
Table B-62 transactional-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
< |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
< |
Optional |
Specifies the name of the service which manages caches created from this scheme. The default service name if no service name is provided is |
|
Optional |
Specifies either: the class configuration information for a |
|
|
< |
Optional |
Specifies the number of daemon threads used by the partitioned cache service. If zero, all relevant tasks are performed on the service thread. Legal values are positive integers or zero. Default value is the Specifying the thread-count value will change the default behavior of the Transactional Framework's internal transaction caches that are used for transactional storage and recovery. |
|
< |
Optional |
Specifies whether a cluster node will contribute storage to the cluster, that is, maintain partitions. When disabled the node is considered a cache client. Normally this value should be left unspecified within the configuration file, and instead set on a per-process basis using the tangosol.coherence.distributed.localstorage system property. This allows cache clients and servers to use the same configuration descriptor. Legal values are |
|
< |
Optional |
Specifies the number of partitions that a partitioned (distributed) cache will be "chopped up" into. Each member running the partitioned cache service that has the local-storage ( The number of partitions should be a prime number and sufficiently large such that a given partition is expected to be no larger than 50MB in size. The following are good defaults for sample service storage sizes:
| Service Storage | Partition Count |
|---|---|
| 100M | 257 |
| 1G | 509 |
| 10G | 2039 |
| 50G | 4093 |
| 100G | 8191 |
A list of the first 1,000 primes can be found online. Valid values are positive integers. Default value is the value specified in the |
|
|
Optional |
Specifies the transaction storage size. Once the transactional storage size is reached, an eviction policy is used that removes 25% of eligible entries from storage. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m|G|g|T|t]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:
If the value does not contain a factor, a factor of one is assumed. Default value is 10MB. |
|
< |
Optional |
Specifies the threshold for the primary buckets distribution in kilobytes. When a new node joins the partitioned cache service or when a member of the service leaves, the remaining nodes perform a task of bucket ownership re-distribution. During this process, the existing data gets re-balanced along with the ownership information. This parameter indicates a preferred message size for data transfer communications. Setting this value lower will make the distribution process take longer, but will reduce network bandwidth utilization during this activity. Legal values are integers greater than zero. Default value is the |
|
< |
Optional |
Specifies the number of members of the partitioned cache service that hold the backup data for each unit of storage in the cache. A value of 0 means that in the case of abnormal termination, some portion of the data in the cache will be lost. A value of N means that if up to N cluster nodes terminate immediately, the cache data will be preserved. To maintain a partitioned cache of size M, the total memory usage in the cluster does not depend on the number of cluster nodes and will be on the order of M*(N+1). Recommended values are 0 or 1. Default value is the |
|
< |
Optional |
Specifies the amount of time in milliseconds that a task can execute before it is considered "hung". Note: a posted task that has not yet started is never considered hung. This attribute is applied only if the thread pool is used (the |
|
< |
Optional |
Specifies the timeout value in milliseconds for requests executing on the service worker threads. This attribute is applied only if the thread pool is used (the |
|
< |
Optional |
Specifies the maximum amount of time a client will wait for a response before abandoning the original request. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes the following:
Legal values are positive integers or zero (indicating no default timeout). Default value is the value specified in the |
|
|
Optional |
Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the element is not specified for a given service, the default guardian timeout (as specified by the The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. |
|
|
Optional |
Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. Legal values are:
Default value is |
|
Optional |
Specifies quorum policy settings for the partitioned cache service. |
|
|
< |
Optional |
The element is intended to be used by cache servers (that is, |
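A transactional-scheme looks much like a distributed-scheme definition. The sketch below uses a placeholder scheme name; TransactionalCache is the conventional service name for this service type, and the thread-count value is illustrative:

```xml
<transactional-scheme>
  <!-- hypothetical scheme name -->
  <scheme-name>example-transactional</scheme-name>
  <service-name>TransactionalCache</service-name>
  <!-- illustrative worker thread pool size -->
  <thread-count>10</thread-count>
  <autostart>true</autostart>
</transactional-scheme>
```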
Used in: ssl.
The <trust-manager> element contains the configuration information for initializing a javax.net.ssl.TrustManager instance.
A trust manager is responsible for managing the trust material that is used when making trust decisions and for deciding whether credentials presented by a peer should be accepted.
A valid trust-manager configuration will contain at least one child element.
Table B-63 describes the elements you can define within the trust-manager element.
Table B-63 trust-manager Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
|
Optional |
Specifies the algorithm used by the trust manager. The default value is |
|
|
Optional |
Specifies the configuration for a security provider instance. |
|
Optional |
Specifies the configuration for a key store implementation. |
Used in: versioned-backing-map-scheme.
The version-persistent-scheme defines a cache for storing object versioning information in a clustered cache. Specifying a size limit on the scheme's backing map allows control over how many version identifiers are tracked.
Table B-64 describes the elements you can define within the version-persistent-scheme element.
Table B-64 version-persistent-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the name modifier that is used to create a cache of version objects associated with a given cache. The value of this element is appended to the base cache name. Legal value is a string. Default value is |
|
< |
Required |
Specifies the scheme for the cache used to maintain the versioning information. Legal values are: |
Used in: versioned-near-scheme, versioned-backing-map-scheme.
The version-transient-scheme defines a cache for storing object versioning information for use in versioned near caches. Specifying a size limit on the scheme's backing map allows control over how many version identifiers are tracked.
The following table describes the elements you can define within the version-transient-scheme element.
Table B-65 version-transient-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the name modifier that is used to create a cache of version objects associated with a given cache. The value of this element is appended to the base cache name. Legal value is a string. Default value is "-version". For example, if the base cache is named |
|
< |
Required |
Specifies the scheme for the cache used to maintain the versioning information. Legal values are: |
Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme.
The versioned-backing-map-scheme is an extension of a read-write-backing-map-scheme, defining a size-limited cache of a persistent store. It uses object versioning to determine which updates need to be written to the persistent store. See "Versioning" for more information.
The versioned-backing-map-scheme scheme is implemented by the com.tangosol.net.cache.VersionedBackingMap class.
As with the read-write-backing-map-scheme, a versioned backing map maintains a cache backed by an external persistent cache store (see <cachestore-scheme> subelement); cache misses read through to the back-end store to retrieve the data. Cache stores may also support updates to the back-end data store.
Refresh-Ahead and Write-Behind Caching
As with the read-write-backing-map-scheme both the refresh-ahead (see <refresh-ahead> subelement) and write-behind (see <write-behind> subelement) caching optimizations are supported. See "Read-Through, Write-Through, Write-Behind, and Refresh-Ahead Caching" in the Oracle Coherence Getting Started Guide for more details.
For entries whose values implement the com.tangosol.util.Versionable interface, the versioned backing map will use the version identifier to determine if an update must be written to the persistent store. The primary benefit of this feature is that in the event of cluster node failover, the backup node can determine if the most recent version of an entry has already been written to the persistent store, and if so it can avoid an extraneous write.
Table B-66 describes the elements you can define within the versioned-backing-map-scheme element.
Table B-66 versioned-backing-map-scheme Subelement
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
< |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
< |
Optional |
Specifies a custom implementation of the versioned backing map. Any custom implementation must extend the |
|
Optional |
Specifies initialization parameters, for use in custom versioned backing map implementations which implement the |
|
|
< |
Optional |
Specifies an implementation of a |
|
Optional |
Specifies the store to cache. If unspecified the cached data will only reside within the (see |
|
|
< |
Required |
Specifies a cache-scheme which will be used to cache entries. Legal values are: |
|
< |
Optional |
Specifies a cache-scheme for maintaining information on cache misses. The miss-cache is used to track keys which were not found in the cache store. The knowledge that a key is not in the cache store allows some operations to perform faster, as they can avoid querying the potentially slow cache store. A size-limited scheme may be used to control how many misses are cached. If unspecified, no cache-miss data will be maintained. Legal values are: |
|
< |
Optional |
Specifies if the cache is read only. If true, the cache loads data from the cachestore for read operations and does not perform any writes to the cachestore when the cache is updated. Legal values are |
|
< |
Optional |
Specifies the time interval by which a write-behind queue defers asynchronous writes to the cachestore. The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of seconds is assumed. If zero, synchronous writes to the cachestore (without queuing) take place; otherwise, the writes are asynchronous and deferred by the specified interval after the last update to the value in the cache. Default is zero. |
|
< |
Optional |
The write-batch-factor element is used to calculate the "soft-ripe" time for write-behind queue entries. A queue entry is considered to be "ripe" for a write operation if it has been in the write-behind queue for no less than the write-delay interval. The "soft-ripe" time is the point in time before the actual "ripe" time after which an entry will be included in a batched asynchronous write operation to the |
|
< |
Optional |
Specifies the size of the write-behind queue at which additional actions could be taken. This value controls the frequency of the corresponding log messages. For example, a value of |
|
< |
Optional |
The |
|
< |
Optional |
Specifies whether exceptions caught during synchronous cachestore operations are rethrown to the calling thread (possibly over the network to a remote member). If the value of this element is false, an exception caught during a synchronous cachestore operation is logged locally and the internal cache is updated. If the value is true, the exception is rethrown to the calling thread and the internal cache is not changed. If the operation was called within a transactional context, this would have the effect of rolling back the current transaction. Legal values are |
|
Optional |
Specifies a cache-scheme for tracking the version identifier for entries in the persistent cachestore (see |
|
|
Optional |
Specifies a cache-scheme for tracking the version identifier for entries in the transient internal cache (see |
|
|
< |
Optional |
Specifies whether the backing map is responsible for keeping the transient version cache up to date. If disabled, the backing map manages the transient version cache only for operations of which no other party is aware (such as entry expiry). This is used when a transient version cache of the same name is already being maintained at a higher level, for instance within a |
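The elements described in the table above can be combined into a single backing map configuration. The following is a minimal sketch only; the scheme name, cache store class name, and element values are illustrative assumptions, not values taken from this reference:

```xml
<!-- Illustrative sketch: scheme name, class name, and values are assumptions. -->
<versioned-backing-map-scheme>
  <scheme-name>example-write-behind</scheme-name>
  <!-- Internal cache used to hold the entries themselves. -->
  <internal-cache-scheme>
    <local-scheme/>
  </internal-cache-scheme>
  <!-- Custom cache store implementation (hypothetical class name). -->
  <cachestore-scheme>
    <class-scheme>
      <class-name>com.example.ExampleCacheStore</class-name>
    </class-scheme>
  </cachestore-scheme>
  <read-only>false</read-only>
  <!-- Defer asynchronous cachestore writes by ten seconds. -->
  <write-delay>10s</write-delay>
  <!-- Rethrow synchronous cachestore exceptions to the caller. -->
  <rollback-cachestore-failures>true</rollback-cachestore-failures>
</versioned-backing-map-scheme>
```

A zero or absent write-delay would instead cause synchronous (write-through) cachestore operations, as described above.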
Used in: caching-schemes.
Note:
As of Coherence release 2.3, it is suggested that a near-scheme be used instead of versioned-near-scheme. Legacy Coherence applications use versioned-near-scheme to ensure coherence through object versioning. As of Coherence 2.3, the near-scheme includes a better alternative, in the form of reliable and efficient front cache invalidation.
As with the near-scheme, the versioned-near-scheme defines a two-tier cache consisting of a small, fast front tier and a higher-capacity but slower back tier. The front tier (see the <front-scheme> subelement) and back tier (see the <back-scheme> subelement) are expressed as normal cache-schemes. A typical deployment might use a local-scheme for the front tier and a distributed-scheme for the back tier. See "Near Cache" for a more detailed description of versioned near caches.
The versioned near scheme is implemented by the com.tangosol.net.cache.VersionedNearCache class.
Object versioning is used to ensure coherence between the front and back tiers. See the <version-transient-scheme> subelement for more information.
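The two-tier structure described above can be sketched as follows. This is an illustrative fragment only; the scheme names, the high-units value, and the cache-name suffix are assumptions rather than values from this reference:

```xml
<!-- Illustrative sketch: scheme names and values are assumptions. -->
<versioned-near-scheme>
  <scheme-name>example-versioned-near</scheme-name>
  <!-- Small, fast front tier. -->
  <front-scheme>
    <local-scheme>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <!-- Higher-capacity, slower back tier. -->
  <back-scheme>
    <distributed-scheme>
      <scheme-ref>example-distributed</scheme-ref>
    </distributed-scheme>
  </back-scheme>
  <!-- Transient version cache used to keep the tiers coherent. -->
  <version-transient-scheme>
    <cache-name-suffix>-version</cache-name-suffix>
  </version-transient-scheme>
</versioned-near-scheme>
```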
Table B-67 describes the elements you can define within the versioned-near-scheme element.
Table B-67 versioned-near-scheme Subelements
| Element | Required/Optional | Description |
|---|---|---|
|
< |
Optional |
Specifies the scheme's name. The name must be unique within a configuration file. |
|
< |
Optional |
Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" for more information. |
|
< |
Optional |
Specifies a custom implementation of the versioned near cache. The specified class must extend the |
|
Optional |
Specifies initialization parameters, for use in custom versioned near cache implementations which implement the |
|
|
< |
Optional |
Specifies an implementation of a |
|
< |
Required |
Specifies the cache-scheme to use for the front tier of the versioned near cache. For example:
<front-scheme>
<local-scheme>
<scheme-ref>default-eviction</scheme-ref>
</local-scheme>
</front-scheme>
or
<front-scheme>
<class-scheme>
<class-name>com.tangosol.util.SafeHashMap</class-name>
<init-params></init-params>
</class-scheme>
</front-scheme>
|
|
< |
Required |
Specifies the cache-scheme to use for the back tier of the versioned near cache. For example:
<back-scheme>
<distributed-scheme>
<scheme-ref>default-distributed</scheme-ref>
</distributed-scheme>
</back-scheme>
|
|
Optional |
Specifies a scheme for versioning cache entries, which ensures coherence between the front and back tiers. |
|
|
< |
Optional |
The autostart element is intended to be used by cache servers (that is, |