3 Developing Applications for ActiveCache

All of the files required by ActiveCache are installed automatically as part of the Oracle WebLogic Server Typical (default) configuration. The default root directory for the installation is C:\Oracle\Middleware. WebLogic Server is installed in C:\Oracle\Middleware\wlserver_10.3 and Coherence is installed in C:\Oracle\Middleware\coherence_3.5.

The default installation includes all of the files that WebLogic Server needs to work with Coherence, Coherence*Web, and TopLink Grid. For a description of the installed files that are referred to frequently in this book, see the "Glossary".

Developing Applications to Use ActiveCache—Main Steps

The following steps summarize the procedure for using Coherence caches with applications running on WebLogic Server.

  1. Choose the cluster topology on which your applications will run. You can decide whether the WebLogic Server instances are data members of the Coherence cache cluster or clients only. See "Choose the ActiveCache Deployment Topology".

  2. Specify the configuration for the Coherence caches that your applications will use. See "Create and Configure a Data Cache".

  3. Add code in your Web application to access the Coherence caches. You can use either JNDI lookup or resource injection to access a Coherence NamedCache cache object. See "Access the Data Cache from your Application Code".

  4. Store the cache configuration file with the application. Where you store the file depends on how you want the caches to be visible to the deployed applications. See "Locate the Cache Configuration File".

  5. Determine how the cache server will access the cache configuration file when it starts. See "Access the Cache Configuration on Cache Server Startup".

  6. Cluster nodes are classloader-scoped. Where you deploy coherence.jar in the classloader hierarchy determines how cluster membership is handled. See "Package Applications and Configure Cluster Nodes".

  7. Adjust preconfigured cluster values for your deployed applications, if necessary. You can use WLST or the WebLogic Server Administration Console to configure some cluster-related values. See "Create and Configure Coherence Clusters".

  8. Start the cache servers. See "Start a Cache Server".

  9. Use one of several methods to start WebLogic Server. See "Start WebLogic Server".

  10. Monitor the runtime status of the Coherence cluster from the WebLogic Server Administration Console. See "Monitor Coherence Cluster Properties".

Choose the ActiveCache Deployment Topology

A cluster is used to harness multiple computers to store and manage data. Usually, this is for reliability and scalability purposes. One of the primary uses of Coherence is to cluster an application's objects and data. In the simplest sense, all of the data that is inserted into Coherence data caches is accessible by all servers in the application cluster that share the same cache configuration.

Two different Coherence cluster topologies can be formed by mixing WebLogic Servers and stand-alone Coherence cache servers. Here, cache servers are defined as Coherence data servers running on JVM instances dedicated to maintaining data (such as serialized session state data).

  • In the In-Process topology, all WebLogic Servers (employing ActiveCache) in the cluster are storage-enabled. In this case, storage-enabled means that these servers provide cache storage and backup storage; you do not have to create a separate data tier.

    This topology is not recommended for production use; it is supported mainly for development and testing. Because the session data is stored in-process with the application server, this topology is easy to get up and running quickly for smoke tests, development, and testing.

    Note:

    There are different default settings for local storage for Distributed caches on WebLogic Server, depending on whether you are employing Coherence*Web. For WebLogic Server, local storage is enabled by default for Distributed caches. However, when using Coherence*Web on WebLogic Server, local storage is disabled by default. In this case, you must create a separate data tier of stand-alone Coherence caches.
  • In the Out-of-Process topology, use the stand-alone Coherence cache servers to host the data. Configure the WebLogic Servers to be storage-disabled so they can be used to serve requests. This topology creates a true, separate data tier, and further reduces overhead for the WebLogic Servers that are processing requests.

  • The WebLogic Out-of-Process topology is a slight variation on the Out-of-Process topology. In this topology, storage-enabled WebLogic Server instances replace the stand-alone cache servers, which enables you to manage the life cycle of the storage-enabled members. The advantage of this topology is that requests and data are segregated to their own servers and latency for processing requests is reduced, while both storage-enabled and storage-disabled servers can be managed by WebLogic Server management tools.

Note:

For more information on the In-Process and Out-of-Process deployment topologies, see Deployment Topologies in the User's Guide for Oracle Coherence*Web.

Create and Configure a Data Cache

ActiveCache can be configured to use any of the cache types supported by Oracle Coherence. An in-depth discussion on Coherence caches and their configuration is beyond the scope of this book. For information on working with Coherence caches and integrating them into your applications, see Create and Use Coherence Caches in the Developer's Guide for Oracle Coherence.
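For orientation, the following is a minimal sketch of a coherence-cache-config.xml file that maps a cache named myCache (the cache name used in the examples later in this chapter) to a distributed caching scheme. The scheme and service names are illustrative assumptions only; see the Developer's Guide for Oracle Coherence for the full set of configuration elements.

<?xml version="1.0"?>
<cache-config>
  <caching-scheme-mapping>
    <!-- Map the cache name used by the application to a caching scheme. -->
    <cache-mapping>
      <cache-name>myCache</cache-name>
      <scheme-name>example-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <!-- A distributed (partitioned) scheme; the service name is an example. -->
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <service-name>ExampleDistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>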

Access the Data Cache from your Application Code

Applications that run on WebLogic Server 11gR1 (10.3.3) or later can use ActiveCache to access a data cache. The data cache is represented by the Coherence NamedCache cache object. This object is designed to hold resources that are shared among members of a cluster. These resources are managed in memory, and are typically composed of data that is also stored persistently in a database, or data that has been assembled or calculated. Thus, these resources are referred to as cached.

Your application can obtain a NamedCache either by resource injection or by lookup in a component-scoped JNDI resource tree. The lookup technique can be used in EJBs, servlets, or JSPs. The resource injection technique can be used only by servlets or EJBs.

Note:

It is not recommended that you store remote EJB references in Coherence named caches, nor should you store them in Coherence*Web-backed HTTP sessions.

To Obtain the NamedCache by Resource Injection

A @Resource annotation can be used in a servlet or an EJB to dynamically inject the NamedCache. This annotation cannot be used in a JSP. The name of the cache used in the annotation must be defined in the Coherence cache configuration file.

Example 3-1 illustrates a resource injection of the NamedCache myCache.

Example 3-1 Obtaining a NamedCache by Resource Injection

...
@Resource(mappedName="myCache")
com.tangosol.net.NamedCache nc;
...
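Because a NamedCache implements the java.util.Map interface, the injected cache can be read and written with familiar Map operations. The following fragment is a minimal sketch; the key and value are illustrative:

...
// Insert or update an entry; the data becomes visible to all cluster
// members that share this cache.
nc.put("key1", "value1");
// Read the entry back from the cache.
String value = (String) nc.get("key1");
...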

To Obtain the NamedCache by JNDI Lookup

A component-scoped JNDI tree can be used in EJBs, servlets, or JSPs to reference the NamedCache.

To use a component-scoped JNDI lookup, define a resource-ref of type com.tangosol.net.NamedCache in either the web.xml or ejb-jar.xml file. Example 3-2 illustrates a <resource-ref> stanza that identifies myCache as the NamedCache.

Note:

The <res-auth> and <res-sharing-scope> elements do not appear in the example. The <res-auth> element is ignored because currently no resource sign-on is performed to access data caches. The <res-sharing-scope> element is ignored because data caches are sharable by default and this behavior cannot be overridden.

Example 3-2 Defining a NamedCache as resource-ref for JNDI Lookup

...
<resource-ref>
  <res-ref-name>coherence/myCache</res-ref-name>
  <res-type>com.tangosol.net.NamedCache</res-type>
  <mapped-name>myCache</mapped-name>
</resource-ref>
...
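With the resource-ref in place, the application can retrieve the cache from its component environment under java:comp/env. The following fragment is a sketch; the lookup name matches the res-ref-name in Example 3-2:

import javax.naming.InitialContext;
import javax.naming.NamingException;
import com.tangosol.net.NamedCache;
...
try {
    InitialContext ctx = new InitialContext();
    // The lookup name is the res-ref-name, relative to java:comp/env.
    NamedCache nc = (NamedCache) ctx.lookup("java:comp/env/coherence/myCache");
} catch (NamingException e) {
    // Handle the failed lookup.
}
...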

Locate the Cache Configuration File

The location where you store the cache configuration file determines the scope of the caches; that is, the visibility of the caches to the application. There are three options for cache visibility:

  • application server-scoped—all deployed applications in a container become part of one cache service. Caches will be visible globally to all applications deployed on the server.

  • EAR-scoped—all deployed applications within each EAR become part of one Coherence node. Caches will be visible to all modules in the EAR. For example, this could be a recommended deployment if all of the modules must share the same cache.

  • WAR-scoped—each deployed Web application becomes its own Coherence node. Caches will be visible to the individual modules only. For example, this could be a recommended deployment for a stand-alone WAR deployment or stand-alone EJB deployment.

Note:

The cache configuration must be consistent for both WebLogic Server instances and Coherence cache servers.

Table 3-1 Storage Locations for Cache Configuration File Based on Cache Scoping

For this cache scoping ... | Store the cache configuration file here | Comments

application server-scoped cache

  • store the cache configuration file in the server's classpath

See the following section, "Access the Cache Configuration on Cache Server Startup," for more information.

application-scoped cache for an EAR file

  • JAR file in the EAR library directory

  • APP-INF/classes directory of EAR

Caches defined in coherence-cache-config.xml and placed at EAR level can be seen and shared by all modules in the EAR.

Caches defined at the EAR level will be visible to the individual applications within the EAR only, but they must have unique service names across all the EARs deployed to the application server. Also, if you define caches both at the EAR level and at the module level, then the cache, scheme, and service names must be unique across the EAR-level cache configuration and the module cache configuration.

Web-component-scoped cache in an EAR, or a stand-alone WAR deployment

  • JAR file in the WEB-INF/lib directory of a WAR file

  • WEB-INF/classes directory of a WAR file

Caches defined at module level will be visible to the individual modules only, but they must have unique service names across all the modules in the application. Also, if you define caches both at the EAR level and at the module level, then the cache, scheme, and service names must be unique across the EAR-level cache configuration and the module cache configuration.

stand-alone EJB deployment

  • EJB-JAR file

An EJB module in an EAR cannot have module-scoped caches—they can only be application-scoped.


Access the Cache Configuration on Cache Server Startup

The cache server must be able to access the cache configuration file on start-up. There are two ways to do this:

  • Place the cache configuration file in the server's classpath, or

  • Declare the cache configuration file location in the server start-up command with the tangosol.coherence.cacheconfig system property. For more information on this property, see the Developer's Guide for Oracle Coherence.

    Example 3-3 illustrates the tangosol.coherence.cacheconfig system property in a sample startup command.

    Example 3-3 Declaring the Cache Configuration File in a Server Startup Command

    java -server -Xms512m -Xmx512m 
    -cp <Coherence installation dir>/lib/coherence.jar:<Coherence installation dir>/lib/coherence-web-spi.war -Dtangosol.coherence.management.remote=true 
    -Dtangosol.coherence.cacheconfig=WEB-INF/classes/coherence-cache-config.xml
    -Dtangosol.coherence.session.localstorage=true com.tangosol.net.DefaultCacheServer
    

If you are working with two or more applications, they may have different cache configurations. In this case, the cache configuration on the cache server must contain the union of these configurations so that the applications can be supported in the same cache cluster. Note that this applies only to the stand-alone cache server topology.
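For example, if one application defines cacheA and another defines cacheB, the configuration used by the cache server would contain both cache mappings and both schemes. The following is a sketch; all names are illustrative:

<cache-config>
  <caching-scheme-mapping>
    <!-- Mapping taken from the first application's configuration. -->
    <cache-mapping>
      <cache-name>cacheA</cache-name>
      <scheme-name>scheme-a</scheme-name>
    </cache-mapping>
    <!-- Mapping taken from the second application's configuration. -->
    <cache-mapping>
      <cache-name>cacheB</cache-name>
      <scheme-name>scheme-b</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <!-- The schemes from both configurations, with unique service names. -->
    <distributed-scheme>
      <scheme-name>scheme-a</scheme-name>
      <service-name>ServiceA</service-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <distributed-scheme>
      <scheme-name>scheme-b</scheme-name>
      <service-name>ServiceB</service-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>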

Package Applications and Configure Cluster Nodes

Coherence cluster nodes are classloader-scoped. Cluster nodes can be application server-scoped (the entire JVM acts as a single Coherence cluster node), EAR-scoped (each application is a Coherence cluster node), or WAR-scoped (each Web module within an application is a Coherence cluster node).

The packaging and configuration options for these three scenarios are described in the following sections:

Packaging Applications and Configuring Application Server-Scoped Cluster Nodes

With this configuration, all deployed applications on the WebLogic Server instance that are accessing Coherence caches directly become part of one Coherence node. Caches will be visible to all applications deployed on the server. This configuration produces the smallest number of Coherence nodes in the cluster (one for each WebLogic Server JVM instance).

Since the Coherence library is deployed in the container's classpath, only one copy of the Coherence classes will be loaded into the JVM, thus minimizing resource utilization. On the other hand, since all applications are using the same cluster node, all applications will be affected if one application misbehaves.

To Use Coherence Data Caches with Application Server-Scoped Cluster Nodes

  1. Edit your WebLogic Server system classpath to include coherence.jar and WL_HOME/common/deployable-libraries/active-cache.jar. The active-cache.jar should be referenced only from the deployable-libraries folder in the system classpath and should not be copied to any other location. (A sample classpath setting appears after these steps.)

  2. (Optional) If you must configure Coherence cluster properties, create a CoherenceClusterSystemResourceMBean and reference it in the ServerMBean.

    You can use WLST to reference the MBean. See createServerScopedCoherenceSystemResource in Example 3-9.
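For example, in a domain's setDomainEnv script you could prepend the JARs to the server classpath through the PRE_CLASSPATH variable. The following one-line sketch assumes the default Windows installation paths described at the beginning of this chapter; adjust the paths for your environment:

rem Prepend the Coherence and ActiveCache JARs to the server classpath.
set PRE_CLASSPATH=C:\Oracle\Middleware\coherence_3.5\lib\coherence.jar;C:\Oracle\Middleware\wlserver_10.3\common\deployable-libraries\active-cache.jar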

Packaging Applications and Configuring EAR-Scoped Cluster Nodes

With this configuration, all deployed applications within each EAR become part of one Coherence node. Caches will be visible to all modules in the EAR. For example, this could be a recommended deployment if all the modules must share the same Coherence node. It can also be a recommended configuration if you plan on deploying only one EAR to an application server.

This configuration produces the next smallest number of Coherence nodes in the cluster (one for each deployed EAR). Since the Coherence library is deployed in the application's classpath, only one copy of the Coherence classes is loaded for each EAR.

Since all Web applications in the EAR use the same cluster node, all Web applications in the EAR will be affected if one of them misbehaves. EAR-scoped cluster nodes reduce the deployment effort as no changes to the application server classpath are required.

To Use Coherence Caches with EAR-Scoped Cluster Nodes

  1. Use either of the following methods to deploy the coherence.jar and active-cache.jar files as shared libraries to all of the target servers where the application will be deployed.

    • Use the WebLogic Server Administration Console to deploy coherence.jar and active-cache.jar as shared libraries. See "Install a Java EE Library" in the Oracle Fusion Middleware Oracle WebLogic Server Administration Console Help.

      As an alternative to the Administration Console, you can also deploy on the command line. The following are sample deployment commands:

      java weblogic.Deployer -username <> -password <> -adminurl <> -deploy coherence.jar -name coherence -library -targets <>
      
      java weblogic.Deployer -username <> -password <> -adminurl <> -deploy active-cache.jar -name active-cache -library -targets <>
      
    • Copy coherence.jar and active-cache.jar to the application's APP-INF/lib folder. However, deploying them as shared libraries, as described above, is the preferred approach.

  2. Refer to the coherence.jar and active-cache.jar files as libraries. Example 3-4 illustrates a sample weblogic-application.xml configuration.

    Example 3-4 coherence and active-cache JARs Referenced in the weblogic-application.xml File

    <weblogic-application>
    ...
      <library-ref>
        <library-name>coherence</library-name>
      </library-ref>
       ...
      <library-ref>
        <library-name>active-cache</library-name>
      </library-ref>
    ...
    </weblogic-application>
    
  3. (Optional) If you must configure Coherence cluster properties, create a CoherenceClusterSystemResourceMBean and reference it in a coherence-cluster-ref element in the weblogic-application.xml file. This element allows the applications to enroll in the Coherence cluster specified by the CoherenceClusterSystemResourceMBean attributes.

    Example 3-5 illustrates a sample configuration. The myCoherenceCluster MBean in the example is of type CoherenceClusterSystemResourceMBean.

    Example 3-5 coherence-cluster-ref Element for EAR-Scoped Cluster Nodes

    <weblogic-application>
    ...
      <coherence-cluster-ref>
        <coherence-cluster-name>
         myCoherenceCluster
        </coherence-cluster-name>
      </coherence-cluster-ref> 
    ...
    </weblogic-application>  
    

To Define a Filtering Classloader for Application-Scoped Cluster Nodes

If the coherence.jar is placed in the application server classpath, you can still configure an EAR-scoped cluster node by defining a filtering classloader. This is described in the following steps:

  1. Place coherence.jar in the application classpath.

  2. Configure a filtering classloader in the EAR file.

    The filtering classloader is defined in the <prefer-application-packages> stanza of the weblogic-application.xml file. Example 3-6 illustrates a sample filtering classloader configuration. The package-name elements indicate the package names of the classes in the coherence.jar and active-cache.jar.

    Example 3-6 Configuring a Filtering Classloader

    <weblogic-application>
    ...
     <prefer-application-packages>
       <package-name>com.tangosol.*</package-name>
       <package-name>weblogic.coherence.service.*</package-name>
       <package-name>com.oracle.coherence.common.*</package-name>
     </prefer-application-packages>
    ...
    </weblogic-application>
    

Packaging Applications and Configuring WAR-Scoped Cluster Nodes

With this configuration, each deployed Web application becomes its own Coherence node. Caches will be visible to the individual modules only. For example, this could be a recommended deployment for a stand-alone WAR deployment or a stand-alone EJB deployment, or when only one application uses Coherence.

If you are deploying multiple WAR files, note that this configuration produces the largest number of Coherence nodes in the cluster—one for each deployed WAR file that uses coherence.jar. It also results in the largest resource utilization of the three configurations—one copy of the Coherence classes is loaded for each deployed WAR. On the other hand, since each deployed Web application is its own cluster node, Web applications are completely isolated from other potentially misbehaving Web applications.

Note:

A Web module within an EAR can have a module-scoped Coherence node but an EJB module within an EAR can only have an application-scoped Coherence cluster node.

To Use Coherence Caches with WAR-Scoped Cluster Nodes

  1. Use the WebLogic Server Administration Console to deploy coherence.jar and active-cache.jar as shared libraries to all of the target servers where the application will be deployed. See "Install a Java EE Library" in the Oracle Fusion Middleware Oracle WebLogic Server Administration Console Help.

    As an alternative to the Administration Console, you can also deploy on the command line. The following are sample deployment commands:

    java weblogic.Deployer -username <> -password <> -adminurl <> -deploy coherence.jar -name coherence -library -targets <>
    
    java weblogic.Deployer -username <> -password <> -adminurl <> -deploy active-cache.jar -name active-cache -library -targets <>
    
  2. Import coherence.jar and active-cache.jar as optional packages in the manifest.mf file of each module that will be using Coherence.

    As an alternative to using the manifest file, copy coherence.jar and active-cache.jar to each WAR file's WEB-INF/lib directory.

    Example 3-7 illustrates the contents of a sample manifest.mf file.

    Example 3-7 Referencing the coherence and active-cache JAR Files in the manifest.mf File

    Manifest-Version: 1.0
    Extension-List: coherence active-cache
    coherence-Extension-Name: coherence
    active-cache-Extension-Name: active-cache
    
  3. (Optional) If you must configure Coherence cluster properties, create a CoherenceClusterSystemResourceMBean and reference it in a coherence-cluster-ref element in the weblogic.xml or weblogic-ejb-jar.xml file.

    Example 3-8 illustrates a sample configuration for WAR-scoped cluster nodes in the weblogic.xml file. The myCoherenceCluster MBean is of type CoherenceClusterSystemResourceMBean.

    Example 3-8 coherence-cluster-ref Element for WAR-Scoped Cluster Nodes

    <weblogic-web-app>
    ...
      <coherence-cluster-ref>
        <coherence-cluster-name>
         myCoherenceCluster
        </coherence-cluster-name>
      </coherence-cluster-ref> 
    ...
    </weblogic-web-app>
    

Create and Configure Coherence Clusters

Using WLST or the Administration Console, you can create a Coherence cluster configuration and select WebLogic Server instances or clusters on which the cluster configuration is accessible.

The createCoherenceClusterMBean.py WLST script shown in Example 3-9 configures three Coherence clusters, including a server-scoped configuration that gets deployed to the Administration Server (myserver).

Example 3-9 createCoherenceClusterMBean.py

from java.util import *
from javax.management import *
from java.lang import *
import javax.management.Attribute

"""
This script configures Coherence Cluster System Resource MBeans and deploys them
to the admin server
"""

def createCoherenceSystemResource(wlsTargetNames, coherenceClusterSourceName):

       name = coherenceClusterSourceName
       # start creation
       print 'Creating CoherenceClusterSystemResource with name '+ name
       cohSR = create(name,"CoherenceClusterSystemResource")
       cohBean = cohSR.getCoherenceClusterResource()
       cohCluster = cohBean.getCoherenceClusterParams()
       cohCluster.setUnicastListenAddress("localhost")
       cohCluster.setUnicastListenPort(7001)
       cohCluster.setUnicastPortAutoAdjust(true)
       # you can set up the multicast port or define WKAs
       cohCluster.setMulticastListenAddress("231.1.1.1")
       cohCluster.setMulticastListenPort(8001)
       cohCluster.setTimeToLive(5)

       for wlsTargetName in wlsTargetNames:
          cd("Servers/"+wlsTargetName)
          target = cmo
          cohSR.addTarget(target)
          cd("../..")

def createServerScopedCoherenceSystemResource(wlsTargetNames, coherenceClusterSourceName):

       name = coherenceClusterSourceName
       # start creation
       print 'Creating CoherenceClusterSystemResource with name '+ name
       cohSR = create(name,"CoherenceClusterSystemResource")
       cohBean = cohSR.getCoherenceClusterResource()
       cohCluster = cohBean.getCoherenceClusterParams()
       cohCluster.setUnicastListenAddress("localhost")
       cohCluster.setUnicastListenPort(7002)
       cohCluster.setUnicastPortAutoAdjust(true)
       # you can set up the multicast port or define WKAs
       cohWKAs = cohCluster.getCoherenceClusterWellKnownAddresses()
       cohWKA = cohWKAs.createCoherenceClusterWellKnownAddress("wka1")
       cohWKA.setName("wka1")
       cohWKA.setListenAddress("localhost")
       cohWKA.setListenPort(9001)

       for wlsTargetName in wlsTargetNames:
          cd("Servers/"+wlsTargetName)
          target = cmo
          cohSR.addTarget(target)
          print cmo
          serverBean = cmo
          serverBean.setCoherenceClusterSystemResource(cohSR)
          cd("../..")

def createCustomCoherenceSystemResource(wlsTargetNames, coherenceClusterSourceName, tangosolOverrideFile):

       name = coherenceClusterSourceName
       # start creation
       cohSR = getMBean("/CoherenceClusterSystemResources/"+name)
       if cohSR == None:
         print 'Creating CoherenceClusterSystemResource with name '+ name
         cohSR = create(name,"CoherenceClusterSystemResource")
         cohSR.importCustomClusterConfigurationFile(tangosolOverrideFile)       

       for wlsTargetName in wlsTargetNames:
          cd("Servers/"+wlsTargetName)
          target = cmo
          cohSR.addTarget(target)
          cd("../..")

props = System.getProperties()
ADMIN_NAME = props.getProperty("admin.username")
ADMIN_PASSWORD = props.getProperty("admin.password")
ADMIN_HOST = props.getProperty("admin.host")
ADMIN_PORT = props.getProperty("admin.port")
ADMIN_URL = "t3://"+ADMIN_HOST+":"+ADMIN_PORT

TANGOSOL_OVERRIDE = props.getProperty("tangosol-override")

TARGETS = [ 'myserver' ]

print "Starting the script ..."
try :
       connect(ADMIN_NAME, ADMIN_PASSWORD, ADMIN_URL)
       edit()
       startEdit()
       createCoherenceSystemResource(TARGETS, 'cohSystemResource') 
       createServerScopedCoherenceSystemResource(TARGETS, 'serverScopedCohSystemResource') 
       createCustomCoherenceSystemResource(TARGETS, 'customCohSystemResource',TANGOSOL_OVERRIDE) 
       save()
       activate(block="true")
       disconnect()
       print 'Done configuring the Coherence Cluster System Resources'
except:
       dumpStack()
       undo('true','y')

For Administration Console procedures, see "Configure Coherence" in the Oracle Fusion Middleware Oracle WebLogic Server Administration Console Help.

Cluster-related values are stored in a descriptor file in the WebLogic Server configuration repository:

<domain-home>/config/coherence/CoherenceClusterSystemResourceName/CoherenceClusterSystemResourceName-####-coherence.xml.

For example, C:\Oracle\Middleware\user_projects\domains\base_domain\config\coherence\cohSystemResource\cohSystemResource-0759-coherence.xml.

Alternatively, you can set cluster-related properties that are not specified through WLST or the Administration Console by defining them in a custom configuration file, for example, the tangosol-coherence-override.xml file shown in Example 3-10.

Example 3-10 tangosol-coherence-override.xml

<?xml version='1.0'?>
<!--
This operational configuration override file is set up for use with Coherence in
a development mode.
-->
<coherence xml-override="/tangosol-coherence-override.xml">
 <cluster-config>
  <multicast-listener>
    <time-to-live system-property="tangosol.coherence.ttl">4</time-to-live>
    <join-timeout-milliseconds>3000</join-timeout-milliseconds>
  </multicast-listener>

  <packet-publisher>
   <packet-delivery>
    <timeout-milliseconds>30000</timeout-milliseconds>
   </packet-delivery>
  </packet-publisher>
 </cluster-config>

 <logging-config>
  <severity-level system-property="tangosol.coherence.log.level">5</severity-level>
  <character-limit system-property="tangosol.coherence.log.limit">0</character-limit>
 </logging-config>
</coherence>

To import the custom cluster configuration file, use WLST (see createCustomCoherenceSystemResource in Example 3-9) or the WebLogic Server Administration Console. For Console procedures, see "Import a custom cluster configuration" in the Oracle Fusion Middleware Oracle WebLogic Server Administration Console Help.

Note:

If you specify cluster-related properties by importing a custom configuration file, the properties specified in the file must not be the same properties that were specified using WLST or the WebLogic Server Administration Console.

Start a Cache Server

A Coherence data node (also known as a cache server) is a dedicated JVM that is responsible for storing and managing all cached data. The senior node (the first node) in a Coherence data cluster can take several seconds to start; the startup time required by subsequent nodes is minimal. Thus, to optimize performance, you should always start a Coherence data node before starting a WebLogic Server instance. This ensures minimal (measured in milliseconds) startup time for applications using Coherence. Any additional Web applications that use Coherence are guaranteed not to be the senior data member, so they have minimal impact on WebLogic Server startup.

Note:

Whether you start the cache servers or the WebLogic Server instances first depends on the server topology you are employing.
  • If you are using the In-Process topology (all storage-enabled WebLogic Server instances employing ActiveCache), then it does not matter whether you start the cache servers or the WebLogic Server instances first.

  • If you are using the Out-of-Process topology (storage-disabled WebLogic Server instances and stand-alone Coherence cache servers), then start the cache servers first, followed by the WebLogic Server instances.

  • If you are using the WebLogic Out-of-Process topology, your topology is a mix of storage-enabled and storage-disabled WebLogic Server instances. Start the storage-enabled instances first, followed by the storage-disabled instances.

To Start a Stand-Alone Coherence Data Node

  1. Create a script for starting a Coherence data node. The following is a very simple example of a script that starts a storage-enabled cache server to use with ActiveCache. This example assumes that you are using a Sun JVM. See JVM Tuning in the Developer's Guide for Oracle Coherence for more information.

    java -server -Xms512m -Xmx512m 
    -cp <Coherence installation dir>/lib/coherence.jar:<Coherence installation dir>/lib/coherence-web-spi.war -Dtangosol.coherence.management.remote=true 
    -Dtangosol.coherence.cacheconfig=WEB-INF/classes/cache_configuration_file 
    -Dtangosol.coherence.session.localstorage=true com.tangosol.net.DefaultCacheServer
    

    In this example, cache_configuration_file refers to the cache configuration in the coherence-cache-config.xml file. The cache configuration defined for the cache server must be the same as the configuration defined for the application servers which run on the same ActiveCache cluster.

    If you run Coherence*Web for session management, then the cache configuration information should be merged with the session configuration contained in the session-cache-config.xml file; a sketch of such a merge follows these steps. Likewise, the cache and session configuration must be consistent across WebLogic Server instances and cache servers.

  2. Start one or more Coherence data nodes using the script described in the previous step.
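The following fragment sketches such a merge: an application cache mapping and its scheme added alongside the session configuration that session-cache-config.xml already contains. The ellipses stand for the file's existing Coherence*Web entries, and the application cache and scheme names are illustrative:

<cache-config>
  <caching-scheme-mapping>
    <!-- Existing Coherence*Web session mappings remain as shipped. -->
    ...
    <!-- Application cache mapping merged in from the application's
         cache configuration file. -->
    <cache-mapping>
      <cache-name>myCache</cache-name>
      <scheme-name>example-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <!-- Existing Coherence*Web session schemes remain as shipped. -->
    ...
    <!-- Application scheme merged in, with a service name that does not
         collide with the session services. -->
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <service-name>ExampleDistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>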

To Start a Storage-Enabled or -Disabled WebLogic Server Instance

By default, an ActiveCache-enabled WebLogic Server instance starts in storage-disabled mode.

To start the WebLogic Server instance in storage-enabled mode, include the command line property -Dtangosol.coherence.session.localstorage=true in the server startup command.

For more information on working with WebLogic Server through the command line, see the weblogic.Server Command-Line Reference chapter in the Oracle Fusion Middleware Command Reference for Oracle WebLogic Server.
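For example, a direct weblogic.Server start command could include the property as follows. This is a sketch only; the server name and the remaining options are illustrative and depend on your domain setup:

java -Dweblogic.Name=myserver -Dtangosol.coherence.session.localstorage=true weblogic.Server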

Start WebLogic Server

WebLogic Server provides several ways to start and stop server instances. The method that you choose depends on whether you prefer using the Administration Console or a command-line interface, and on whether you are using Node Manager to manage the server's life cycle. For detailed information, see "Starting and Stopping Servers" in Oracle Fusion Middleware Managing Server Startup and Shutdown for Oracle WebLogic Server. For a quick reference, see "Starting and Stopping Servers: Quick Reference."

Monitor Coherence Cluster Properties

The WebLogic Server Administration Console displays run-time monitoring information for Coherence clusters associated with a particular application or module, such as cluster size, members, and version. For more information, see "Monitoring Coherence Clusters" in the Oracle Fusion Middleware Oracle WebLogic Server Administration Console Help.