Oracle® Database Lite Developer's Guide
10g (10.0.0) Part No. B13788-01
This document describes how synchronization functions between Oracle Database Lite and an Oracle database using the Mobile Server and the Mobile Sync client application. It also discusses the Consolidator, including the publish and subscribe model, the use of the Consolidator and Resource Manager APIs to customize applications, and the advanced features for customizing the Consolidator, among other topics. The topics that are discussed in this document are the following:
Section 3.3, "Mobile Sync Application Programming Interfaces (APIs)"
Section 3.4, "The Publish and Subscribe Model and Oracle Database Lite Synchronization"
Section 3.5, "Using Consolidator to Define the Sample11.java Example"
Section 3.7, "Advanced Features for Customizing Consolidator"
Section 3.9, "Mapping Datatypes Between the Oracle Server and Clients"
Oracle Database Lite contains a subset of data stored in the Oracle database. This subset is stored in snapshots in Oracle Database Lite. Unlike a base table, a snapshot keeps track of changes made to it in a change log. Users can make changes in Oracle Database Lite while the device is disconnected, and can synchronize them with the Oracle database.
There are three types of publication items that can be used to define synchronization: fast refresh, complete refresh, and queue-based.
The most common method of synchronization is a fast refresh publication item where changes are uploaded by the client, and changes for the client are downloaded. Meanwhile, a background process called the Message Generator and Processor (MGP) periodically collects the changes uploaded by all clients and applies them to database tables. It then composes new data, ready to be downloaded to each client during the next synchronization, based on predefined subscriptions.
Another method of synchronization is the complete refresh publication item. During a complete refresh, all data for a publication is downloaded to the client. For example, during the very first synchronization session, all data on the client is refreshed from the Oracle database. This form of synchronization takes longer because all rows that qualify for a subscription are transferred to the client device, regardless of existing client data.
Lastly, there is the queue-based publication item. This can be considered the most basic form of publication item, because no synchronization logic is created with it; the synchronization logic is left entirely in the hands of the developer. A queue-based publication item is ideally suited to scenarios that do not require full synchronization logic, such as data collection on the client. With data collection, there is no need to worry about conflict detection, client state information, or server-side updates. Therefore, there is no need to add the additional overhead normally associated with a fast refresh or complete refresh publication item.
Data is synchronized between Oracle Database Lite and an Oracle database server. This is accomplished by invoking the Mobile Sync client which interacts with the Mobile Server. The Mobile Server uses synchronization objects such as users, publications, publication items, and subscriptions to process client and server data changes. This is often referred to as the publish and subscribe model.
This section describes the following synchronization concepts. Topics include:
A publication item is a Mobile Server object that has a unique name and contains a SQL query that is defined against an Oracle database table, view, or a synonym. The query in the publication item can have optional parameters, known as subscription parameters or template variables, which are used to determine what subset of the data of the table, view, or synonym is synchronized for each user.
A publication is a Mobile Server object that has a unique name and serves as a container of publication items. A publication may contain zero or more publication items, and a publication item may be contained in zero or more publications. A publication keeps track of all the subscription parameters used in the member publication items. A publication also contains indexes defined on publication items as well as platform specific information such as the type of database to be created on the client.
Every Oracle Database Lite application has an associated publication that defines the data needed by the application.
A subscription relates a publication to a user. A subscription cannot be used until every parameter of the publication is initialized to a value. When a user synchronizes with the Mobile Server, an Oracle Database Lite database is created for each subscription. Each publication item of the publication becomes a snapshot in this database.
Through established subscriptions, the Mobile Server prepares any new data for each client which is then downloaded when the client synchronizes. Only the required subset of data is downloaded to each client. If the publication has been flagged for complete refresh, all the qualifying data is downloaded.
Shared maps reduce the size of the map tables for large lookup publication items, which saves space on the server and improves the scalability of replication when multiple users share the same subscription data sets. This feature is turned on by default. When multiple users share the same data, their query subsetting parameters are usually identical.
The following steps take you through the components and procedures necessary to perform a synchronization. These steps assume you are installing the client on a Windows system. After completing them, you can synchronize at any time. Steps 1 and 2 may require the assistance of your administrator.
You must download and configure the msync.exe client, and use it to create a local Oracle Database Lite database for a sample user named "S11U1." This user is created as part of the installed samples.
Install and configure an instance of the Mobile Server as described in the Oracle9i Lite Installation and Configuration Guide for Windows NT/2000/XP.
From the command line on the Mobile Server system, run instdemo.bat (instdemo on Solaris) to create sample applications in the Mobile Server repository.
On Solaris
You can specify the following path to create and store sample applications in the Mobile Server repository.
<ORACLE_HOME>/Mobile/Server/Samples
On Windows NT
You can specify the following path to create and store sample applications in the Mobile Server repository.
<ORACLE_HOME>\Mobile\Server\Samples
Note: Replace <ORACLE_HOME> with your actual Oracle Home directory name.
Using a browser, install the Mobile Sync application to connect to your Mobile Server instance using the following URL.
http://<mobile_server>/webtogo/setup
where <mobile_server> is the hostname of your Mobile Server instance. Click the link which installs the client for "Windows 32" and follow the instructions to install the Mobile Sync application. At this stage, you will be prompted for an installation directory. This procedure assumes and recommends that you install it in your <ORACLE_HOME> directory.
Open your <ORACLE_HOME>\Mobile\Sdk\bin directory and run msync.exe.
As Figure 3-1 displays, the mSync dialog appears.
As Table 3-1 describes, enter the appropriate parameters in the corresponding fields.
Table 3-1 Mobile Sync Parameters
Field | Value | Description |
---|---|---|
User | s11u1 | Mobile Client user name. This field is not case sensitive. |
Password | MANAGER | Mobile Client password. This field is case sensitive. |
Save Password | Select | Select this check box to save the password. |
Server | Your Mobile Server instance hostname | The Mobile Server IP address or URL. |
To save this information, click Apply.
To start synchronizing, click Sync.
A progress bar appears to indicate the completion of each synchronization task such as composing, sending, receiving, and processing. The progress bar also displays the duration for completion of each task. If synchronization executes successfully, the message "Sync success" appears. When you see this message, a sample database orders.odb is created in the <ORACLE_HOME>\Mobile\oldb40\S11U1 directory on the client system. You can view this database using a SQL viewer such as Mobile SQL. It contains two tables named ORD_MASTER and ORD_DETAIL.
If synchronization fails, the message "Sync Failed" appears. To determine the cause of a failed synchronization, the Mobile Server administrator can view tracing information in the Mobile Server log file.
The preferred way to create synchronization objects such as publications and publication items is to use the Packaging Wizard, a tool included in the Mobile Development Kit. For more information on the Packaging Wizard, see the Oracle Database Lite Tools and Utilities Guide.
Synchronization objects can also be created programmatically using the Consolidator API and Resource Manager APIs. For more information, see Section 3.4, "The Publish and Subscribe Model and Oracle Database Lite Synchronization" in Chapter 3, "Synchronization".
Now that you have performed at least one synchronization, we can look at the synchronization process in more detail. Oracle Database Lite uses an asynchronous method for synchronization between Oracle Database Lite clients and the Oracle database server through the Mobile Server. This means that the Mobile Sync module operates independently of the MGP as neither component is dependent on the other to complete its operation.
Figure 3-2 illustrates the fast refresh synchronization process.
The default synchronization method is the fast refresh mode, as displayed in Figure 3-2. Fast refresh is an incremental refresh in which changes are uploaded by the client and stored in queues during the upload phase. Next, the changes that have been stored in the out queues are downloaded and applied to the client. Meanwhile, the MGP periodically processes the in queues and applies anything it finds there to the database during the apply phase. Changes made to the Oracle database by this client, other clients, and server-side applications are then composed and stored in an out queue until the next time a client synchronizes.
The upload and download phases are performed independently of any apply or compose phase. An apply phase is not dependent on an upload phase, nor is a download phase dependent on a compose phase. During any synchronization session, download occurs after upload, and compose occurs after apply.
A complete refresh is simply an execution of the snapshot query. When application synchronization performance is slow, application developers must tune the snapshot query. Complete refresh publication items are not optimized for performance; therefore, to improve performance, application developers can use the fast refresh option. The Consperf utility only analyzes fast refresh publication items.
When synchronization is initiated, the client opens a connection to the Mobile Server via the selected mode of transport, which causes the Mobile Server to open a connection to the Oracle database server. This process is illustrated in the following figure.
Figure 3-3 displays the Client Upload and Download phases.
Changes to Oracle Database Lite records are accumulated and flagged with codes for the type of Data Manipulation Language (DML) operation performed such as insert, update, or delete. The data is encrypted, compressed, and sent to the Mobile Server to populate objects called In Queues. An in queue is a persistent database object created to store data temporarily during synchronization.
During the same session, snapshots on the client are updated by applying data from the out queue to Oracle Database Lite. Unlike an in queue, an out queue is not a table, but a data structure containing references to data contained in the Oracle database base tables. For more information on customizing the synchronization process using the in queues and out queues, see Section 3.7.9, "Queue Interface for Customizing Replication".
For each user, the MGP takes any content of the in queue and applies it to the base tables on the Oracle database. Any conflicts are detected and resolved at this time. For more information, see Section 3.8.3, "Resolving Conflicts Using the Error Queue". The apply phase is completed when the changes uploaded by all users are processed.
Figure 3-4 illustrates the Apply and Compose phases in MGP.
After the apply phase, the MGP reviews the base tables. It composes and stores any changes in the Out Queues to be downloaded to the client.
Note: The Message Generator and Processor (MGP) is a background process which periodically becomes active, looks at the in queues, and applies the changes to the Oracle database base tables. Based on how MGP is configured, there may be a delay in how quickly it composes and readies the out queues to be downloaded to the client, regardless of how frequently you synchronize. The changes are stored safely in the in queues until MGP processes them, after which they are downloaded to the client on the next synchronization process.
During a complete refresh, all contents of the snapshot tables present on the client are refreshed from the Oracle database tables. The MGP is not involved because all the contents are refreshed, but this form of synchronization is time consuming and resource intensive.
It is possible to encrypt Oracle Database Lite using a utility called ENCRYPDB. Synchronizing with an encrypted database requires an understanding of how Oracle Database Lite manages encrypted databases. For more information, see the Oracle Database Lite Tools and Utilities Guide.
For a detailed description of the Mobile Sync Application Programming Interfaces, refer to Section 5, "Native Application Development".
Mobile Server uses a publish and subscribe model to centrally manage data distribution between Oracle database servers and Oracle Database Lite clients. Basic functions such as creating publication items and publications, can be implemented most easily using the Packaging Wizard. These functions can also be performed using the Consolidator API and Resource Manager API by writing Java programs to customize the functions as needed. More advanced functionality can only be enabled programmatically using the Consolidator API and Resource Manager API.
The publish and subscribe model uses database objects described in Table 3-2:
Table 3-2 Publish/Subscribe Model Elements
The publish and subscribe model can be implemented one of two ways:
Declaratively, using the Packaging Wizard to package and publish applications. This is the recommended method. This method is described fully in the Oracle Database Lite Tools and Utilities Guide.
Programmatically, using the Resource Manager API and the Consolidator API to invoke certain advanced features or customize an implementation. This technique is recommended for advanced users requiring specialized functionality.
The publish and subscribe model can be customized programmatically using the Resource Manager API and the Consolidator API. The basic procedure to invoke Consolidator to implement the publish and subscribe model involves the following steps:
Create database tables.
Connect to Mobile Server.
Create publications.
Create publication items.
Create publication item indexes as required.
Create users.
Add publication items to publications.
Subscribe users to publications.
Define user subscription parameters to publications.
Instantiate the subscriptions.
Note: To call the Publish and Subscribe APIs, the required JAR files must be specified in your classpath.
To illustrate how these APIs are used, the following sections use a sample Java program included with Oracle Database Lite 10g, called Sample11.java. Entries referring to the Resource Manager package are children of the Mobile Admin class found in the Web-to-Go API. Entries referring to the Consolidator class are part of the Consolidator API.
This sample can be found in the following locations:
On Solaris
<ORACLE_HOME>/mobile/server/samples
On Windows NT
<ORACLE_HOME>\Mobile\Server\Samples
Here is the source code for the program:
import java.sql.SQLException;
import java.sql.*;
import oracle.lite.sync.Consolidator;

public class sample11 {
    static String CONS_SCHEMA;
    static String DEFAULT_PASSWORD;

    public static void main(String argv[]) throws Throwable {
        ///////////////////////////////////////////////////////////////////////
        // SAMPLE11
        ///////////////////////////////////////////////////////////////////////
        if (argv.length != 2) {
            System.out.println("Syntax: java sample11 <Schema> <Password>");
            return;
        }
        CONS_SCHEMA = argv[0];
        DEFAULT_PASSWORD = argv[1];

        // Create required tables using standard JDBC
        DriverManager.registerDriver(
            (Driver)Class.forName("oracle.jdbc.driver.OracleDriver").newInstance());
        Connection c = null;
        Statement s = null;
        try {
            c = DriverManager.getConnection("jdbc:oracle:oci8:@WEBTOGO.WORLD",
                "MASTER", "MASTER");
            s = c.createStatement();
            s.executeUpdate("create table MASTER.ORD_MASTER(" +
                "ID number(9)," +
                "DDATE DATE default TO_DATE('1990-01-01 15:35:40', 'YYYY-MM-DD HH24:MI:SS')," +
                "STATUS number(9)," +
                "NAME varchar2(20)," +
                "DESCRIPTION varchar2(20)" +
                ")");
            s.executeUpdate("alter table MASTER.ORD_MASTER add constraint" +
                " orders_pk primary key(ID)");
            s.execute("GRANT ALL ON MASTER.ORD_MASTER to " + CONS_SCHEMA +
                " WITH GRANT OPTION");
            s.executeUpdate("create table MASTER.ORD_DETAIL(" +
                "ID number(9)," +
                "KEY number(9)," +
                "DDATE DATE default TO_DATE('1995-01-01 15:35:40', 'YYYY-MM-DD HH24:MI:SS')," +
                "DESCRIPTION varchar2(20)," +
                "QTYORDERED number(9)," +
                "QTYSHIPPED number(9)," +
                "QTYRECEIVED number(9)," +
                "COST number(9)" +
                ")");
            s.executeUpdate("alter table MASTER.ORD_DETAIL add constraint" +
                " items_pk primary key(ID, KEY)");
            s.execute("GRANT ALL ON MASTER.ORD_DETAIL to " + CONS_SCHEMA +
                " WITH GRANT OPTION");
            c.commit();
        } catch (SQLException ee) {
            ee.printStackTrace();
        } finally {
            if (s != null) try { s.close(); } catch (SQLException e1) {}
            if (c != null) try { c.close(); } catch (SQLException e2) {}
        }

        // Connecting to the Mobile Server
        oracle.mobile.admin.ResourceManager.openConnection(CONS_SCHEMA, DEFAULT_PASSWORD);

        // Creating Publications
        try {
            Consolidator.DropPublication("T_SAMPLE11");
        } catch (Throwable e) {
            // e.printStackTrace(); ignore error
        }
        Consolidator.CreatePublication("T_SAMPLE11", Consolidator.OKPI_CREATOR_ID,
            "OrdersODB.%s", null);

        // Creating Publication Items
        try {
            Consolidator.DropPublicationItem("P_SAMPLE11-M");
        } catch (Throwable e) {
            // e.printStackTrace(); ignore error
        }
        try {
            Consolidator.CreatePublicationItem("P_SAMPLE11-M", "MASTER", "ORD_MASTER",
                "F", "SELECT * FROM MASTER.ORD_MASTER", null, null);
        } catch (Throwable e) {
            e.printStackTrace();
        }
        try {
            Consolidator.DropPublicationItem("P_SAMPLE11-D");
        } catch (Throwable e) {
            // e.printStackTrace(); ignore error
        }
        try {
            Consolidator.CreatePublicationItem("P_SAMPLE11-D", "MASTER", "ORD_DETAIL",
                "F", "SELECT * FROM MASTER.ORD_DETAIL", null, null);

            // Creating Publication Item Indexes
            Consolidator.CreatePublicationItemIndex("P_SAMPLE11M-I1", "P_SAMPLE11-M", "I", "DDATE");
            Consolidator.CreatePublicationItemIndex("P_SAMPLE11M-I2", "P_SAMPLE11-M", "I", "STATUS");
            Consolidator.CreatePublicationItemIndex("P_SAMPLE11M-I3", "P_SAMPLE11-M", "I", "NAME");
            Consolidator.CreatePublicationItemIndex("P_SAMPLE11D-I2", "P_SAMPLE11-D", "I", "KEY");
            Consolidator.CreatePublicationItemIndex("P_SAMPLE11D-I3", "P_SAMPLE11-D", "I", "DESCRIPTION");

            // Adding Publication Items to a Publication
            Consolidator.AddPublicationItem("T_SAMPLE11", "P_SAMPLE11-M",
                null, null, "S", null, null);
            Consolidator.AddPublicationItem("T_SAMPLE11", "P_SAMPLE11-D",
                null, null, "S", null, null);
        } catch (Throwable e) {
            e.printStackTrace();
        }

        // Creating Users
        try {
            oracle.mobile.admin.ResourceManager.dropUser("S11U1");
        } catch (Throwable e) {
            // e.printStackTrace(); ignore error
        }
        oracle.mobile.admin.ResourceManager.createUser("S11U1", "manager", "S11U1", "C");

        // Creating and Instantiating a Subscription
        Consolidator.CreateSubscription("T_SAMPLE11", "S11U1");
        Consolidator.InstantiateSubscription("T_SAMPLE11", "S11U1");

        oracle.mobile.admin.ResourceManager.commitTransaction();
        oracle.mobile.admin.ResourceManager.closeConnection();
    }
}
The first section of the program gets a JDBC connection to database MASTER, and creates the base tables ORD_MASTER and ORD_DETAIL in the database. This part of the process can also be done using SQL. If you have gone through the steps described in Section 3.1.2, "Synchronization Example" in Chapter 3, "Synchronization", these tables have been created in the Mobile Server repository and on the client.
The following expression connects to the Mobile Server:
ResourceManager.openConnection(<USERNAME>, <PASSWORD>);
For example:
oracle.mobile.admin.ResourceManager.openConnection(CONS_SCHEMA, DEFAULT_PASSWORD);
For this example, the <USERNAME> is S11U1 and the <PASSWORD> is MANAGER.
The next step is to create a publication using the Consolidator class. Publications are essentially sets of publication items. Sample11.java creates one publication, T_SAMPLE11. The DropPublication command is used first to make certain that the publication being created does not already exist.
Special characters including spaces are supported in publication names.
CreatePublication has the following syntax:
public static void CreatePublication (String name, int client_storage_type, String client_name_template, String enforce_ri) throws Throwable
Example
In Sample11.java, the publication being created is T_SAMPLE11:
Consolidator.CreatePublication("T_SAMPLE11", Consolidator.OKPI_CREATOR_ID, "OrdersODB.%s", null);
The parameters of CreatePublication are listed in Table 3-3:
Table 3-3 CreatePublication Parameters
Parameter | Definition |
---|---|
name | The name of the publication being created. |
client_storage_type | A constant which defines the platform type. |
client_name_template | The template for publication item names on client devices. This can be one of several choices. |
enforce_ri | This parameter is reserved for future enhancement and should always be NULL. |
Note: If you use Oracle Database Lite as the client storage type, the database does not have an extension.
After creating the publication, it is necessary to create the publication item. Publication items define the snapshot of the base tables which is downloaded to Oracle Database Lite. The refresh mode of the publication item is specified during creation so it is pre-configured for fast- or complete-refresh. You can also establish data-subsetting parameters when creating the publication item, to provide a finer degree of control on the data requirements for a given client.
Publication item names are limited to twenty-six characters and must be unique across all publications. The following examples create a publication item named P_SAMPLE11-M. Before creating the publication item, the sample uses DropPublicationItem to clean up any prior publication items that might have the same name.
CreatePublicationItem has the following syntax:
public static void CreatePublicationItem (String name, String owner, String store, String refresh_mode, String select_stmt, String cbk_owner, String cbk_name) throws Throwable
The parameters of CreatePublicationItem are listed in Table 3-4:
Table 3-4 CreatePublicationItem Sample Parameters
Parameter | Definition |
---|---|
name | Specifies the publication item name. |
owner | Specifies the base object schema owner. For example, MASTER is the owner of the base object ORD_MASTER. |
store | Specifies the base table or view name in the Oracle database. The snapshot which is defined is also assigned this name. |
refresh_mode | Defines the refresh mode as fast or complete. See Section 3.7.5, "Fast Refresh and Update Operation for Multi-Table Publications (Views)" for more information. |
select_stmt | A SQL select statement which identifies data from the specified columns in the database table. |
cbk_owner | Specifies the callback package owner. For more information, see Section 3.7.12, "Callback Customization for Before and After Compose/Apply". |
cbk_name | Specifies the callback package name. For more information, see Section 3.7.12, "Callback Customization for Before and After Compose/Apply". |
Example
In the Sample11.java program, the following commands create snapshot definitions, or publication items, called P_SAMPLE11-M and P_SAMPLE11-D, of the ORD_MASTER and ORD_DETAIL database tables, which were created in the repository earlier.
Consolidator.CreatePublicationItem("P_SAMPLE11-M", "MASTER", "ORD_MASTER", "F",
    "SELECT * FROM MASTER.ORD_MASTER", null, null);
Consolidator.CreatePublicationItem("P_SAMPLE11-D", "MASTER", "ORD_DETAIL", "F",
    "SELECT * FROM MASTER.ORD_DETAIL", null, null);
Publication items can be defined for both tables and views.
When publishing updatable multi-table views, there are certain restrictions that apply:
The view must contain a parent table with a primary key defined.
INSTEAD OF triggers must be defined for data manipulation language (DML) operations on the view. See Section 3.7.5, "Fast Refresh and Update Operation for Multi-Table Publications (Views)" for more information.
All base tables of the view must be published.
Data subsetting is the ability to create specific subsets of data and assign them to a parameter name which can then be assigned to a subscribing user. When creating publication items, a parameterized select statement with a character limit of up to 8k can be defined. Subscription parameters must be specified at the time the publication item is created, and are used during synchronization to control the data published to a specific client.
Creating a Data Subset Example
Consolidator.CreatePublicationItem("CORP_DIR1", "DIRECTORY1", "ADDRLRL4P", "F" , "SELECT LastName, FirstName, company, phone1, phone2, phone3, phone4, phone5, phone1id, phone2id, phone3id, displayphone, address, city, state, zipcode, country, title, custom1, custom2, custom3,note FROM directory1.addrlrl4p WHERE company > :COMPANY", null, null);
In this sample statement, data is retrieved for a publication item named CORP_DIR1, and is subset by company.
Note: Within the select statement, the parameter name for the data subset must be prefixed with a colon, for example, :COMPANY.
Sequence support has been enhanced in the 10g release. The previous implementation still exists, but it will be deprecated and become unsupported.
The Enhancements
The following enhancements to sequence support are available.
True sequence support on the client - The Consolidator now supports replication of true sequence objects to the client.
Clear association with a publication - In a manner similar to publication items, adding sequences to a publication will propagate the corresponding sequence objects to all subscribing users. Note that a publication and a sequence have a one-to-many relationship. This means a publication can contain many different sequences, but a single sequence cannot exist in more than one publication.
Online and Offline - There are two types of sequences, online and offline. An online sequence is designed to support online Web-to-Go applications. This is accomplished by creating the same sequence object on both the server and the client. The paired sequences will be incremented by two and started with staggered values; one will start with an even number and one will start with an odd number. By using an odd/even window model such as the one described above, the Consolidator will ensure uniqueness regardless of whether the application is running in online mode or in offline mode. An offline sequence is similar to an online sequence except that the server-side sequence is not created and the developer can specify the increment value. Whether the sequence uses consecutive numbers or not is up to the application developer.
Sequence management - Once the sequences have been defined and associated with a publication, the Consolidator will manage all aspects of administering them for subscribing users, including allocation of new windows once predefined thresholds are met.
Complete Application Programming Interface (API) to manage the sequences - The API enables you to manage the sequences; for example, create/drop a sequence, add/remove a sequence from a publication, modify a sequence, and advance a sequence window for a user.
See the Consolidator Admin API Specification (included on the CD) for a complete listing of the APIs to define and administer sequences.
When a publication uses data subsetting parameters, you must set the parameters for each subscription to the publication. An example of such a parameter is COMPANY, which is described in Section 3.5.5.3, "Data Subsetting".
public static void SetSubscriptionParameter (String publication, String clientid, String param_name, String param_value) throws Throwable
The parameters for SetSubscriptionParameter are listed in Table 3-5:
Table 3-5 SetSubscriptionParameter Sample Parameters
Parameter | Definition |
---|---|
publication | Defines the publication from which the subset is to be taken. |
clientid | Defines the client ID for which the data subset is intended. |
param_name | Defines the parameter name. |
param_value | Defines the parameter value being passed, which determines what data is returned from publication item queries using this parameter. |
Example
This example sets the subscription parameter for the client DAVIDL, subscribing to the publication named CORP_DIR1:
Consolidator.SetSubscriptionParameter("CORP_DIR1", "DAVIDL", "COMPANY", "'DAVECO'");
Note: This method should only be used on publications created using the Consolidator API. To create template variables, a similar technique is possible using the Packaging Wizard.
The Mobile Server supports automatic deployment of indexes in Oracle Database Lite on clients. The Mobile Server automatically replicates primary key indexes from the server database. The Consolidator API provides calls to explicitly deploy unique, regular, and primary key indexes to clients as well.
CreatePublicationItemIndex uses the following syntax:
public static void CreatePublicationItemIndex (String name, String publication_item, String pmode, String columns) throws Throwable
The parameters of CreatePublicationItemIndex are listed in Table 3-6:
Table 3-6 CreatePublicationItemIndex Parameters
Parameter | Definition |
---|---|
name | Defines the name of the index to be created. |
publication_item | Defines the publication item on which the index is created. |
pmode | Defines the index mode: I - regular, U - unique, P - primary key. See Section 3.5.8.2, "Define Client Indexes" for more information. |
columns | Defines the names of the columns included in the index. More than one column can be listed per statement; the list of columns should be separated by commas and not contain any spaces. |
Example 1
In our Sample11.java sample code this takes the following form:
Consolidator.CreatePublicationItemIndex("P_SAMPLE11M-I1", "P_SAMPLE11-M", "I", "DDATE");
Consolidator.CreatePublicationItemIndex("P_SAMPLE11M-I2", "P_SAMPLE11-M", "I", "STATUS");
Consolidator.CreatePublicationItemIndex("P_SAMPLE11M-I3", "P_SAMPLE11-M", "I", "NAME");
Consolidator.CreatePublicationItemIndex("P_SAMPLE11D-I2", "P_SAMPLE11-D", "I", "KEY");
Consolidator.CreatePublicationItemIndex("P_SAMPLE11D-I3", "P_SAMPLE11-D", "I", "DESCRIPTION");
Sample11.java creates five indexes, which establish regular indexes on the DDATE, STATUS, and NAME columns of the P_SAMPLE11-M publication item, and on the KEY and DESCRIPTION columns of the P_SAMPLE11-D publication item. An index can contain more than one column. You could also define an index with multiple columns as follows:
Example 2
Consolidator.CreatePublicationItemIndex("P_SAMPLE11D-I1", "P_SAMPLE11-D", "I", "KEY,DESCRIPTION");
Client-side indexes can be defined for existing publication items. There are three types of indexes that can be specified:
P - Primary key
U - Unique
I - Regular
Note: When an index of type 'U' or 'P' is defined on a publication item, there is no check for duplicate keys on the server. If the same constraints do not exist on the base object of the publication item, Mobile Sync may fail with a duplicate key violation. See the Consolidator Admin API Specification for more information.
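For example, a unique index could be declared by passing 'U' as the pmode parameter. The following call is a sketch only; the index name is illustrative and not part of Sample11.java:
Consolidator.CreatePublicationItemIndex("P_SAMPLE11M-U1", "P_SAMPLE11-M", "U", "NAME");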
Once you create a publication item, you must associate it with a publication. To change the definition, you can either drop the publication item and then recreate it with the new definition, or use schema evolution depending on your requirements. See "DropPublicationItem" and "AlterPublicationItem" respectively in the Consolidator Admin API Specification for more information.
The syntax for AddPublicationItem is:
public static void AddPublicationItem (String publication, String item, String columns, String disabled_dml, String conflict_rule, String restricting_predicate, String weight) throws Throwable
The following example adds a publication item named P_SAMPLE1 to the publication T_SAMPLE1. The parameters of AddPublicationItem are listed in Table 3-7:
Table 3-7 AddPublicationItem Parameters
Parameter | Definition |
---|---|
publication | Defines the publication to receive the new item. |
item | Defines the publication item to be added. |
columns | Specifies new names for publication item columns. Using null specifies that no columns are renamed. All columns in the publication item query must be specified in the proper order. |
disabled_dml | Specifies options for disabling DML. |
conflict_rule | Defines the winner in conflict resolution: either 'C' for client wins or 'S' for server wins. See Section 3.5.9.2, "Defining Conflict Rules" for more information. |
restricting_predicate | Specifies high-priority mode. A restricting predicate can be assigned to a publication item as it is added to a publication. When a client is synchronizing in high priority mode, the predicate is used to limit data pushed to the client. This parameter can be null. This parameter is for advanced use. |
weight | Specified as null or an integer to determine priority in executing client operations against master tables. See Section 3.5.9.3, "Using Table Weight" for more information. This value must be an integer between 1 and 1023. |
Example
Consolidator.AddPublicationItem("T_SAMPLE1", "P_SAMPLE1", null, null, "S", null, null);
When adding a publication item to a publication, the user can specify winning rules to resolve synchronization conflicts in favor of either the client 'C' or the server 'S'. A Mobile Server synchronization conflict is detected under any of the following situations:
The same row was updated on the client and on the server.
Both the client and server created rows with equal primary keys.
The client deleted a row and the server updated the same row.
The client updated a row and the server deleted the same row. This is considered a synchronization error for compatibility with Oracle database advanced replication.
For systems with delayed data processing, where a client's data is not directly applied to the base table (for instance in a three tier architecture) a situation could occur when first a client inserts a row and then updates the same row, while the row has not yet been inserted into the base table. In that case, if the DEF_APPLY parameter in C$ALL_CONFIG is set to TRUE, an INSERT operation is performed, instead of the UPDATE. It is up to the application developer to resolve the resulting primary key conflict. If, however, DEF_APPLY is not set, a "NO DATA FOUND" exception is thrown (see below for the synchronization error handling).
All the other errors including nullity violations and foreign key constraint violations are synchronization errors.
If synchronization errors are not automatically resolved, the corresponding transactions are rolled back and the transaction operations are moved into Mobile Server error queue in C$EQ, while the data is stored in CEQ$. Mobile Server database administrators can change these transaction operations and re-execute or purge transactions from the error queue.
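Returning to the conflict rules described at the start of this section, the following call is a sketch (not part of Sample11.java) that would register the detail publication item with the client wins rule instead of the server wins rule ("S") used earlier:
Consolidator.AddPublicationItem("T_SAMPLE11", "P_SAMPLE11-D", null, null, "C", null, null);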
Table weight is an integer property of the association between publications and publication items. Mobile Server uses table weight to determine in which order to apply client operations to master tables within each publication, as follows:
Client INSERT operations are executed first, from lowest to highest table weight order.
Client DELETE operations are executed next, from highest to lowest table weight order.
Client UPDATE operations are executed last, from lowest to highest table weight order.
The value assigned must be an integer between 1 and 1023.
Table weight is applied to publication items within a specific publication, for example, a publication can have more than one publication item of weight "2" which would have INSERT operations performed after those for any publication item of a lower weight within the same publication.
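For example, assuming the master and detail publication items from Sample11.java, weights could be assigned through the last argument of AddPublicationItem so that client INSERT operations are applied to the master table before the detail table. This sketch is illustrative and not part of Sample11.java:
Consolidator.AddPublicationItem("T_SAMPLE11", "P_SAMPLE11-M", null, null, "S", null, "1");
Consolidator.AddPublicationItem("T_SAMPLE11", "P_SAMPLE11-D", null, null, "S", null, "2");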
Sample11 drops the user with dropUser() before creating the new user. This clears out any spurious user IDs before the new one is created. See Section 3.5.11, "Drop User" for details. The parameters for this function are not case sensitive.
The syntax for createUser is:
public static boolean createUser (String userName, String password, String fullName, String privilege) throws Throwable;
The parameters of createUser are listed in Table 3-8:
Table 3-8 createUser - Sample Parameters
Parameter | Definition |
---|---|
userName | Defines the user name for the mobile client. |
password | Defines the password for this user name. |
fullName | Optional. Specifies the full name of the user, for example, John Smith. |
privilege | Defines the Mobile Server user privilege. |
The following example creates a user "S11U1" with the parameters listed in the table:
Example
oracle.mobile.admin.ResourceManager.createUser("S11U1","manager","John Smith","C")
You can drop existing Mobile Server users with the dropUser function. The parameters for this function are not case sensitive.
The syntax for dropUser is:
public static void dropUser(String userName);
The parameters of dropUser are listed in Table 3-9:
Table 3-9 dropUser - Sample Parameters
Parameter | Definition |
---|---|
userName | Specifies the user name for the mobile client. |
The following example drops the user "S11U1":
Example
oracle.mobile.admin.ResourceManager.dropUser("S11U1");
You can subscribe users to a publication using the CreateSubscription function, which has the following syntax:
public static void CreateSubscription (String publication, String clientid) throws Throwable
The following example subscribes the client S11U1 to the publication T_SAMPLE11, with the parameters listed in Table 3-10.
Table 3-10 CreateSubscription - Sample Parameters
Parameter | Definition |
---|---|
publication | Specifies the publication being subscribed to. |
clientid | Specifies the user subscribing to the publication. |
Example
Consolidator.CreateSubscription("T_SAMPLE11", "S11U1");
After you subscribe a user to a publication, you then complete the subscription process by instantiating the subscription. When the Mobile Server instantiates a subscription, it creates a complete internal representation of the subscription.
Note: If you need to set subscription parameters for data subsetting, this must be completed before instantiating the subscription. See Section 3.5.5.3, "Data Subsetting" for more information.
The syntax for InstantiateSubscription is:
public static void InstantiateSubscription (String publication, String clientid) throws Throwable
The parameters for InstantiateSubscription are listed in Table 3-11.
Table 3-11 InstantiateSubscription - Sample Parameters
Parameter | Definition |
---|---|
publication | Specifies the publication being subscribed to. |
clientid | Specifies the user subscribing to the publication. |
The following example instantiates a client's subscription to a publication, with the values specified in the table:
Example
Consolidator.InstantiateSubscription("T_SAMPLE1", "DAVIDL");
The API calls used in Section 3.4, "The Publish and Subscribe Model and Oracle Database Lite Synchronization" are those necessary when creating publications, publication items, and subscriptions programmatically. The topics in this section are used less frequently, but are still important.
The first time a client synchronizes, the Mobile Server automatically creates the database objects on the client in the form of snapshots. By default, the primary key index of a table is automatically replicated from the server. You can create secondary indexes on a snapshot through its publication item. If you do not want the primary key index, you must explicitly drop it from the publication items. See the Consolidator Admin API Specification for specific API information.
You can change passwords for Mobile Server users with the setPassword function. The syntax for setPassword is:
public static void setPassword (String userName, String newpwd) throws Throwable
The parameters for setPassword are listed in Table 3-12:
Table 3-12 setPassword - Sample Parameters
Parameter | Definition |
---|---|
userName | Specifies the user name for the mobile client. |
newpwd | Specifies the new password for the mobile client. |
The following example changes the password for the user "MOBILE":
Example
ResourceManager.setPassword("MOBILE","MOBILENEW");
Publication items can be defined for database objects existing on remote database instances outside of the Mobile Server repository. Local private synonyms of the remote objects should be created in the Oracle database. Execute the SQL script <ORACLE_HOME>\Mobile\server\admin\consolidator_rmt.sql on the remote schema in order to create Consolidator logging objects.
The synonyms should then be published using the CreatePublicationItem API. If the remote object is a view that needs to be published in updatable mode and/or fast-refresh mode, the remote parent table must also be published locally. Parent hints should be provided for the synonym of the remote view, similar to those used for local updatable and/or fast refreshable views.
Two additional APIs have been created, DependencyHint and RemoveDependencyHint, to deal with non-apparent dependencies introduced by the publication of remote objects.
Remote links to the Oracle database must be established prior to attempting remote linking procedures. For more information, refer to the Oracle SQL Reference.
Note: The performance of synchronization from remote databases is subject to network throughput and the performance of remote query processing. Because of this, remote data synchronization is best used for simple views or tables with a limited amount of data.
The CreatePublicationItem API, used with the following parameters, creates a new, stand-alone publication item as a remote database object.
Syntax
public static void CreatePublicationItem (String rmt_jdbc_url, String name, String owner, String store, String refresh_mode, String select_stmt, String cbk_owner, String cbk_name) throws Throwable
or,
public static void CreatePublicationItem (Connection rmt_jdbc_conn, String name, String owner, String store, String refresh_mode, String select_stmt, String cbk_owner, String cbk_name) throws Throwable
The parameters for synonym creation using CreatePublicationItem are listed in Table 3-13:
Table 3-13 CreatePublicationItem Parameters for Remote Database Linking
Parameter | Description |
---|---|
rmt_jdbc_url | A string specifying a JDBC URL for the remote database instance. |
rmt_jdbc_conn | The connection to the Oracle database where the remote instance resides. |
name | A string defining a new publication item name. |
owner | A string specifying the synonym owner. |
store | A string specifying the synonym name. Note: to publish a remote object, a private synonym for it must be created. |
refresh_mode | A string specifying the refresh mode: F for fast refresh or C for complete refresh. The default is fast refresh. |
select_stmt | A string specifying a select statement for the new publication item. This statement may be parameterized. In the example that follows, the parameter is :CAP, defined by placing a colon in front of the parameter name. |
cbk_owner | Specifies the callback package owner as NULL. For more information, see Section 3.7.12, "Callback Customization for Before and After Compose/Apply". |
cbk_name | Specifies the callback package name as NULL. For more information, see Section 3.7.12, "Callback Customization for Before and After Compose/Apply". |
If the URL string is used, the remote connection is established and closed automatically. If the connection is null or cannot be established, an exception is thrown. The remote connection information is used to create logging objects on the linked database and to extract metadata.
Example
Consolidator.CreatePublicationItem("jdbc:oracle:oci8:@oracle.world", "P_SAMPLE1",
    "SAMPLE1", "PAYROLL_SYN", "F",
    "SELECT * FROM sample1.PAYROLL_SYN WHERE SALARY > :CAP", null, null);
Note: Within the select statement, the parameter name for the data subset must be prefixed with a colon, for example, :CAP.
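The Connection form of the call works the same way, except that the caller owns the remote connection. The following sketch assumes a hypothetical remote JDBC URL and credentials; only the CreatePublicationItem signature comes from the documentation above:
// Hypothetical remote instance and credentials, shown for illustration only.
Connection rmtConn = DriverManager.getConnection(
    "jdbc:oracle:oci8:@remote.world", "SAMPLE1", "SAMPLE1");
Consolidator.CreatePublicationItem(rmtConn, "P_SAMPLE1", "SAMPLE1", "PAYROLL_SYN",
    "F", "SELECT * FROM sample1.PAYROLL_SYN WHERE SALARY > :CAP", null, null);
rmtConn.close();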
The DependencyHint API creates a hint for a non-apparent dependency.
Syntax
public static void DependencyHint (String owner, String store, String owner_d, String store_d) throws Throwable
The parameters for DependencyHint are listed in Table 3-14:
Table 3-14 DependencyHint Parameters
Parameter | Description |
---|---|
owner | A string specifying the owner of the view. |
store | A string specifying the name of the view. |
owner_d | A string specifying the owner of the base table or view. |
store_d | A string specifying the name of the base table or view. |
Example
Given the remote view definition:
create payroll_view as select p.pid, e.name from payroll p, emp e where p.emp_id = e.emp_id;
Execute locally:
create synonym v_payroll_syn for payroll_view@<remote_link_address>;
create synonym t_emp_syn for emp@<remote_link_address>;
Where <remote_link_address> is the link established on the Oracle database. Use DependencyHint to indicate that the local synonym v_payroll_syn depends on the local synonym t_emp_syn:
Consolidator.DependencyHint("SAMPLE1","V_PAYROLL_SYN","SAMPLE1","T_EMP_SYN");
The RemoveDependencyHint API removes a hint for a non-apparent dependency.
Syntax
public static void RemoveDependencyHint (String owner, String store, String owner_d, String store_d) throws Throwable
The parameters for RemoveDependencyHint are listed in Table 3-15:
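For example, the hint created earlier for the payroll view could be removed with a call of the following form, a sketch mirroring the DependencyHint example above:
Consolidator.RemoveDependencyHint("SAMPLE1", "V_PAYROLL_SYN", "SAMPLE1", "T_EMP_SYN");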
The following features include special functions which are not required for most application designs. These features may require advanced understanding of both Java and the design of the database being manipulated, including how queries have been constructed, how tables have been arranged and any dependencies that apply. The topics discussed are:
Section 3.7.1, "Compose Phase Customization Using MyCompose"
Section 3.7.4, "Modifying a Publication Item Using AlterPublicationItem"
Section 3.7.5, "Fast Refresh and Update Operation for Multi-Table Publications (Views)"
Section 3.7.9, "Queue Interface for Customizing Replication"
Section 3.7.11, "Foreign Key Constraints in Updatable Publication Items"
Section 3.7.12, "Callback Customization for Before and After Compose/Apply"
The compose phase takes a query on one or more server-side base tables and puts the generated DML operations for the publication item the query describes into the out queue, to be downloaded to the client. Consolidator manages these DML operations in a "generic" way using the physical DML logs on the server-side base tables. This can be resource intensive if the DML operations are complex, for example, if there are complex data-subsetting queries being used. The tools to customize this process include an extendable MyCompose class with compose methods which can be overridden, and additional Consolidator APIs to register and load the customized class.
MyCompose is an abstract class which serves as the super-class for creating a user-written sub-class, for example, ItemACompose:
public class ItemACompose extends oracle.lite.sync.MyCompose
{
 ...
}
The user-written class produces publication item DML operations to be sent to a client device by interpreting the base table DML logs. The extended MyCompose sub-class is registered with a publication item and takes over all compose phase operations for that publication item. An extended MyCompose class can be registered with more than one publication item if it is sufficiently generic; however, internally there is a unique instance of the extended class for each publication item.
The MyCompose class provides four methods for customizing the compose phase: needCompose, doCompose, init, and destroy. One or more of these methods can be overridden in the customized sub-class to customize compose phase operations. For most users attempting to customize the compose phase for one client at a time, doCompose and needCompose are sufficient. The init and destroy methods are used when some process must be performed for all clients, either before or after individual client processing. There are several more methods, described in Section 3.7.1.3, "Subsidiary MyCompose Methods", that provide useful information for the use of these four methods.
Use this method to identify a client that has changes to a specific publication item to be downloaded. This method is primarily useful as a way to trigger doCompose.
Syntax
public int needCompose(Connection conn, String clientid) throws Throwable
The parameters for needCompose are listed in Table 3-16:
Table 3-16 needCompose Parameters
Parameter | Definition |
---|---|
conn | Database connection to the Mobile Server repository. |
clientid | Specifies the client which is connecting to the database. |
The following example examines a client base table for changes, in this example the presence of "dirty" records. If there are changes, the method returns MyCompose.YES, which triggers the doCompose method.
Example
public int needCompose(Connection conn, String clientid) throws Throwable {
    boolean baseDirty = false;
    String [][] baseTables = this.getBaseTables();
    for (int i = 0; i < baseTables.length; i++) {
        if (this.baseTableDirty(baseTables[i][0], baseTables[i][1])) {
            baseDirty = true;
            break;
        }
    }
    if (baseDirty) {
        return MyCompose.YES;
    } else {
        return MyCompose.NO;
    }
}
This sample code overrides the needCompose method and uses the subsidiary methods discussed in Section 3.7.1.3, "Subsidiary MyCompose Methods", to check whether the publication item has any tables with changes that need to be sent to the client. The base tables are retrieved, then checked for changed, or "dirty," records. If the result of that test is true, the method returns MyCompose.YES, which triggers the call to doCompose.
This method populates the DML log table for a specific publication item subscribed to by a client.
Syntax
public int doCompose(Connection conn, String clientid) throws Throwable
The parameters for doCompose are listed in Table 3-17:
Table 3-17 doCompose Parameters
Parameter | Definition |
---|---|
conn | Database connection to the Mobile Server repository. |
clientid | Specifies the client which is connecting to the database. |
The following example assumes a publication item with only one base table, where a DML (insert, update, or delete) operation on the base table is also performed on the publication item. This method is called for each client subscribed to that publication item.
Example
public int doCompose(Connection conn, String clientid) throws Throwable {
    int rowCount = 0;
    String [][] baseTables = this.getBaseTables();
    String baseTableDMLLogName =
        this.getBaseTableDMLLogName(baseTables[0][0], baseTables[0][1]);
    String baseTablePK = this.getBaseTablePK(baseTables[0][0], baseTables[0][1]);
    String pubItemDMLTableName = this.getPubItemDMLTableName();
    String sql = "INSERT INTO " + pubItemDMLTableName +
                 " SELECT " + baseTablePK + ", DMLTYPE$$ FROM " + baseTableDMLLogName;
    Statement st = conn.createStatement();
    rowCount = st.executeUpdate(sql);
    st.close();
    return rowCount;
}
This sample code overrides the doCompose method and uses subsidiary methods discussed in Section 3.7.1.3, "Subsidiary MyCompose Methods", to create a SQL statement. The sample retrieves the base table, the base table primary key, the base table DML log name, and the publication item DML table name using the appropriate get methods. The table names and other information returned by these methods are then used to build a dynamic SQL statement (sql) which inserts into the publication item DML table the base table primary key and the DML operation taken from the base table DML log.
This method provides the framework for user-created compose preparation processes. The init method is called once for all clients prior to the individual client compose phase. The default implementation has no effect.
Syntax
public void init(Connection conn)
The parameters for init are listed in Table 3-18:
This method provides the framework for user-created compose cleanup processes. The destroy method is called once for all clients after the individual client compose phase. The default implementation has no effect.
Syntax
public void destroy(Connection conn)
The parameters for destroy are listed in Table 3-18:
The following methods return information for use by primary MyCompose methods.
This returns the name of the publication.
Syntax
public String getPublication()
This returns the publication item name.
Syntax
public String getPublicationItem()
Returns the name of the DML table or DML table view, including the schema name, which doCompose or init are supposed to insert into.
Syntax
public String getPubItemDMLTableName()
You can embed the returned value into dynamic SQL statements. The table or view structure is:
<PubItem PK> DMLTYPE$$
The parameters for getPubItemDMLTableName are listed in Table 3-20:
Returns the primary key for the listed publication item in comma-separated format, in the form <col1>,<col2>,<col3>.
Syntax
public String getPubItemPK() throws Throwable
Returns all the base tables for the publication item in an array of two-string arrays. Each two-string array contains the base table schema and name. The parent table is always the first base table returned, in other words, baseTables[0].
Syntax
public String [][] getBaseTables() throws Throwable
Returns the primary key for the listed base table in comma-separated format, in the form <col1>,<col2>,<col3>.
Syntax
public String getBaseTablePK(String owner, String baseTable) throws Throwable
The parameters for getBaseTablePK are listed in Table 3-21:
Returns a boolean value indicating whether the base table has changes to be synchronized.
Syntax
public boolean baseTableDirty(String owner, String store)
The parameters for baseTableDirty are listed in Table 3-22:
Returns the name for the physical DML log table or DML log table view for a base table.
Syntax
public String getBaseTableDMLLogName(String owner, String baseTable)
The parameters for getBaseTableDMLLogName are listed in Table 3-23:
Table 3-23 getBaseTableDMLLogName Parameters
Parameter | Definition |
---|---|
owner | The schema name of the base table owner. |
baseTable | The base table name. |
You can embed the returned value into dynamic SQL statements. There may be multiple physical logs if the publication item has multiple base tables. The parent base table's physical primary key corresponds to the primary key of the publication item. The structure of the log is:
<Base Table PK> DMLTYPE$$
The parameters for the getBaseTableDMLLogName view structure are listed in Table 3-24:
Returns a view of the map table which can be used in a dynamic SQL statement and contains a primary key list for each client device. The view can be an inline view.
Syntax
public String getMapView() throws Throwable
The structure of the map table view is:
CLID$$CS <Pub Item PK> DMLTYPE$$
The parameters of the map table view are listed in Table 3-25:
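As an illustration of how these subsidiary methods can be combined, the following sketch (not taken from the product documentation) shows a doCompose variant that uses getMapView() to compose only changes to rows the client already holds. It assumes a publication item with a single-column primary key:
public int doCompose(Connection conn, String clientid) throws Throwable {
    String [][] baseTables = this.getBaseTables();
    String dmlLog  = this.getBaseTableDMLLogName(baseTables[0][0], baseTables[0][1]);
    String basePK  = this.getBaseTablePK(baseTables[0][0], baseTables[0][1]); // assumed single column
    String pubDml  = this.getPubItemDMLTableName();
    String mapView = this.getMapView();
    // Compose only rows this client already holds according to the map view;
    // newly inserted rows would need additional handling in a real implementation.
    String sql = "INSERT INTO " + pubDml +
                 " SELECT l." + basePK + ", l.DMLTYPE$$ FROM " + dmlLog + " l, " + mapView + " m" +
                 " WHERE m.CLID$$CS = '" + clientid + "' AND m." + basePK + " = l." + basePK;
    Statement st = conn.createStatement();
    int rowCount = st.executeUpdate(sql);
    st.close();
    return rowCount;
}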
Once you have created your sub-class, it must be registered with a publication item. The Consolidator API provides two methods, RegisterMyCompose and DeRegisterMyCompose, to permit adding and removing the sub-class for a publication item.
The RegisterMyCompose method registers the sub-class and loads it into the Mobile Server repository, including the class byte code. By loading the code into the repository, the sub-class can be used without having to be loaded at runtime.
Syntax
public static void RegisterMyCompose ( String publication, String pubItem, String className, boolean reloadBytecode) throws Throwable
The parameters of RegisterMyCompose
are listed in Table 3-26:
Table 3-26 RegisterMyCompose Parameters
Parameter | Definition |
---|---|
publication | The name of the publication the publication item is part of. |
pubItem | The name of the publication item to which the sub-class is being registered. |
className | The name of the customized MyCompose sub-class. |
reloadBytecode | If this value is true, then the existing byte code for the class in the Mobile Server repository is overwritten. |
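Example
The following call is illustrative only; the publication, publication item, and sub-class names are placeholders, and it assumes that RegisterMyCompose is invoked on the Consolidator class like the other calls in this chapter:
Consolidator.RegisterMyCompose("T_SAMPLE1", "P_SAMPLE1", "ItemACompose", true);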
The DeRegisterMyCompose
method removes the sub-class from the Mobile Server repository.
Syntax
public static void DeRegisterMyCompose ( String publication, String pubItem, boolean removeBytecode) throws Throwable
The parameters of DeRegisterMyCompose
are listed in Table 3-27:
Table 3-27 DeRegisterMyCompose Parameters
Parameter | Definition |
---|---|
publication | The name of the publication to which the publication item belongs. |
pubItem | The name of the publication item from which the sub-class is being deregistered. |
removeBytecode | If this value is true, then the existing byte code for the class in the Mobile Server repository is removed. If the byte code is removed, all publication items registered with this class have their registration removed. |
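Example
The following call is illustrative only, using the same placeholder names, and assumes that DeRegisterMyCompose is invoked on the Consolidator class:
Consolidator.DeRegisterMyCompose("T_SAMPLE1", "P_SAMPLE1", false);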
The sync discovery feature is used to request an estimate of the size of the download for a specific client, based on historical data. The following statistics are gathered to maintain the historical data:
The total number of rows sent for each publication item.
The total data size for these rows.
The compressed data size for these rows.
The API consists of the getDownloadInfo
method which returns the DownloadInfo
object. The DownloadInfo
object contains a set of PublicationSize
objects and access methods. The PublicationSize
objects carry the size information of a publication item. The method Iterator iterator()
can then be used to view each PublicationSize
object in the DownloadInfo
object.
Syntax
public DownloadInfo getDownloadInfo (String clientid, boolean uncompressed, boolean completeRefresh)
The parameters of getDownloadInfo
are listed in Table 3-28:
Table 3-28 getDownloadInfo Parameters
Parameter | Description |
---|---|
clientid | The name of the client. |
uncompressed | If set to true, returns the actual size of the data; if set to false, returns the size of the data after compression. |
completeRefresh | If set to true, returns the size of all rows that would be synchronized during a complete refresh, regardless of the refresh mode. |
Example
DownloadInfo dl = Consolidator.getDownloadInfo("S11U1", true, true);
The access methods provided by the DownloadInfo
class are listed in Table 3-29:
Table 3-29 DownloadInfo Class Access Methods
Method | Definition |
---|---|
public Iterator iterator () | This returns an Iterator object so that the user can traverse through all the PublicationSize objects contained inside the DownloadInfo object. |
public long getTotalSize () | This returns the size information of all PublicationSize objects in bytes, and by extension, the size of all publication items subscribed to by that user. If no historical information is available for those publication items, the value returned is '-1'. |
public long getPubSize (String pubName) | This returns the size of all publication items that belong to the publication referred to by the string pubName. If no historical information is available for those publication items, the value returned is '-1'. |
public long getPubRecCount (String pubName) | This returns the number of records, across all publication items belonging to the publication referred to by the string pubName, that will be synchronized during the next synchronization. |
public long getPubItemSize (String pubItemName) | This returns the size of the particular publication item referred to by pubItemName. It follows the following rules in order. |
public long getPubItemRecCount (String pubItemName) | This returns the number of records of the publication item referred to by pubItemName that will be synchronized in the next synchronization. |
The access methods provided by the PublicationSize
class are listed in Table 3-30:
Table 3-30 PublicationSize Class Access Methods
Method | Definition |
---|---|
public String getPubName () | This returns the name of the publication containing the publication item. |
public String getPubItemName () | This returns the name of the publication item referred to by the PublicationSize object. |
public long getSize () | This returns the total size of the publication item referred to by the PublicationSize object. |
public long getNumOfRows() | This returns the number of rows of the publication item that will be synchronized in the next synchronization. |
Sample Code
import java.sql.*;
import java.util.Iterator;
import java.util.HashSet;
import oracle.lite.sync.ConsolidatorManager;
import oracle.lite.sync.DownloadInfo;
import oracle.lite.sync.PublicationSize;

public class TestGetDownloadInfo
{
   public static void main(String argv[]) throws Throwable
   {
      // Open Consolidator connection
      try
      {
         // Create a ConsolidatorManager object
         ConsolidatorManager cm = new ConsolidatorManager ();
         // Open a Consolidator connection
         cm.OpenConnection ("MOBILEADMIN", "MANAGER",
                            "jdbc:oracle:thin:@server:1521:orcl", System.out);
         // Call getDownloadInfo
         DownloadInfo dlInfo = cm.getDownloadInfo ("S11U1", true, true);
         // Call iterator for the Iterator object and then use it to traverse
         // through the set of PublicationSize objects.
         Iterator it = dlInfo.iterator ();
         // A temporary holder for the PublicationSize object.
         PublicationSize ps = null;
         // A temporary holder for the names of all the Publications in a HashSet object.
         HashSet pubNames = new HashSet ();
         // A temporary holder for the names of all the Publication Items in a HashSet object.
         HashSet pubItemNames = new HashSet ();
         // Temporary holders for the values read from each PublicationSize object.
         String pubName = null;
         String pubItemName = null;
         long size = 0;
         long numOfRows = 0;
         // Traverse through the set.
         while (it.hasNext ())
         {
            // Obtain the next PublicationSize object by calling next ().
            ps = (PublicationSize)it.next ();
            // Obtain the name of the Publication this PublicationSize object is
            // associated with by calling getPubName ().
            pubName = ps.getPubName ();
            System.out.println ("Publication: " + pubName);
            // Save pubName for later use.
            pubNames.add (pubName);
            // Obtain the Publication Item name by calling getPubItemName ().
            pubItemName = ps.getPubItemName ();
            System.out.println ("Publication Item Name: " + pubItemName);
            // Save pubItemName for later use.
            pubItemNames.add (pubItemName);
            // Obtain the size by calling getSize ().
            size = ps.getSize ();
            System.out.println ("Size of the Publication: " + size);
            // Obtain the number of rows by calling getNumOfRows ().
            numOfRows = ps.getNumOfRows ();
            System.out.println ("Number of rows in the Publication: " + numOfRows);
         }
         // Obtain the size of all the Publications contained in the DownloadInfo object.
         long totalSize = dlInfo.getTotalSize ();
         System.out.println ("Total size of all Publications: " + totalSize);
         // A temporary holder for the Publication size.
         long pubSize = 0;
         // A temporary holder for the Publication number of rows.
         long pubRecCount = 0;
         // A temporary holder for the name of the Publication.
         String tmpPubName = null;
         // Traverse through the Publication names that we saved earlier.
         it = pubNames.iterator ();
         while (it.hasNext ())
         {
            // Obtain the saved name.
            tmpPubName = (String) it.next ();
            // Obtain the size of the Publication.
            pubSize = dlInfo.getPubSize (tmpPubName);
            System.out.println ("Size of " + tmpPubName + ": " + pubSize);
            // Obtain the number of rows of the Publication.
            pubRecCount = dlInfo.getPubRecCount (tmpPubName);
            System.out.println ("Number of rows in " + tmpPubName + ": " + pubRecCount);
         }
         // A temporary holder for the Publication Item size.
         long pubItemSize = 0;
         // A temporary holder for the Publication Item number of rows.
         long pubItemRecCount = 0;
         // A temporary holder for the name of the Publication Item.
         String tmpPubItemName = null;
         // Traverse through the Publication Item names that we saved earlier.
         it = pubItemNames.iterator ();
         while (it.hasNext ())
         {
            // Obtain the saved name.
            tmpPubItemName = (String) it.next ();
            // Obtain the size of the Publication Item.
            pubItemSize = dlInfo.getPubItemSize (tmpPubItemName);
            System.out.println ("Size of " + tmpPubItemName + ": " + pubItemSize);
            // Obtain the number of rows of the Publication Item.
            pubItemRecCount = dlInfo.getPubItemRecCount (tmpPubItemName);
            System.out.println ("Number of rows in " + tmpPubItemName + ": " + pubItemRecCount);
         }
         System.out.println ();
         // Close the connection
         cm.CloseConnection ();
      }
      catch (Exception e)
      {
         e.printStackTrace();
      }
   }
}
Consolidator database objects called map tables are used to maintain the state for each Mobile Client. If there are a large number of clients, and each client subscribes to a large amount of data, the map tables can become very large creating scalability issues. Using the following APIs, map tables can be partitioned by clientid, making them more manageable.
The API allows you to create a map table partition, add additional partitions, drop one or all partitions, and merge map table partitions. Map table partitions can be monitored using the ALL_PARTITIONS database catalog view.
Note: This form of partitioning is not related to the partition functionality provided by Oracle Server, and is used exclusively by Oracle Database Lite 10g. |
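As a simple way to monitor partitions, the CV$ALL_PARTITIONS view can be queried from any SQL tool or, as in the following sketch, through JDBC. The connection URL and credentials are placeholders, and the columns are printed generically because the view's column list is not reproduced here:

import java.sql.*;

public class ListMapPartitions
{
   public static void main(String[] args) throws Exception
   {
      Class.forName("oracle.jdbc.driver.OracleDriver");
      // Placeholder connection details for the Mobile Server repository.
      Connection conn = DriverManager.getConnection(
         "jdbc:oracle:thin:@server:1521:orcl", "MOBILEADMIN", "MANAGER");
      Statement st = conn.createStatement();
      ResultSet rs = st.executeQuery("SELECT * FROM CV$ALL_PARTITIONS");
      ResultSetMetaData md = rs.getMetaData();
      while (rs.next())
      {
         // Print every column of each partition row, including the partition name.
         for (int i = 1; i <= md.getColumnCount(); i++)
            System.out.print(md.getColumnName(i) + "=" + rs.getString(i) + "  ");
         System.out.println();
      }
      rs.close();
      st.close();
      conn.close();
   }
}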
Creates a partition for the referenced publication item's map table. If there is data in the map table, it is transferred to the partition being created. After the partition has been successfully created, the map table can be truncated to remove redundant data using the SQL command TRUNCATE TABLE.
Syntax
public static void PartitionMap (String pub_item, int num_parts, String storage, String ind_storage) throws Throwable
The parameters of PartitionMap
are listed in Table 3-31.
Table 3-31 PartitionMap Parameters
Parameter | Definition |
---|---|
pub_item | The publication item whose map table is being partitioned. |
num_parts | The number of partitions. |
storage | A string specifying the storage parameters. This parameter requires the same syntax as the SQL command CREATE TABLE. See the Oracle9i SQL Reference for more information. |
ind_storage | A string specifying the storage parameters for indexes on the partition. This parameter requires the same syntax as the SQL command CREATE INDEX. See the Oracle9i SQL Reference for more information. |
Example
Consolidator.PartitionMap("P_SAMPLE1", 5, "tablespace mobileadmin", "initrans 10 pctfree 70");
Adds a partition for the referenced publication item's map table. If there is data in the map table, it is transferred to the partition being created. After the partition has been successfully created, the map table can be truncated to remove redundant data using the SQL command TRUNCATE TABLE.
Syntax
public static void AddMapPartition ( String pub_item, int num_parts, String storage, String ind_storage) throws Throwable
The parameters of AddMapPartition
are listed in Table 3-32:
Table 3-32 AddMapPartitions Parameters
Parameter | Definition |
---|---|
pub_item | The publication item whose map table is being partitioned. |
num_parts | The number of partitions. |
storage | A string specifying the storage parameters. This parameter requires the same syntax as the SQL command CREATE TABLE. See the Oracle Database Lite SQL Reference for more information. |
ind_storage | A string specifying the storage parameters for indexes on the partition. This parameter requires the same syntax as the SQL command CREATE INDEX. See the Oracle Database Lite SQL Reference for more information. |
Example
Consolidator.AddMapPartitions("P_SAMPLE1", 5, "tablespace mobileadmin", "initrans 10 pctfree 40");
Drops the named partition. In the following example, the partition
parameter is the name of the partition. Partition names must be retrieved by querying the ALL_PARTITIONS table view CV$ALL_PARTITIONS since partitions are named by Consolidator.
Syntax
public static void DropMapPartition( String partition) throws Throwable
Example
Consolidator.DropMapPartition("MAP101_1");
Drops all partitions of the map table for the named publication item.
Syntax
public static void DropAllMapPartitions( String pub_item) throws Throwable
Example
Consolidator.DropAllMapPartitions("P_SAMPLE1");
Merges the data from one partition into another. Partition names must be retrieved by querying the ALL_PARTITIONS table view CV$ALL_PARTITIONS, since partitions are named by Consolidator.
Syntax
public static void MergeMapPartitions ( String from_partition, String to_partition) throws Throwable
Example
Consolidator.MergeMapPartitions("MAP101_1", "MAP101_2");
You can add additional columns to existing publication items. These new columns are pushed to all subscribing clients the next time they synchronize. This is accomplished through a complete refresh of all changed publication items.
An administrator can add multiple columns.
This feature is supported for all client formats.
The client does not upload snapshot information to the server. This also means that snapshots cannot be changed directly on the client database; for example, you cannot alter a table using Mobile SQL on EPOC.
Publication item upgrades will be deferred during high priority synchronizations. This is necessary for low bandwidth networks, such as wireless, because all publication item upgrades require a complete refresh of changed publication items. While the high priority flag is set, high priority clients will continue to receive the old publication item format.
The server needs to support a maximum of two versions of the publication item which has been altered.
This allows additional columns to be added to an existing publication item. The WHERE clause may also be altered, but additional subscription parameters may not be added.
Syntax
public static void AlterPublicationItem (String name, String select_stmt) throws Throwable
The parameters for AlterPublicationItem
are listed in Table 3-33:
Table 3-33 Alter Publication Item Parameters
Parameter | Description |
---|---|
name | A character string specifying the publication item name. |
select_stmt | A new publication item select statement containing additional columns. |
Example
Consolidator.AlterPublicationItem("P_SAMPLE1", "select * from EMP");
The Mobile Server supports fast refresh and update operations for publication items based on complex, multiple-table views that meet specific criteria. During a fast refresh, incremental changes are synchronized; during a complete refresh, all data is refreshed with current data. The refresh mode is established when you create the publication item using the CreatePublicationItem API call. To change the refresh mode, you must first drop the publication item and recreate it with the appropriate mode.
For a view to be updatable, it must have a parent table. A parent table can be any one of the view's base tables in which a primary key is included in the view's column list and is unique in the view's row set. If you want to make a view updatable, you must provide the Mobile Server with the appropriate hint and the view's parent table before you create a publication item on the view.
To make publication items based on a view updatable, you must use the following two mechanisms:
Parent table hints
INSTEAD OF triggers or DML procedure callouts
Parent table hints define the parent table for a given view. Parent table hints are provided through the ParentHint
function, which uses the following syntax:
public static void ParentHint (String owner, String store, String owner_d, String store_d) throws Throwable
The parameters for ParentHint
are listed in Table 3-34:
Table 3-34 ParentHint Parameters
Parameter | Description |
---|---|
owner | A string specifying the view owner. |
store | A string specifying the view name. |
owner_d | A string specifying the base object owner. |
store_d | A string specifying the base object name. |
Example
Consolidator.ParentHint("SAMPLE3","ADDROLRL4P","SAMPLE3","ADDRESS");
INSTEAD OF triggers are used to execute INSTEAD OF INSERT, INSTEAD OF UPDATE, or INSTEAD OF DELETE commands. INSTEAD OF triggers also map these DML commands into operations that are performed against the view's base tables. INSTEAD OF triggers are a function of Oracle database. See the Oracle database documentation for details on INSTEAD OF triggers.
Publication items are created for fast refresh by default. Under fast refresh, only incremental changes are replicated. The advantages of fast refresh are reduced overhead and increased speed when replicating data stores with large amounts of data where there are limited changes between synchronization sessions.
The Mobile Server performs a fast refresh of a view if the view meets the following criteria:
Each of the view's base tables must have a primary key.
All primary keys from all base tables must be included in the view's column list.
If the item is a view, and the item predicate involves multiple tables, then all tables contained in the predicate definition must have primary keys and must have corresponding publication items.
The view requires only a unique primary key for the parent table. The primary keys of other tables may be duplicated. For each base table primary key column, you must provide the Mobile Server with a hint about the column name in the view. You can accomplish this by using PrimaryKeyHint
.
The syntax for PrimaryKeyHint
is:
public static void PrimaryKeyHint (String publication_item, String column, String b_owner, String b_store, String b_column) throws Throwable
The parameters for PrimaryKeyHint
are listed in Table 3-35:
Table 3-35 PrimaryKeyHint Parameters
Parameter | Description |
---|---|
publication_item | The name of the publication item to which the primary key hint is mapped. |
column | The name of the column in the view that corresponds to the base table primary key column. |
b_owner | A string specifying the owner of the base table. |
b_store | A string specifying the name of the base table. |
b_column | The name of the base table column the hint is using. |
Example
Consolidator.ParentHint("SAMPLE3","ADDROLRL4P","SAMPLE3","ADDRESS");
Publication items can be created for complete refresh using the CompleteRefresh call from the Consolidator API. When this mode is specified, client data is completely refreshed with current data from the server after every synchronization. An administrator can also force a complete refresh of an entire publication via an API call. The complete refresh function forces a complete refresh of a publication for a given client.
The syntax for CompleteRefresh
is:
public static void CompleteRefresh (String client_id, String publication) throws Throwable
The parameters for CompleteRefresh
are listed in Table 3-36:
You can specify a virtual primary key for publication items where the base object does not have a primary key defined. A virtual primary key can be created for more than one column, but the API must be called separately for each column you wish to assign a virtual primary key. The following methods create and drop a virtual primary key.
This creates a virtual primary key column.
Syntax
public static void CreateVirtualPKColumn (String owner, String store, String column) throws Throwable
The parameters for CreateVirtualPKColumn
are listed in Table 3-37:
Table 3-37 CreateVirtualPKColumn Parameters
Parameter | Description |
---|---|
owner | A string specifying the owner of the base table or view. |
store | A string specifying the base table or view. |
column | A string specifying the primary key column. |
Example
Consolidator.CreateVirtualPKColumn("SAMPLE1", "DEPT", "DEPT_ID");
This allows a virtual primary key to be dropped.
Syntax
public static void DropVirtualPKColumn (String owner, String store) throws Throwable
The parameters for DropVirtualPKColumn
are listed in Table 3-38:
Table 3-38 DropVirtualPKColumn Parameters
Parameter | Description |
---|---|
owner | A string specifying the owner of the base table or view. |
store | A string specifying the base table or view. |
Example
Consolidator.DropVirtualPKColumn("SAMPLE1", "DEPT");
This feature allows complex publication item queries to be cached. This applies to queries that cannot be optimized by the Oracle query engine. By caching the query in a temporary table, the Consolidator template can join to the snapshot more efficiently.
Storing the data in a temporary table does result in additional overhead to MGP operation, and the decision to use it should only be made after first attempting to optimize the publication item query to perform well inside the Consolidator template. If the query cannot be optimized in this way, the caching method should be used.
The following example is a template used by the MGP during the compose phase to identify client records that are no longer valid, and should be deleted from the client:
UPDATE pub_item_map map
   SET delete = true
 WHERE client = <clientid>
   AND NOT EXISTS (SELECT 'EXISTS'
                     FROM (<publication item query>) snapshot
                    WHERE map.pk = snapshot.pk);
In this example, when <publication item query>
becomes too complex, because it contains multiple nested subqueries, unions, virtual columns, connect by clauses, and other complex functions, the query optimizer is unable to determine an acceptable plan. This can have a significant impact on performance during the MGP compose phase. Storing the publication item query in a temporary table, using the publication item query caching feature, flattens the query structure and enables the template to effectively join to it.
The following API enables publication item query caching.
Syntax
public static void EnablePublicationItemQueryCache(String name) throws Throwable
The parameters for EnablePublicationItemQueryCache
are listed in Table 3-39:
Table 3-39 EnablePublicationItemQueryCache Parameters
Parameter | Description |
---|---|
name | A string specifying the name of the publication item. |
Example
Consolidator.EnablePublicationItemQueryCache( "P_SAMPLE1");
The following API disables publication item query caching.
Syntax
public static void DisablePublicationItemQueryCache(String name) throws Throwable
The parameters for DisablePublicationItemQueryCache
are listed in Table 3-40:
Table 3-40 DisablePublicationItemQueryCache Parameters
Parameter | Description |
---|---|
name | A string specifying the name of the publication item. |
Example
Consolidator.DisablePublicationItemQueryCache("P_SAMPLE1");
The Mobile Server synchronization process can be customized in many ways. You can attach application logic to the Mobile Server by binding PL/SQL procedures to publication items. The procedures must expose the BeforeCompose
, AfterCompose
, BeforeApply
, and AfterApply
methods of the Consolidator API. The Mobile Server calls these methods before and after it:
Applies client changes to server tables on behalf of Mobile Sync clients.
Composes fast-refresh changes for a given publication item.
The Mobile Server passes the current Mobile Sync user name information to these methods.
User-defined PL/SQL procedures can cache or pre-compute data. They can also resolve foreign key constraint violation problems. See Section 3.7.11, "Foreign Key Constraints in Updatable Publication Items" for more information. See Section 3.7.12, "Callback Customization for Before and After Compose/Apply" for details on using these calls.
Application developers can manage the replication process programmatically by using the CreateQueuePublicationItem API. Normally, the MGP manages both the in queues and the out queues; this API allows the application developer to manage queue operations during a synchronization session by using a PL/SQL package, described in Section 3.7.9.3, "Queue Interface PL/SQL Procedure", and by creating the queues themselves.
When data arrives from the client it is placed in the publication item in queues. Consolidator calls UPLOAD_COMPLETE once the data has been committed. All records in the current synchronization session are given the same transaction identifier. Consolidator has a Queue Control Table (C$INQ+name) that indicates which publication item in queues have received new transactions using this transaction identifier. You can refer to this table to determine which queues need processing.
Before Consolidator begins the download phase of the synchronization session, it calls DOWNLOAD_INIT. This procedure allows customization of any settings which need to be set or modified to determine which data is sent to the client. Consolidator finds a list of the publication items which can be downloaded based on the client's subscription. A list of publication items and their refresh mode, 'Y' for complete refresh, 'N' for fast refresh, is inserted into a temporary table (C$PUB_LIST_Q). Items can be deleted or the refresh status can be modified in this table since Consolidator refers to C$PUB_LIST_Q to determine which items will be downloaded to the client.
Similar to the in queue, every record in the out queue should be associated with a transaction identifier (TRANID$$). Consolidator passes the last_tran
parameter to indicate the last transaction that the client has successfully applied. New out queue records which have not been downloaded to the client before should be marked with the value of curr_tran
parameter. The value of curr_tran
is always greater than that of last_tran
, though not necessarily sequential. Consolidator only downloads records from the out queues when the value of TRANID$$ is greater than last_tran. When the data is downloaded, Consolidator calls DOWNLOAD_COMPLETE.
You need to create the out queue in the Mobile Server repository manually using SQL. You may also create the in queue, although Consolidator creates one automatically if it does not exist. Connect to your repository and execute the appropriate statements to create in queues and out queues with the following structure:
Out queue
'CTM$'+name (
   CLID$$CS VARCHAR2 (30),
   .. publication_item_store_columns (c1..cN), ..
   TRANID$$ NUMBER (10),
   DMLTYPE$$ CHAR (1) CHECK (DMLTYPE$$ IN ('I','U','D'))
)
In queue
'CFM$'+name (
   CLID$$CS VARCHAR2 (30),
   TRANID$$ NUMBER (10),
   SEQNO$$ NUMBER (10),
   DMLTYPE$$ CHAR (1) CHECK (DMLTYPE$$ IN ('I','U','D')),
   .. publication_item_store_columns (c1..cN), ..
)
Consolidator creates a queue control table, C$INQ, and a temporary table, C$PUB_LIST_Q. You can examine the queue control table to determine which publication items have received new transactions.
Queue Control Table
'C$INQ'+name (
   CLIENTID VARCHAR2 (30),
   TRANID$$ NUMBER,
   STORE VARCHAR2 (30)
)
Temporary Table
'C$PUB_LIST_Q' (
   NAME VARCHAR2 (30),
   COMP_REF CHAR(1),
   CHECK (COMP_REF IN ('Y','N'))
)
The parameters for the manually created queues are listed in Table 3-41:
Table 3-41 Queue Interface Creation Parameters
Parameter | Description |
---|---|
CLID$$CS | A unique string identifying the client. |
TRANID$$ | A unique number identifying the transaction. |
SEQNO$$ | A unique number for every DML language operation per transaction in the in queue (CFM$) only. |
DMLTYPE$$ | The type of DML instruction: 'I' (insert), 'U' (update), or 'D' (delete). |
STORE | Represents the publication item name in the queue control table (C$INQ) only. |
NAME | The publication item name in the temporary table (C$PUB_LIST_Q) only. |
COMP_REF | This value is either 'Y' for yes or 'N' for no, and is a flag used for determining the refresh mode of publication items. |
The following PL/SQL package specification defines the callouts needed by the queue interface:
Sample Code
CREATE OR REPLACE PACKAGE CONS_QPKG AS
   /*
    * notifies that inq has new transaction
    */
   PROCEDURE UPLOAD_COMPLETE(
      CLIENTID IN VARCHAR2,
      TRAN_ID  IN NUMBER   -- IN queue tranid
   );
   /*
    * init data for download
    */
   PROCEDURE DOWNLOAD_INIT(
      CLIENTID  IN VARCHAR2,
      LAST_TRAN IN NUMBER,
      CURR_TRAN IN NUMBER,
      HIGH_PRTY IN VARCHAR2
   );
   /*
    * notifies when all the client's data is sent
    */
   PROCEDURE DOWNLOAD_COMPLETE(
      CLIENTID IN VARCHAR2
   );
END CONS_QPKG;
/
This API call creates a publication item in the form of a queue. This API call registers the publication item and creates CFM$name table as an in queue, if one does not exist.
Syntax
public static void CreateQueuePublicationItem ( String name, String owner, String store, String select_stmt, String pk_columns, String cbk_owner, String cbk_name) throws Throwable
The parameters for CreateQueuePublicationItem
are listed in Table 3-42:
Table 3-42 CreateQueuePublicationItem Parameters
Parameter | Description |
---|---|
name | Defines a new publication item/queue name. |
owner | This is the owner of the base table or view. |
store | This value specifies the name of the base table or view. |
select_stmt | A string specifying a select statement for the new publication item. This statement can include a subscription parameter. |
pk_columns | A comma separated list which creates virtual primary keys. |
cbk_owner | Specifies the callback package owner. For more information, see Section 3.7.12, "Callback Customization for Before and After Compose/Apply". This is an advanced feature. |
cbk_name | Specifies the callback package name. For more information, see Section 3.7.12, "Callback Customization for Before and After Compose/Apply". This is an advanced feature. |
You must provide Consolidator with the primary key of the owner.store
in order to create a queue that can be updated or fast-refreshed. If the store
has no primary key, one can be specified in the pk_columns
parameter. If pk_columns
is null, Consolidator uses the primary key of the store
.
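Example
The following call is a sketch with placeholder names: it registers a queue based publication item over a hypothetical data collection table SURVEY_RESULTS owned by SAMPLE11, with ID as the virtual primary key and a hypothetical callback package SAMPLE11.SURVEY_CBK:
Consolidator.CreateQueuePublicationItem("P_SURVEY", "SAMPLE11", "SURVEY_RESULTS", "SELECT * FROM SAMPLE11.SURVEY_RESULTS", "ID", "SAMPLE11", "SURVEY_CBK");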
The PL/SQL package can be defined outside of the Mobile Server repository if necessary, although in order to function it must still refer to the in queues, out queues, queue control table and temporary table, which are defined inside the repository. The following API calls are used to retrieve the procedure name, register, or remove a procedure.
This registers the string 'pkg' as the current queue package.
Syntax
public String RegisterQueuePkg(String pkg) throws SQLException
Example
Consolidator.RegisterQueuePkg("ASL.QUEUES_PKG");
Mobile Server makes a callout during synchronization indicating whether the client is attempting a null sync. A null sync refers to the fact that the client has no changes to upload. This callout can be implemented by creating a PL/SQL procedure within the Mobile Server repository. The procedure must have the following specification:
create or replace package CUSTOMIZE as
   procedure NullSync(p_Client IN varchar2, p_NullSync IN boolean);
end CUSTOMIZE;
Replicating tables between Oracle database and clients in updatable mode can result in foreign key constraint violations if the tables have referential integrity constraints. When a foreign key constraint violation occurs, the server rejects the client transaction.
For example, two tables EMP and DEPT have referential integrity constraints. The DeptNum (department number) attribute in the DEPT table is a foreign key in the EMP table. The DeptNum value for each employee in the EMP table must be a valid DeptNum value in the DEPT table.
A Mobile Server user adds a new department to the DEPT table, and then adds a new employee to this department in the EMP table. The transaction first updates DEPT and then updates the EMP table. However, the database application does not store the sequence in which these operations were executed.
When the user replicates with the Mobile Server, the Mobile Server updates the EMP table first. In doing so, it attempts to create a new record in EMP with an invalid foreign key value for DeptNum. Oracle database detects a referential integrity violation. The Mobile Server rolls back the transaction and places the transaction data in the Mobile Server error queue. In this case, the foreign key constraint violation occurred because the operations within the transaction are performed out of their original sequence.
You can use a PL/SQL procedure to avoid foreign key constraint violations based on out-of-sequence operations by using DEFERRABLE constraints in conjunction with the BeforeApply
and AfterApply
functions. DEFERRABLE constraints can be either INITIALLY IMMEDIATE or INITIALLY DEFERRED. The behavior of DEFERRABLE INITIALLY IMMEDIATE foreign key constraints is identical to regular immediate constraints. They can be applied interchangeably to applications without impacting functionality.
The Mobile Server calls the BeforeApply
function before it applies client transactions to the server and calls the AfterApply
function after it applies the transactions. Using the BeforeApply
function, you can set constraints to DEFERRED to delay referential integrity checks. After the transaction is applied, call the AfterApply
function to set constraints to IMMEDIATE. At this point, if a client transaction violates referential integrity, it is rolled back and moved into the error queues.
To prevent foreign key constraint violations using DEFERRABLE constraints:
Drop all foreign key constraints and then recreate them as DEFERRABLE constraints (a JDBC sketch of this step appears after the example below).
Bind user-defined PL/SQL procedures to publications that contain tables with referential integrity constraints.
The PL/SQL procedure should set constraints to DEFERRED in the BeforeApply
function and IMMEDIATE in the AfterApply
function, as in the following example, which features a schema named SAMPLE3 and a constraint named address14_fk:
procedure BeforeApply(clientname varchar2) is
   cur integer;
begin
   cur := dbms_sql.open_cursor;
   dbms_sql.parse(cur, 'SET CONSTRAINT SAMPLE3.address14_fk DEFERRED', dbms_sql.native);
   dbms_sql.close_cursor(cur);
end;

procedure AfterApply(clientname varchar2) is
   cur integer;
begin
   cur := dbms_sql.open_cursor;
   dbms_sql.parse(cur, 'SET CONSTRAINT SAMPLE3.address14_fk IMMEDIATE', dbms_sql.native);
   dbms_sql.close_cursor(cur);
end;
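For the first step above, the foreign key from the EMP and DEPT example could be recreated as a DEFERRABLE constraint. The following JDBC sketch is illustrative only; the constraint name, column name, and connection details are assumptions, and the same DDL can be run from any SQL tool instead:

import java.sql.*;

public class MakeConstraintDeferrable
{
   public static void main(String[] args) throws Exception
   {
      Class.forName("oracle.jdbc.driver.OracleDriver");
      // Placeholder connection details for the server database.
      Connection conn = DriverManager.getConnection(
         "jdbc:oracle:thin:@server:1521:orcl", "MASTER", "MASTER");
      Statement st = conn.createStatement();
      // Drop the existing immediate constraint (name assumed for illustration).
      st.executeUpdate("ALTER TABLE EMP DROP CONSTRAINT EMP_DEPTNUM_FK");
      // Recreate it as DEFERRABLE so the BeforeApply procedure can defer the check.
      st.executeUpdate(
         "ALTER TABLE EMP ADD CONSTRAINT EMP_DEPTNUM_FK " +
         "FOREIGN KEY (DEPTNUM) REFERENCES DEPT (DEPTNUM) " +
         "DEFERRABLE INITIALLY IMMEDIATE");
      st.close();
      conn.close();
   }
}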
Mobile Server uses table weight to determine the order in which client operations are applied to master tables. Table weight is expressed as an integer and is implemented as follows:
Client INSERT operations are executed first, from lowest to highest table weight order.
Client DELETE operations are executed next, from highest to lowest table weight order.
Client UPDATE operations are executed last, from lowest to highest table weight order.
In the example listed in Section 3.7.11.1, "Foreign Key Constraint Violation Example", a constraint violation error could be resolved by assigning DEPT a lower table weight than EMP. For example:
(DEPT weight=1, EMP weight=2)
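For example, with these weights a client transaction that inserts a new department and a new employee is applied as an insert into DEPT (weight 1) followed by an insert into EMP (weight 2), while a transaction that deletes both is applied as a delete from EMP (weight 2) followed by a delete from DEPT (weight 1). In both directions the child rows never reference a missing parent row, so the out-of-sequence violation described above is avoided.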
When creating publication items, the user can specify a customizable package to be called during the Apply and Compose phase of the MGP background process. Client data is accumulated in the in queue prior to being processed by the MGP. Once processed by the MGP, data is accumulated in the out queue before being pulled to the client by Mobile Sync.
These procedures enable you to incorporate customized code into the process. The clientname
and tranid
are passed to allow for customization at the user and transaction level.
procedure BeforeApply(clientname varchar2)
This procedure is called before any of the client's data is applied.
procedure AfterApply(clientname varchar2)
This procedure is called after all of the client's data is applied.
procedure BeforeTranApply(tranid number)
This procedure is called before the client's data with the specified tranid is applied.
procedure AfterTranApply(tranid number)
This procedure is called after the client's data with the specified tranid is applied.
procedure BeforeCompose(clientname varchar2)
This procedure is called before the out queue is composed.
procedure AfterCompose(clientname varchar2)
This procedure is called after the out queue is composed.
Once a publication item has been created, a user can use Java to specify a customized PL/SQL procedure which is stored in the Mobile Server repository to be called in place of all DML operations for that publication item. There can be only one mobile DML procedure for each publication item. The procedure should be created with the following structure:
AnySchema.AnyPackage.AnyName(DML in CHAR(1), COL1 in TYPE, COL2 in TYPE, COLn.., PK1 in TYPE, PK2 in TYPE, PKn..)
The parameters for customizing a DML operation are listed in Table 3-43:
Table 3-43 Mobile DML Operation Parameters
Parameter | Description |
---|---|
DML | DML operation for each row. Values can be "D" for DELETE, "I" for INSERT, or "U" for UPDATE. |
COL1 ... COLn | List of columns defined in the publication item. The column names must be specified in the same order that they appear in the publication item query. If the publication item was created with "SELECT * FROM example", the column order must be the same as they appear in the table "example". |
PK1 ... PKn | List of primary key columns. The column names must be specified in the same order that they appear in the base or parent table. |
For example, if you want to have a DML procedure for publication item "example", which is defined by the following query:
select A,B,C from publication_item_example_table
Assuming "A" is the primary key column for "example", then your DML procedure would have the following signature:
any_schema.any_package.any_name(DML in CHAR(1), A in TYPE, B in TYPE, C in TYPE,A_OLD in TYPE)
During runtime this procedure will be called with 'I', 'U', or 'D' as the DML type. For insert and delete operations, A_OLD will be null. In the case of updates, it will be set to the primary key of the row that is being updated. Once the PL/SQL procedure is defined, it can be attached to the publication item through the following API call:
Consolidator.AddMobileDmlProcedure("PUB_example","example","any_schema.any_package.any_name")
where "example"
is the publication item name and "PUB_example"
is the publication name.
Please refer to the Consolidator Admin API Specification for more information on calling this API.
The following piece of PL/SQL code defines an actual DML procedure for a publication item in one of the sample publications. The publication item is based on the ORD_MASTER table, which is described below. The query was defined as:
SQL Statement
SELECT * FROM "ord_master", where ord_master has a single column primary key on "ID"
ord_master Table
SQL> desc ord_master
 Name                                      Null?    Type
 ----------------------------------------- -------- -------------
 ID                                        NOT NULL NUMBER(9)
 DDATE                                              DATE
 STATUS                                             NUMBER(9)
 NAME                                               VARCHAR2(20)
 DESCRIPTION                                        VARCHAR2(20)
Code Example
CREATE OR REPLACE PACKAGE "SAMPLE11"."ORD_UPDATE_PKG" AS procedure UPDATE_ORD_MASTER(DML CHAR,ID NUMBER,DDATE DATE,STATUSNUMBER,NAME VARCHAR2,DESCRIPTION VARCHAR2, ID_OLD NUMBER);END ORD_UPDATE_PKG;/CREATE OR REPLACE PACKAGE BODY "SAMPLE11"."ORD_UPDATE_PKG" as procedure UPDATE_ORD_MASTER(DML CHAR,ID NUMBER,DDATE DATE,STATUSNUMBER,NAME VARCHAR2,DESCRIPTION VARCHAR2, ID_OLD NUMBER) is begin if DML = 'U' then execute immediate 'update ord_master set id = :id, ddate = :ddate,status = :status, name = :name, description = '||''''||'fromord_update_pkg'||''''||' where id = :id_old' using id,ddate,status,name,id_old; end if; if DML = 'I' then begin execute immediate 'insert into ord_master values(:id, :ddate,:status, :name, '||''''||'from ord_update_pkg'||''''||')' using id,ddate,status,name; exception when others then null; end; end if; if DML = 'D' then execute immediate 'delete from ord_master where id = :id' using id; end if; end UPDATE_ORD_MASTER;end ORD_UPDATE_PKG;/
The API call to add this DML procedure is:
Consolidator.AddMobileDMLProcedure("T_SAMPLE11","P_SAMPLE11-M","SAMPLE11.ORD_UPDATE_PKG.UPDATE_ORD_MASTER")
where "T_SAMPLE11"
is the publication name and "P_SAMPLE11-M"
is the publication item name.
A restricting predicate can be assigned to a publication item as it is added to a publication. When a client is synchronizing in high priority mode, the predicate is used to limit data downloaded to the client. This parameter can be null. This parameter is for advanced use. For using a restricting predicate in high-priority replication, see Section 3.7.15, "Priority-Based Replication".
With priority-based replication, you can limit the number of rows per snapshot by setting the flag Priority to 1 (the default is 0).
For example, if you have a snapshot with the following statement:
select * from projects where prio_level in (1,2,3,4)
With the Priority flag set to 0 (the default), all projects with prio_level 1,2,3,4 will be replicated.
In a high priority situation, the application can set the flag to 1, which causes the MGP to check for a Restricting Predicate. A Restricting Predicate is a conditional expression in SQL. The developer can set the Restricting Predicate in the AddPublicationItem() method, as in the following example:
prio_level = 1
MGP appends (AND) the expression to the snapshot definitions when composing data for the client. In this case, the high priority statement would be:
SELECT * FROM projects where prio_level in (1,2,3,4) AND prio_level = 1; // a restricting predicate snapshot
In this case, only projects with prio_level = 1 will be replicated to the client.
This advanced feature is available only through the Consolidator Admin API. It is not available through the Packaging Wizard.
To summarize, there are two steps to enable this feature:
Provide a restricting predicate expression in the AddPublicationItem() function.
Set the PRIORITY flag to 1 in the Mobile Sync API.
This section discusses the shared maps feature in terms of concepts and performance attributes.
Shared maps shrink the size of map tables for large lookup publication items and reduce the MGP compose time. Lookup publication items contain "lookup" data that is not updatable on the clients and that is shared by multiple subscribed clients. When multiple users share the same data, their query subsetting parameters are usually identical.
For example, a query could be the following:
SELECT * FROM EMP WHERE DEPTNO = :dept_id
In the preceding example, all users that share data from the same department have the same value for dept_id
. The default sharing method is based on subscription parameter values.
In the following example, the query is:
SELECT * FROM EMP WHERE DEPTNO = ( SELECT DEPTNO FROM EMP WHERE EMPNO = :emp_id )
In this example, users from the same departments still share data. Their subsetting parameters, however, are not equal because each user has a unique emp_id
. To support the sharing of data for these types of queries (as illustrated by the example), a grouping function can be specified. The grouping function returns a unique group id
based on the client id
.
Shared maps can also be used for shared updatable publication items. This type of usage, however, requires implementation of a custom DML procedure that handles conflict resolution.
The performance of the MGP compose cycle is directly proportional to:
NC * NPI
where:
NC = number of clients.
NPI = number of publication items that must be composed.
With shared maps, the length of the MGP cycle is proportional to:
NC * (NPI - NSPI) + NG * NSPI
where:
NSPI = number of shared publication items.
NG = number of groups.
Note that if NG = NC, the MGP performance is similar in both cases. However, with fewer groups and more shared publication items, the MGP compose cycle becomes faster.
Also note that map storage requirements are governed by the same factors.
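As an illustration with hypothetical numbers, assume 1000 clients (NC), 10 publication items (NPI), 8 of which are shared (NSPI), and 50 groups (NG). Without shared maps, the compose work is proportional to 1000 * 10 = 10,000 units; with shared maps, it is proportional to 1000 * (10 - 8) + 50 * 8 = 2,400 units. The figures are illustrative only, but they show how fewer groups and more shared publication items shorten the MGP compose cycle.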
To set up a publication item to be shared, use the AddPublicationItem
API and enable the shared flag. It is also possible to toggle the shared property of a publication item once it is added to the publication with the SetPublicationItemMetadata
API. Both the AddPublicationItem
API and the SetPublicationItemMetadata
API allow users to specify a PL/SQL grouping function. The function signature must be the following:
( CLIENT in VARCHAR2, PUBLICATION in VARCHAR2, ITEM in VARCHAR2 ) return VARCHAR2.
The returned value must uniquely identify the client's group. For example, if client A belongs to the group GroupA and client B belongs to the group GroupB, the group function F could return:
F ('A','SUBSCRIPTION','PI_NAME') = 'GroupA'
F ('B','SUBSCRIPTION','PI_NAME') = 'GroupB'
The implicit assumption of the grouping function is that all the members of the GroupA group share the same data, and that all the members of the GroupB group share the same data. The group function uniquely identifies a group of users with the same data for a particular publication item.
For the query example in Section 3.7.16.1, "Concepts", the grouping function could be:
Function get_emp_group_id (
   clientid    in varchar2,
   publication in varchar2,
   item        in varchar2
) return varchar2 is
   group_val_id varchar2(30);
begin
   select DEPTNO into group_val_id
     from EMP
    where EMPNO = clientid;
   return group_val_id;
end;
NOTE: This function assumes that EMPNO is the Consolidator client id. If the group_fnc is not specified, the default grouping is based on subscription parameters.
Shared maps are not compatible with raw id based clients prior to 5.0.2.
Those clients are supported; however, the map data is private until the clients migrate to 5.0.2 or later.
The migration of the existing mobile server schema to 10g must be done in the following steps to minimize the number of client complete refreshes.
Run one cycle of MGP.
The clients must sync with the server to get the latest changes prepared by the MGP.
Stop the web server and MGP to migrate the server to 10g. This automatically sets all the nonupdatable publication items to shared items. If any shared publication items need to use grouping functions or any publication items need to change their sharing attribute, execute custom code that calls the appropriate consolidator API. See the SetPublicationItemMetadata
API in Section 3.7.16.3, "Usage".
The ShrinkSharedMaps
consolidator API must be called to set the clients to use shared map data and remove old redundant data from the maps.
Start the web server and MGP.
For compatibility with Oracle database advanced replication, the Mobile Server treats the case in which the client updates a row at the same time that the server deletes it as a synchronization error rather than a conflict. All other errors, such as nullity violations or foreign key constraint violations, are also synchronization errors.
The Mobile Server does not automatically resolve synchronization errors. Instead, the Mobile Server rolls back the corresponding transactions, and moves the transaction operations into the Mobile Server error queue. Later, Mobile Server database administrators can change these transaction operations and re-execute or purge them from the error queue.
A Mobile Server synchronization conflict occurs if:
The client and the server update the same row.
The client and server create rows with the same primary key values.
The client deletes the same row that the server updates.
See Section 3.8.3, "Resolving Conflicts Using the Error Queue" for more information on conflict resolution techniques.
The Mobile Server uses internal versioning to detect synchronization conflicts. A version number is maintained for each client record as well as for each server record. When a client's changes are applied to the server, the Mobile Server will detect version mismatches and resolve conflicts according to winning rules.
The Mobile Server uses winning rules to automatically resolve synchronization conflicts. The following winning rules are supported:
Client wins
Server wins
When the client wins, the Mobile Server automatically applies client changes to the server. When the server wins, the Mobile Server automatically composes changes for the client.
You can customize the Mobile Server's conflict resolution mechanism by setting the winning rule to "Client Wins" and attaching BEFORE INSERT, UPDATE, and DELETE triggers to database tables. The triggers compare old and new row values and resolve client changes as specified.
For each publication item created, a separate and corresponding error queue is created. The purpose of this queue is to store transactions that fail due to unresolved conflicts. The administrator can attempt to resolve the conflicts, either by modifying the error queue data or the server data, and then attempt to re-apply the transaction through the ExecuteTransaction
API call. The administrator may also purge the error queues through the PurgeTransaction
API call. The Mobile Server error queue is C$EQ; the data is stored in CEQ$.
The execute transaction function re-executes transactions in the Mobile Server error queue.
Syntax
public static void ExecuteTransaction (String clientid, long tid) throws Throwable
The parameters for ExecuteTransaction
are listed in Table 3-44:
Table 3-44 ExecuteTransaction Parameters
Parameter | Description |
---|---|
clientid | The Mobile Sync Client name. |
tid | The transaction ID. These are generated identifiers which appear in the error queue. |
Example
Consolidator.ExecuteTransaction("DAVIDL", 100002);
The purge transaction function purges a transaction from the Mobile Server error queue.
Syntax
public static void PurgeTransaction (String clientid, long tid) throws Throwable
The parameters for PurgeTransaction
are listed in Table 3-45:
Table 3-45 PurgeTransaction Parameters
Parameter | Description |
---|---|
clientid | The Mobile Server user name. |
tid | The transaction ID. These are generated identifiers which appear in the error queue. |
Example
Consolidator.PurgeTransaction("DAVIDL", 100001);
All synchronization parameters must be set in the POLITE.INI
or polite.txt file. To counter space constraints for the storage card on the WinCE platform, you can utilize the Temp
directory. To begin using the TEMP directory, add the following entry under the ALL DATABASES
section.
TEMPDIR=\Storage Card\Temp
The Oracle database and Oracle Database Lite tables that the Mobile Server synchronizes must use compatible datatypes. Oracle database datatypes are compatible with Oracle Database Lite datatypes.
All Oracle Database Lite based snapshots are created by the Mobile Sync during synchronization. The Mobile Server automatically selects Oracle Database Lite datatypes depending on data precision in the Oracle database. The data conversion values are listed in Table 3-46. The table lists the Oracle database datatypes in the left column and displays the Oracle Database Lite datatypes across the top row.
For Oracle Database Lite Datatypes, see Appendix B in the Oracle Database Lite SQL Reference.
Table 3-46 Oracle Database Lite Datatypes
Oracle Database Datatypes | 1 B | 2 B | 4 B | FLOAT | DOUBLE | NUMBER | DATETIME | LONGVARBINARY | VARCHAR |
---|---|---|---|---|---|---|---|---|---|
INTEGER | Y | Y | Y | Y | Y | Y | N | N | N |
VARCHAR2 | N | N | N | N | N | Y | N | N | Y |
VARCHAR | N | N | N | N | N | Y | N | N | Y |
CHAR | N | N | N | N | N | Y | N | N | Y |
SMALLINT | Y | Y | Y | Y | Y | Y | N | N | N |
FLOAT | Y | Y | Y | Y | Y | Y | N | N | N |
DOUBLE PRECISION | Y | Y | Y | Y | Y | Y | N | N | N |
NUMBER | Y | Y | Y | Y | Y | Y | N | N | N |
DATE | N | N | N | N | N | Y | Y | N | N |
LONG RAW | N | N | N | N | N | Y | N | Y | N |
LONG | N | N | N | N | N | Y | N | N | Y |
BLOB | N | N | N | N | N | Y | N | Y | N |
CLOB | N | N | N | N | N | Y | N | N | N |
"Y" indicates unconditionally supported and "N" indicates not supported. In the first three columns that are labeled (because of space limitations), 1 B represents TINYINT, 2 B represents SMALLINT, and 4 B represents INTEGER.