In This Section:
This appendix uses a worksheet approach to help you keep track of the many components that you calculate. If you are using a printed version of this book, you can photocopy the worksheets. Otherwise, you can simulate the worksheets on your own paper. Labels, such as DA and MA, help you keep track of the various calculated disk and memory component values.
To size a database, you must understand the units of storage that Essbase uses. This discussion assumes that you are familiar with the following basic concepts:
How Essbase stores data, as described in Understanding Multidimensional Databases
How Essbase Kernel components manage Essbase data, as described in Managing Database Settings
An Essbase database comprises many components. In addition to an outline file and data file, Essbase uses several types of files and memory structures to manage data storage, calculation, and retrieval operations.
Table 146 describes the major components to consider when you estimate the disk and memory requirements of a database. “Yes” means that the indicated type of storage is relevant; “No” means that it is not.
Table 146. Storage Units Relevant to Calculation of Disk and Memory Requirements
Essbase uses disk space for its server software and for each database. Before estimating disk storage requirements for a database, you must know how many dimensions the database includes, the sparsity and density of the dimensions, the number of members in each dimension, and how many of the members are stored members.
To calculate the disk space required for a database:
Calculate the factors identified in Calculating the Factors to Be Used in Sizing Disk Requirements.
Use the process described in Estimating Disk Space Requirements for One Database to calculate the space required for each component of a single database. If a server contains more than one database, you must perform calculations for each database.
Use the procedure outlined in Estimating the Total Essbase Server Disk Space Requirement to calculate the final estimate for the server.
Before estimating disk space requirements for a database, you must calculate the factors to be used in calculating the estimate. Later in the chapter, you use these values to calculate the components of a database. For each database, you add together the sizes of its components.
Table 147 lists the sections that provide instructions to calculate these factors. Go to the section indicated, perform the calculation, then write the calculated value in the Value column.
The potential number of data blocks is the maximum number of data blocks possible in the database.
If the database is already loaded, you can see the potential number of blocks on the Statistics tab of the Database Properties dialog box of Administration Services.
If the database is not already loaded, you must calculate the value.
To determine the potential number of data blocks, assume that data values exist for all combinations of stored members.
Using Table 148, List of Sparse Dimensions with Numbers of Stored Members as a worksheet, list each sparse dimension and its number of stored members. If more than seven sparse dimensions exist, list the dimensions elsewhere and include all sparse dimensions in the calculation.
Multiply the number of stored members of the first sparse dimension (line a.) by the number of stored members of the second sparse dimension (line b.) by the number of stored members of the third sparse dimension (line c.), and so on. Write the resulting value to the cell labeled DA in Table 147, Factors Affecting Disk Space Requirements of a Database.
a * b * c * d * e * f * g (and so on) = potential number of blocks
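The multiplication above can be sketched as a short Python function. The dimension names and stored-member counts in the example call are illustrative only, not taken from any particular database:

```python
from math import prod

def potential_blocks(stored_members_per_sparse_dim):
    """Multiply the stored-member counts of all sparse dimensions to get
    the maximum possible number of data blocks (the value labeled DA)."""
    return prod(stored_members_per_sparse_dim)

# Hypothetical database with three sparse dimensions:
# Product (19 stored members), Market (25), Channel (4)
print(potential_blocks([19, 25, 4]))  # 1900
```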
The Sample.Basic database contains the following sparse dimensions:
In contrast to the potential number of blocks, the number of existing blocks refers to the data blocks that Essbase actually creates. For Essbase to create a block, at least one value must exist for a combination of stored members from sparse dimensions. Because many combinations can be missing, the number of existing data blocks is usually much less than the potential number of data blocks.
To see the number of existing blocks for a database that is already loaded, look for the number of existing blocks on the Statistics tab of the Database Properties dialog box of Administration Services. Write the value in the cell labeled DB in Table 147, Factors Affecting Disk Space Requirements of a Database.
If the database is not already loaded, you must estimate a value.
To estimate the number of existing data blocks:
Estimate a database density factor that represents the percentage of sparse dimension stored-member combinations that have values.
Multiply this percentage by the potential number of data blocks and write the result to the cell labeled DB in Table 147, Factors Affecting Disk Space Requirements of a Database.
number of existing blocks = estimated density * potential number of blocks
The following three examples show different levels of sparsity and assume 100,000,000 potential data blocks:
Extremely sparse: 5 percent of potential data cells exist.
.05 (estimated density) * 100,000,000 (potential blocks) = 5,000,000 existing blocks
Sparse: 15 percent of potential data cells exist.
.15 (estimated density) * 100,000,000 (potential blocks) = 15,000,000 existing blocks
Dense: 50 percent of potential data cells exist.
.50 (estimated density) * 100,000,000 (potential blocks) = 50,000,000 existing blocks
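The three examples above follow the same pattern, sketched here in Python (`round` is used because the density factor is only an estimate):

```python
def existing_blocks(estimated_density, potential_blocks):
    """Estimate the number of blocks Essbase actually creates (the value
    labeled DB) by applying a density factor to the potential block count."""
    return round(estimated_density * potential_blocks)

# The three sparsity levels above, against 100,000,000 potential blocks:
for density in (0.05, 0.15, 0.50):
    print(density, existing_blocks(density, 100_000_000))
```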
The potential, expanded (uncompressed) size of each data block is based on the number of cells in a block and the number of bytes used for each cell. The number of cells is based on the number of stored members in the dense dimensions. Essbase uses eight bytes to store each intersecting value in a block.
For a database that is already loaded, you can see the size of an expanded data block on the Statistics tab of the Database Properties dialog box of Administration Services.
If the database is not loaded, you must estimate the value.
To determine the size of an expanded data block:
Using Table 149, Determining the Size of a Data Block as a worksheet, enter each dense dimension and its number of stored members. If more than seven dense dimensions exist, list the dimensions elsewhere, and include all dense dimensions in the calculation.
Multiply the number of stored members of the first dense dimension (line a) by the number of stored members of the second dense dimension (line b) by the number of stored members of the third dense dimension (line c), and so on, to determine the total number of cells in a block.
a * b * c * d * e * f * g (and so on) = the total number of cells
Multiply the resulting number of cells by 8 bytes to determine the expanded block size. Write the resulting value to the cell labeled DC in Table 147, Factors Affecting Disk Space Requirements of a Database.
(Total number of cells) * 8 bytes per cell = expanded block size
The Sample.Basic database contains the following dense dimensions:
Perform the following calculations to determine the potential size of a data block in Sample.Basic:
12 * 8 * 2 = 192 data cells
192 data cells * 8 bytes = 1,536 bytes (potential data block size)
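The Sample.Basic calculation above can be sketched in Python; the stored-member counts (12 for Year, 8 for Measures, 2 for Scenario) are taken from the example:

```python
from math import prod

BYTES_PER_CELL = 8  # Essbase stores each intersecting value as 8 bytes

def expanded_block_size(stored_members_per_dense_dim):
    """Number of cells in a block times 8 bytes gives the uncompressed
    (expanded) block size, the value labeled DC."""
    return prod(stored_members_per_dense_dim) * BYTES_PER_CELL

# Sample.Basic dense dimensions: Year (12), Measures (8), Scenario (2)
print(expanded_block_size([12, 8, 2]))  # 1536
```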
Compression affects the actual disk space used by a data file. The four types of compression (bitmap, run-length encoding (RLE), zlib, and index-value) affect disk space differently. For a discussion of data compression unrelated to estimating size requirements, see Data Compression.
If you are not using compression, or if you have enabled RLE compression, skip this calculation and proceed to Stored Data Files.
Because sparsity also exists within blocks, actual (compressed) block density varies widely from block to block. The calculations in this discussion are only for estimation purposes.
To calculate an average compressed block size when bitmap compression is enabled:
Determine an average block density value.
If the database is loaded, you can see the average block density on the Statistics tab of the Database Properties dialog box of Administration Services. Use the value that is displayed for Block Density.
If you want to estimate block density prior to loading data, estimate the ratio of existing data values to potential data values.
To determine the compressed block size, perform the following calculation and write the resulting block size to the cell labeled DD in Table 147, Factors Affecting Disk Space Requirements of a Database.
expanded block size * block density = compressed block size
Assume an expanded block size of 1,536 bytes and a block density of 25% (.25):
1,536 bytes * .25 = 384 bytes (compressed block size)
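The example above, as a minimal sketch:

```python
def compressed_block_size(expanded_size, block_density):
    """Approximate average bitmap-compressed block size (value DD):
    expanded block size scaled by the average block density."""
    return expanded_size * block_density

# The example above: a 1,536-byte expanded block at 25% density
print(compressed_block_size(1536, 0.25))  # 384.0
```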
To estimate the disk-space requirement for a database, make a copy of Table 150 or use a separate sheet of paper as a worksheet for one database. If multiple databases are on a server, repeat this process for each database. Write the name of the database on the worksheet.
Each row of this worksheet refers to a section that describes how to size that component. Perform each calculation and write the results in the appropriate cell in the Size column. The calculations use the factors that you wrote in Table 147, Factors Affecting Disk Space Requirements of a Database.
Table 150. Worksheet for Estimating Disk Requirements for a Database
Database Component | Size
---|---
Stored data files (see Stored Data Files) | DE
Index files (see Index Files) | DF
Fragmentation allowance | DG
Outline | DH
Work Areas (sum of DE through DH) | DI
Linked Reporting Objects Considerations, if needed | DJ
Total disk space required for the database. Total the size values from DE through DJ and write the result to Table 151, Worksheet for Total Server Disk Space Requirement. |
After writing all the sizes in the Size column, add them together to determine the disk space requirement for the database. Add the database name and size to the list in Table 151, Worksheet for Total Server Disk Space Requirement. Table 151 is a worksheet for determining the disk space requirement for all databases on the server.
Repeat this exercise for each database on the server. After estimating disk space for all databases on the server, proceed to Estimating the Total Essbase Server Disk Space Requirement.
The following sections describe the calculations to use to estimate components that affect the disk-space requirements of a database.
The size of the stored database depends on whether the database is compressed and the compression method chosen for the database. Essbase provides five compression-method options: bitmap, run-length encoding (RLE), zlib, index-value, and none.
Calculating the size of a compressed database is complex for a number of reasons including the following:
For a comprehensive discussion of data compression unrelated to estimating size requirements, see Data Compression. The calculations in this discussion are for estimation purposes only.
The calculation for the space required to store the compressed data files (essxxxxx.pag) uses the following factors:
Number of existing blocks (value DB from Table 147, Factors Affecting Disk Space Requirements of a Database)
Size of expanded data block (value DC from Table 147, Factors Affecting Disk Space Requirements of a Database)
Database density: the percentage of sparse dimension stored-member combinations that have values. (To calculate, see Number of Existing Data Blocks.)
To calculate database size when the compression option is none, use the formula:
Number of blocks * (72 bytes + size of expanded data block)
Write the result in the cell labeled DE in Table 150, Worksheet for Estimating Disk Requirements for a Database. Proceed to Index Files.
Because the compression method used can vary per block, the following calculation formulas are general estimates of the database size. Actual implementation could result in numbers larger or smaller than the calculations.
To estimate database size when the compression option is bitmap, use the formula:
Number of blocks * (72 bytes + (average size in bytes of an expanded data block) * (1/64 + proportion of cells that are not #Missing))
For example, if there are 1,000 blocks, the average expanded block size is 2,000 bytes, and 30% of cells are #Missing, the result is:
1,000 * (72 + 2,000 * (1/64 + 0.7)) = 1,000 * 1,503 = 1,503,000 bytes (approximately)
Write the result in the cell labeled DE in Table 150, Worksheet for Estimating Disk Requirements for a Database. Proceed to Index Files.
To estimate database size when the compression option is index-value, use the formula:
Number of blocks * (72 bytes + (1.5 * database density * expanded data block size))
Write the result in the cell labeled DE in Table 150, Worksheet for Estimating Disk Requirements for a Database. Proceed to Index Files.
To estimate database size when the compression option is RLE, use the formula for calculating Bitmap Compression.
When the compression method is RLE, Essbase automatically uses the bitmap or index-value method for a block if it determines that better compression can be gained. Estimating using the bitmap calculation estimates the maximum size.
Write the result in the cell labeled DE in Table 150, Worksheet for Estimating Disk Requirements for a Database. Proceed to Index Files.
To estimate database size when the compression option is zlib, use the formula for calculating Bitmap Compression.
Determining the size of a data block when zlib compression is used is difficult. Individual blocks could be larger or smaller than if compressed using other compression types. Calculating using the bitmap compression formula at least provides an approximation to use for this exercise.
Write the result in the cell labeled DE in Table 150, Worksheet for Estimating Disk Requirements for a Database. Proceed to Index Files.
The calculation for the space required to store the index files (essxxxxx.ind) uses the following factors:
Number of existing blocks (value DB from Table 147, Factors Affecting Disk Space Requirements of a Database)
The minimum size for the index is 8,216,576 bytes (8 MB). To calculate the size of a database index, including all index files, perform the following calculation:
number of existing blocks * 112 bytes = the size of database index
In the cell labeled DF in Table 150, Worksheet for Estimating Disk Requirements for a Database, write whichever is larger: the result of the calculation or 8,216,576.
Assume a database with 15,000,000 blocks.
15,000,000 blocks * 112 = 1,680,000,000 bytes
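The index-sizing rule above, including the stated minimum, can be sketched as:

```python
MIN_INDEX_BYTES = 8_216_576  # minimum index size stated above
INDEX_ENTRY_BYTES = 112      # bytes of index space per existing block

def index_size_bytes(existing_blocks):
    """Size of the database index (value DF): 112 bytes per existing
    block, but never less than the minimum index size."""
    return max(existing_blocks * INDEX_ENTRY_BYTES, MIN_INDEX_BYTES)

print(index_size_bytes(15_000_000))  # 1680000000
print(index_size_bytes(10_000))      # 8216576 (the minimum applies)
```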
If you are using bitmap or RLE compression, some fragmentation occurs, the amount of which is based on individual database and operating system configurations and cannot be precisely predicted.
As a rough estimate, calculate 20% of the compressed database size (value DE from Table 150, Worksheet for Estimating Disk Requirements for a Database) and write the result to the cell labeled DG in the same table.
Calculating fragmentation allowance assuming a compressed database size of 5,769,000,000 bytes:
5,769,000,000 bytes * .2 = 1,153,800,000 bytes
The space required by an outline can have two components.
To estimate the size of the outline file:
Estimate the main area of the outline by multiplying the number of members by a name-length factor between 350 and 450 bytes.
If the database includes few aliases or very short aliases and short member names, use a smaller number within this range. If you know that the member names or aliases are very long, use a larger number within this range.
Because the name-length factor is an estimated average, the following formula provides only a rough estimate of the main area of the outline.
number of members * name-length factor = size of main area of outline
See Limits, for the maximum sizes for member names and aliases.
For memory space requirements calculated later in this chapter, use the size of the main area of the outline.
For disk space requirements, if the outline includes attribute dimensions, calculate the size of the attribute association area for the database. Calculate this area for each base dimension: multiply the number of members of the base dimension by the total number of members of all attribute dimensions associated with it, and then divide by 8.
(number of base-dimension members * sum of count of attribute-dimension members)/8 = size of attribute association area for a base dimension
Sum the attribute association areas of each dimension to determine the total attribute association area for the outline.
For the total disk space required for the outline, add together the main outline area and the attribute association area, and write the result of this calculation to the cell labeled DH in Table 150, Worksheet for Estimating Disk Requirements for a Database.
main area of outline + total attribute association area
Assume the outline has the following characteristics:
A base dimension Product (23,000 members—excluding Label Only members and shared members) with two attribute dimensions associated with it—Ounces (20 members) and Pkg Type (50 members)
A base dimension Market (2,500 members—excluding Label Only members and shared members) with one associated attribute dimension, Population (12 members)
Perform the following calculations:
Calculate the main area of the outline:
name-length factor of 400 bytes * 26,000 members = 10,400,000 bytes
Calculate the attribute association area for each base dimension:

(23,000 Product members * (20 + 50) attribute-dimension members) / 8 = 201,250 bytes
(2,500 Market members * 12 attribute-dimension members) / 8 = 3,750 bytes

Sum these areas for the total attribute association area for the database:

201,250 bytes + 3,750 bytes = 205,000 bytes
For a total estimate of outline disk space, add the main area of the outline and the total attribute association area:
10,400,000 bytes + 205,000 bytes = 10,605,000 bytes (outline disk space requirement)
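The outline disk-space example above can be reproduced with a short sketch. The name-length factor of 400 bytes and the member counts are taken from the example:

```python
def outline_disk_size(num_members, name_length_factor, base_dims):
    """Main outline area plus attribute association areas (value DH).
    base_dims is a list of (base-member count, [attribute-dimension
    member counts]) pairs, one per base dimension."""
    main_area = num_members * name_length_factor
    assoc_area = sum(base_members * sum(attr_members) / 8
                     for base_members, attr_members in base_dims)
    return main_area + assoc_area

# The example above: 26,000 members at 400 bytes each; Product (23,000)
# with Ounces (20) and Pkg Type (50); Market (2,500) with Population (12)
print(outline_disk_size(26_000, 400,
                        [(23_000, [20, 50]), (2_500, [12])]))  # 10605000.0
```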
Do not use this procedure to calculate outline memory space requirements. Use the process described in Outline Size Used in Memory.
Three processes create temporary work areas on the disk:
For recovery purposes, Essbase maintains a data recovery area on the disk. The size of this area increases until the database is restructured.
During restructuring, Essbase uses a restructuring work area on the disk.
During upgrade from prior releases of Essbase, for recovery purposes, Essbase creates a copy of the database in a migration work area.
To create these temporary work areas, Essbase may require disk space equal to the size of the entire database. Restructuring and migration need additional work space the size of the outline. Because none of these activities occur at the same time, a single allocation can represent all three requirements.
To calculate the size of a work area used for restructuring, migration, and recovery, calculate the sum of the sizes of the following database components from Table 150, Worksheet for Estimating Disk Requirements for a Database:
Use the following formula to calculate the size of the work area:
work area = size of compressed data files + size of index files + fragmentation allowance + outline size
Write the result of this calculation to the cell labeled DI in Table 150, Worksheet for Estimating Disk Requirements for a Database.
You can use the Linked Reporting Objects (LROs) feature to associate objects (also called artifacts) with data cells. The objects can be flat files, HTML files, graphics files, and cell notes. See Linking Objects to Essbase Data.
Two aspects of LROs affect disk space:
The size of the object. Because Essbase copies files used as LROs to the server, you must know the combined size of all files you are using as LROs.
You can set a limit on the size of a linked object, if the linked object is a file (as opposed to a cell note). See Limiting LRO File Sizes.
The size of the LRO catalog, where the Essbase Kernel stores information about LROs. Cell notes and URLs are also stored in the catalog. A catalog entry is stored as an index page. For every catalog entry, Essbase uses 8 KB.
To estimate the disk space requirements for LROs:
Estimate the size of the objects. If a limit is set, multiply the number of LROs by that limit. Otherwise, sum the size of all anticipated LROs.
Size the LRO catalog. Multiply the total number of LROs by 8,192 bytes.
Add together the two areas and write the result of this calculation to the cell labeled DJ in Table 150, Worksheet for Estimating Disk Requirements for a Database.
Assume the database uses 1,500 LROs, which comprise the following:
Perform the following calculations:
Multiply 1,000 URLs * 512 bytes = 512,000 bytes maximum required for the stored URLs.
Calculate the size of the LRO catalog. Multiply 1,500 total LROs * 8,192 bytes = 12,288,000 bytes.
Add together the two areas; for example:
512,000 bytes + 12,288,000 bytes = 12,800,000 bytes total LRO disk space requirement
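The LRO example above, as a sketch:

```python
CATALOG_ENTRY_BYTES = 8_192  # 8 KB of catalog space per LRO

def lro_disk_size(num_lros, object_bytes):
    """Object storage plus catalog space (value DJ): the combined size
    of the stored objects plus 8,192 bytes per catalog entry."""
    return object_bytes + num_lros * CATALOG_ENTRY_BYTES

# The example above: 1,500 LROs, of which 1,000 are URLs capped at 512 bytes
url_bytes = 1_000 * 512
print(lro_disk_size(1_500, url_bytes))  # 12800000
```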
The earlier calculations in this chapter estimate the data storage requirement for a single database. Often, more than one database resides on the server.
In addition to the data storage required for each database, the total Essbase data storage requirement on a server includes Essbase software. Allow approximately 200 MB (209,715,200 bytes) for the base installation of Essbase software and sample applications. The allowance varies by platform and file system. See the Oracle Hyperion Enterprise Performance Management System Installation Start Here.
To estimate the total server disk space requirement:
In the worksheet in Table 151, Worksheet for Total Server Disk Space Requirement, list the names and disk space requirements that you calculated for each database.
Sum the database requirements and write the total in bytes in the cell labeled DL.
In the cell labeled DM, write the disk space requirement for the software installed on the server; for example, 209,715,200 bytes.
For the total server disk space requirement in bytes, sum the values in cells DL and DM. Write this value in the cell labeled DN.
Convert the total in cell DN to MB by dividing the value by 1,048,576 bytes. Write this value in the cell labeled DO.
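Steps DL through DO above can be sketched in a few lines. The two database sizes in the example call are hypothetical:

```python
SOFTWARE_BYTES = 209_715_200  # approximate base-installation allowance (DM)

def server_disk_requirement_mb(database_bytes):
    """Sum the per-database requirements (DL), add the software
    allowance (DM) for the total in bytes (DN), then convert to
    megabytes (DO) by dividing by 1,048,576."""
    total_bytes = sum(database_bytes) + SOFTWARE_BYTES
    return total_bytes / 1_048_576

# Two hypothetical databases of roughly 5.4 GB and 1.2 GB:
print(round(server_disk_requirement_mb([5_369_000_000, 1_200_000_000])))  # 6465
```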
Table 151. Worksheet for Total Server Disk Space Requirement
Component | Value
---|---
List of databases (from Table 150, Worksheet for Estimating Disk Requirements for a Database) |
Total disk requirement of all databases, in bytes | DL
Disk space required for the server software | DM
Total Essbase Server disk requirement in bytes (DL + DM) | DN
Total Essbase Server disk requirement in megabytes (MB): DN divided by 1,048,576 bytes | DO
The minimum memory requirement for running Essbase is 64 MB. On UNIX systems, the minimum requirement is 128 MB. Based on the number of applications and databases and the database operations on the server, the memory you require may be greater.
To estimate the memory required on Essbase Server, use Table 155, Worksheet for Total Server Memory Requirement, to collect and total server memory requirements. To calculate the values for this worksheet, review the following topics.
You must calculate overall memory used at application startup plus the memory requirements for each database.
Each open application has the following memory requirements at startup:
Multiply the number of applications that will be running simultaneously on the server by the startup requirement and write the resulting value in the cell labeled ML in Table 155, Worksheet for Total Server Memory Requirement.
Calculate memory requirements for each database on Essbase Server.
For each database, make a copy of Table 152 or use a separate sheet of paper as a worksheet for one database. If multiple databases are on Essbase Server, repeat this process for each database. Write the database name on the worksheet.
Each row links to information that describes how to size that component. Perform each calculation and note the results in the appropriate cell in the Size column. Some calculations use the factors that you wrote in Table 153, Factors Used to Calculate Database Memory Requirements. After filling in all the sizes in the Size column, total them to determine the memory requirement for that database.
After estimating memory for all databases on the server, enter the totals in Table 155, Worksheet for Total Server Memory Requirement.
Table 152. Worksheet for Estimating Memory Requirements for a Database
Database Component | Size
---|---
Database outline | MA
Index cache. See Sizing Caches. | MB
Cache-related overhead | MC
Area for data structures | MD
Memory used for data retrievals. See Estimating Additional Memory Requirements for Data Retrievals. | ME
Memory used for calculations. See Estimating Additional Memory Requirements for Calculations. | MF
Total the size values from MA through MF for an estimate of the total memory required for the database. | MG
Divide the value from MG by 1,048,576 bytes for the total database memory requirement in megabytes (MB). | MH
In Table 155, Worksheet for Total Server Memory Requirement, enter the name of the database and the total memory requirement in MB, MH.
Before you start the estimate, calculate the factors that the estimate uses.
Table 153, Factors Used to Calculate Database Memory Requirements, lists sizing factors with references to sections in this chapter and other chapters that provide information to determine these sizes. Go to the section indicated, perform the calculation, and return to Table 153. Write the size, in bytes, in the Value column of this table.
Later in this chapter, you can refer to Table 153 for values to use in various calculations.
Table 153. Factors Used to Calculate Database Memory Requirements
Factor | Value
---|---
The number of cells in a logical block | MI
The number of threads allocated through the SERVERTHREADS configuration setting. See the Oracle Essbase Technical Reference. | MJ
The calculations in this chapter do not account for other factors that affect how much memory is used. The following factors have complex implications and are not included in the discussion:
Cache memory locking. Whether cache memory locking is enabled affects how the operating system and Essbase manage memory. See Deciding Whether to Use Cache Memory Locking.
Different operation types and their associated cache allocations. Data load, data retrieval, and calculation operations set aside memory for the data file, data, and calculation caches, plus some overhead associated with the caches.
The sizes of the retrieval buffer and the retrieval sort buffer. See Changing Buffer Size for a discussion of the significance of the size of the retrieval buffer and the retrieval sort buffer.
Database workload; for example, the complexity of a calculation script or the number and complexity of data queries.
The number of data blocks defined using the CALCLOCKBLOCK setting in the essbase.cfg file in combination with the SET LOCKBLOCK setting, which specifies which CALCLOCKBLOCK setting to use.
The number of Dynamic Calc members in the outline, including members of attribute dimensions.
The attribute association area included in disk space calculations is not a sizing factor for memory. Calculate only the main area of the outline.
For memory size requirements, outline size is calculated using the following factors:
The number of members in the outline
The length, in characters, of member names and aliases
To calculate the outline memory requirement, multiply the number of members by a name-length factor of 300 bytes–400 bytes and write the result in the cell labeled MA in Table 152, Worksheet for Estimating Memory Requirements for a Database.
If the database includes few aliases or very short aliases and short member names, use a smaller number in the 300 byte–400 byte range. If you know that the names or aliases are very long, use a larger number within this range.
Because the name-length factor is an estimated average, the following formula provides only a rough estimate of the main area of the outline:
memory size of outline = number of members * name-length factor
Note:
See Limits, for the maximum sizes for member names and aliases.
Assuming that the outline has 26,000 members and a median name-length, use the following calculation to estimate the outline size used in memory:
26,000 members * 350 bytes per member = 9,100,000 bytes
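The memory-sizing calculation above, as a sketch; note that, unlike the disk calculation, it uses only the main outline area:

```python
def outline_memory_bytes(num_members, name_length_factor=350):
    """Main outline area in memory (value MA). The attribute
    association area is excluded from memory sizing."""
    return num_members * name_length_factor

# The example above: 26,000 members at a median name-length factor
print(outline_memory_bytes(26_000))  # 9100000
```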
At startup, Essbase sets aside memory for the index cache, the size of which can be user-specified. To determine the size of the index cache, see Sizing Caches and write the size in the cell labeled MB in Table 152, Worksheet for Estimating Memory Requirements for a Database.
Essbase uses additional memory while it works with the caches.
The calculation for this cache-related overhead uses the following factors:
Index cache (value MB from Table 152, Worksheet for Estimating Memory Requirements for a Database)
The number of server threads (value MJ from Table 153, Factors Used to Calculate Database Memory Requirements)
To calculate the cache-related overhead at startup:
Calculate half the index cache size, in bytes.
index cache size * .5 = index cache-related overhead
Calculate additional cache overhead in bytes using the following formula:
((# of server threads allocated to the Essbase Server process * 3) * 256) + 5242880 bytes = additional cache overhead
Sum the index cache overhead plus the additional cache overhead. Write the result to the cell labeled MC in Table 152, Worksheet for Estimating Memory Requirements for a Database.
cache-related overhead = index cache-related overhead + additional cache overhead
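The two-step overhead calculation above can be sketched as follows; the index-cache size and thread count in the example call are hypothetical:

```python
def cache_overhead_bytes(index_cache_bytes, server_threads):
    """Startup cache-related overhead (value MC): half the index cache,
    plus a per-thread allowance, plus a fixed 5,242,880-byte overhead."""
    index_overhead = index_cache_bytes * 0.5
    additional = (server_threads * 3) * 256 + 5_242_880
    return index_overhead + additional

# Hypothetical: a 10 MB (10,485,760-byte) index cache and 20 server threads
print(int(cache_overhead_bytes(10_485_760, 20)))  # 10501120
```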
The term logical block refers to an expanded block in memory.
To determine the cell count of a logical block, multiply together the numbers of members of the dense dimensions (including Dynamic Calc and Dynamic Calc and Store members but excluding Label Only and shared members):
Using Table 154, Determining the Number of Cells in a Logical Block as a worksheet, enter each dense dimension and its number of members, excluding Label Only and shared members. If there are more than seven dense dimensions, list the dimensions elsewhere and include all dense dimensions in the calculation.
Multiply the number of members of the first dense dimension (line a.) by the number of members of the second dense dimension (line b.) by the number of members of the third dense dimension (line c.), and so on, to determine the total number of cells in a logical block. Write the result to the cell labeled MI in Table 153, Factors Used to Calculate Database Memory Requirements.
Excluding Label Only and shared members, the dense dimensions in Sample.Basic contain 17 (Year), 14 (Measures), and 4 (Scenario) members. The calculation for the cell count of a logical block in Sample.Basic:
17 * 14 * 4 = 952 cells
At application startup, Essbase sets aside an area of memory based on the following factors:
The number of cells in a logical block (value MI in Table 153, Factors Used to Calculate Database Memory Requirements)
The number of threads on the server (value MJ in Table 153)
To calculate the data structure area in memory:
Use the following formula to calculate the size in bytes:
Number of threads * ((Number of members in the outline * 26 bytes) + (Logical block cell count * 36 bytes))
Write the result in the cell labeled MD in Table 152, Worksheet for Estimating Memory Requirements for a Database.
Assuming 20 threads for the Sample.Basic database, the startup area in memory required for data structures is calculated:
20 threads * ((79 members * 26 bytes) + (952 cells * 36 bytes)) = 726,520 bytes
726,520 bytes / 1,048,576 bytes = .7 MB
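The Sample.Basic data-structure calculation above, as a sketch:

```python
def data_structure_bytes(threads, outline_members, logical_block_cells):
    """Startup memory for data structures (value MD): per thread,
    26 bytes per outline member plus 36 bytes per logical-block cell."""
    return threads * (outline_members * 26 + logical_block_cells * 36)

# Sample.Basic: 20 threads, 79 outline members, 952 cells per logical block
size = data_structure_bytes(20, 79, 952)
print(size, round(size / 1_048_576, 1))  # 726520 0.7
```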
In addition to startup memory requirements, operations such as queries and calculations require additional memory. Because of many variables, the only way to estimate memory requirements of operations is to run sample operations and monitor the memory used during them. The following sections provide guidelines.
Essbase processes requests for database information (queries) from many sources. For example, Essbase processes queries from Oracle's Hyperion® Essbase® Spreadsheet Toolkit and from Report Writer. Essbase uses additional memory when it retrieves the data for these queries, especially when Essbase must perform dynamic calculations to retrieve the data. This section describes Essbase memory requirements for query processing.
Essbase is a multi-threaded application in which queries get assigned to threads. Threads are automatically created when Essbase is started. In general, a thread exists until you shut down Essbase Server. See Multithreading.
As Essbase processes queries, it cycles through available threads. For example, assume that 20 threads are available at startup. As each query is processed, Essbase assigns each succeeding query to the next sequential thread. After it has assigned the 20th thread, Essbase cycles back to the beginning, assigning the 21st query to the first thread.
While processing a query, a thread allocates some memory, and then releases most of it when the query is completed. Some memory is released to the operating system, and some is released to the dynamic calculator cache for the database being used. However, the thread holds on to a portion of the memory for possible use in processing subsequent queries. As a result, after a thread has processed its first query, the memory held by the thread is greater than it was when Essbase first started.
Essbase uses the maximum memory for query processing when both of the following conditions are true:
Every available thread has processed at least one query.
The maximum number of simultaneous queries are in process.
In the example where 20 threads are available at startup, the maximum amount of memory is used for queries when at least 20 queries have been processed and the maximum number of simultaneous queries are in process.
Calculating the Maximum Additional Memory Required
Use the following variables when you calculate the formula in Estimating the Maximum Memory Usage for a Query Before and After Processing:
Determining the Total Number of Threads
The potential number of threads available is based on the number of licensed ports that are purchased. The actual number of threads available depends on settings that you define for the Agent or the server. Use the number of threads on the system as the value for Total#Threads in later calculations.
Estimating the Maximum Number of Concurrent Queries
Determine the maximum number of concurrent queries and use this value for Max#ConcQueries in later calculations. This value cannot exceed the value for Total#Threads.
Estimating the Maximum Memory Usage for a Query Before and After Processing
The memory usage of individual queries depends on the size of each query and the number of data blocks that Essbase needs to access to process each query. To estimate the memory usage, calculate the additional memory Essbase uses during processing and after processing each query.
Choose several queries that you expect to use the most memory. Consider queries that must process large numbers of members; for example, queries that perform range or rank processing.
To estimate the memory usage of a query:
Turn the dynamic calculator cache off by setting the essbase.cfg setting DYNCALCACHEMAXSIZE to 0 (zero). Turning off the dynamic calculator cache enables measurement of memory still held by a thread by ensuring that, after the query is complete, the memory used for blocks during dynamic calculations is released by the ESSSVR process to the operating system.
Start the Essbase application.
Using memory monitoring tools for the operating system, note the memory used by Essbase Server before processing the query. Use the value associated with the ESSSVR process.
Run the query.
Using memory monitoring tools for the operating system, note the peak memory usage of Essbase Server while the query is processed. This value is associated with the ESSSVR process.
Using memory monitoring tools for the operating system, after the query is completed, note the memory usage of Essbase. This value is associated with the ESSSVR process.
Calculate the following two values:
When you have completed the above calculations for all the relevant queries, compare all results to determine the following two values:
Insert the two values from the previous step into the formula in the following statement.
The additional memory required for data retrievals will not exceed the following:
Max#ConcQueries * MAXAdditionalMemDuringP + (Total#Threads - Max#ConcQueries) * MAXAdditionalMemAfterP
Write the result of this calculation, in bytes, to the cell labeled ME in Table 152, Worksheet for Estimating Memory Requirements for a Database.
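The formula above can be expressed as a short function (Python; the variable names follow the text, and the figures in the usage example are hypothetical):

```python
# Maximum additional memory for data retrievals:
# Max#ConcQueries * MAXAdditionalMemDuringP
#   + (Total#Threads - Max#ConcQueries) * MAXAdditionalMemAfterP
def max_query_memory_bytes(total_threads, max_conc_queries,
                           max_mem_during, max_mem_after):
    # Max#ConcQueries cannot exceed Total#Threads
    assert max_conc_queries <= total_threads
    return (max_conc_queries * max_mem_during
            + (total_threads - max_conc_queries) * max_mem_after)

# Hypothetical figures: 20 threads, at most 10 concurrent queries,
# 5 MB used during processing, 2 MB retained afterward
print(max_query_memory_bytes(20, 10, 5 * 1048576, 2 * 1048576))  # 73400320 bytes (70 MB)
```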
Because this calculation method assumes that all of the concurrent queries are maximum-sized queries, the result may exceed your actual requirement. It is difficult to estimate the actual types of queries that will be run concurrently.
To adjust the memory used during queries, you can set values for the retrieval buffer and the retrieval sort buffer. See Setting the Retrieval Buffer Size and Setting the Retrieval Sort Buffer Size.
If you cannot perform this test with actual queries, you can calculate a rough estimate for operational query requirements. Requirements for each retrieval vary considerably. Generally, this estimate uses the following fixed factors:
This estimate also uses the following variables:
The memory used for dynamically calculated values, which is based on the following numbers:
The number of blocks specified by the SET LOCKBLOCK command.
The number of cells in a logical block, which includes Dynamic Calc members. See value MH in Table 153, Factors Used to Calculate Database Memory Requirements.
The size of the data cache. See Sizing the Data Cache.
If direct I/O is used, the size of the data file cache. See Sizing the Data File Cache.
You can then use the following two calculations for the memory needed for retrievals:
Buffer and work area used in each retrieval:
retrieval buffer (10,240 bytes) + retrieval sort buffer (20,480 bytes) + formatting buffer (144,000 bytes) + each selected member in the retrieval (40 bytes) + each member of the largest dimension (8 bytes) + dynamic calc area + data cache size + data file cache size
Memory needed for estimated number of concurrent retrievals:
Member storage area for the largest dimension + (number of retrievals * sum of buffer and work areas used in each retrieval)
Summarize the calculations and write the result, in bytes, to the cell labeled ME in Table 152, Worksheet for Estimating Memory Requirements for a Database.
To estimate the maximum memory needed for concurrent queries, assume the following values for this example:
The area required by the largest dimension:
23,000 members * 8 bytes/member = 184,000 bytes
The number of selected members is generalized across all queries to be 10,000 members. The approximate memory requirement equals the following:
10,000 members * 40 bytes/member = 400,000 bytes
Estimated memory for retrievals:
184,000 bytes + (20 concurrent queries * (10,240 bytes + 20,480 bytes + 144,000 bytes + 761,600 bytes + 3,145,728 bytes + 400,000 bytes)) = 89,824,960 bytes
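A short script reproduces this estimate from the listed terms (Python; the fixed buffer sizes come from the formula above, and the dynamic calc area and data cache figures are those used in the example — summing the listed terms gives 89,824,960 bytes):

```python
# Fixed buffer sizes from the text (bytes)
RETRIEVAL_BUFFER = 10240
RETRIEVAL_SORT_BUFFER = 20480
FORMATTING_BUFFER = 144000

def per_retrieval_bytes(selected_members, dyn_calc_area, data_cache,
                        data_file_cache=0):
    # Buffer and work area used in each retrieval; the data file cache
    # term applies only when direct I/O is used
    return (RETRIEVAL_BUFFER + RETRIEVAL_SORT_BUFFER + FORMATTING_BUFFER
            + selected_members * 40 + dyn_calc_area + data_cache
            + data_file_cache)

def concurrent_retrievals_bytes(largest_dim_members, n_retrievals,
                                per_retrieval):
    # Member storage area for the largest dimension (8 bytes per member)
    # plus the per-retrieval areas for all concurrent retrievals
    return largest_dim_members * 8 + n_retrievals * per_retrieval

# Example figures from the text: 10,000 selected members, a 761,600-byte
# dynamic calc area, a 3 MB data cache, 23,000 members in the largest
# dimension, and 20 concurrent queries
per = per_retrieval_bytes(10000, 761600, 3145728)
print(concurrent_retrievals_bytes(23000, 20, per))  # 89824960 bytes
```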
For existing calculation scripts, you can use the memory monitoring tools provided for the operating system on the server to observe memory usage. Run the most complex calculation and notice the memory use before and while running the calculation. Calculate the difference and use that figure as the additional memory requirement for the calculation script.
For a comprehensive discussion of calculation performance, see Optimizing Calculations.
If you cannot perform a test with a calculation script, you can calculate a very rough estimate for the operational requirement of a calculation by adding the following values:
The size of the outline. For calculations, Essbase uses approximately 30 additional bytes of memory per member of the database outline. For information about concurrent calculations, see Managing Caches to Improve Performance.
The size of the memory area used by the blocks set aside by the SET LOCKBLOCK command.
To calculate the memory requirement in bytes, multiply the specified number of data blocks by the logical size of the data blocks.
For the logical block size, multiply the number of cells (value MI in Table 153, Factors Used to Calculate Database Memory Requirements) by 8 bytes per cell.
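As a sketch of this calculation (Python; the 100-block SET LOCKBLOCK figure is hypothetical, while 952 cells is the Sample.Basic logical block count used earlier in this appendix):

```python
BYTES_PER_CELL = 8  # each cell in a logical block occupies 8 bytes

def lockblock_area_bytes(num_blocks, cells_per_logical_block):
    # Logical block size = cells * 8 bytes;
    # area = number of SET LOCKBLOCK blocks * logical block size
    return num_blocks * cells_per_logical_block * BYTES_PER_CELL

# e.g., SET LOCKBLOCK reserving 100 blocks of the 952-cell logical block
print(lockblock_area_bytes(100, 952))  # 761600 bytes
```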
For the total calculation requirement, summarize the memory needed for all calculations that will be run simultaneously and write that total to the cell labeled MF in Table 152, Worksheet for Estimating Memory Requirements for a Database.
You can use Table 155 as a worksheet on which to calculate an estimate of the total memory required on the server.
Table 155. Worksheet for Total Server Memory Requirement
In rows a–g, list concurrent databases (from copies of Table 152, Worksheet for Estimating Memory Requirements for a Database) and enter their respective memory requirements (MH) in the column to the right.
To estimate the total Essbase memory requirement on a server:
In the cell labeled ML, record the total startup memory requirement for applications, as described in Estimating Startup Memory Requirement for Applications.
List the largest set of databases that will run concurrently on the server. In the Memory Required column, enter the MH value for each database, as estimated in the database requirements worksheet, Table 152, Worksheet for Estimating Memory Requirements for a Database.
Determine the operating system memory requirement, and write the value in megabytes to the cell labeled MN in Table 155.
Total all values and write the result in the cell labeled MO.
Compare the value in MO with the total available random-access memory (RAM) on the server.
In addition, consider memory requirements for client software, such as Oracle Essbase Administration Services, that may be installed on the Essbase Server computer. See the Oracle Hyperion Enterprise Performance Management System Installation Start Here.
If cache memory locking is enabled, the total memory requirement should not exceed two-thirds of available RAM; otherwise, system performance can be severely degraded. If cache memory locking is disabled, the total memory requirement should not exceed available RAM.
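This rule can be expressed as a simple check (Python; the RAM and requirement figures in the usage example are hypothetical):

```python
def memory_fits(total_required_bytes, available_ram_bytes,
                cache_memory_locking):
    # With cache memory locking enabled, the total requirement should
    # not exceed two-thirds of available RAM; otherwise it should not
    # exceed available RAM
    if cache_memory_locking:
        limit = available_ram_bytes * 2 // 3
    else:
        limit = available_ram_bytes
    return total_required_bytes <= limit

# e.g., a 10 GB requirement on a 16 GB server with locking enabled
print(memory_fits(10 * 2**30, 16 * 2**30, True))   # True  (under ~10.7 GB)
print(memory_fits(12 * 2**30, 16 * 2**30, True))   # False (over two-thirds)
```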
If insufficient memory is available, you can redefine the cache settings and recalculate the memory requirements. This process can be iterative. See Fine-Tuning Cache Settings. In some cases, you may need to purchase additional RAM.