Archive for the Uncategorized Category

How to use your existing ODI on premise to seamlessly integrate PBCS (Part 5: Import Metadata Jobs)

Posted in Uncategorized on March 26, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the how to integrate PBCS seamlessly using your existing ODI environment series (Part 4 Here), today we'll talk about Importing Metadata Jobs.

As you can imagine, importing metadata is also simple in PBCS; we just need to pay attention to the file format and that's it.

To import Metadata, we need:

As usual, first we need to select our inbox, then the name of the file for each dimension and the delimiter for each dimension, and, if you want to clear the members before loading the new ones, you just need to check Clear Members.

That's it: all dimensions will be loaded at once, but we need separate files, one for each dimension. These files can also be in zip format, which PBCS will automatically unzip for us.
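Once the import job is saved in PBCS, it can be driven from ODI through an OS command step. Below is a minimal sketch of that idea in Python; the job name (IMPORT_METADATA) and file name are placeholders, and a logged-in EPM Automate session is assumed.

```python
# Sketch: driving a PBCS metadata load with EPM Automate from Python
# (e.g. from an ODI OS-command step). Job and file names are made up.
import subprocess

def metadata_import_cmds(job_name, zip_file):
    """Build the EPM Automate command sequence for a metadata load."""
    return [
        ["epmautomate", "uploadfile", zip_file],               # push the zip to the inbox
        ["epmautomate", "importmetadata", job_name, zip_file],  # run the saved job
    ]

def run_all(cmds):
    for cmd in cmds:
        subprocess.run(cmd, check=True)  # stop at the first failing command

# run_all(metadata_import_cmds("IMPORT_METADATA", "metadata.zip"))
```

The actual call is commented out because it needs a live, logged-in EPM Automate session; the command list itself is what an ODI package would execute.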

Now the important part: the file format. These are all the properties that PBCS expects when loading metadata.

That's it. With that we have finally finished all the setup we need in PBCS for our ODI jobs to work. One important thing I need to point out is that Oracle updates PBCS with new versions, and the file format can change over time. If that happens, you'll also need to update your ODI jobs.

It has happened to me on more than one project. It's not a big deal, but you need to be aware that if your job starts to fail, this may be what is happening.

I hope you guys enjoy it, stay safe and see you soon.

How to use your existing ODI on premise to seamlessly integrate PBCS (Part 4: Outbound Jobs)

Posted in Uncategorized on March 22, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the how to integrate PBCS seamlessly using your existing ODI environment series (Part 3 Here), today we'll talk about Outbound Jobs.

In the same way as the Inbound Jobs, we need to create jobs to extract both data and metadata for ODI to consume and populate our DW, as well as to use the metadata for validation.

Extracting data from PBCS is also easy. First, we need to choose the Outbox location to enable the Save as Job button (Local works the same as for the Inbound job and allows a one-time execution only).

For the outbound we need to choose the plan type, which means we'll need at least one job for each plan type. We also need to choose the delimiter (I always like the pipe character because it's easy to see and is not used in any command), and if you use smart lists you can choose whether to export the labels or the names.

And finally, you set the POV to be exported. You can use Essbase substitution variables here if you want, and as you can see, the export will have the same format as the Planning import: accounts on the rows, periods on the columns, and the POV (plus the plan type).

You can change the format if you wish, but I advise maintaining consistency between the jobs for the sake of the dynamic components.

After running this job, PBCS will generate a zip file in its outbox; we just need to go there and download it.
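In an automated flow, running the saved job and pulling the zip can be scripted with EPM Automate. A hedged sketch (the job and file names are hypothetical, and a logged-in session is assumed):

```python
# Sketch: run a saved PBCS data export job, then download the zip from
# the outbox so ODI can consume it. Names are placeholders.
import subprocess

def data_export_cmds(job_name, out_file):
    """EPM Automate sequence: run the export job, then fetch its zip."""
    return [
        ["epmautomate", "exportdata", job_name, out_file],  # writes the zip to the outbox
        ["epmautomate", "downloadfile", out_file],          # pulls it to the local folder
    ]

# for cmd in data_export_cmds("EXPORT_DATA", "export.zip"):
#     subprocess.run(cmd, check=True)
```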

For the metadata export the idea is the same, but a little simpler than the others: we just need to select our outbox, the dimensions we want to export, and the delimiter, and that's it. PBCS will create one zip file per dimension in our outbox.

That’s it for today. I hope you guys enjoy it, stay safe and see you soon.

How to use your existing ODI on premise to seamlessly integrate PBCS (Part 3: Inbound Jobs)

Posted in Cloud, EPM Automate, ODI, PBCS, Uncategorized on March 18, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the how to integrate PBCS seamlessly using your existing ODI environment series (Part 2 Here), today we'll talk about Inbound Jobs.

We already know that we'll need 4 types of jobs for our integration, so let's see what we need to do to create an Inbound Job:

The first thing we need to do is set the location to Inbox (or Outbox in the case of an extract). This is what enables the Save as Job button. If you choose Local, it lets you run it just one time.

Second, the source type. We have 2 choices there: Essbase and Planning. Essbase uses the same format that we are used to from Essbase exports; the problem is that if we select Essbase, we'll need one job per plan type.

Planning, on the other hand, has a special format that is very handy for creating dynamic objects in ODI, as we can see in the table below:

As we can see, the planning format has a column for the account, then the periods, a POV column and the data load cube name (plan type).

Having the periods in columns is already a nice thing to have, since the load will be way faster and way more optimized: we'll have one row for the entire year.

Of course, this will only be true when we are talking about forecast, because for actuals we normally load just one month at a time… even so, I prefer this format over having one column for the period and another for the data.

The POV is also nice because it doesn't matter how many dimensions we have in one plan type: everything will be in the same column, so the integration is simple, since we just need to concatenate all the columns but accounts into one column. I recommend using regular expressions and LISTAGG to do so (best function ever for dynamic components).
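To make the idea concrete, here is a small sketch of the POV concatenation (in Python for brevity; in ODI this would be generated SQL, with LISTAGG building the column list dynamically). The column names are illustrative:

```python
# Sketch: collapse every non-account column into one delimited POV value,
# so the same mapping works for any number of dimensions. Columns are made up.
def build_pov(row, exclude=("ACCOUNT",), sep="|"):
    """Join all columns except the account into a single POV string."""
    return sep.join(str(v) for k, v in row.items() if k not in exclude)

row = {"ACCOUNT": "Sales", "ENTITY": "E100", "SCENARIO": "Forecast", "VERSION": "Working"}
print(build_pov(row))  # E100|Forecast|Working
```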

And at the end we inform the plan type to which the data will be loaded. Very nice: you can load all your plan types at once. One job, one file, and everything can be done by one generic component in ODI.

After that we need to choose the delimiter used in the file; in this case, I chose the pipe character because it's very hard to find it in the middle of any metadata.

Finally, we just need to provide the name of the file. That's all we need to create a job that we can call using EPM Automate anytime we need it.

How to use your existing ODI on premise to seamlessly integrate PBCS (Part 2: PBCS)

Posted in Uncategorized on March 10, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the how to integrate PBCS seamlessly using your existing ODI environment series (Part 1 Here), today we'll talk about PBCS itself.

For us to export or import data from PBCS, we need to prepare our PBCS environment to receive and send all the data we need. Let's take a look at this.

PBCS has 2 very different interfaces:

  • The regular interface, the same as Planning's
  • The simplified interface (which Planning also has), but in PBCS it's way more useful

The regular interface is almost exactly like the Planning one, but the important thing is that we cannot do everything we need to configure the environment in it. We need to use the simplified interface.

The simplified interface has some new features, like being optimized for tablets and smartphones, but the main feature we are looking for is the inbox/outbox. The inbox/outbox is the only point of contact with the external world: everything that needs to be loaded must be uploaded to the inbox first, and everything extracted from PBCS will be made available in the outbox to be downloaded.

Knowing this, we'll need to set up the following for our design:

  • Data and Metadata load
    • Upload the file with the data to the inbox.
    • Create a job to import the data.
    • Execute the job.
  • Data and Metadata Export:
    • Create an export job (for data export it is possible to create a Business Rule that exports the data directly to the inbox folder: /u03/lcm/)
    • Download the files
    • Unzip and import the file with ODI

For data/metadata loads, we first need to upload a file to the inbox, then we need to have an import data/metadata job created, and finally we need to run the job to load it.

The file can be in .zip format, and PBCS will automatically unzip it. This is very good, since we may be moving some big files around.

For data/metadata extracts, we first need to create an export job and run it (you can do the data export using a business rule, the same old DATAEXPORT from Essbase, and the path to export to would be /u03/lcm/), download the file, and finally use ODI to unzip it and load it into our tables.
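The extract steps above can be sketched end to end: run the saved job, download the zip, and unpack it for ODI. The EPM Automate calls assume a logged-in session, and the job name is a placeholder:

```python
# Sketch: full PBCS extract cycle. The epmautomate commands are built as
# lists so an orchestration step can run them; the unzip part is plain
# Python and produces the flat files ODI would read.
import subprocess
import zipfile

def pbcs_extract_cmds(job_name, zip_name):
    """Commands to run the saved export job and download its zip."""
    return [["epmautomate", "exportdata", job_name, zip_name],
            ["epmautomate", "downloadfile", zip_name]]

def unzip_for_odi(zip_name, target_dir):
    """Unpack the downloaded zip and return the extracted file names."""
    with zipfile.ZipFile(zip_name) as z:
        names = z.namelist()
        z.extractall(target_dir)  # flat files ready for ODI to load
    return names

# for cmd in pbcs_extract_cmds("EXPORT_DATA", "export.zip"):
#     subprocess.run(cmd, check=True)
# unzip_for_odi("export.zip", "/data/odi/inbound")
```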

But what are jobs? Jobs are basically templates with information about what you want to do. For example, a data export job will contain the POV of the data you want to export, and every time you run the job that POV will be exported.

In the Console menu we have everything we need to create and monitor our jobs. All data jobs are created by clicking the Actions menu (Export and Import options), and to check the job results you can go to the Inbox/Outbox Explorer.

And that's it for today. I hope you guys enjoy it, and in the next post we'll create all the jobs we need. Stay safe and see you soon.

How to use your existing ODI on premise to seamlessly integrate PBCS (Part 1: Solution)

Posted in Uncategorized on March 9, 2021 by RZGiampaoli

Hey guys, how are you? Today we're going to talk about how to integrate PBCS seamlessly using your existing ODI environment.

I said PBCS, but this approach can be used for any cloud application available out there (just changing the way we connect/upload data and the API that needs to be used to integrate). Other than that, everything else is valid.

Let's think a little bit about PBCS. PBCS is a closed box that can only be accessed by HTML, SFTP, REST API, or EPM Automate. So for integration we basically have 2 options, REST API and EPM Automate, since everything else is way too manual to be called integration.

Both work in almost the same way, so to keep it simple we'll be using EPM Automate here. In our case we have an on-premise ODI environment designed like this:

  • We have a lot of different sources that are loaded in our stage area.
  • In the stage area we do whatever transformation we need on the data before we load it into the DW schema.
  • During the load into our DW we validate all the data POV against the metadata in Planning or Essbase.

From everything described here, the only two changes we need to make are the source of the metadata for validation and the target of the data load.

For the metadata, instead of reading the Planning repository or extracting the metadata from Essbase, we'll export the metadata from PBCS and load it into a table, and this is what we'll use to validate the data before loading.

This step is very important because we don't want to get errors during the data load and slow down the process we are creating. Also, it gives the business a fallout report that they can use to fix the invalid members in PBCS.

For the data load, instead of loading the data directly to Planning, we'll export the data in the right format to a txt file, zip it, and use EPM Automate to load it into PBCS.

Remember, if you choose the REST API to do so, the only thing you need to do differently is a little bit of programming, but the logic will be the same.

Then in the end we’ll have a design like this:

As I said, we can see from this schema that any cloud app can be integrated in pretty much the same way. We can keep everything we have today; we just need to figure out the bridge between our on-premises DB and our target app.

If the target app is VPN-enabled, everything gets way easier, since for ODI it will be just another source; in fact, we could even have DB links between our environments.

So keep your mind open to the approach here, more than to the content we are talking about (PBCS).

I hope you guys enjoy this quick start of our series, and see you soon.

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 9: Analyzes)

Posted in Uncategorized on February 23, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the Essbase Statistics DW: How to automatically administrate Essbase using ODI series (Part 8 Here), today we'll talk about the analyses we can do with all the data we have so far.

With all this information in our DW we can do a lot of things. We could automatically set the caches using ODI and MaxL or ESSCMD, but changing settings in Essbase is more complicated than just an update.

We need to restart the applications, we need to take into consideration the design of each cube, and we need the feeling and knowledge of the administrators, but it can be done.

In fact, we can create thresholds to see if a cache is 20% off from what it is supposed to be and then apply the changes at night. We can also start to record the sessions (be aware that these tables get very big very quickly) and analyze the retrieve/calculation times to see if performance is getting better or not, and then send an alert in case it is not after an automatic change, or something like that.
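The 20% threshold check itself is trivial to express. A sketch (sizes in bytes; the tolerance is the judgment call):

```python
# Sketch: flag a cache whose configured size drifts more than 20% from
# the recommended size computed in our DW. Values are in bytes.
def cache_off_threshold(current, recommended, tolerance=0.20):
    """True when the configured cache is more than 20% away from the recommendation."""
    if recommended == 0:
        return False  # nothing to compare against
    return abs(current - recommended) / recommended > tolerance

print(cache_off_threshold(70_000_000, 100_000_000))  # True: 30% under
print(cache_off_threshold(95_000_000, 100_000_000))  # False: within 5%
```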

But what we can really do is provide precise information about each cube. We can send a daily email to each administrator with the current stats of each cube, the current setting of each cache, the recommended size, and a table with more information about that application.

As an attachment, we can include an Excel file with the raw data for further analysis.

This helps the administrators a lot in getting a better understanding of our cubes and in deciding on and planning better maintenance strategies.

Of course, we could have a dashboard (and this would be the best approach) with all the statistics there, showing the growth of the cubes, projections, alerts for when the cubes will reach the maximum cache size… well, anything that could be needed.

The important thing here is to realize how useful this DW is for keeping our Essbase cubes healthy. And keep in mind that there are a lot of other stats that can be gathered from Essbase that can do a lot for us.

For example, we can get the dimension settings and number of members and use ODI (or any language, like Java) to manipulate the outline and test the best sparse/dense settings for our cubes.

I hope you guys enjoy our series, be safe and see you soon.

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 4: Dynamic Calculator Cache)

Posted in Uncategorized on February 4, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the Essbase Statistics DW: How to automatically administrate Essbase using ODI series (Part 3 Here), today we’ll talk about Dynamic Calculator Cache.

The dynamic calculator cache is basically a buffer to store uncompressed (expanded) blocks in memory to calculate the dynamic members in the dense dimensions. Expanded blocks include Stored, Never Share, Dynamic Calc, and Dynamic Time Series members.

This is important because it shows how everything is related in Essbase. If you set CALCLOCKBLOCK to 1000, you need to be sure that the data cache can hold 2000 uncompressed blocks. If it can't, no matter what setting you use, Essbase will use half of what fits in the data cache.

We also need the size of the expanded block, which is the number of stored members plus Dynamic Calc and Dynamic Time Series members in each dense dimension, multiplied together, times 8 bytes.

We also need the largest expanded block across all databases on the machine and the maximum number of expected concurrent users. The latter could be analyzed by gathering the session information into our DW as well, but for this post it will be a constant number.

This information can be acquired by:

  • Data Cache:
    • EAS: Right click on a cube -> Edit properties -> Storage
    • MAXL: query database sample.basic list all file information;
    • ESSCMD: listfiles "" "sample" "basic";
    • JAVA: maxl.execute("query database " + s_application + "." + s_cube + " list all file information");
  • Amount of members:
    • EAS: Right click on a cube -> Edit properties -> Dimensions
    • MAXL: None
    • ESSCMD: None
    • JAVA: selection.executeQuery(dimension.getName(), IEssMemberSelection.QUERY_TYPE_DESCENDANTS, 0, null, null, null);

To sum up, the Dynamic Calculator Cache is:

  • A buffer in memory that Essbase uses to store all the blocks needed to calculate a Dynamic Calc member in a dense dimension
  • To find the optimal size of this cache we need:
    • The CALCLOCKBLOCK size: half the number of expanded blocks that fit into the data cache
    • The expanded blocks, including Stored, Never Share, Dynamic Calc, and Dynamic Time Series members
    • The largest expanded block across all databases on the machine
    • The maximum number of expected concurrent users
      • This can be analyzed by gathering the session info into a table and analyzing the patterns, but for this post it is a constant number based on experience
  • To calculate the maximum dynamic calculator cache, we need to multiply:
    • CALCLOCKBLOCK: (Data cache in bytes (already calculated) / size of the expanded block in bytes) / 2
    • The largest expanded block in bytes on the server
    • The maximum number of expected concurrent users (constant)
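The multiplication above can be put into a small worked sketch. The dense-dimension member counts, data cache size, and user count below are illustration values, not recommendations:

```python
# Worked sketch of the dynamic calculator cache formulas above.
from math import prod

def expanded_block_bytes(dense_member_counts):
    """Expanded block size: members of each dense dimension multiplied, times 8 bytes."""
    return prod(dense_member_counts) * 8

def dynamic_calc_cache(data_cache_bytes, largest_block_bytes, users):
    # C: CALCLOCKBLOCK = half the expanded blocks that fit in the data cache
    c = (data_cache_bytes // largest_block_bytes) // 2
    return c * largest_block_bytes * users  # C * S * U

block = expanded_block_bytes([20, 12])           # 20 * 12 * 8 = 1920 bytes
print(dynamic_calc_cache(3_000_000, block, 10))  # 14995200
```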

And that's it: we've talked about the 5 most important caches in Essbase, how to calculate them, and how to get the information we need.

So far, we talked about these caches and how they are calculated:

  • Index: A buffer in memory that holds index pages (.ind files)
  • Data file: A buffer in memory that holds compressed data files (.pag files). Essbase allocates memory to the data file cache during data load, calculation, and retrieval operations, as needed (only when direct I/O is in effect)
  • Data: A buffer in memory that holds uncompressed data blocks. Essbase allocates memory to the data cache during data load, calculation, and retrieval operations, as needed
  • Calculator: A buffer in memory that Essbase uses to create and track data blocks during calculation operations
  • Dynamic calculator: A buffer in memory that Essbase uses to store all of the blocks needed for a calculation of a Dynamic Calc member in a dense dimension

And here is a summary of all the calculations needed to have the exact amount of cache for each application:

  • Index: number of existing blocks * 112 bytes = the size of the database index
  • Data file: combined size of all essn.pag files, if possible; otherwise, as large as possible
  • Data: 0.125 * the value of the Data File Cache size
  • Calculator: bitmap size in bytes * number of bitmaps
    • Bitmap size in bytes: Max((member combinations on the bitmap dimensions / 8), 4)
    • Number of bitmaps: maximum number of dependent parents in the anchoring dimension + 2 constant bitmaps
  • Dynamic calculator: C * S * U
    • C is the value of the appropriate CALCLOCKBLOCK setting: (Data Cache in bytes / size of the expanded block in bytes) / 2
    • S is the size of the largest expanded block across all databases on the machine: multiply the number of members in each dense dimension together and multiply by 8 bytes
    • U is the maximum number of expected concurrent users

From now on, we'll see how to create a process to extract and handle this information, and how to create a DW and use it to keep our Essbase apps with the right caches all the time.

I hope you guys enjoy it and see you soon.

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 3: Calculator Cache)

Posted in Uncategorized on February 3, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the Essbase Statistics DW: How to automatically administrate Essbase using ODI series (Part 2 Here), today we'll talk about the Calculator Cache.

The concept behind the calculator cache is a little tricky, but I think it is very nice to understand, since it gives us a very good tip about Essbase design. This cache is used to track what Essbase has already calculated and what it hasn't yet.

Essbase splits the sparse dimensions into 2 groups:

  • Bitmap dimensions: all the sparse dimensions that fit inside the bitmap
    • You can think of this bitmap as a matrix that holds all members from one or more sparse dimensions, depending on the available memory. Each member occupies 1 bit of memory (it's like a check that says whether the member was calculated or not)
  • Anchoring dimensions: the rest of the sparse dimensions that didn't fit in the bitmap. The anchoring dimensions define the number of bitmaps that Essbase can create at the same time, if there's enough memory.

The calculator cache controls the size of the bitmap, which means the number of sparse dimensions that can fit in the bitmap and the number of anchoring dimensions depend on the size of the calculator cache.

But how does it work? Essbase starts from the first sparse dimension in the outline and tries to fit as many dimensions as possible inside the bitmap, until it cannot fit the next sparse dimension completely inside the bitmap.

All the remaining sparse dimensions become anchoring dimensions.

The number of bitmaps that can be created at the same time is determined by the maximum number of dependent parents in the anchoring dimensions.

The dependent-parent count is the number of parents above a leaf member. We need to find the member that has the biggest number of levels above it; in other words, the max number of dependent parents is the same as the number of levels in the dimension.

If the leaf member with the most parents above it has 3 parents, the number of levels of that dimension will also be 3, so for the dependent parents we just need to ask for the levels of the dimension.

The size of the bitmap and the number of anchoring dimensions depend on the size of the calculator cache.

What does that mean? Let's consider this outline as an example:

Sparse Dim | # of Members | Dependent Parents
S1 | 10 | NA
S2 | 10 | NA
S3 | 20 | NA
S4 | 30 | NA
S5 | 200 | 3

In this outline we have 5 sparse dimensions, 4 flat ones (S1 to S4) and one with a 3-level hierarchy (S5).

We have 3 options of cache settings that we can choose from:

  • Single anchoring multiple bitmaps.
  • Single anchoring single bitmap.
  • And multiple anchoring single bitmap.

To use the first option, we take the first 4 sparse dimensions from the outline and multiply their member counts, then divide the result by 8 to convert it into bytes; this defines the size of the bitmap.

After that we need to define how many bitmaps we will have at the same time in memory. For that we take the number of levels of the anchoring dimension and add 2 to it (2 is a constant). This defines the number of bitmaps.

Finally, we just multiply the size of the bitmap by the number of bitmaps, and this gives us the size of the cache we must set to use option 1.

The other options are smaller versions of the first one. In option 2 we don't have multiple bitmaps, so we just set the number of bitmaps to 1; in option 3 we fit fewer dimensions in the bitmap.

We can see the calculations for each option below:

Option | Calculator Cache
Single anchoring, multiple bitmaps | (10*10*20*30)/8 = 7500; 3+2 = 5; 7500*5 = 37500 bytes
Single anchoring, single bitmap | (10*10*20*30)/8 = 7500; 1; 7500*1 = 7500 bytes
Multiple anchoring, single bitmap | (10*10*20)/8 = 250; 1; 250*1 = 250 bytes
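The three options in the table can be reproduced with a few lines; the member counts (10, 10, 20, 30) and the 3 levels of the anchoring dimension come from the example outline above:

```python
# Sketch: calculator cache sizes for the three options from the example.
def bitmap_bytes(member_counts):
    """Multiply the bitmap dimensions (1 bit per member combination), then /8 for bytes."""
    size = 1
    for members in member_counts:
        size *= members
    return size // 8

def calc_cache(bitmap_dims, n_bitmaps):
    return bitmap_bytes(bitmap_dims) * n_bitmaps

print(calc_cache([10, 10, 20, 30], 3 + 2))  # option 1: 37500 bytes
print(calc_cache([10, 10, 20, 30], 1))      # option 2: 7500 bytes
print(calc_cache([10, 10, 20], 1))          # option 3: 250 bytes
```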

The interesting thing here is to see why it is important to have our outline designed as an hourglass for this cache. If we put the most complex sparse dimension in any position other than the last one, we will never be able to have concurrent bitmaps, and the size of the bitmap will be huge.

So after this tricky explanation, the only information we need to calculate the calculator cache is:

  • The number of members in each sparse dimension.
  • The number of levels in the biggest sparse dimension.
    • The biggest sparse dimension should be placed last in the outline.

All this information can be acquired by:

  • Amount of members:
    • EAS: Right click on a cube -> Edit properties -> Dimensions
    • MAXL: None
    • ESSCMD: None
    • JAVA: selection.executeQuery(dimension.getName(), IEssMemberSelection.QUERY_TYPE_DESCENDANTS, 0, null, null, null);
  • Levels:
    • EAS: Double click on the outline -> count the parents
    • MAXL: None
    • ESSCMD: None
    • JAVA: IEssDimension dimension = (IEssDimension) otl.getDimensions().getAt(i); IEssIterator level = dimension.getLevels();

To summarize everything we saw in this post:

  • The calculator cache is a buffer in memory that Essbase uses to create and track data blocks during calculation operations
  • Essbase splits the sparse dimensions into 2 groups:
    • Bitmap dimensions: any sparse dimensions that fit inside the bitmap. Each member combination placed in the bitmap occupies 1 bit of memory, and the entire dimension must fit inside the bitmap.
    • Anchoring dimensions: the remaining sparse dimensions that do not fit into the bitmap.
  • Because the calculator cache controls the size of the bitmap, the number of sparse dimensions that can fit in the bitmap and the number of anchoring dimensions depend on the size of the calculator cache
  • Essbase starts with the first sparse dimension in the outline and fits as many as possible into the bitmap.
  • Essbase stops the process when it cannot fit another complete sparse dimension into the bitmap.
  • All the remaining sparse dimensions become anchoring dimensions
  • The number of bitmaps used is determined by the maximum number of dependent parents for any member in the anchoring dimension
  • The max number of dependent parents is the same as the number of levels in a dimension
  • Depending on the cache setting we can have 3 options:
    • Single anchoring, multiple bitmaps.
    • Single anchoring, single bitmap.
    • Multiple anchoring, single bitmap.
  • And to calculate everything we just need 2 things:
    • The number of members in each sparse dimension.
    • The number of levels in the biggest sparse dimension.
      • The biggest sparse dimension should be placed last in the outline.

I hope you guys enjoy it, be safe and see you next time.

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 2: Data File Cache and Data Cache)

Posted in Uncategorized on February 2, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the Essbase Statistics DW: How to automatically administrate Essbase using ODI series (Part 1 Here), today we’ll talk about Data File Cache and Data Cache.

The data file cache is deprecated, and I don't know if many people still use it, but it is used to define the maximum size of the data cache. The data file cache holds compressed data files (essn.pag) in memory and is used only when the direct I/O option is in effect. To find its maximum size we need to sum all the essn.pag files, if possible; otherwise, make it as large as possible.

We can find this information in EAS (edit properties -> Storage), in MaxL (query database list all file information), in ESSCMD (LISTFILES), or in Java via MaxL, as we can see here:

  • EAS: Right click on a cube -> Edit properties -> Storage
  • MAXL: query database sample.basic list all file information;
  • ESSCMD: listfiles "" "sample" "basic";
  • JAVA: maxl.execute("query database " + s_application + "." + s_cube + " list all file information");

Now let's talk about the data cache. The name looks almost the same, but its usage is very different. The data cache is used to hold uncompressed data blocks in memory. It has a direct relationship with the data file cache, and the recommended size would be 12.5% of the sum of all essn.pag files (or 0.125 * the data file cache size). That is why we even talked about the data file cache and calculated its size first.
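Both sizes reduce to one sum and one multiplication. A sketch, with made-up .pag sizes in bytes:

```python
# Sketch: data file cache and data cache sizing from the essn.pag files.
def data_file_cache(pag_sizes):
    """Data file cache: combined size of all essn.pag files."""
    return sum(pag_sizes)

def data_cache(pag_sizes):
    """Data cache: 12.5% (0.125) of the data file cache size."""
    return 0.125 * data_file_cache(pag_sizes)

pags = [400_000_000, 250_000_000]  # two essn.pag files (illustration values)
print(data_file_cache(pags))  # 650000000
print(data_cache(pags))       # 81250000.0
```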

The same thing that happens with the index cache happens here with the data cache: if we have a lot of historical data, we may decrease the optimum size to fit the actual data usage.

We gather this information in the same way as for the data file cache:

  • EAS: Right click on a cube -> Edit properties -> Storage
  • MAXL: query database sample.basic list all file information;
  • ESSCMD: listfiles "" "sample" "basic";
  • JAVA: maxl.execute("query database " + s_application + "." + s_cube + " list all file information");

Then to resume what we talked about in this post:

  • The data file cache is used to hold compressed data files (essn.pag) in memory and is used only when direct I/O option is in effect.
  • It is calculated by summing all essn.pag files, if possible; otherwise, as large as possible.
  • The data cache is used to hold uncompressed data blocks in memory.
  • The recommended size would be 12.5% of the sum of all essn.pag files (or 0.125 * the Data File Cache size).
  • Same case as the index cache: only the working blocks are considered (no historical data).

I’ll stop here since we have a big one coming and I want to address the Calculator Cache alone in the next post.

Be safe and see you soon.

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 1: Index Cache)

Posted in Uncategorized on February 1, 2021 by RZGiampaoli

Hey guys, how are you? Today we're going to talk about how to automatically administrate Essbase using ODI.

As you guys know, in order to have good performance in an Essbase cube, we must keep vigilant and follow up on its growth and its data movements, so we can distribute caches and adjust the database parameters accordingly. But this is a very difficult task to achieve, since Essbase statistics are not temporal and only tell you the cube's state in that specific time frame.

The performance of an Essbase app has a direct relationship with the configuration settings we define, and these must be based on the design and usage of each cube. To find the optimal values for these configurations we need to look at the Essbase statistics.

The problem with this is that the Essbase statistics are a snapshot of the cube at that moment, and every time we shut down a cube the runtime statistics are cleared, meaning that these statistics are more or less accurate depending on the amount of time the cube has been online.

That also means that if we look at the statistics right after a restart of the cube, we can get the wrong information and configure the caches and other settings poorly, making the cubes slow or even crashing them.

The best thing we can do to make this process more reliable is to create a historical DW with the Essbase statistics information. This way we can create a process to keep improving the Essbase settings in a more pragmatic way: extract, analyze, change settings, and so on.

These are the most important caches that we can set, but we can do all kinds of things with the information we'll be gathering from Essbase.

  • Index: A buffer in memory that holds index pages (.ind files)
  • Data file: A buffer in memory that holds compressed data files (.pag files). Essbase allocates memory to the data file cache during data load, calculation, and retrieval operations, as needed (only when direct I/O is in effect)
  • Data: A buffer in memory that holds uncompressed data blocks. Essbase allocates memory to the data cache during data load, calculation, and retrieval operations, as needed
  • Calculator: A buffer in memory that Essbase uses to create and track data blocks during calculation operations
  • Dynamic calculator: A buffer in memory that Essbase uses to store all of the blocks needed for a calculation of a Dynamic Calc member in a dense dimension

Let's start with the index cache. The index cache holds in memory the indexes that Essbase uses to find requested data.

Every time we retrieve or calculate any data from Essbase, Essbase first tries to find the index in the index cache. If the index is not there, Essbase finds it in the index files, gets the data, and stores the index in the index cache to make it faster the next time we need the same information. Each index stored there takes 112 bytes of memory.

Then, to calculate the maximum size of this cache, we just need to multiply the number of existing blocks by 112 bytes, and that's it.

Just to make sure everyone is on the same page: a block is the combination of all stored members from all dense dimensions. Essbase creates a new block when a new unique combination of data in the sparse dimensions is inserted/calculated.

This table shows the different ways to get the information we need from Essbase. As you can see, we don't need to know Java/C++ or any language to extract this information; we can just use MaxL, ESSCMD, or even do it manually in EAS.

  • EAS: Right click on a cube -> Edit properties -> Statistics
  • MAXL: query database sample.basic get dbstats data_block;
  • ESSCMD: select sample basic; getdbstats;
  • JAVA: (IEssCube) cube.getPropertyNames();

One interesting thing about this cache (and almost every cache in Essbase) is that, as I said, it holds in memory the indexes that Essbase used at least once to find requested data. That means we will only find in the cache indexes that were used at least once since the last time the cube started.

So, if we have a cube with historical data, the chances that this old data never hits the index cache are bigger than for the new data. We need to take this into consideration when we set this cache.

Historical Data Usage

In this case, 70% of the cube contains historical data that is rarely requested, meaning we need just 30% of the maximum size for the index cache.
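Putting the two ideas together (112 bytes per existing block, scaled by the share of the cube actually in use), a sketch; the block count and the 30% ratio are illustration values:

```python
# Sketch: maximum index cache (112 bytes per existing block) and a tuned
# size that keeps only the actively used share of the index.
def max_index_cache(num_blocks):
    return num_blocks * 112  # bytes

def tuned_index_cache(num_blocks, active_ratio):
    """Scale the maximum down to the share likely to be hit (e.g. 0.30)."""
    return round(max_index_cache(num_blocks) * active_ratio)

print(max_index_cache(1_000_000))          # 112000000
print(tuned_index_cache(1_000_000, 0.30))  # 33600000
```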

A way to figure out if the cache is well set is to look at the hit ratio in EAS, but we still have the issue that this info resets every time the cube restarts.

Summing up what we just said about the index cache:

  • The index cache is used to hold in memory all the indexes used since the application was turned on.
  • The maximum index cache size is calculated by multiplying the number of existing blocks * 112 bytes
    • A block is the combination of all stored members from all dense dimensions. Essbase creates a new block when a new unique combination of data in the sparse dimensions is inserted/calculated
  • The optimal index cache will vary depending on the design and use of the cube.
    • If the cube has a lot of historical data, the chances of these indexes never hitting the cache are big
    • For that we can define a historical percentage that we can remove from the maximum index cache to find a more realistic cache.
    • This historical percentage can be obtained by analyzing the outline and seeing how many members are in use

That’s it for today. Next post we’ll talk about the Data File Cache. Stay safe and see you soon.