Hi all, today let’s see how we can create dynamic data load components for any number of Hyperion Planning applications. As you know, all data related to Hyperion Planning actually resides in an Essbase database, and ODI has a Knowledge Module that loads data into it called “IKM SQL to Hyperion Essbase (DATA)”. In this post we will tweak this Knowledge Module so it can read the Planning metadata repository and dynamically build its own data store mappings and definitions depending on which Hyperion Planning application it will load data to. But first let’s take a look at how the original “IKM SQL to Hyperion Essbase (DATA)” works:

As you can see, it is a fairly simple KM with two main steps: “Prepare for loading” and “Load data into Essbase”. If you open the “Prepare for loading” step you will see some Jython code there that gets all the configurations and options to be used in your data load. Since our objective here is to make it dynamic enough to load any Hyperion Planning application, we should focus on the settings below:
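For reference, the relevant lines of that Jython step look roughly like the sketch below. This is a hedged reconstruction — the exact variable names differ between ODI/KM versions — but the idea is that everything is derived from the target data store’s physical schema via odiRef.getInfo():

# All connection settings come from the interface's target data store
applicationName = "<%=odiRef.getInfo("DEST_CATALOG")%>"
databaseName = "<%=odiRef.getInfo("DEST_SCHEMA")%>"
userName = "<%=odiRef.getInfo("DEST_USER_NAME")%>"
password = "<%=odiRef.getInfo("DEST_PASS")%>"
serverName = "<%=odiRef.getInfo("DEST_DSERV_NAME")%>"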

These settings get all the connection information related to the target data store of the ODI interface. If you have already worked with Hyperion Planning and ODI, you know that the target data store contains all dimensions from a particular Hyperion Planning application plus a column named “Data” that holds the value for that particular intersection. This data store belongs to a data model, which is connected to a logical schema that is tied (by a context) to a certain physical schema containing the application server/user/password/application/database.
In this example we will consider that we have five Hyperion Planning applications, all residing on the same server and using the same Essbase username and password, but if your architecture is different, feel free to customize this information as well. In order to make this KM dynamic, let’s add two more options to it called “APPLICATION” and “CUBE” and modify the code so that the application and database information comes from those options instead of the target data store, like this:
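A minimal sketch of that change, reusing the hypothetical variable names above and reading the two new options with odiRef.getOption():

# Application and database (cube) now come from KM options, which will
# receive ODI variables at runtime, instead of the target data store
applicationName = "<%=odiRef.getOption("APPLICATION")%>"
databaseName = "<%=odiRef.getOption("CUBE")%>"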

These two options will contain two ODI variables that indicate which application/cube you will be loading. Keep those two variables in mind, as we will talk more about them later in the post. Now if you open the “Load data into Essbase” step you will see that it is responsible for actually fetching the data from a source table based on the source and target data store columns and mappings, as below:
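The statement built by this step looks roughly like the simplified sketch below (the actual KM code varies between versions). Note how the projection is generated from the interface mappings for the target data store columns via odiRef.getColList(), which is exactly what ties the KM to one specific Planning application:

# Simplified sketch: the projection is built from the target data store
# columns, so the statement only fits that one application
sql = """select <%=odiRef.getColList("", "[CX_COL_NAME]", ", ", "", "")%>
from <%=odiRef.getFrom()%>
where (1=1) <%=odiRef.getJoin()%> <%=odiRef.getFilter()%> <%=odiRef.getGrpBy()%> <%=odiRef.getHaving()%>"""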

If we want it to load any Hyperion Planning application, we need to change this step so it does not rely on the target data store (which is static information) to know which dimensions to load for a specific application. If you are a reader of this blog, you probably already know where the answer to this problem lies, right? If you thought about the Hyperion Planning metadata repository, you are right! Let’s change this KM to read the metadata repository, figure out which dimensions that application has and load data to it. First let’s create another option in this KM called “PLANNING_SCHEMA”, which will contain the actual database schema name where the Hyperion Planning application resides in Oracle. Then we will add the following query in the source tab of the “Load data into Essbase” step:
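A minimal sketch of such a query, assuming the standard HSP_OBJECT repository table (where OBJECT_TYPE = 2 identifies dimensions) and assuming your source columns are named after the dimensions with spaces replaced by underscores:

SELECT LISTAGG(MAPPING, ',') WITHIN GROUP (ORDER BY 1) MAPPING
FROM (
    -- One "SOURCE_COLUMN "Dimension Name"" pair per dimension of the
    -- application that lives in the PLANNING_SCHEMA schema
    SELECT REPLACE(O.OBJECT_NAME, ' ', '_') || ' "' || O.OBJECT_NAME || '"' AS MAPPING
    FROM <%=odiRef.getOption("PLANNING_SCHEMA")%>.HSP_OBJECT O
    WHERE O.OBJECT_TYPE = 2 -- 2 = dimension in the Planning repository
)

The “PLANNING_SCHEMA” option is resolved by ODI before the query runs, which is what makes the schema name dynamic.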

Before we continue: the LISTAGG function only exists in Oracle Database 11.2 or newer. If your database is on an older version, try the WM_CONCAT function instead. It is undocumented, but it works very well. In that case, you need to change the SQL projection from:
LISTAGG(MAPPING, ',') WITHIN GROUP (ORDER BY 1) MAPPING
to:
WM_CONCAT(MAPPING) MAPPING
You will also need to create a logical schema that points to the database containing the Hyperion Planning repository and add it to this source tab. One very important thing about this technique: it only works when all the Hyperion Planning application repositories are in a single database, because even though the schema name is changed dynamically (giving us the possibility to load any number of applications with a single component), the connection to the metadata database is a single one. Of course, you can further change your process to accept dynamic topology information, as seen in our previous post here, but in this example we will keep things simpler to demonstrate the technique. After you add this query in the source tab, it’s just a matter of changing the target code to something like this:
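A hedged sketch of the rewritten target statement, assuming the source query column is referenced as #MAPPING and the fact column in your source table is called DATA:

# The projection is now the dynamic #MAPPING list built in the source tab;
# the interface's own filter still decides which rows are fetched
sql = """select #MAPPING, DATA "Data"
from <%=odiRef.getFrom()%>
where (1=1) <%=odiRef.getFilter()%>"""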

Our source query will retrieve a single row with one column (referenced in the target as #MAPPING) containing the exact list of dimensions that one specific Planning application has (based on the “PLANNING_SCHEMA” option) and will pass it to the target tab, where it will be used to retrieve the data. But you may be thinking now: what about the source table for this data load? It may have different formats, different names, different sources… it will not work in a dynamic load component. And you are right, it won’t, which is why we need something extra to make this component work: an INBOUND_GENERIC table.
INBOUND_GENERIC is a table that contains all possible columns (each column being a Planning dimension) for all the Planning applications that you will need to load. For example: if you need to load five applications and together they have 15 distinct dimensions, you will have an INBOUND_GENERIC table with 15 columns (one for each dimension name) plus three auxiliary columns: APP_NAME (the Hyperion Planning application name), CUBE (the cube that you will load data to within that application) and INTERFACE_NAME (the name of the job that is loading that data). These three columns give you the flexibility to keep information about different applications, cubes and even inbound jobs in the same table! For example, if you have only one application but several inbound jobs, you may still have one single Essbase load component for your entire architecture, saving you time on any maintenance or new development. A good performance trick here is to partition this table by INTERFACE_NAME, so you may easily truncate a partition before any new inbound job runs (and even run inbound executions in parallel).
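A hypothetical layout for such a table, here with six dimension columns and two example inbound jobs (all names are illustrative):

CREATE TABLE INBOUND_GENERIC (
    APP_NAME        VARCHAR2(80) NOT NULL, -- Planning application name
    CUBE            VARCHAR2(80) NOT NULL, -- cube/plan type inside the app
    INTERFACE_NAME  VARCHAR2(80) NOT NULL, -- inbound job that owns the row
    ACCOUNT         VARCHAR2(80),          -- one column per distinct dimension
    ENTITY          VARCHAR2(80),
    PERIOD          VARCHAR2(80),
    SCENARIO        VARCHAR2(80),
    VERSION         VARCHAR2(80),
    YEAR            VARCHAR2(80),
    DATA            NUMBER                 -- the value for the intersection
)
PARTITION BY LIST (INTERFACE_NAME) (
    PARTITION P_SALES_LOAD   VALUES ('SALES_LOAD'),
    PARTITION P_PAYROLL_LOAD VALUES ('PAYROLL_LOAD')
);

-- Each job can then clean only its own slice before running:
ALTER TABLE INBOUND_GENERIC TRUNCATE PARTITION P_SALES_LOAD;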
You may also be thinking now: OK, I have 10 inbound jobs for 3 different applications. I may create this INBOUND_GENERIC table, load all data into it and have it loaded by this generic Essbase load component, but I’ll still need to create 10 inbound interfaces to load the INBOUND_GENERIC table, right? Yes, you will, but it is worth it. Here are some good points of this architecture:
- You will have one single component that loads data to Essbase, so any future maintenance will be done in just one place, not in several;
- Since it is modular, it can be added to any other inbound interface easily;
- ODI constraints work great in this architecture. Think about it: you may have one constraint for each column and validate it against the Planning metadata repository to check that all data is correct before it is loaded to Essbase, which means no more “Unknown Members” (see our post about it here); a sketch of such a check follows this list.
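As an illustration of that last point, here is a hedged sketch of the kind of check an ODI condition could run for one dimension column (schema and names are hypothetical):

-- Rows whose ACCOUNT member does not exist in the Planning repository
-- would be rejected to the error table instead of reaching Essbase
SELECT G.ACCOUNT
FROM INBOUND_GENERIC G
WHERE G.INTERFACE_NAME = 'SALES_LOAD'
  AND NOT EXISTS (
      SELECT 1
      FROM PLANAPP1.HSP_OBJECT O -- repository schema of the target application
      WHERE O.OBJECT_NAME = G.ACCOUNT
  );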
After those changes, you should end up with an interface more or less like this:

It doesn’t matter what you do in your mapping here, because the KM will build the columns based on the Planning metadata repository. It is also important to notice that no transformation is done in this component, so be sure to do your ETL when you load the INBOUND_GENERIC table. In summary, this component assumes that the data is already validated, transformed and ready to be consumed by Essbase. The filter here indicates which application/cube/interface you are currently loading.
Now it is just a matter of adding this interface to an ODI scenario and adding the following ODI input variables to it:
- APP_NAME: Planning application name;
- CUBE: Planning cube/database name;
- PLANNING_SCHEMA: Oracle schema name that contains the Planning metadata repository;
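For example, a parent package could call this scenario through ODI’s OdiStartScen tool, passing the three variables (the scenario name, project code and values below are hypothetical):

OdiStartScen "-SCEN_NAME=DYNAMIC_ESSBASE_LOAD" "-SCEN_VERSION=001" "-SYNC_MODE=1" "-MYPROJ.APP_NAME=PLANAPP1" "-MYPROJ.CUBE=Plan1" "-MYPROJ.PLANNING_SCHEMA=PLANAPP1_MD"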
Then you may loop this scenario for any number of applications as you may see in our post about it here.
Thanks everyone! I hope you have enjoyed it!