10 Important Things to Improve ODI Integrations with Hyperion Planning Part 9 (Generic Data Load)
Hi all, today let’s see how we can create dynamic data load components for any number of Hyperion Planning applications. As you know, all data related to Hyperion Planning actually resides in an Essbase database, and ODI has a Knowledge Module that loads data into it called “IKM SQL to Hyperion Essbase (DATA)”. In this post we will tweak this Knowledge Module so that it reads the Planning metadata repository and dynamically builds its own data store mappings and definitions depending on which Hyperion Planning application it will load data into. But first let’s take a look at how the original “IKM SQL to Hyperion Essbase (DATA)” works:
As you can see, it is a fairly simple KM with two main steps: “Prepare for loading” and “Load data into Essbase”. If you open the “Prepare for loading” step you will see some Jython code there that gathers all the configurations and options to be used in your data load, and since our objective here is to make it dynamic enough to load any Hyperion Planning application, we should focus on the settings below:
These settings get all the connection information related to the target data store of the ODI Interface component. If you have already worked with Hyperion Planning and ODI, you know that the target data store contains all dimensions from a particular Hyperion Planning application plus a column named “Data” that holds the value for that particular intersection. This data store belongs to a data model, which is connected to a logical schema that is tied (by a context) to a certain physical schema containing the application server/user/password/application/database.
In this example we will consider that we have five Hyperion Planning applications, all of them residing on the same server and using the same Essbase username and password, but if your architecture is different from that, please feel free to customize this information as well. In order to make this KM dynamic, let’s add two more options to it, called “APPLICATION” and “CUBE”, and modify the application and database information code to get their values from these options instead of from the target data store, like this:
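Just to make it concrete, below is a minimal sketch of that tweak in the “Prepare for loading” step. The original variable names in your KM version may differ a little, so take it as an illustration rather than the exact KM source:

# Original "Prepare for loading" reads the application/database from the target data store, e.g.:
#   application = odiRef.getInfo("DEST_CATALOG")
#   database    = odiRef.getInfo("DEST_SCHEMA")
# Tweaked version: resolve them from the new KM options instead. ODI replaces the
# <%=odiRef.getOption(...)%> tags with the option values, which will hold our ODI variables.
application = '<%=odiRef.getOption("APPLICATION")%>'
database = '<%=odiRef.getOption("CUBE")%>'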
These two options will contain two ODI variables that indicate which application/cube you will be loading. Keep those two variables in mind, as we will talk more about them later in the post. Now, if you open the “Load data into Essbase” step, you will see that it is responsible for actually fetching the data from a table based on the source and target data store columns and mappings, as below:
If we want it to load any Hyperion Planning application, we need to change this so it does not rely on the target data store (which is static information) to know which dimensions to load for a specific application. If you are a reader of this blog, you probably already know where the answer to this problem lies, right? If you thought about the Hyperion Planning metadata repository, you are right! Let’s change this KM to read the metadata repository, figure out which dimensions that application has and load data to it. First let’s create another option in this KM called “PLANNING_SCHEMA”, which will contain the actual database schema name where the Hyperion Planning application resides in Oracle. Then we will add the following query in the source tab of the “Load data into Essbase” step:
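Since the screenshot with the query is not reproduced here, the sketch below shows the general idea of that “Command on Source” SQL: read the application’s dimension list from the Planning repository and aggregate it into a single #MAPPING column. It assumes the dimensions can be read from the HSP_OBJECT metadata table (OBJECT_TYPE = 2); depending on your Planning version you may need extra filters:

SELECT LISTAGG(MAPPING, ',') WITHIN GROUP (ORDER BY 1) MAPPING
FROM (
      -- one row per dimension: the INBOUND_GENERIC column name plus the Essbase column alias
      SELECT UPPER(OBJECT_NAME) || ' "' || OBJECT_NAME || '"' MAPPING
      FROM   <%=odiRef.getOption("PLANNING_SCHEMA")%>.HSP_OBJECT
      WHERE  OBJECT_TYPE = 2          -- 2 = dimension in the Planning repository
      UNION ALL
      SELECT 'DATA "Data"' FROM DUAL  -- the data value column itself
     )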
Before we continue, note that the LISTAGG command only exists in Oracle DB 11.2 or newer. If your DB is an older version, please try the WM_CONCAT command instead. It is an undocumented command, but it works very well. You need to change the SQL projection from:
LISTAGG(MAPPING, ',') WITHIN GROUP (ORDER BY 1) MAPPING
to:
WM_CONCAT(MAPPING) MAPPING
You will also need to create and add to this source tab a logical schema that points to the database containing the Hyperion Planning repository. One very important thing about this technique: it only works when all Hyperion Planning application repositories are in a single database, because even though the schema name is changed dynamically, giving us the possibility to load any number of applications with a single component, the connection to the metadata database will be a single one. Of course you can further change your process to accept dynamic topology information as seen in our previous post here, but in this example we will keep things simpler to demonstrate the technique. After you add this query in source, it’s just a matter of changing the target code to something like this:
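The target code itself was shown as an image in the original post, so here is only a sketch of what it looks like after the change: the fixed column list is replaced by #MAPPING (returned by the “Command on Source” query), while the table name and filters still come from the regular ODI substitution methods. Everything that follows (getting the source connection and feeding the result set to the Essbase loader) stays as in the original KM:

sql = """select #MAPPING
from <%=odiRef.getFrom()%>
where (1=1) <%=odiRef.getJoin()%> <%=odiRef.getFilter()%>"""

srcCx = odiRef.getJDBCConnection("SRC")   # source connection, now pointing to the Planning/Inbound dataserver
stmt = srcCx.createStatement()
rs = stmt.executeQuery(sql)               # the rest of the original KM step consumes this result set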
Our source query will retrieve one row only (called #MAPPING) with the exact number of dimensions that one specific Planning application has (based on the “PLANNING_SCHEMA” option) and will pass it to the target tab, where it will be used to retrieve the data. But you may be thinking now: what about the source table for this data load? It may have different formats, different names, different sources… it will not work in a dynamic load component… and you are right, it won’t. That’s why we need something extra to make this component work: an INBOUND_GENERIC table.
INBOUND_GENERIC is a table that contains all possible columns (each column being a Planning dimension) for all the Planning applications that you will need to load. For example: if you need to load five applications and together they have 15 distinct dimensions, you will have an INBOUND_GENERIC table with 15 columns (one for each dimension name) plus three auxiliary columns: APP_NAME (the Hyperion Planning application name), CUBE (the cube that you will load data into within that application) and INTERFACE_NAME (the name of the job that is loading that data). These three columns give you the flexibility to keep information about different applications, cubes and even inbound jobs in the same table! For example, if you have only one application but several inbound jobs, you may still have one single Essbase load component for your entire architecture, saving you time on any maintenance or new development. A good performance trick here is to have this table partitioned by INTERFACE_NAME, so you may easily truncate the partition before any new inbound job runs (and even run inbound jobs in parallel).
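As a reference, a simplified DDL for such a table could look like the sketch below. The HYP_ETL schema, the dimension columns and the partition names are just an example set; the list partitioning on INTERFACE_NAME is what allows the per-job truncate mentioned above:

CREATE TABLE HYP_ETL.INBOUND_GENERIC
( APP_NAME        VARCHAR2(80) NOT NULL  -- Hyperion Planning application name
, CUBE_NAME       VARCHAR2(80) NOT NULL  -- cube (plan type) within that application
, INTERFACE_NAME  VARCHAR2(80) NOT NULL  -- inbound job that loaded the row
, ACCOUNT         VARCHAR2(80)           -- one column per distinct dimension across all apps...
, ENTITY          VARCHAR2(80)
, PERIOD          VARCHAR2(80)
, SCENARIO        VARCHAR2(80)
, VERSION         VARCHAR2(80)
, YEARS           VARCHAR2(80)
, DATA            NUMBER                 -- the value for the intersection
)
PARTITION BY LIST (INTERFACE_NAME)
( PARTITION P_SALES_LOAD   VALUES ('INT_SALES_LOAD')
, PARTITION P_FINANCE_LOAD VALUES ('INT_FINANCE_LOAD')
);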
You may also be thinking now: OK, I have 10 inbound jobs for 3 different applications. I may create this INBOUND_GENERIC table, load all data into it and have it loaded by this generic Essbase load component, but I’ll still need to create 10 inbound interfaces to load the INBOUND_GENERIC table, right? Yes, you will, but it is worth it. Here are some good points of this architecture:
- You will have one single component that loads data to Essbase, so all possible future maintenance will be done in just one place, not in several;
- Since it is modular, it can be added to any other inbound interface easily;
- ODI constraints work great in this architecture. Think about it: you may have one constraint for each column, and you may validate it against the Planning metadata repository in order to check that all data is correct before loading to Essbase, which means no more “Unknown Member” errors (see our post about it here and the sketch right after this list);
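To illustrate that last point, an ODI condition constraint on INBOUND_GENERIC could be expressed roughly like the sketch below. The PLN_APP1 schema is hypothetical, and a real check would also restrict the lookup to the right application and dimension; the point is just that the validation happens against the Planning repository before Essbase ever sees the data:

-- condition constraint on the ACCOUNT column: a row is valid only if the member exists in Planning
INBOUND_GENERIC.ACCOUNT IN
  (SELECT O.OBJECT_NAME
   FROM   PLN_APP1.HSP_OBJECT O)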
After those changes, you should end up with an interface more or less like this:
It doesn’t matter what you do in your mapping here, because the KM will build the columns based on the Planning metadata repository. It is also important to notice that no transformation will be done in this component, so be sure to do your ETL when you load the INBOUND_GENERIC table. In summary, this component assumes that the data is already validated, transformed and ready to be consumed by Essbase. The filter here contains which application/cube/interface you are currently loading.
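For reference, a filter along the lines of the sketch below restricts the load to a single application/cube/interface combination (the variable names are the ones described next, and INTERFACE_NAME could be either another variable or a fixed literal per interface):

INBOUND_GENERIC.APP_NAME = '#APP_NAME'
AND INBOUND_GENERIC.CUBE_NAME = '#CUBE'
AND INBOUND_GENERIC.INTERFACE_NAME = 'MY_INBOUND_JOB'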
Now it is just a matter of adding this interface to an ODI scenario and adding the following ODI input variables to it:
- APP_NAME: Planning application name;
- CUBE: Planning cube/database name;
- PLANNING_SCHEMA: Oracle schema name which contains the Planning Metadata installation;
Then you may loop this scenario for any number of applications as you may see in our post about it here.
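If it helps to visualize the looping part, one common pattern (just a sketch; the scenario, project, variable and table names are made up here) is an ODI procedure step whose “Command on Source” lists the applications to load and whose “Command on Target” starts the generic scenario once per returned row with the OdiStartScen tool:

Command on Source (Oracle technology):
SELECT APP_NAME, CUBE_NAME, PLANNING_SCHEMA FROM HYP_ETL.PLANNING_APPS

Command on Target (ODI Tools technology):
OdiStartScen "-SCEN_NAME=GENERIC_ESSBASE_DATA_LOAD" "-SCEN_VERSION=001" "-SYNC_MODE=1" "-MYPROJECT.APP_NAME=#APP_NAME" "-MYPROJECT.CUBE=#CUBE_NAME" "-MYPROJECT.PLANNING_SCHEMA=#PLANNING_SCHEMA"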
Thanks everyone! I hope you have enjoyed it!
July 22, 2014 at 7:07 am
Hi, I tried using the code you have provided above, but it fails in the target code as #MAPPING is not getting evaluated. Please suggest.
July 22, 2014 at 9:32 am
Hi Preksha, how are you? Two questions: did the query run in your SQL Developer? Also, which version is the database where Planning is?
Thanks.
July 23, 2014 at 2:45 am
Hi, I am good. Your blog is of great help for people who are new to ODI 🙂
My source query runs fine and gives all the dimensions of a cube as output. But the result #MAPPING is not getting evaluated in the target tab. I have not changed the technology in the target tab… should it be Jython or Oracle? Please suggest.
The Planning DB version is Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production. Thanks.
July 23, 2014 at 4:59 pm
Hi Preksha, it’s good to know that our blog is helping you. The target technology must be Jython; I think this is your only problem. The DB version is good, and if your query is bringing the dimensions, it should work after you change the technology.
Thanks!
July 24, 2014 at 8:02 am
Hi, I have set the technology as Jython, but I am still not able to execute.
Following is my evaluated target query:
sql= """select #MAPPING
from SBM_DSF.INBOUND_GENERIC INBOUND_GENERIC
where (1=1) And (INBOUND_GENERIC.APP_NAME=#HYPERION.APP_NAME and INBOUND_GENERIC.CUBE_NAME=#HYPERION.CUBE_NAME and INBOUND_GENERIC.INTERFACE_NAME=#HYPERION.INTERFACE_NAME) """
I am getting the following error:
java.sql.SQLSyntaxErrorException: java.sql.SQLSyntaxErrorException: ORA-00923: FROM keyword not found where expected
Please help. Thanks.
July 24, 2014 at 8:08 am
Can you also send us the result of #MAPPING? Maybe the result of your source tab is returning something that is breaking the SQL in the target. Just execute the source SQL and send us the result to see if we can help. Thanks!
July 24, 2014 at 8:52 am
Following is the result of the source query:
ACCOUNT "Account",ATTRIBUTEVALUE "AttributeValue",CHART OF ACCOUNT "Chart of Account",CURRENCY "Currency",CUSTOM1 "Custom1",CUSTOM2 "Custom2",DATA "Data",ENTITY "Entity",PERIOD "Period",SCENARIO "Scenario",STRATEGY "Strategy",VERSION "Version",YEARS "Years"
Thanks.
July 24, 2014 at 10:07 am
The problem here is the CHART OF ACCOUNT dimension: it contains spaces in its name. You will need to change the source SQL to enclose the “source” columns in double quotes as well, like this:
Change this in your source SQL:
select UPPER(OBJECT_NAME)
to:
select '"'||UPPER(OBJECT_NAME)||'"'
This will return something like this:
"CHART OF ACCOUNT" "Chart of Account"
Then it should work 🙂
Thanks!
July 24, 2014 at 11:28 am
Hi, really appreciate your help. I am able to get past the first error, but now the table INBOUND_GENERIC is not getting recognized:
java.sql.SQLSyntaxErrorException: java.sql.SQLSyntaxErrorException: ORA-00942: table or view does not exist.
Thanks Again!!
July 24, 2014 at 11:37 am
This one is probably some database grant issue on this table. Probably the dataserver user that you added in ODI Topology does not have access to it. Please verify that and let us know. You may also test this connection outside of ODI with your SQL tool, connecting with the dataserver user and trying to query the INBOUND_GENERIC table. Hope it helps.
July 28, 2014 at 4:32 am
Hi,
I have checked the topology and have tested the connection outside ODI. I am able to connect.
I have granted all permissions on the table itself.
Still I face the same error 😦
-Preksha.
July 28, 2014 at 10:54 am
Hmm… are your Planning schema and Inbound schema in the same dataserver? If not, then this may be the problem. You see, we modified the “Command on Source” tab to add a SQL there that points to the Planning schema dataserver, so it effectively requires the Inbound schema to be in the same dataserver as well, since the KM uses the srcCx = odiRef.getJDBCConnection("SRC") API to get the source connection (which will be the Planning database connection). Look at this example (hope it clarifies):
Say you have your Planning schema as PLANNING_SCHEMA and your Inbound schema as HYP_ETL in the same dataserver. You may use another schema such as ODI_RUN to connect to the dataserver, and this ODI_RUN needs to have permission on all necessary Planning tables in PLANNING_SCHEMA and also select/insert on the HYP_ETL.INBOUND_GENERIC table. Does it make sense? In summary, you need to have the three schemas in the same database, and the connection schema (ODI_RUN) needs to have access to both the PLANNING_SCHEMA and HYP_ETL schemas.
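Just to illustrate those grants (a sketch only, assuming the example schema names above and the HSP_ metadata tables used by the source query):

-- run as the schema owners (or a DBA); ODI_RUN is the user configured in the ODI dataserver
GRANT SELECT ON PLANNING_SCHEMA.HSP_OBJECT TO ODI_RUN;
GRANT SELECT, INSERT ON HYP_ETL.INBOUND_GENERIC TO ODI_RUN;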
Hope it helps.
Thanks!
July 30, 2014 at 4:12 am
It Worked 😀
Thanks a lot!!!!
September 23, 2014 at 7:20 pm
Hello, thanks for the wonderful approach. I still need clarification on how the mapping takes place between source and target. I noticed the evaluated target query in Preksha’s comment; where was that query specified? Please enlighten me on this, as I am a newbie at ODI.
September 25, 2014 at 1:53 pm
The target query has two parts. The first one is the #MAPPING variable, which represents all the necessary columns for you to dynamically load a Hyperion Planning app; this #MAPPING information comes from the result of the “Command on Source” query. Everything else (like table name, filters and so on) comes from what you have specified in the ODI Interface object (inside its Mapping main tab). Hope it helps.