Archive for the Hyperion Planning Category

We’ll be at KScope 14

Posted in EPM, Hyperion Planning, Kscope 14, ODI, ODI Architecture on March 18, 2014 by RZGiampaoli

Hi guys, how are you doing? We are very happy and proud to announce that our article "Unleashing Hyperion Planning Security using ODI" was approved for KScope 14! We'll be there to talk about one of our favorite subject areas: how to improve the potential of the EPM tools using ODI. We'll be very pleased if you show up at our presentation. It'll be great to see everyone there and talk about EPM and other cool stuff.

In our session we'll show some tricks to unleash Planning security and enable it to be created at cell level, automatically, using any attribute dimension or UDA.

You can expect a lot of technical information, a new way to view and work with Hyperion Planning and ODI, plus some real facts about what happened after a successful implementation of this approach.

And remember, if you sign up by March 25th you'll take advantage of the Kscope early bird rates, so don't waste any more time and come be part of the greatest EPM event in the world.

Thank you very much everybody, and we'll be waiting for you at Kscope 14.

Kscope 14 Speaker

10 Important Things to Improve ODI Integrations with Hyperion Planning Part 8 (Building Planning DataStore)

Posted in EPM, Hyperion Planning, ODI, ODI Architecture, ODI Mapping on March 10, 2014 by RZGiampaoli

In order to create a process that is able to load any application and dimension using one single ODI interface, we need to make some code changes to the KM that is responsible for loading metadata into Hyperion Planning. First, we need to understand the ODI concept of a KM. A KM is a set of instructions that takes the information from the source and target data stores of an ODI interface and constructs a SQL command based on those data stores. In a nutshell, an ODI KM is a code generator driven by the information that you set in the interfaces, data stores, topology and so on.

As we know, the default Hyperion Planning integration KM is able to load only one application and dimension at a time because it needs a target data store for each dimension in each application. If we take a deeper look into the KM to see what it does behind the scenes, we will see something like this:
Figure 1 – KM behind the scenes.

Basically what the KM does is translate the Planning application data store into a SQL query, and, as we know, we get this data store by reversing a Planning application inside ODI. Fair enough, but this also means that if we could somehow get the same information that ODI uses to reverse an application dimension into a data store, we could easily end up with the same SQL created from that data store information. As we already showed before, the Planning application repository itself stores all the information about a Hyperion application. We only need to read this information to get the same information provided by the ODI data store.
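
Just to illustrate the idea, here is a minimal sketch of reading a dimension's members straight from the application repository (PLANAPP stands for your application's repository schema; HSP_OBJECT with its OBJECT_ID/OBJECT_NAME/PARENT_ID columns is the standard repository table we have been using in this series, but details may vary between Planning versions):

-- Walk a dimension hierarchy directly in the Planning repository,
-- returning each member with its parent name.
SELECT PRIOR O.OBJECT_NAME AS PARENT_NAME,
       O.OBJECT_NAME       AS MEMBER_NAME
FROM   PLANAPP.HSP_OBJECT O
START WITH O.OBJECT_NAME = 'Account'          -- the dimension root member
CONNECT BY PRIOR O.OBJECT_ID = O.PARENT_ID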

Knowing this, the only thing left is to change the default KM according to our needs, and for this we need to make three changes to it:

  • Make the application name that is going to be loaded dynamic;
  • Make the dimension name that is going to be loaded dynamic;
  • Change the way that the KM builds the SQL command that loads metadata into Hyperion Planning. Currently it builds this SQL command based on the source and target data stores and the interface mappings;

Figure 2 – Default KM behind the scenes.

In Figure 2 we can see how a default Planning integration KM works. Basically it has two main steps: "Prepare for loading" and "Load data into planning". The first one is responsible for setting all information regarding connections, log paths, load options and so on. The second step is responsible for retrieving all source data, based on the interface mappings and the source/target data stores, and loading it into Planning. In our case, the application and dimension names reside in the first step and the SQL command resides in the second step, so we already know where we need to change the code.

But we need to analyze further to know exactly what we need to change. ODI gets the application name from the "<%=snpRef.getInfo("DEST_CATALOG")%>" API function, which returns the application name based on the destination data store, which is connected to a logical schema that finally resolves into a physical schema containing the application name itself. If we change it to an ODI variable, we will be able to encapsulate this interface into an ODI package and loop it, passing the application name as a parameter, making it independent of the target data store topology information and giving us the ability to load any Hyperion Planning application using one single interface.

The dimension name follows the same logic: ODI gets it from the "<%=snpRef.getTargetTable("RES_NAME")%>" API function, which returns the resource name of the target data store, which in this case is the dimension name itself. Again, if we change it to an ODI variable, we will be able to encapsulate this interface into an ODI package and loop it, passing the dimension name as a parameter, making it independent of the target data store resource name and enabling us to load any dimension with one interface.

The third part is the most complex one. ODI data stores for Planning applications are so different from one dimension to another that they require one data store object for each dimension. In Figure 2 we can see that ODI relies on the "odiRef.getColList" API command to return all mappings done in the target dimension data store, which has the correct format required to load that dimension's metadata into Planning.

So the big question is: how can we change the "Load data into planning" step to use dynamic SQL that creates dynamic interface mappings, able to load any application/dimension? The answer is to rely again on the "Command on Source/Target" concept and on the Planning repository metadata.

Instead of getting the mapping information from the ODI data store object, we can query the Planning repository to get the same mapping for all dimensions and applications being loaded. The result of this query is a formatted mapping, identical to what ODI would have generated if we used the default Planning development, but with the big advantage of being entirely dynamic for any application and dimension.

Figure 3 – Dynamic KM behind the scenes.

In Figure 3 we can see an example using an attribute dimension. The command on source queries HSP_OBJECT and HSP_ATTRIBUTE_DIM of a given application (defined by the #SCHEMA_APP variable) to retrieve information about one attribute dimension (defined by the #DIMENSION variable). Those variables are passed from an external ODI package that will be used to loop over all applications and dimensions that we want to load.

Table 1 – Dimensions data store information.

If we take a further look into all the different data stores that a Planning application can have, we will see a pattern in the information that we need to send to Planning to load metadata for each dimension, as we can see in Table 1.

The logic to create the dynamic mapping columns is exactly the same used to create the inbound and extract tables. The only difference is that for the inbound and extract tables we need to put all columns together, while for the KM mapping we need to take the right information from the application repository depending on the selected dimension. This information helps us create the necessary mapping containing the right source columns and the right aliases for those columns, which tell Planning what each metadata column stands for.

Since our metadata tie out table contains standard columns for all dimensions, we don't need to worry about adjustments when we change to another dimension, and since our source metadata table already has the metadata in the correct Planning format, we don't even need any kind of transformation here: it is just a matter of reading from the metadata source table and loading directly into Planning.

In the Figure 3 example we use SRC_MEMBER, SRC_PARENT and SRC_ALIAS as the mapping columns, and of the Planning aliases the only dynamic one is the member name alias, which identifies the dimension name. To get this information we need to query the application repository, looking for the attribute dimensions in HSP_ATTRIBUTE_DIM and for their names in the HSP_OBJECT table, where the OBJECT_NAME column gives us the dimension name alias.
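
To give an idea of what this "Command on Source" query could look like for an attribute dimension, here is a minimal sketch (#SCHEMA_APP and #DIMENSION are the ODI variables described above; the HSP_ATTRIBUTE_DIM join column is an assumption and may vary between Planning versions):

-- Build the mapping string for one attribute dimension of one application.
SELECT 'SRC_MEMBER "' || O.OBJECT_NAME || '", '
       || 'SRC_PARENT "Parent", '
       || 'SRC_ALIAS "Alias: Default"' AS MODEL
FROM   #SCHEMA_APP.HSP_OBJECT O
       INNER JOIN #SCHEMA_APP.HSP_ATTRIBUTE_DIM A
               ON A.ATTR_ID = O.OBJECT_ID      -- join column is an assumption
WHERE  O.OBJECT_NAME = '#DIMENSION'

For an attribute dimension called "Region", for example, this would return something like: SRC_MEMBER "Region", SRC_PARENT "Parent", SRC_ALIAS "Alias: Default".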

Executing this query, we get a one-line mapping string that is passed as a parameter (#MODEL) from the "Command on Source" to the "Command on Target" and enables ODI to load metadata to that specific dimension/application. If we execute this interface and look at the query created in ODI Operator, we will see that the result is the same that a default KM would create, but with the big advantage of being entirely dynamic. Following this logic, we would only need to change the values of the #SCHEMA_APP and #DIMENSION variables to load another application/dimension.
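
Just to illustrate the other side, the "Command on Target" could consume that string more or less like this (the source table name and the extra filters are assumptions; in our case the source is the metadata tie out table described in part 7 of this series):

-- The #MODEL string replaces the column list that odiRef.getColList
-- would normally generate for the target data store.
SELECT #MODEL
FROM   METADATA_TIE_OUT
WHERE  DIMENSION = '#DIMENSION'
AND    CONDITION <> 'Match'      -- only members that actually changed
                                 -- (plus a filter on the application being loaded)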

Of course we need to work a little more to get the mapping for the other dimensions, such as Account or Entity, but the idea is always the same: query the application repository to get the data store information depending on the dimension/application selected.

Table 2 – Dimensions mapping information.

In Table 2 we can see all the possible mapping combinations that we can have in a Planning application for the main Planning dimensions, and we notice that some information is dynamic (dependent on the Planning repository) and some is fixed. To put everything together in one single query, here are some tips:

  • The majority of the columns are fixed and can be obtained with a simple "select 'Any string' from dual";
  • The easiest way to create this SQL is to create separate SQLs for each different kind of information and put everything together using UNION statements;
  • Split the final query into small queries that return the different categories presented in Table 2;
  • Use the MULTI_CURRENCY column in the HSP_SYSTEMCFG table to find out whether the application is a multi-currency one or not;
  • For the aggregation and plan type mappings we need the name of the plan type itself, and for this we use the HSP_PLAN_TYPE table;
  • When the query is ready, add a filter clause to restrict it to the dimension that the information belongs to (see the sketch below);
With the query ready, the only missing step is to insert it into the "Command on Source" tab inside the Planning IKM and pass the string generated by it to the "Command on Target" tab, as we can see in Figure 3.

This ends all the preparation that we need before learning how to build an ODI interface that dynamically loads metadata into any number of Planning applications.

Thank you and see you in the next post.

10 Important Things to Improve ODI Integrations with Hyperion Planning Part 7 (Smart Metadata Loading)

Posted in EPM, Hyperion Planning, ODI Architecture on September 13, 2013 by RZGiampaoli

Hello everybody. The time has arrived to put some intelligence behind our metadata load. After some years working with Hyperion Planning, ODI and DRM (or any other metadata repository), I figured out that 90% of the metadata does not change during the monthly maintenance cycle (in a normal Planning application). That means 90% of the time that a metadata integration takes is wasted. That is a lot of time if you are maintaining a big client like one of mine, where the maintenance cycle took more than 8 hours for all their regions.

Luckily for them, I figured out a very effective and easy way to decrease that time, and now it takes less than 30 minutes for the entire maintenance cycle. Basically, I developed a method that categorizes each metadata row in our tables, and based on this category the interface knows what it needs to do with that data. Let's see how it works.

After we have the inbound and extract tables with all metadata from the source and target systems (as we saw in part 5 of our series), we need to compare them and decide what to do with each metadata member. For this tie out process we created the metadata tie out table, which is a merge of both the inbound and extract tables, containing all source and target columns with a prefix identifying each one of them, plus a column called CONDITION. This extra column describes what the metadata load process should do with that particular member. It is important for this table to have all source and target columns because then we can actually see what has changed from the source to the target metadata of that member.
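
To make the structure more concrete, here is a minimal sketch of what such a tie out table could look like (names and types are assumptions; in practice it has one SRC_/TGT_ column pair for every metadata column that we extract and load):

-- A minimal metadata tie out table: source columns, target columns and CONDITION.
CREATE TABLE METADATA_TIE_OUT (
  APP_NAME        VARCHAR2(80),   -- Planning application
  DIMENSION       VARCHAR2(80),   -- dimension of the member
  SRC_PARENT      VARCHAR2(80),   -- columns coming from the inbound (source) table
  SRC_MEMBER      VARCHAR2(80),
  SRC_ALIAS       VARCHAR2(80),
  SRC_DATASTORAGE VARCHAR2(30),
  TGT_PARENT      VARCHAR2(80),   -- columns coming from the extract (Planning) table
  TGT_MEMBER      VARCHAR2(80),
  TGT_ALIAS       VARCHAR2(80),
  TGT_DATASTORAGE VARCHAR2(30),
  CONDITION       VARCHAR2(40)    -- what the load process should do with this row
);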

The metadata tie out process is responsible for reading both the inbound and extract tables and populating the metadata tie out table with all source, extract and CONDITION information. The tie out process has built-in intelligence that analyzes several different load situations for each member and stores the final result in the CONDITION column. The tie out process always searches for a parent/member/application/dimension combination in the source table and matches it to the parent/member/application/dimension in the target table. The process uses this combination because it is the information that represents a unique member in Planning.

Here are the possible CONDITION statuses created by the tie out process:

  • Match: All metadata information from the inbound source table is equal to the extract table information, so no further action is needed.
  • No Match: At least one column from the inbound source table differs from the extract table information. The member needs to be updated in the target Planning application.
  • Exists only in Source: The member is new and exists only in the inbound source metadata table, so it needs to be loaded to the Planning application.
  • Exists only in the Application: The member was deleted in the source system but still remains in the Planning application. For those cases we created a "Deleted Hierarchy" member and move the deleted members under it. The process doesn't physically delete the member, so the data associated with it stays intact.
  • Moved Member: The member moved from one parent to another and needs to be updated in the Planning application.
  • Changed Attribute member: An attribute member was moved from its parent to another parent.
  • Reorder sibling members: A new member needs to be inserted in the place where another member previously was, or a member changed order with one of its siblings.
  • Deleted Share Members: A shared member no longer exists in the inbound table and needs to be deleted from the Planning application.

The first four CONDITION statuses are achieved by a "Full Outer Join" between the inbound and the extract tables and a "Case When" to define the CONDITION column, as we can see below:

Tie out query example
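
Since the query image is not reproduced here, here is a minimal sketch of the kind of statement it represents, using the tie out table sketched above (the inbound/extract table names and the compared columns are assumptions; a real implementation compares every metadata column):

-- Populate the tie out table and derive the first four CONDITION statuses.
INSERT INTO METADATA_TIE_OUT        -- columns in the same order as the sketch above
SELECT NVL(SRC.APP_NAME, TGT.APP_NAME),
       NVL(SRC.DIMENSION, TGT.DIMENSION),
       SRC.PARENT, SRC.MEMBER, SRC.ALIAS, SRC.DATASTORAGE,
       TGT.PARENT, TGT.MEMBER, TGT.ALIAS, TGT.DATASTORAGE,
       CASE
         WHEN TGT.MEMBER IS NULL THEN 'Exists only in Source'
         WHEN SRC.MEMBER IS NULL THEN 'Exists only in the Application'
         WHEN NVL(SRC.ALIAS, '~') = NVL(TGT.ALIAS, '~')
          AND NVL(SRC.DATASTORAGE, '~') = NVL(TGT.DATASTORAGE, '~')
              THEN 'Match'          -- in practice, compare every metadata column here
         ELSE 'No Match'
       END AS CONDITION
FROM   INBOUND_METADATA SRC
       FULL OUTER JOIN EXTRACT_METADATA TGT
         ON  SRC.APP_NAME  = TGT.APP_NAME
         AND SRC.DIMENSION = TGT.DIMENSION
         AND SRC.PARENT    = TGT.PARENT
         AND SRC.MEMBER    = TGT.MEMBER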

This query compares all metadata columns in the inbound and extract tables to see what has changed and writes into the CONDITION column what the load process should do with that row afterwards. For the other four CONDITION statuses we need to work on the data just created by this query.

  • Moved Members: When we execute the tie out query above we get an unexpected behavior regarding moved members. A moved member is a member that changed from one parent to another. Since the query compares the member and parent names to decide whether that is a new, modified or deleted member, it considers the source member a new member (because it has a new parent) and the extracted member a deleted member (because its parent/member combination does not exist in the source), generating two rows in the tie out table instead of one. To solve this issue the tie out process merges those two rows into a single one (see the sketch after this list). This merge happens for every pair of rows with the same member name where one has the "Exists only in Source" condition and the other the "Exists only in the Application" condition;
  • Changed Attribute Member: Attribute members require special logic because Hyperion Planning treats them differently. When you want to move an attribute member from one parent to another, you first need to delete the member and then insert it back under the new parent. So this is a two-step operation instead of the normal move member operation. When the process deletes the attribute first, Hyperion Planning automatically removes its value from its associated dimension members. If we don't load those associated dimension members again, their attribute values will be missing at the end of the metadata load process. To solve this issue the metadata tie out process searches for all dimension members that have a moved attribute associated with them and changes their condition to NO_MATCH. This guarantees that after moving the attribute to a new parent the process also loads all those dimension members again with their attribute values. Another particularity with attributes is that if an attribute no longer exists in the source system it is deleted from the Planning application. It is not moved to a deleted hierarchy because no data is associated directly with the attribute member, thus no data is lost;
  • Reorder sibling members: When a single member is added to an existing parent member and this parent has other child members, Planning adds the new member at the end of the list. This is because Hyperion Planning doesn't have enough information to know in which order to insert this new member, as it does not have its siblings' orders to compare to it. So the tie out process also searches for all existing siblings of the new member and marks them as NO_MATCH to indicate that they should all be loaded together. This way Hyperion Planning will have all the siblings' orders and will load the members in the correct order;
  • Deleted Share Members: If a shared member ceases to exist in the source metadata, it is removed completely from the Planning application. There is no reason to move it to a deleted hierarchy member because no data is associated directly with it.
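
To make the moved member merge from the first bullet a bit more concrete, here is a minimal sketch of it in SQL, using the tie out table names sketched earlier (all names are assumptions):

-- Mark the source row of a moved member and drop its duplicate "deleted" row.
-- In practice you would also copy the TGT_ columns from the row being removed.
UPDATE METADATA_TIE_OUT T
SET    T.CONDITION = 'Moved Member'
WHERE  T.CONDITION = 'Exists only in Source'
AND    EXISTS (SELECT 1
               FROM   METADATA_TIE_OUT D
               WHERE  D.APP_NAME   = T.APP_NAME
               AND    D.DIMENSION  = T.DIMENSION
               AND    D.TGT_MEMBER = T.SRC_MEMBER
               AND    D.CONDITION  = 'Exists only in the Application');

DELETE FROM METADATA_TIE_OUT D
WHERE  D.CONDITION = 'Exists only in the Application'
AND    EXISTS (SELECT 1
               FROM   METADATA_TIE_OUT T
               WHERE  T.APP_NAME   = D.APP_NAME
               AND    T.DIMENSION  = D.DIMENSION
               AND    T.SRC_MEMBER = D.TGT_MEMBER
               AND    T.CONDITION  = 'Moved Member');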

When the tie out process finishes populating the metadata tie out table, we have all the information needed to load only the necessary members to Planning. As this table is centralized and has all applications and dimensions in it, it is just a matter of looping over it for every application and dimension that needs to be loaded by the generic load component. To accomplish this, the next post will show how to make the KM and the ODI models dynamic enough to handle it.

See you next time.

10 Important Things to Improve ODI Integrations with Hyperion Planning Part 6 (Metadata validation when loading data to Hyperion Planning)

Posted in EPM, Hyperion Planning, ODI Architecture on July 25, 2013 by radk00

Hi all! It's good to be back! As we have seen in our last post, we can easily extract all existing metadata from any number of Planning applications, but what can we do with all this information? Well, we can do a lot of great stuff. One of them, as mentioned in the last post, is to compare the existing metadata to the new metadata that we will load to Planning and load just what has changed. This will be covered in detail in a later post. Today I'll be talking about a simpler but very powerful use of this existing metadata information: metadata validation when loading data to Hyperion Planning!

Everyone that has loaded data into Hyperion Planning using ODI has already been through this situation: you get some data to load into Planning and let's say that this data load takes five minutes to complete. You are happy, the business team is happy, but for some reason on a random day the data load takes six hours to complete. The end user complains, you go check the logs and you find something like this:

2009-07-30 18:00:52,234 DEBUG [DwgCmdExecutionThread]: Error occurred in sending record chunk…Cannot end data load. Analytic Server Error(1003014): Unknown Member [ACT001] in Data Load, [0] Records Completed
2009-07-30 18:00:52,234 DEBUG [DwgCmdExecutionThread]: Sending data record by record to Essbase

This error happens when you are trying to send data using a member that is not part of your outline. ODI is great for loading data into Hyperion Planning, but it has a weird behavior when you have an unknown member in your data load. In a perfect world, ODI reads its source database, gets a big chunk of data and sends it straight to Essbase. Essbase processes it, sends an OK to ODI and ODI sends another big chunk of data. This works pretty fast because it loads big chunks of data at a time, but if you have an unknown member in the data load, Essbase will send ODI an error stating that there is one unknown member in that data chunk and ODI will switch to "record by record" mode. In this mode ODI will not send a chunk of data; it will send record by record to Essbase, and this may take forever depending on how much data you have to load.

I don't really know why ODI behaves like this, but this is what happens in reality. To avoid it we have a very powerful technique that we always talk about: metadata from the Planning repository. We already know how to read all existing metadata from a Planning application, so it is just a matter of comparing all members that we will be sending to Hyperion Planning against all existing metadata in that application prior to the load. This guarantees that only existing members of that application will be sent to Essbase, ensuring that ODI will not flip to "record by record" mode.

How can we accomplish that? We have a lot of possibilities, as it depends on the current database structure and tables that your system has, but I will show one that is generic enough to be applicable to any system.

1) Create a table with the following structure:

Figure7

You may want to add more columns to this table if you need them, but those should be fine for our example. This table is used to store all existing metadata from any number of Hyperion Planning applications that you may have. You may populate it using the techniques seen in our last post.
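
Since the figure is not reproduced here, a minimal sketch of what such a table could look like (column names and sizes are assumptions):

-- One row per existing member of each Planning application.
CREATE TABLE TBL_EXISTING_METADATA (
  APP_NAME    VARCHAR2(80),   -- Planning application name
  PLAN_TYPE   VARCHAR2(80),   -- plan type the member is valid for
  DIMENSION   VARCHAR2(80),   -- dimension the member belongs to
  PARENT      VARCHAR2(80),   -- parent member name
  MEMBER      VARCHAR2(80),   -- member name
  ALIAS       VARCHAR2(80),   -- default alias
  DATASTORAGE VARCHAR2(30)    -- e.g. Store, Shared, Dynamic Calc
);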

2) Create an inbound table containing the data that you will send to Hyperion Planning. This table will contain one column for each dimension that you may have, plus a column named DATA that will contain the value of that intersection and a column named APP_NAME that will contain the name of the Hyperion Planning application to which that data will be loaded. Here is one example:

Figure8

Why do we need to create this table? As I said, there are many ways to do this verification and maybe you don't need to create it, but I strongly recommend doing so. The reason is that you create a data layer between your source system and Hyperion Planning that is centralized in one single point, where you can have the data for all applications that you may have (you may partition this table by application, for example) and you may add centralized ODI check constraints in one single table, as we can see below.
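
For reference, here is a minimal sketch of such an inbound table, assuming an application with Account, Entity, Period, Year, Scenario and Version dimensions (replace the dimension columns with whatever your outline has):

-- One column per dimension, plus the data value and the target application.
CREATE TABLE INBOUND_GENERIC (
  APP_NAME VARCHAR2(80),   -- target Planning application
  ACCOUNT  VARCHAR2(80),   -- one column per dimension of the outline
  ENTITY   VARCHAR2(80),
  PERIOD   VARCHAR2(80),
  YEAR     VARCHAR2(80),
  SCENARIO VARCHAR2(80),
  VERSION  VARCHAR2(80),
  DATA     NUMBER          -- value of that intersection
);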

3)  Create ODI check constraints to validate all dimension members. For each dimension column in your INBOUND_GENERIC table, you will create an ODI check constraint that will validate those members against the existing metadata in that application. Let’s use ACCOUNT as an example:

figure1

Go to INBOUND_GENERIC model in ODI and add a New Reference constraint. Change the type to “Complex user reference” and select your model that contains the TBL_EXISTING_METADATA table.

figure2

Go to the Expression tab and add your constraint SQL there, as below:

figure3

Here we are comparing all members in the ACCOUNT column of our INBOUND_GENERIC table against all ACCOUNT members in the TBL_EXISTING_METADATA table that have a specific PLAN_TYPE and DATASTORAGE. Again, this is just an example and you may tweak it to your reality. You will do this for all dimensions that you may have, and you may also add other constraints such as duplicated keys, invalid amounts and so on:
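
The exact SQL depends on your table layout, but assuming the TBL_EXISTING_METADATA sketch from step 1, the expression of the complex user reference could look something like this (the PLAN_TYPE and DATASTORAGE filters are purely illustrative):

-- Join clause between the inbound table and the existing-metadata table:
-- inbound rows with no matching existing member end up in the E$ table.
INBOUND_GENERIC.APP_NAME = TBL_EXISTING_METADATA.APP_NAME
AND INBOUND_GENERIC.ACCOUNT = TBL_EXISTING_METADATA.MEMBER
AND TBL_EXISTING_METADATA.DIMENSION = 'Account'
AND TBL_EXISTING_METADATA.PLAN_TYPE = 'Plan1'          -- illustrative filter
AND TBL_EXISTING_METADATA.DATASTORAGE <> 'Shared'      -- illustrative filter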

figure4

4) The last part is just a matter of selecting a CKM in the ODI interface that loads the INBOUND_GENERIC table and checking the results. You will end up with the INBOUND_GENERIC table containing only metadata that exists in your Hyperion Planning application and an E$ table (created by the ODI CKM) with all the members that do not exist in your outline!

figure5

Now you may load from the INBOUND_GENERIC table to the Hyperion Planning application with the guarantee that it will always run fast, without worrying about unknown members in your outline. Also, as a bonus, you have the E$ table with all members that are missing from the outline, so you may use it to warn the end users/support team and so on.

I hope you all enjoy!