Archive for the Hyperion Essbase Category

Automating Essbase Copy Outline Operation using Java API

Posted in ACE, BSO, Cubes, Essbase, Hacking, Hyperion Essbase, Java, Migration, Oracle on August 9, 2017 by RZGiampaoli

Hi guys, how are you? Have you ever tried to automate the process of copying a cube outline from one application to another?

Well, there is a seemingly easy way to do that: just copy the .otl file from the server file system over the other cube. The problem is that if the target cube is not empty, the database becomes corrupted, since we just replaced its .otl file with a foreign .otl file and no restructure happened.

So if you want to copy the outline to an existing cube that contains data, this is not a solution.

The thing is, the only two ways to do what we want are the EAS "Save As" operation and the Migration Wizard. Both operations work because they copy the .otl file as an .otn file and then run a restructure on the database. The restructure synchronizes the cube with the new outline, making the process safe for a cube that contains data.

The problem is that neither of these can be automated, and there is no way to do this operation using MaxL or ESSCMD.

In fact, even using the Java API, it is hard to figure out how to do this, because all the copy methods seem to copy every kind of object except the outline.

The good news is, we figured out a way to replicate the “Save as” operation using the Java API after hours of frustration and tears…

Here we go:

[Image: Save As Java code]

The code is really simple. We need to connect to the Essbase server, lock the target outline (the one we will overwrite) and then copy the outline from one application to the other. To do that we are going to use the methods "lockOlapFileObject" and "copyOlapFileObjectToServer".

The process we just described creates an .otn file in the target cube. Now comes the great catch of this code (which is not documented anywhere):

If we open the target outline in EAS, we will still see the old metadata. To commit the changes, we need to perform a restructure that merges the new outline (.otn) into the old one (.otl), updating the metadata.

To do that we are going to use the "IEssCubeOutline" class to open, restructure ("restructureCube") and close the target outline.
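Since the code screenshot above may not be visible here, below is a minimal sketch of what that sequence can look like, assuming the standard Essbase Java API (ess_japi.jar) is on the classpath. All server, application, database and credential names are hypothetical placeholders:

import com.essbase.api.base.EssException;
import com.essbase.api.datasource.IEssCube;
import com.essbase.api.datasource.IEssOlapFileObject;
import com.essbase.api.datasource.IEssOlapServer;
import com.essbase.api.domain.IEssDomain;
import com.essbase.api.metadata.IEssCubeOutline;
import com.essbase.api.session.IEssbase;

public class CopyOutline {
    public static void main(String[] args) throws EssException {
        IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
        // Hypothetical credentials, provider URL and server name.
        IEssDomain dom = ess.signOn("admin", "password", false, null,
                "http://apsServer:13080/aps/JAPI");
        IEssOlapServer srv = dom.getOlapServer("essbaseServer");
        srv.connect();
        try {
            // Lock the target outline (the one we will overwrite)...
            srv.lockOlapFileObject("TgtApp", "TgtDb",
                    IEssOlapFileObject.TYPE_OUTLINE, "TgtDb");
            // ...and copy the source .otl over it; this creates the .otn
            // on the target database (the source path is a placeholder).
            srv.copyOlapFileObjectToServer("TgtApp", "TgtDb",
                    IEssOlapFileObject.TYPE_OUTLINE, "TgtDb",
                    "\\\\essbaseServer\\app\\SrcApp\\SrcDb\\SrcDb.otl", true);

            // Open, restructure and close the target outline so the .otn
            // is merged into the .otl while keeping the existing data.
            IEssCube cube = srv.getApplication("TgtApp").getCube("TgtDb");
            IEssCubeOutline otl = cube.openOutline();
            otl.restructureCube(IEssCube.EEssRestructureOption.KEEP_ALL_DATA);
            otl.close();
        } finally {
            srv.disconnect();
            ess.signOff();
        }
    }
}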

That is it. This process will do exactly what the “Save As” in EAS does, which means that you can copy outlines from one application to another even when the target database contains data.

I hope you guys enjoy it, and see you soon.


Kscope 17 is approaching fast!!! And we’ll be there!

Posted in ACE, Data Warehouse, Essbase, Hyperion Essbase, Java, Kscope 17, ODI, ODI Architecture, Oracle, Performance, Tips and Tricks, Uncategorized on June 8, 2017 by RZGiampaoli

Hi guys, how are you? We are sorry for being away for so long, but this year we have a lot of exciting things going on, so let's start with what we'll be doing at Kscope 17!

This year we’ll present 2 sessions:

Essbase Statistics DW: How to Automatically Administrate Essbase Using ODI (Jun 28, 2017, Wednesday Session 12, 9:45 am – 10:45 am)

To keep an Essbase cube performing well, we must stay vigilant and follow its growth and data movements so we can distribute caches and adjust the database parameters accordingly. But this is a very difficult task, since Essbase statistics are not temporal and only tell you what the cube looks like at that specific point in time.

This session will present how ODI can be used to create a historical statistical DW containing Essbase cube information, and how to identify trends and patterns, giving us the ability to programmatically tune our Essbase databases automatically.

And…

Data Warehouse 2.0: Master Techniques for EPM Guys (Powered by ODI) (Jun 26, 2017, Monday Session 2, 11:45 am – 12:45 pm)

EPM environments are generally supported by a Data Warehouse; however, we often see that those DWs are not optimized for the EPM tools. Over the years, we have seen that modeling a DW with the EPM tools in mind can greatly increase overall architecture performance.

The most common situation found in several projects is that the people who develop the data warehouse do not have much knowledge about EPM tools and vice versa. This can create a big gap between the two concepts, which can severely impact performance.

This session will show a lot of techniques for modeling the right Data Warehouse for EPM tools. We will discuss how to improve performance using partitioned tables, how to create hierarchical queries with "Connect by Prior", the correct way to use multi-period tables for block data loads using Pivot/Unpivot, and more. And if you want to go even further, we will show you how to leverage all those techniques using ODI, which creates the perfect mix to perform any process between your DW and EPM environments.

In these presentations you can expect a lot of technical content, some very good tips and some very good ideas to improve your EPM environment!

Also, I'll be graduating in this year's leadership program, and this year we'll be all over the place with the K-Team, a special team created to make newcomers feel more welcome and help them get the most out of Kscope.

Also, Rodrigo will be at the Tuesday Lunch and Learn for the EPM Data Integration track in Cibolo 2/3/4.

And of course we will be around having fun and gathering new ideas for next year!!!

And last but not least, this year a friend of ours will be making his first appearance at Kscope with the presentation OBIEE Going Global! Getting Ready for More Than +140k Users (Jun 26, 2017, Monday Session 4, 3:15 pm – 4:15 pm).

A standard Oracle Business Intelligence (OBIEE) reporting application can handle roughly 1,200 users. This may be a reasonable number of users for the majority of companies out there, but what happens when an IT leader like Dell decides to acquire another IT giant like EMC and all of their combined 140,000-plus users need access to an HR OBIEE instance? What does that setup look like? What kind of architecture do we need to support those users in a fast and reliable way?
This session shows the complexity of Dell's OBIEE environment, describing all the processes and steps performed to create such an environment, meeting the most varied business demands and L2 support needs, always aiming to improve environment stability. The architecture relies on a range of technologies to support that huge number of end users, such as LDAP, Kerberos, SSO, SSL, BigIP, shared folders on NAS, and WebLogic running in a cluster across four application servers.
If the challenge was not hard enough already, all of this setup also needed to account for Dell's legacy OBIEE upgrade from v11.1.1.6.9 to v11.1.1.7.160119, so we will explain the pain points, considerations and orchestration needed to do all of this in parallel.

Thank you guys and see you there!


Let’s Join DEVEPM @ KSCOPE 16

Posted in ACE, EPM, Essbase, ETL, Hyperion Essbase, Hyperion Planning, InfraStructure, Kscope 16, ODI, ODI 10g, ODI 11g, ODI 12c, ODI Architecture, ODTUG, Oracle Database, OS Command, Performance, Tips and Tricks on April 5, 2016 by RZGiampaoli

Hi guys, how are you?

Just a quick post about this year's KSCOPE. This year we'll have two excellent sessions:

Take a Peek at Dell’s Smart EPM Global Environment:

Ricardo Giampaoli, TeraCorp

Co-presenter(s): Rodrigo Radtke de Souza, Dell

When: Jun 27, 2016, Session 2, 10:15 am – 11:15 am

Topic: EPM Applications – Subtopic: Planning

In a fast-moving business environment, finance leaders are successfully leveraging technology advancements to transform their finance organizations and generate value for the business.
Oracle's Enterprise Performance Management (EPM) applications are an integrated, modular suite that supports a broad range of strategic and financial performance management tools that help businesses unlock their potential.

Dell’s global financial environment contains over 10,000 users around the world and relies on a range of EPM tools such as Hyperion Planning, Essbase, Smart View, DRM, and ODI to meet its needs.

This session shows the complexity of this environment, describing all the relationships between those tools, the techniques used to keep such a large environment in sync, and how it meets the most varied needs of the different businesses and legal requirements around the world to create a complete and powerful business-decision engine that takes Dell to the next level.

Incredible ODI Tips to Work with Hyperion Tools

Ricardo Giampaoli, TeraCorp

Co-presenter(s): Rodrigo Radtke de Souza, Dell

When: Jun 27, 2016, Session 6, 4:30 pm – 5:30 pm

Topic: EPM Platform – Subtopic: EPM Data Integration

ODI is an incredible and flexible development tool that goes beyond simple data integration. But most of its development power comes from outside-the-box ideas.

  • Did you ever want to dynamically run any number of “OS” commands using a single ODI component?
  • Did you ever want to have only one data store and loop different sources without the need for different ODI contexts?
  • Did you ever want to have only one interface and loop any number of ODI objects with a lot of control?
  • Did you ever need to have a “third command tab” in your procedures or KMs to improve ODI powers?
  • Do you still use an old version of ODI and miss a way to know the values of the variables in a scenario execution?
  • Did you know ODI has four “substitution tags”? And do you know how useful they are?
  • Do you use “dynamic variables” and know how powerful they can be?
  • Do you know how to take control of your ODI priority jobs automatically (stop, start, and restart scenarios)?

If you want to know the answers to all these questions, please join us in this session to learn the special secrets of ODI that will take your development skills to the next level.

Join us at KSCOPE 16 and book our two sessions in your schedule. They will be very good sessions and I'm sure you'll learn some new stuff that will help you in your EPM environment!


Remotely Zipping files with ODI

Posted in 11.1.1.9.0, ACE, Configuration, EPM, Essbase, ETL, Hacking, Hyperion Essbase, InfraStructure, ODI, ODI 10g, ODI 11g, ODI 12c, ODI Architecture, OS Command, Performance, Remotely, Tips and Tricks, Zip Files on April 5, 2016 by RZGiampaoli

Hi guys, how are you? It has been a long time since I last wrote something, but it was for a good reason: we were working on our two Kscope sessions! Yes, this year we will have two sessions and I think they will be great!

Anyway, let us get to the point!

Today I want to talk about something that should be very simple to do but in the end is a nightmare: zipping a file on a remote server.

A little bit of context! I was working on a backup interface for a client and, because their cubes are very big, I was trying to improve performance as much as I could.

Part of the backup was to copy the .ind and .pag files as well as the data extract files. For one app we are talking about 30 GB of .pag files and 40 GB of data extract files.

Their ODI infrastructure is like this:

[Image: Infrastructure diagram]

Basically, I need to extract/copy data from the Essbase server to the disaster recovery server (DR server). Nothing special here. The problem is, because of the size of the files, I wanted to zip them first and only then send them to the DR server.

If you use the ODI tools to zip the files, what they do is bring all the files to the ODI agent server, zip everything and then send it back. I really did not want all this network traffic and all the time lost in this process (also, the agent server is a LOT less powerful than the Essbase server).

[Image: Regular ODI tools zip process]

Then I started to research how I could do that (and thank you, my colleague and friend Luis Fernando Cairo, who helped me a lot by running many tests on this).

First of all we have three main options here:

  1. Create a .bat file and run it remotely: I did not like it because I do not want a lot of .bat files all over the place.
  2. Use the Windows Invoke-Command: it needs a zip program on the server, like 7-Zip, which I am not free to install, and I do not want zip programs all over the place either.
  3. Use PsExec to execute a program on the server: same problem as the previous one.

OK, I figured out that in the end I would need to create/install something on the server... and I hate that. Well, let's at least optimize the problem, right?

Then I started thinking: what do all Hyperion servers have in common? The answer is JAVA.

So I thought: I can use the jar command to zip a file:

jar cfM file.zip *.pag *.ind

Where:

c: Creates a new archive file named jarfile (if f is specified) or to standard output (if f and jarfile are omitted). Add to it the files and directories specified by inputfiles.

f: Specifies the file jarfile to be created (c), updated (u), extracted (x), indexed (i), or viewed (t). The -f option and filename jarfile are a pair — if present, they must both appear. Omitting f and jarfile accepts a “jar file” from standard input (for x and t) or sends the “jar file” to standard output (for c and u).

M: Do not create a manifest file entry (for c and u), or delete a manifest file entry if one exists (for u).

Hmm, things are starting to look better. Now I had to decide whether to use Invoke-Command or PsExec.

I started with Invoke-Command, but after some time I figured out that I could not execute the jar command using it.

So my last alternative was PsExec.

The good thing about it is that it is just a zip file: you only need to unzip it on the agent server, add it to the PATH environment variable, and you are good to go.

It works amazingly.

You can run anything remotely with this, and it is a centralized and non-invasive solution as well (which I liked).

You just need to:

psexec \\Server -accepteula -w "work dir" javapath\jar cfM file.zip *.pag *.ind

Where:

-w: Set the working directory of the process (relative to remote computer).

-accepteula: This flag suppresses the display of the license dialog.

There is one catch: for some unknown reason, the ODI agent does not pick up the PATH correctly, so you need to use the complete path where it was "installed". The ODI command looks like this:

OdiOSCommand "-OUT_FILE=Log_Path/Zip_App_Files-RUM-PNL.Log" "-ERR_FILE=Log_Path/Zip_App_Files-RUM-PNL.err"

D:\Oracle\PSTools\psexec \\server -accepteula -w \\arborpath\APP\RUM\PNL\ JAVA_PATH\jdk160_35\bin\jar cfM App_Files-RUM-PNL.zip *.pag *.ind

With this, we will have a process like this:

[Image: Remote zip process]

This should not be something so complicated, but it is. Believe me, I created a very fast process and the client is very happy.

I hope you guys enjoy it and see you soon.

Using templates to create dynamic rules in Calcmanager 11.1.2.4

Posted in 11.1.2.4, ACE, BSO, Business Rules, CalcScript, Calculation Manager, Calculation Script, EPM, Essbase, Hacking, Hyperion Essbase, Hyperion Planning, New Features, Oracle, Performance, Templates, Tips and Tricks on January 1, 2016 by RZGiampaoli

Hi guys and happy new year!!!

And what better way to start the new year than with a post?

Today I want to talk about the new version of Calculation Manager (11.1.2.4). I know it has been out for a while now, but I still think it has some cool features that are not explored much.

In every Planning project, sooner or later, we reach a point where we need to create a currency conversion rule (at least I like to create a custom rule for performance reasons). Also, some companies use a lot of currencies.

Before continuing, I need to say that in our case I found out that less code does not equal better performance. What I mean is that, for the forecast horizon period range for example, instead of using "IF" and testing my 15/18-month horizon, I triplicate the code using "FIX" together with "SET EMPTYMEMBERSETS ON;".

This SET command makes Essbase skip a "FIX" that resolves to an empty member set. This approach increases performance a lot, sometimes by more than 8 times (in this currency example, running at channel level with "IF" took 8 hours; with "FIX" it takes 1 hour).
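Just to illustrate the pattern, a skeleton of such a rule could look like this (a hedged sketch; the scenario, substitution variable and member names are made up):

SET EMPTYMEMBERSETS ON;

/* One FIX block per currency instead of one IF test. If @UDA
   returns an empty set (no entity uses that currency), the whole
   block is skipped instead of evaluated. */
FIX("Forecast", &Fcst_Year, @UDA(Entity, "BRL"))
   "Local" = "USD" * "HSP_Rate_BRL";
ENDFIX

FIX("Forecast", &Fcst_Year, @UDA(Entity, "EUR"))
   "Local" = "USD" * "HSP_Rate_EUR";
ENDFIX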

OK, that means I rarely use "IF" in my rules.

Well, you can already imagine how big, boring and error-prone the rules get if I use only "FIX", right? However, with the "Template" feature in Calc Manager and the ability to call any template or rule using a script, this nightmare turns into a dream!

Let us see how it works!

A currency conversion for forecast applications normally has two parts:

The first part is the period range.

The second part is the currency conversion itself.

With Calc Manager, we can create two templates: one for the period range and the other to call the currency conversion part.

For the currency conversion calculation, I created a simple core template with just a formula and a script in it:

[Image: UDA Loop template]

The "dtp_Quote_UDA" is a DTP (design-time prompt) variable with a function that inserts double quotes around every value that comes from the "dtp_UDA" DTP variable (which will be used to get values from the outer template). This way we can use just one variable to play two roles: currency name and UDA value. The code is:

@QUOTE([dtp_UDA])

Then, inside the currency calculation script, we have:

[Image: Currency script]

As we can see inside the script, I used both "dtp_Quote_UDA" and "dtp_UDA". This reduces the number of parameters I need to pass and simplifies maintenance as well. Think about it: we need the same information twice, once with double quotes for the UDA values, and once without quotes for the rate member name.

With this technique we need to pass the value just once, let's say BRL, and Calc Manager will replace it everywhere before execution, giving us @UDA(Entity,"BRL") as well as HSP_Rate_BRL.
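To make that concrete, the core of the template could be shaped roughly like this (a hedged sketch; the real script is in the screenshot above, and the "Local"/"USD" members are made up, only the DTP mechanics matter here):

FIX(@UDA(Entity, [dtp_Quote_UDA]))
   "Local" = "USD" * "HSP_Rate_[dtp_UDA]";
ENDFIX

When the template is expanded for BRL, [dtp_Quote_UDA] becomes "BRL" and [dtp_UDA] becomes BRL, so the generated code contains @UDA(Entity,"BRL") and HSP_Rate_BRL, exactly as described above.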

This is awesome because now I have just 8 lines of code that can be replicated as many times as I want. The best thing is, either everything is right or everything is wrong 🙂

With Calc Manager we now have a layer between the code we write and the code that gets generated, and this is pretty cool because it opens a huge window for creativity. You can even generate the entire code dynamically.

OK, the next step is to loop this template once for each currency we have. For this, I created another template. This one will be used for the forecast horizon period range as well as for looping over the currencies.

[Image: Period loop template]

Again, the code is pretty simple: just two FIXes and one script.

For the "Period FIX" we use two DTP variables to get the Year and Period values from the outer rule ([dtp_Period] and [dtp_Year]).

The product FIX is just something related to our architecture, and we do not need to bother with it.

Now, "Loop Currency" is a script that will call our first template N times. How can we do that with a script?

Basically, every time you drag and drop a template into a rule or into another template, behind the graphical designer Calc Manager generates a command line. This code exists thanks to its API, and you can use it to manipulate and generate almost any kind of code inside Calc Manager.

[Image: Currency loop template]

As we can see, inside the script we have a "FIX" for the USD currency (which is the only conversion that is different) and one row for each currency.

Each row calls a template, "%Template(name:=Currency Conversion – 2 – UDA Loop", from an application, "application:="WWOPS"", and a plan type, "plantype:="Pnl"", and passes two DTP values, one for the UDA and the other for the entity: "dtps:=("dtp_UDA":=[[AED]],"dtp_Entity":=[[dtp_Entity]])".

As you can see, you can pass a DTP variable using the variable itself (dtp_Entity:=[[dtp_Entity]]).
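Putting those fragments together, one row of the "Loop Currency" script would look roughly like this (reconstructed from the pieces quoted above, all on a single line):

%Template(name:="Currency Conversion – 2 – UDA Loop",application:="WWOPS",plantype:="Pnl",dtps:=("dtp_UDA":=[[AED]],"dtp_Entity":=[[dtp_Entity]]))

One such row per currency, each with its own currency code in place of AED, is what makes the loop.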

If you want to create this API code and do not know the right syntax, you can just drag your template into a rule/template, set everything up and change your view to "Edit Script" or "View Script".

[Image: Edit Script view]

Now we just need to create the rule that will call this template for the three period ranges we have:

[Image: Currency rule]

Again, a simple design with a small number of components. Here we have our SET commands, a main FIX and the three templates, each one calling the core template for a different period range.

[Image: Period range]

The final result is a rule with 1,213 rows generated from an 8-row template. This is the magic of Calc Manager and templates. You can simplify everything: you can create dynamic aggregations that change depending on the application and cube, and code that changes depending on the member coming from the forms, all with a small set of code that is reusable anytime we want!

[Images: Rule code / Currency code]


This is a dynamic way to build a currency rule in Calc Manager. It is a lot faster to build and a lot easier to maintain: if a new currency comes into use, you just need to copy and paste one line in the "Currency Loop" script, change the currency, and it's done.

Building rules using templates looks like more work and can sometimes be a little complicated, but I remember well how much time I used to spend changing business rules, and I can guarantee that this way is much faster and easier to develop and, above all, to maintain.

In the end we created just one rule and two templates containing a single core calculation, in my case a script calling this core 47 times, plus some FIXes, and that's all. It was less than 60 rows of written code to generate 1,213 rows. Pretty good for me 🙂

[Image: Rules and templates]

I hope you guys enjoyed it, and I wish a happy new year to you all and your dear ones.

Happy new year!!!! A new year full of surprises!

 

10 Important Things to Improve ODI Integrations with Hyperion Planning Part 10 (1/2) (Generic Data Extract)

Posted in ASO, BSO, CalcScript, Calculation Script, Data Extract, EPM, Hyperion Essbase, MDX, MDX Query, NONEMPTYSUBSET, ODI, ODI Architecture, Performance on August 20, 2014 by RZGiampaoli

Hi guys, we are finally getting to the end of our series about important things to make your ODI integration with Hyperion Planning better, faster and smarter. Today we will talk about extracting data from Essbase, and because this topic involves a lot of how Essbase itself works, we decided to split it into two parts.

In the first part we will talk about how Essbase works and the best (meaning fastest) way to extract data from it. In the second part, we will show how to create a dynamic interface using ODI and both of the fastest ways to extract data from your cubes.

Well, here is the first hint: we will need two different approaches for two different occasions, and for everybody to understand why I am saying that, let us talk a little about how BSO cubes work.

I will not go into too much detail here because there are many excellent blogs about Essbase, like those of my friends Glenn and Cameron, for example.

As everybody knows, a BSO cube works with blocks, right? A block is based on the combination of the stored members of the dense dimensions; this is what defines the block size. The number of blocks is defined by the combinations of stored sparse members in the outline.

A very common way to configure the density of a cube is to set the Account and Period dimensions as dense. Normally we do that because the majority of the calculations happen between period and account members, and putting both inside the block makes Essbase use its resources better, since it needs to bring only one block into memory to do all the calculations that use account and period. (Of course this is not true in all cases, and I have some counterexamples.)

OK, that said, let us talk about the four ways to extract data from Essbase:
First, we have the Report Script. I do not have much to say about this one, since it has the worst performance of the four. I can only say that you should not use it.

The second possible way is the Essbase Java API. We will not discuss it today because it is a little complex and needs some extra programming skills (but do not worry, we will talk about it in the near future).

The third way is the good old calculation script. This is one of the best and most flexible ways to extract data from Essbase. It was the fastest way too, until an undocumented MDX command showed up on the internet.

The fourth one is MDX. Basically, this is the SQL of Essbase. It is normally used to query ASO cubes, but you can use it to query BSO cubes too. MDX is a very flexible language, offers a good way to get data out of Essbase, and has some cool commands that calculation scripts do not have. (Of course, the other way around is true too.)

OK, now that we have briefly covered the extraction methods we can use (ODI can use any of them as its extraction engine), let us talk about which one is better.

I ran some tests with both MDX and calculation scripts and reached some very interesting conclusions: each of them can be the fastest way to extract data from Essbase, depending on what you want to extract.

First the cube:

[Image: Outline]
This is a pretty straightforward cube with Period and Account as dense, 8 sparse dimensions and 4 attribute dimensions.

[Image: Statistics]
The block has a decent size (this is a 64-bit Essbase) and we have 81,186 level 0 blocks to play with.

The CalcScript:

[Image: CalcScript for one month]
As we can see, this common CalcScript will extract some level 0 accounts for February.
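Since the script itself is only visible in the screenshot, here is a hedged sketch of what such an export script can look like (scenario, year and file path are made up; DATAEXPORT is the standard calc command for this kind of extract):

SET DATAEXPORTOPTIONS
{
   DataExportLevel "LEVEL0";
   DataExportColFormat ON;
   DataExportOverwriteFile ON;
};

FIX(@LEVMBRS("Account", 0), "Feb", "Actual", "FY14")
   DATAEXPORT "File" "," "/tmp/extract_feb.txt" "#MI";
ENDFIX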

After building the CalcScript, we created a very simple ODI interface that uses it to extract data. Basically, what ODI does when you set a CalcScript as the extraction engine is execute the calculation script on the Essbase server, making it create a file. After the file is created, ODI loads it into a table. The entire process took 89 seconds and loaded 35,228 rows (including missing blocks).

[Image: ODI execution, CalcScript, one month]
Now let's see the difference when we use MDX. Same process: first I created an MDX query and then associated it with an ODI interface. This is the MDX:

[Image: MDX, one month]
As I mentioned before, CalcScript was the fastest way to extract data from Essbase until an undocumented command appeared on the internet. (I do not remember where I found it, but it was 3 years ago.)

Anyway, the command is NONEMPTYSUBSET. How is it different from the NON EMPTY command? Well, NON EMPTY removes all empty members AFTER the query is executed, while NONEMPTYSUBSET evaluates your subsets of members BEFORE asking Essbase for the data, and only then runs the query. This makes a huge difference, and this is what makes the MDX faster than the CalcScript.

One important thing: because of this pre-evaluation of the subsets (checking whether the subset of members is empty or not), it is very important to create one big "block" of members when using this command. This is why I put currency on the columns (currency always has data, but it could be periods too) and created nested Crossjoins to get all the other hierarchical dimensions. Everything else I put in the WHERE clause.
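Since the query is only visible in the screenshot, here is a hedged sketch of its shape (the dimension and member names are made up, but the structure follows the description above: currency on the columns, a single NONEMPTYSUBSET wrapping nested Crossjoins on the rows, and everything else in the WHERE):

SELECT
   {[Currency].Levels(0).Members} ON COLUMNS,
   NONEMPTYSUBSET(
      Crossjoin([Account].Levels(0).Members,
      Crossjoin([Entity].Levels(0).Members,
                [Channel].Levels(0).Members))) ON ROWS
FROM [App].[Db]
WHERE ([Feb], [FY14], [Actual], [Working])

Note that the whole ROWS set is wrapped in one single NONEMPTYSUBSET; the reason is explained next.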

If you create an MDX query using one NONEMPTYSUBSET per axis, each NONEMPTYSUBSET will test its own combination, and if that combination is empty it will bring no data. For example, in the query below the first NONEMPTYSUBSET will evaluate the combination of level 0 channel and account plus currency and everything in the WHERE clause, but not what is in the second NONEMPTYSUBSET. Of course, this can end up in an empty "block" of members.

[Image: Wrong MDX example]
Summarizing: you must have only one NONEMPTYSUBSET per query.

And the results? It's true: the same query, with almost the same number of rows (remember, the MDX removes the missing ones), took only 21 seconds to execute. This is more than 4 times faster than our CalcScript execution.

[Image: ODI execution, MDX, one month]
OK, before Cameron asks me how this is different from what he wrote in his post, let me continue with two more tests. What happens if you want to extract an entire year of data? Let me show you the results and then some explanations.

The CalcScript:

[Image: CalcScript, twelve months]
Same CalcScript as before, but now I'm getting Feb:Jan instead of only Feb. The results:

[Image: ODI execution, CalcScript, twelve months]
Did it extract 20 times more rows in almost the same time? Indeed it did. In fact, the extract itself took exactly the same time. Why, you ask? Because I am showing the overall execution time, and you need to remember that ODI first extracts to a file and then loads that file into a table. In the first execution the "Insert new rows" step (the step that loads the file) took 0 seconds; in the second execution it took 13 seconds of the 102-second total, which leaves the same 89 seconds for the CalcScript extract in both runs. If we also discount the time to create the file, the extract process took the same amount of time for a month as for a year.

Well, this is easily explained by the beginning of this post. Because we are extracting data from a BSO cube and the Period dimension is part of the block, for Essbase it is exactly the same to export one month or one year: it will bring the block into memory and all the periods will be right there.

In fact, in some cases it takes more time to export a single month than a whole year because, for Essbase, filtering one month out of the block is an extra step.

Really interesting, right? But how does MDX behave when we need to extract a year of data?

Let us see. First the MDX:

[Image: MDX, twelve months]
Same as before but now with [FEB]:[JAN], and the result is:

[Image: ODI execution, MDX, twelve months]
What? 151 seconds? Why is that? What is this number? Is it the 21 seconds for one month times 12, plus the time to load the table? You bet. MDX does not handle the block the same way the CalcScript does. Maybe this is explained by the fact that MDX was born to query ASO cubes.

I do not know the reason, but what I do know is that if you need to extract an entire year and your Period dimension is inside the block, the best way will probably be a CalcScript. I say probably because CalcScripts are especially sensitive to the design of the cube. I mean, everything counts: the size of the block, how dense the block is, and so on.

In some cases I have seen a huge difference in both calculating and extracting data with a CalcScript, depending on whether the cube is close to 100% dense or not. In addition, for extracting data I found that extremely small blocks make the process much faster, and when I say small blocks, I mean only Period as dense. Of course, making the block this small will impact calculation (you will need to rethink your formulas, the storage settings of the Account dimension, caches and other things), but the extract will be extremely fast.

Anyway, my point is that depending on density, block size, settings, cube design, the weather and some other stuff, the behavior of the cube can and probably will change, making the choice harder. But at least we know we have two very good approaches, and depending on the data you need to extract, we already have a choice in mind.

In the next blog we will show you how to handle both methods using ODI and how to make the interface dynamically extract data from any number of cubes with any possible design.

I hope you enjoyed this post. See you next time!

Really using ODI 12c for Data Integration with Oracle Hyperion Planning/Essbase

Posted in EPM, Hacking, Hyperion Essbase, Hyperion Planning, Kscope 14, ODI, ODI 12c, ODI Architecture on July 18, 2014 by RZGiampaoli

At Kscope 14, Oracle announced that ODI 12c would not support Planning/Essbase and HFM. As you can imagine, this bomb made a lot of noise in the EPM world.

Because we did not like the ODI 12c interface, we did not care much about this announcement, but after Oracle posted this: https://blogs.oracle.com/dataintegration/entry/using_odi_12c_for_data, we heard a lot of users complaining about it.

We also thought a little about our dear users and imagined what our beautiful ODI environment, which orchestrates the entire EPM landscape, would look like if we had to create a lot of extra steps to export data from tables to files and use CMD commands to load them into Planning/Essbase/HFM.

We decided to run some tests to see whether we could use the ODI 11 KMs with minimal changes (in order to increase acceptance of this method) and do our part to help the EPM world.

After a couple of hours (it was more like 6 hours...) we figured out how to make the ODI 11 KMs work in ODI 12 with minimal changes. Let us stop talking and start showing what needs to change to make ODI 12 compatible with EPM.

Technologies

[Image: Technologies tree]

We do not need to import or change anything in the Technologies tree because Oracle left all the technologies for the EPM tools there.

Jars

We do not need to change or copy any JAR because Oracle was kind enough to keep them there. The only thing we need to do is import the EPM KMs from ODI 11 into ODI 12.

KMs

[Image: KMs]

We have seven KMs that we need to import from ODI 11.

For Essbase:

RKM Hyperion Essbase

LKM Hyperion Essbase DATA to SQL

LKM Hyperion Essbase METADATA to SQL

IKM SQL to Hyperion Essbase (DATA)

IKM SQL to Hyperion Essbase (METADATA)

For Planning:

RKM Hyperion Planning

IKM SQL to Hyperion Planning

After we import the KMs, we need to change a few things in them.

RKMs

In the RKMs we see the first change Oracle made to the way ODI works. In version 11, ODI used the <%=odiRef.getModel("ID")%> substitution method to get the ID of the model we want to reverse. If we take a look at the master repository, in ODI 12 we will see a new column in the table:

[Image: Master repository]

This is the first thing we need to update in the RKM. ODI 11 used the ID, and if you try to use it as is, a validation Oracle created makes the interface fail. (It was a good error message, because we could figure out the problem just by reading it.)

[Image: Global ID error]

So, basically, we need to update it to the new GLOBAL_ID.

RKM Hyperion Essbase

Step Reset MetaData:

We need to update the code:

SnpsReverseResetTable -MODEL=<%=odiRef.getModel("ID")%>

To

SnpsReverseResetTable -MODEL=<%=odiRef.getModel("GLOBAL_ID")%>

Step Start the Reverse:

We need to update the code to the new standard and also enclose the substitution method in double quotes, because it now returns a string instead of a numeric value.

imod = <%=snpRef.getModel("ID")%>

To

imod = "<%=snpRef.getModel("GLOBAL_ID")%>"

Step Set MetaData:

We need only to update to GLOBAL_ID.

SnpsReverseSetMetaData -MODEL=<%=snpRef.getModel("ID")%>

To

SnpsReverseSetMetaData -MODEL=<%=snpRef.getModel("GLOBAL_ID")%>

RKM Hyperion Planning

For the Planning KM we just need to update the code exactly in the same way we did in the Essbase KM.

Step Reset MetaData:

SnpsReverseResetTable -MODEL=<%=odiRef.getModel("ID")%>

To

SnpsReverseResetTable -MODEL=<%=odiRef.getModel("GLOBAL_ID")%>

Step Start the Reverse:

imod = <%=snpRef.getModel("ID")%>

To

imod = "<%=snpRef.getModel("GLOBAL_ID")%>"

Step Set MetaData:

SnpsReverseSetMetaData -MODEL=<%=snpRef.getModel("ID")%>

To

SnpsReverseSetMetaData -MODEL=<%=snpRef.getModel("GLOBAL_ID")%>

After this, just create a model and reverse something to test it.

[Image: RKM execution]

The results are these:

[Images: Planning reverse / Essbase reverse]

With this, the RKM part is done and we can start to create interfaces. Sweet.

IKMs

The IKMs were the easiest part. Oracle created some categories to group similar KMs. We just need to choose one of the three possibilities to make the KM show up in Mappings (the new interface object of ODI 12):

[Image: Integration type]

We chose "Control Append", since the EPM IKMs do not have any kind of integration strategy, but it really does not matter which one you choose (I just like things organized).

IKM SQL to Hyperion Essbase (DATA)

We need to double-click the KM and select the integration type:

[Image: KM integration type selection]

IKM SQL to Hyperion Essbase (METADATA)

For the metadata we need to do the same thing: add the integration type (we chose "Control Append" again).

IKM SQL to Hyperion Planning

Same thing for the Planning IKM: add the integration type (we chose "Control Append" again).

[Image: IKM executions]

We are done with the IKM part.

LKMs

LKM Hyperion Essbase DATA to SQL

There is no change in this KM. Works as is. I like it.

LKM Hyperion Essbase METADATA to SQL

This was the tricky one, and not because it is too different from the others. In fact, if it were not for a BUG in the API, it would not need any changes at all to work (on Monday we will open an SR with Oracle about this bug).

The thing is: in the step "Begin Essbase Metadata Extract", we have a line with the following command:

stagCols = '<%=snpRef.getColList("", "[CX_COL_NAME]=[COL_NAME] ", ", ", "", "")%>'

This command basically creates a mapping between the temporary table and Essbase to start the extract. Essbase and its API are case sensitive with this information, and the COL_NAME pattern in this version seems to be bugged, or its behavior changed (which I do not believe is the case, as I will explain later).

In this version, instead of bringing something like this:

stagCols = 'PARENTNAME=ParentName, MEMBERNAME=MemberName, ALIAS=Alias'

it brings something like this:

stagCols = 'PARENTNAME=PARENTNAME, MEMBERNAME=MEMBERNAME, ALIAS=ALIAS'

The pattern always returns uppercase, even when you change the execution to the Stage or Target area, and because of this we get an error saying that the member PARENTNAME is not valid for dimension XXXXX (this does not happen in ODI 11 if you set the execution to the Stage or Target area).

Anyway, we started testing the behavior and found out that in ODI 12 it is impossible (for now) to create an Oracle table like this:

[Image: Oracle model test]

That is because the COL_NAME pattern always returns uppercase no matter what, and it removes the double quotes from the expression. This is why we think it is a bug: it removes a functionality from the Oracle DB as well.

As a quick workaround (yes, we know it is ugly, but Oracle will fix this in no time and we will be able to get rid of it), we changed the code from:

stagCols = '<%=snpRef.getColList("", "[CX_COL_NAME]=[COL_NAME] ", ", ", "", "")%>'

To

stagCols = '<%=snpRef.getColList("", "[CX_COL_NAME]=[COL_DESC] ", ", ", "", "")%>'

After this, we just need to copy the column name into the description of each attribute in the model:

[Image: Model details]

Double-clicking a column opens its properties; in the Description tab we only need to copy the name of the column into its description, because COL_DESC will return exactly what is written there (with one exception: quotes and double quotes are turned into spaces, but we will not use them here).

[Image: Description workaround]

This workaround does the trick:

[Image: LKM executions]

Yes, we know we could use a lot of other methods to do this, Flexfields for example, or even make some code changes in the KM, but the idea here is to show that it is possible to use ODI 12 with minor changes. And with this we can use ODI 12 in our EPM environment just like we do with version 11.

We did not test HFM here because we do not have it installed in our infrastructure, but the amount of changes for HFM is probably the same as for Essbase and Planning.

We hope you guys like this, and please give us feedback if you implement this solution. As far as we have tested it, it works exactly like ODI 11.

See you next time.

——————Edited on 07/22/2014——————-

About the case-sensitivity issue with [COL_NAME]:

We installed ODI 12.1.2.3 today and repeated the tests we did on the previous version.

[COL_NAME] now works for IKMs, which means the issue with creating an Oracle table with two columns of the same name but different case is solved.

But [COL_NAME] for LKMs still does not work. All LKMs have the same behavior, and we can see that in the logs when we try to replicate the IKM behavior in an LKM.

[Images: LKM test / IKM test]

As we can see, the behavior was fixed in the IKM but not in the LKM. This happens because KMs are executed in different ways depending on their type (IKM, LKM, CKM), as we can see here:

http://docs.oracle.com/middleware/1212/odi/ODIKD/odiref_reference.htm#CIADFCFJ

For now, please stick with our workaround until Oracle fixes this in the LKMs.

See you next time.