Archive for Hyperion Essbase

How to “Save As” an Essbase Outline in ODI using Java

Posted in Uncategorized on April 20, 2020 by RZGiampaoli

Hey guys, how are you?

Today I’m going to show you how to “Save As” an Essbase outline using ODI and Java. I normally use MaxL and OS commands for this kind of thing but, as it turns out, there’s no MaxL command to do that.

In fact, this is very interesting even if you don’t like Java, because it shows exactly what Essbase does behind the scenes to save an outline. Let’s start.

First of all, we’re going to need some Essbase API libraries. In ODI, the client and the agent already include some Essbase JARs in their Lib folders (one Lib folder for the client and one for the agent).

If you need anything beyond what is already there, you need to copy it from the Essbase installation to the Lib folders and restart the agent. In this case we’ll need these imports:

import com.essbase.api.base.EssException;
import com.essbase.api.datasource.IEssCube;
import com.essbase.api.datasource.IEssOlapFileObject;
import com.essbase.api.datasource.IEssOlapServer;
import com.essbase.api.domain.IEssDomain;
import com.essbase.api.metadata.IEssCubeOutline;
import com.essbase.api.session.IEssbase;

After that we need to create a few String variables to help us organize our code:

String s_userName = "<%=odiRef.getInfo("SRC_USER_NAME")%>";
String s_password = "<%=odiRef.getInfo("SRC_PASS")%>";
String olapSvrName = "<%=odiRef.getInfo("SRC_DSERV_NAME")%>";
String s_provider = "Embedded";
String appNameFrom = "Juno_M";
String appNameTo = "Juno";
String database = "Analysis";
IEssbase ess = null;
IEssOlapServer olapSvr = null;

Since I’m using an ODI procedure, I can set the Essbase connection I want in the Command on Source tab and then, in the Command on Target tab, get the user name, password, and server name using the ODI substitution API. This way I use what is stored in the Topology without having to worry about hard-coding any password in the code.

In the next step we need to connect to Essbase using:

ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);

IEssDomain dom = ess.signOn(s_userName, s_password, false, null, s_provider);

olapSvr = dom.getOlapServer(olapSvrName);

olapSvr.connect();

Basically, this instantiates the Essbase JAPI, signs on to the domain using the Command on Source information, and then connects to a specific OLAP server. After this we are ready to start executing some commands. And now it gets interesting: this is exactly what Essbase does behind the scenes:

  1. It locks the outline we want to overwrite:
    • olapSvr.lockOlapFileObject(appNameTo, database, IEssOlapFileObject.TYPE_OUTLINE, database);
  2. It does an OS file copy from the source app folder to the target app folder:
    • olapSvr.copyOlapFileObjectToServer(appNameTo, database, IEssOlapFileObject.TYPE_OUTLINE, database, "\\\\server\\D$\\path\\" + appNameFrom + "\\" + database + "\\" + database + ".otl", true);
    • As you can see, the command asks for the name of the app where we want to save the outline, the type of the object (OUTLINE), the folder path of the source outline, and a final true/false parameter that tells whether we want the object unlocked afterwards. True means unlock.
    • If you look into the target folder during this step, you’ll see that Essbase copies the source .otl into the target folder as an .otn file.
  3. Then we need to open the target outline using:
    • IEssCubeOutline otl = olapSvr.getApplication(appNameTo).getCube(database).getOutline();
    • otl.open();
  4. The last thing we need to do is merge the .otn into the .otl file:
    • otl.restructureCube(IEssCube.EEssRestructureOption.KEEP_ALL_DATA);
    • otl.close();
    • We just need to ask the cube to restructure, passing the option KEEP_ALL_DATA; after that we can close the outline.

An interesting thing here is that if you take an outline, rename it to .otn, put the file inside the database folder, and force the cube to restructure (via EAS), Essbase will automatically merge the created .otn with the .otl.

Also, this is why Oracle recommends having double the size of the cube available, because when we do a restructure we have two outlines open at the same time: the .otn and the .otl.

Here’s the entire code:

import com.essbase.api.base.EssException;
import com.essbase.api.datasource.IEssCube;
import com.essbase.api.datasource.IEssOlapFileObject;
import com.essbase.api.datasource.IEssOlapServer;
import com.essbase.api.domain.IEssDomain;
import com.essbase.api.metadata.IEssCubeOutline;
import com.essbase.api.session.IEssbase;

String s_userName = "<%=odiRef.getInfo("SRC_USER_NAME")%>";
String s_password = "<%=odiRef.getInfo("SRC_PASS")%>";
String olapSvrName = "<%=odiRef.getInfo("SRC_DSERV_NAME")%>";
String s_provider = "Embedded";
String appNameFrom = "Juno_M";
String appNameTo = "Juno";
String database = "Analysis";
IEssbase ess = null;
IEssOlapServer olapSvr = null;

try {
    /* Connect to the Essbase OLAP server */
    ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
    IEssDomain dom = ess.signOn(s_userName, s_password, false, null, s_provider);
    olapSvr = dom.getOlapServer(olapSvrName);
    olapSvr.connect();
    /* Lock the target outline and copy the source .otl over it (it lands as an .otn) */
    olapSvr.lockOlapFileObject(appNameTo, database, IEssOlapFileObject.TYPE_OUTLINE, database);
    olapSvr.copyOlapFileObjectToServer(appNameTo, database, IEssOlapFileObject.TYPE_OUTLINE, database,
        "#ESSBEXTRACT_FOLDER" + "\\" + appNameFrom + "\\" + database + "\\" + database + ".otl", true);
    /* Merge the .otn into the .otl by restructuring the cube */
    IEssCubeOutline otl = olapSvr.getApplication(appNameTo).getCube(database).getOutline();
    otl.open();
    otl.restructureCube(IEssCube.EEssRestructureOption.KEEP_ALL_DATA);
    otl.close();
} catch (EssException e) {
    System.err.println("Error: " + e.getMessage());
    throw new Exception("Error: " + e.getMessage());
} finally {
    /* Clean up the connection to Essbase */
    if (olapSvr != null && olapSvr.isConnected())
        olapSvr.disconnect();
    if (ess != null && ess.isSignedOn())
        ess.signOff();
}

I hope you guys enjoy this one and see you soon.

Automating Essbase Copy Outline Operation using Java API

Posted in ACE, BSO, Cubes, Essbase, Hacking, Hyperion Essbase, Java, Migration, Oracle on August 9, 2017 by RZGiampaoli

Hi guys, how are you? Have you ever tried to automate the process of copying a cube outline from one application to another?

Well, there’s an easy way to do that. Basically, you copy the .otl file from the server file system over the other cube’s. The problem is that if the cube is not empty, the database becomes corrupted, since we just replaced one .otl file with a foreign .otl file (no restructure happened).

So if you want to copy the outline to an existing cube (one that has data), this is not a solution.

The thing is, the only two ways to do what we want are the EAS “Save As” operation and the Migration Wizard. Both operations work because they copy the .otl file as an .otn and then run a restructure on the database. The restructure synchronizes the cube with the new outline, making the process safe for a cube that has data in it.

The problem is, neither of these can be automated, and there’s no way to do this operation using MaxL or ESSCMD.

In fact, even with the Java API it’s hard to figure out how to do it, because all the copy methods seem to copy every kind of object except the outline.

The good news is, we figured out a way to replicate the “Save as” operation using the Java API after hours of frustration and tears…

Here we go:

Save As Java code

The code is really simple. We need to connect to the Essbase server, lock the target outline (the one we’ll overwrite), and then copy the outline from one application to the other. To do that we are going to use the methods “lockOlapFileObject” and “copyOlapFileObjectToServer”.
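In code form, that first part looks something like this (a condensed sketch reusing the variable names from the ODI procedure shown earlier in this archive; the source path is a placeholder):

olapSvr.lockOlapFileObject(appNameTo, database, IEssOlapFileObject.TYPE_OUTLINE, database);
olapSvr.copyOlapFileObjectToServer(appNameTo, database, IEssOlapFileObject.TYPE_OUTLINE, database,
    "\\\\server\\D$\\path\\" + appNameFrom + "\\" + database + "\\" + database + ".otl", true);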

This process that we just described will create an .otn file in the target cube. Now comes the great catch of this code (that is not documented anywhere):

If we open the target outline in EAS at this point, we will still see the old metadata. To commit the changes, we need to perform a restructure to merge the new outline (.otn) with the old one (.otl), updating the metadata.

To do that we are going to use the methods of the “IEssCubeOutline” class: “open”, “restructureCube”, and “close”.
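In code form, a sketch of that final step (same variable names as above):

IEssCubeOutline otl = olapSvr.getApplication(appNameTo).getCube(database).getOutline();
otl.open();
otl.restructureCube(IEssCube.EEssRestructureOption.KEEP_ALL_DATA);
otl.close();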

That is it. This process will do exactly what the “Save As” in EAS does, which means that you can copy outlines from one application to another even when the target database contains data.

I hope you guys enjoy and see you soon.

Kscope 17 is approaching fast!!! And we’ll be there!

Posted in ACE, Data Warehouse, Essbase, Hyperion Essbase, Java, Kscope 17, ODI, ODI Architecture, Oracle, Performance, Tips and Tricks, Uncategorized on June 8, 2017 by RZGiampaoli

Hi guys, how are you? We are sorry for being away for so long, but this year we have a lot of exciting things going on, so let’s start with what we’ll be doing at Kscope 17!

This year we’ll present 2 sessions:

Essbase Statistics DW: How to Automatically Administrate Essbase Using ODI (Jun 28, 2017, Wednesday Session 12, 9:45 am – 10:45 am)

In order to have a well-performing Essbase cube, we must stay vigilant and follow its growth and its data movements so we can distribute caches and adjust the database parameters accordingly. But this is a very difficult task to achieve, since Essbase statistics are not temporal and only tell you what the cube’s statistics are at that specific moment.

This session will present how ODI can be used to create a historical statistical DW containing the Essbase cubes’ information, and how to identify trends and patterns, giving us the ability to programmatically tune our Essbase databases automatically.

And…

Data Warehouse 2.0: Master Techniques for EPM Guys (Powered by ODI) (Jun 26, 2017, Monday Session 2, 11:45 am – 12:45 pm)

EPM environments are generally supported by a data warehouse; however, we often see that those DWs are not optimized for the EPM tools. Over the years, we have witnessed that modeling a DW with the EPM tools in mind may greatly increase the overall architecture’s performance.

The most common situation found in several projects is that the people who develop the data warehouse do not have deep knowledge of the EPM tools, and vice versa. This can create a big gap between those two concepts, which may severely impact performance.

This session will show a lot of techniques for modeling the right data warehouse for EPM tools. We will discuss how to improve performance using partitioned tables, how to create hierarchical queries with “Connect by Prior”, the correct way to use multi-period tables for block data loads using Pivot/Unpivot, and more. And if you want to go even further, we will show you how to leverage all those techniques using ODI, creating the perfect mix to perform any process between your DW and EPM environments.

In these presentations you can expect a lot of technical content, some very good tips, and some very good ideas to improve your EPM environment!

Also, I’ll be graduating from this year’s leadership program, and this year we’ll be all over the place with the K-Team, a special team created to make newcomers feel more welcome and help them get the most out of Kscope.

Also Rodrigo will be at Tuesday Lunch and Learn for the EPM Data Integration track on Cibolo 2/3/4.

And of course we will be around, having fun and gathering new ideas for next year!!!

And last but not least, this year we’ll have a friend of ours making his first appearance at Kscope with the presentation OBIEE Going Global! Getting Ready for More Than +140k Users (Jun 26, 2017, Monday Session 4, 3:15 pm – 4:15 pm).

A standard Oracle Business Intelligence (OBIEE) reporting application can hold more or less 1,200 users. This may be a reasonable number of users for the majority of companies out there, but what happens when an IT leader like Dell decides to acquire another IT giant like EMC, and all of their combined 140,000-plus users need access to an HR OBIEE instance? What does that setup look like? What kind of architecture do we need to support those users in a fast and reliable way?
This session shows the complexity of Dell’s OBIEE environment, describing all the processes and steps performed to create such an environment, meeting the most varied needs from business demands and L2 support, always aiming to improve environment stability. This architecture relies on a range of different technologies to support that huge number of end users, such as LDAP, Kerberos, SSO, SSL, BigIP, shared folders using NAS, and WebLogic running in a cluster across four application servers.
If the challenge was not hard enough already, all of this setup also needed to account for Dell’s legacy OBIEE upgrade from v11.1.1.6.9 to v11.1.1.7.160119, so we will explain the pain points, considerations, and orchestration needed to do all of this in parallel.

Thank you guys and see you there!


Using templates to create dynamic rules in Calcmanager 11.1.2.4

Posted in 11.1.2.4, ACE, BSO, Business Rules, CalcScript, Calculation Manager, Calculation Script, EPM, Essbase, Hacking, Hyperion Essbase, Hyperion Planning, New Features, Oracle, Performance, Templates, Tips and Tricks on January 1, 2016 by RZGiampaoli

Hi guys and happy new year!!!

And what better way to start the new year than with a post?

Today I want to talk about the new version of Calculation Manager (11.1.2.4). I know it has been out for a while now, but I still think it has some cool features that are unexplored.

In every Planning project, sooner or later, we come to a point where we need to create a currency conversion rule (at least I like to create a custom rule for performance reasons). Also, some companies use a lot of currencies.

Before continuing, I need to say that in our case I found out that less code can mean less performance. What I mean by that is that for the forecast horizon period range, for example, instead of using “IF” to test my 15/18-month horizon, I triplicate the code using “FIX” together with “SET EMPTYMEMBERSETS ON;”.

This SET command makes Essbase skip a “FIX” if it resolves to an empty member set. This approach increases performance a lot, sometimes more than 8 times (in this currency example, running at channel level with “IF” took 8 hours; with “FIX” it takes 1 hour).

Ok, that means I rarely use “IF” in my rules.
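To illustrate the pattern, here is a minimal, hypothetical calc script sketch (member names are invented; the real rule is generated by the templates described below):

SET EMPTYMEMBERSETS ON;

/* One FIX per slice of the forecast horizon. If a FIX resolves to an empty
   member set, the whole block is skipped instead of being tested cell by cell. */
FIX("Forecast", "FY16", "Jul":"Dec")
    "Local" = "Units" * "Price";
ENDFIX

FIX("Forecast", "FY17", "Jan":"Dec")
    "Local" = "Units" * "Price";
ENDFIX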

Well, you can already imagine how big, boring, and error-prone the rules get if I use only “FIX”, right? However, with the “Template” feature in Calc Manager and the ability to call any template or rule from a script, this nightmare turns into a dream!

Let us see how it works!

A Currency conversion for forecast applications normally has two parts:

The first part is the period range.

The second part is the currency conversion itself.

With Calc Manager, we can create two templates: one for the period and the other to call the currency conversion part.

Then, for the currency conversion calculation, I created a simple core template with just a formula and a script in it:

UDA Loop Template

The “dtp_Quote_UDA” is a DTP (design time prompt) variable with a function that inserts double quotes around every value that comes from the “dtp_UDA” DTP variable (which will be used to get values from the outer template). This way we can use just one variable to play two roles: currency name and UDA value. The code is:

@QUOTE([dtp_UDA])

Then inside the currency calculation script we will have:

Currency Script

As we can see inside the script, I used the “dtp_Quote_UDA” as well as the “dtp_UDA”. This reduces the number of parameters I need to pass and simplifies maintenance as well. Think about it: we need the same information twice, once with double quotes for the UDA values and once without quotes for the rate name.

With this technique we need to pass the value just once, let’s say BRL, and Calc Manager will replace it everywhere before execution, so we’ll end up with @UDA(Entity,”BRL”) as well as HSP_Rate_BRL.
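For BRL, for example, the generated core would end up looking something like this (a hypothetical expansion just to show the substitution; the actual formula, and whether you multiply or divide by the rate, depends on your application):

FIX(@UDA(Entity,"BRL"))
    "USD" = "Local" / "HSP_Rate_BRL";
ENDFIX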

This is awesome because now I have just 8 lines of code that can be expanded as many times as I want. The best thing is, either everything is right or everything is wrong 🙂

With Calc Manager we now have a layer between the code we write and the code that gets generated, and this is pretty cool because it opens a huge window for creativity. You can even generate the entire code dynamically.

Ok, the next step is to loop this template once for each currency we have. For this, I created another template. This one will be used for the forecast horizon period range as well as to loop over the currencies.

Period loop template

Again, the code is pretty simple: just two FIXes and one script.

In the “Period FIX” we use two DTP variables to get the Year and Period values from the outer rule ([dtp_Period] and [dtp_Year]).

The product FIX is just something related to our architecture, so we don’t need to bother with it.

Now, “Loop Currency” is a script that will call our first template N times. How can we do that with a script?

Basically, every time you drag and drop a template into a rule or into another template, behind the graphical designer Calc Manager generates a command line. This code exists thanks to its API, and you can use it to manipulate and generate almost any kind of code inside Calc Manager.

Currency loop template

As we can see, inside the script we have a “FIX” for the USD currency (that is the only conversion that is different) and one row for each currency.

Each row calls the template “Currency Conversion – 2 – UDA Loop” from the application “WWOPS” and plan type “Pnl”, and passes two DTP values, one for the UDA and the other for the entity: dtps:=(“dtp_UDA”:=[[AED]],”dtp_Entity”:=[[dtp_Entity]]).

As you can see, you can pass a DTP variable using the variable itself (dtp_Entity:=[[dtp_Entity]]).
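Putting those fragments together, each row of the “Loop Currency” script looks something like this (reconstructed from the pieces described above):

%Template(name:="Currency Conversion – 2 – UDA Loop",application:="WWOPS",plantype:="Pnl",dtps:=("dtp_UDA":=[[AED]],"dtp_Entity":=[[dtp_Entity]]))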

If you want to create this API code and don’t know the right syntax, you can just drag your template into a rule/template, set everything up, and change your view to “Edit Script” or “View Script”.

Edit script

Now we just need to create the rule that will call this template for the three period ranges we have:

Currency rule

Again, a simple design with a small number of components. Here we have our SET commands, a main FIX, and the three templates, each one calling the previous template for a different period range.

Period Range

The final result is a rule with 1,213 rows generated from an 8-row template. This is the magic of Calc Manager and templates. You can simplify everything: you can create dynamic aggregations that change depending on the application and cube, you can create code that changes depending on the member coming from the forms, everything with a small set of code that is reusable anytime we want!

Rule code 1
Currency code 2


This is a dynamic way to build a currency rule in Calc Manager. It is a lot faster to build and a lot easier to maintain: if a new currency comes into use, you just need to copy and paste one line in the “Currency Loop” script, change the currency, and it’s done.

Building rules using templates looks like more work and can sometimes be a bit complicated, but I remember well how much time I used to spend changing BRs, and I can guarantee that this way is much faster and easier to develop and, above all, to maintain.

In the end we just created one rule and two templates containing a single core calculation, in my case a script calling this core 47 times, plus some FIXes, and that’s all. It was less than 60 rows of written code to generate 1,213 rows. Pretty good for me 🙂

Rules and templates

Hope you guys enjoy it, and I wish a happy new year to all of you and your dear ones.

Happy new year!!!! A new year full of surprises!

 

Article “Unleashing Hyperion Planning Security Using ODI” was published in OTN

Posted in Hyperion Planning, ODI, Configuration, Security on July 16, 2015 by RZGiampaoli

Hi guys!

OTN just published a new article from DevEPM entitled “Unleashing Hyperion Planning Security Using ODI”.

This article is all about Planning security and how you can use ODI to create cell-level security based on anything you like (in our case, an attribute dimension).

This is a case study, and it has been in production for 3 years now.

Please take a look and let us know your thoughts.

Thank you and see you soon.

10 Important Things to Improve ODI Integrations with Hyperion Planning Part 9 (Generic Data Load)

Posted in EPM, Hyperion Planning, ODI, ODI Architecture on April 25, 2014 by Rodrigo Radtke de Souza

Hi all, today let’s see how we may create dynamic data load components for any number of Hyperion Planning applications. As you know, all data related to Hyperion Planning actually resides in an Essbase database, and ODI has a Knowledge Module that loads data into it called “IKM SQL to Hyperion Essbase (DATA)”. In this post we will tweak this Knowledge Module, allowing it to read the Planning metadata repository and dynamically build its own data store mappings and definitions depending on which Hyperion Planning application it will load data to. But first let’s take a look at how the original “IKM SQL to Hyperion Essbase (DATA)” works:

(screenshot: the KM’s two steps)

As you can see, it is a fairly simple KM with two main steps: “Prepare for loading” and “Load data into Essbase”. If you open the “Prepare for loading” step, you will see some Jython code there getting all the configurations and options to be used in your data load. Since our objective here is to make it dynamic enough to load any Hyperion Planning application, we should focus on the settings below:

(screenshot: connection settings in the “Prepare for loading” step)

These settings get all the connection information related to the target data store in the ODI interface component. If you have already worked with Hyperion Planning and ODI, you know that the target data store contains all the dimensions from a particular Hyperion Planning application plus a column named “Data” that contains the value for that particular intersection. This data store belongs to a data model, which is connected to a logical schema that is tied (by a context) to a certain physical schema containing the application server/user/password/application/database.

In this example we will consider that we have five Hyperion Planning applications, all of them residing on the same server and using the same Essbase username and password, but if your architecture is different, feel free to adapt this information as well. In order to make this KM dynamic, let’s add two more options to it, called “APPLICATION” and “CUBE”, and modify the application and database information code to get their values from the options instead of the target data store, like this:

(screenshot: application and database settings read from the new KM options)
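Conceptually, the change amounts to something like this in the KM’s Jython step (a sketch, not the exact KM source; the option names are the ones we just created):

application = "<%=odiRef.getOption("APPLICATION")%>"  # was derived from the target data store
database = "<%=odiRef.getOption("CUBE")%>"            # now read from the KM options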

These two options will contain two ODI variables that will indicate which application/cube you will be loading. Keep those two variables in mind, as we will talk more about them later in the post. Now, if you open the “Load data into Essbase” step, you will see that this one is responsible for actually fetching the data from a table based on the source and target data store columns and mappings, as below:

(screenshot: the original “Load data into Essbase” code)

If we want it to load any Hyperion application, we need to change this so it does not rely on the target data store (which is static information) to know which dimensions to load for a specific application. If you are a reader of this blog, you probably already know where the answer to this problem lies, right? If you thought about the Hyperion Planning metadata repository, you are right! Let’s change this KM to read the metadata repository, figure out which dimensions the application has, and load data to it. First let’s create another option in this KM called “PLANNING_SCHEMA”, which will contain the actual database schema name where the Hyperion Planning application resides in Oracle. Then we will add the following query in the source tab of the “Load data into Essbase” step:

(screenshot: the metadata repository query added to the source tab)
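The exact query is in the screenshot, but the idea is something along these lines (a rough sketch against the Planning repository; the real query builds the proper column mappings and differs in details — HSP_OBJECT and OBJECT_TYPE = 2 for dimensions are assumptions based on the standard Planning repository layout):

SELECT LISTAGG(MAPPING, ',') WITHIN GROUP (ORDER BY 1) MAPPING
FROM (
    SELECT '"' || OBJECT_NAME || '"' AS MAPPING
    FROM <%=odiRef.getOption("PLANNING_SCHEMA")%>.HSP_OBJECT
    WHERE OBJECT_TYPE = 2 -- dimension objects
)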

Before we continue: the LISTAGG command only exists in Oracle DB 11.2 or newer. If your DB is on an older version, please try the WM_CONCAT command instead. It is an undocumented command, but it works very well. You need to change the SQL projection from:

LISTAGG(MAPPING, ',') WITHIN GROUP (ORDER BY 1) MAPPING

To

WM_CONCAT(MAPPING) MAPPING

Also, you will need to create and add to this source tab a logical schema that points to the database containing the Hyperion Planning repository. One very important thing about this technique: it only works when you have all the Hyperion Planning application repositories in a single database because, even though the schema name changes dynamically (giving us the ability to load any number of applications with a single component), the connection to the metadata database is a single one. Of course, you can further change your process to accept dynamic topology information, as seen in our previous post here, but in this example we will keep things simpler to demonstrate the technique. After you add this query in the source tab, it’s just a matter of changing the target code to something like this:

(screenshot: the modified target code using #MAPPING)

Our source query will retrieve one row only (called #MAPPING) with the exact number of dimensions that one specific Planning application has (based on the “PLANNING_SCHEMA” option) and will pass it to the target tab, where it will be used to retrieve the data. But you may be thinking now… what about the source table for this data load? It may have different formats, different names, different sources… it will not work in a dynamic load component… and you are right, it won’t, which is why we need something extra to make this component work: an INBOUND_GENERIC table.

INBOUND_GENERIC is a table that contains all possible columns (each column being a Planning dimension) for all the Planning applications that you need to load. For example: if you need to load five applications and together they have 15 distinct dimensions, you will have an INBOUND_GENERIC table with 15 columns (one for each dimension name) plus three auxiliary columns: APP_NAME (the Hyperion Planning application name), CUBE (the cube that you will load data to within that application) and INTERFACE_NAME (the name of the job that is loading that data). These three columns give you the flexibility to have in the same table information about different applications, cubes, and even inbound jobs! For example, if you have only one application that contains several inbound jobs, you may still have one single Essbase load component for your entire architecture, saving you time on any maintenance or new development. A good performance trick here is to have this table partitioned by INTERFACE_NAME, so you can easily truncate the partition before any new inbound job runs (and even run inbound executions in parallel).
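Just to make the table shape concrete, a hypothetical DDL could look like the one below (the dimension columns are invented; yours come from the union of your applications’ dimensions, and the DATA column for the intersection value is an assumption):

CREATE TABLE INBOUND_GENERIC (
    APP_NAME        VARCHAR2(80),
    CUBE            VARCHAR2(80),
    INTERFACE_NAME  VARCHAR2(80),
    ACCOUNT         VARCHAR2(80),
    ENTITY          VARCHAR2(80),
    PERIOD          VARCHAR2(80),
    -- ...one column per remaining distinct dimension...
    DATA            NUMBER
)
PARTITION BY LIST (INTERFACE_NAME)
(
    PARTITION P_JOB1 VALUES ('JOB1'), -- one partition per inbound job,
    PARTITION P_JOB2 VALUES ('JOB2')  -- so each one can be truncated independently
);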

You may also be thinking now: ok, I have 10 inbound jobs for 3 different applications. I can create this INBOUND_GENERIC table, load all data into it, and have it loaded by this generic Essbase load component, but I’ll still need to create 10 inbound interfaces to load the INBOUND_GENERIC table, right? Yes, you will, but it is worth it. Here are some good points of this architecture:

  • You will have only one single component that loads data to Essbase, and all possible future maintenance will be done in just one point, not in multiple places;
  • Since it is modular, it can be added to any other inbound interface easily;
  • ODI constraints work great in this architecture. Think about it: you may have one constraint per column and validate it against the Planning metadata repository to check that all data is correct before loading to Essbase, which means no more “Unknown Member” errors; see our post about it here;

After those changes, you should end up with an interface more or less like this:

(screenshot: the resulting interface)

It doesn’t matter what you do in your mapping here because the KM will build the columns based on the Planning metadata repository. It is also important to notice that no transformation will be done in this component, so be sure to do your ETL when you load the INBOUND_GENERIC table. In summary, this component assumes that the data is already validated, transformed, and ready to be consumed by Essbase. The filter here contains which application/cube/interface you are currently loading.
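That filter is simply a comparison of the three auxiliary columns against the ODI variables listed just below, something like:

INBOUND_GENERIC.APP_NAME = '#APP_NAME'
AND INBOUND_GENERIC.CUBE = '#CUBE'
AND INBOUND_GENERIC.INTERFACE_NAME = '#INTERFACE_NAME'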

Now it is just a matter of adding this interface to an ODI scenario and adding the following ODI input variables to it:

  • APP_NAME: Planning application name;
  • CUBE: Planning cube/database name;
  • PLANNING_SCHEMA: Oracle schema name which contains the Planning Metadata installation;

Then you may loop this scenario for any number of applications, as you can see in our post about it here.

Thanks everyone! I hope you have enjoyed it!