Archive for the ODI Category

How to use your existing ODI on premise to seamlessly integrate PBCS (Part 3: Inbound Jobs)

Posted in Cloud, EPM Automate, ODI, PBCS, Uncategorized on March 18, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the series on how to integrate PBCS seamlessly using your existing ODI environment (Part 2 here), today we’ll talk about Inbound Jobs.

We already know that we’ll need four types of jobs for our integration, so let’s see what we need to do to create an Inbound Job:

The first thing we need to do is set the location to Inbox (or Outbox in the case of an extract). This is what enables the Save as Job button. If you choose Local, you can only run the job once.

Second, the source type. We have two choices there: Essbase and Planning. Essbase has the same format we are used to from Essbase exports; the problem is that if we select Essbase, we’ll need one job per plan type.

Planning, on the other hand, has a special format that is very handy for creating dynamic objects in ODI, as we can see in the table below:

As we can see, the Planning format has a column for the account, then the periods, a POV column, and the data load cube name (plan type).

Having the periods in columns is already a nice thing since the load will be way faster and way more optimized: we’ll have one row for the entire year.

Of course, this will only be true when we are talking about forecast, because for actuals we normally load just one month at a time… even so, I prefer this format over having one column for the period and another for the data.

The POV is also nice because no matter how many dimensions we have in a plan type, everything will be in the same column. Integration then becomes simple, since we just need to concatenate all columns except account into one column. I recommend using regular expressions and LISTAGG to do so (best function ever for dynamic components), as in the sketch below.
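As a rough sketch of the idea (the table and column names here are hypothetical, not from the actual integration), you could build that concatenation expression dynamically from the Oracle dictionary:

    SELECT LISTAGG(column_name, ' || '','' || ')
             WITHIN GROUP (ORDER BY column_id) AS pov_expression
      FROM all_tab_columns
     WHERE table_name = 'PBCS_STG_DATA'
       AND column_name NOT IN ('ACCOUNT', 'DATA');

This would return something like ENTITY || ',' || PRODUCT || ',' || VERSION, ready to be injected into a generic ODI expression.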

And to finish, we need to inform which plan type the data will be loaded into. Very nice: you can load all your plan types at once. One job, one file, and everything can be done by one generic component in ODI.

After that we need to choose the delimiter used in the file. In this case I chose the pipe (|), because it’s very hard to find this character in the middle of any metadata.

Finally, we just need to inform the name of the file. That’s all we need to create a job that we can call using EPM Automate anytime we need it.
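Just to illustrate the calling side (the job and file names here are made up, and the exact syntax may vary with your EPM Automate version), the upload and import could look like this:

    epmautomate login serviceAdmin MyPassword https://myinstance-mydomain.pbcs.us2.oraclecloud.com
    epmautomate uploadfile inbound_data.txt
    epmautomate importdata "INBOUND_DATA_JOB" inbound_data.txt
    epmautomate logout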

ODI Hidden Gems – Unique temporary object names

Posted in Gems, ODI, Tips and Tricks on March 11, 2021 by radk00

Hi all, I was not going to write about this one because I thought that this “hidden” gem was already known to every single ODI 12 developer out there, but I still get questions on why some specific data loads fail when they run in parallel and work fine when they run in serial. Most of the time, those failures are related to how ODI handles the temporary objects that it creates to do ETL (like C$, I$, E$ tables).

Let us see one example, which is the default in ODI. I have created one very simple mapping that has one source and one target table.

This mapping is loading data between two databases that reside in different data servers, so ODI will need to create a C$ table to be able to transfer the data. If we look at the ODI Operator, we will notice the following:

The C$ table that it created is named as C$_0SECTIONTYPE. By default, ODI will create this name based on the source component that the data was generated from, so in this case it was a table called SECTIONTYPE. The “0” in front of it is an incremental number that would increase if you had another source with the same name in the mapping. For example, if you had SECTIONTYPE mapped twice as source tables, one would be loaded as C$_0SECTIONTYPE and the other C$_1SECTIONTYPE. ODI does that so we do not have a clash between names within the same ODI mapping.

However, what would happen if you tried to run the same mapping or another mapping that also contains SECTIONTYPE as source at the same time? As you may imagine, one mapping would interfere in the other, since both C$ tables would be called C$_0SECTIONTYPE and both mappings would be trying to load/read/drop it at the same time, which would cause a failure (in a good scenario) or wrong data (in a bad scenario).

To avoid this kind of issue, ODI 10/11 developers were very creative in the past and would create some Java variables and tweaks in some ODI KMs to make the temp table names dynamic. ODI 12, however, introduced something way simpler to handle this kind of situation; it is just not the default and it is kind of “hidden”. If you go back to the mapping, click on the Physical tab and scroll all the way down, you will notice an unchecked check box that says “Use Unique Temporary Object Names”. If you check this one and run it again, you will see the difference.

ODI now created a table named C$_0SECTIONTYPEAIHLNQMAVPK7Q1FR66UM225DF3, which is totally unique, and it will never clash with another mapping running in parallel. But then, another question arises: should I go to each mapping to check this option, if I want all to be unique? Well, the answer is no. There is another “hidden” gem that you can use.

Go to ODI Topology and double click your ODI agent, then go to Properties.

Luckily, you may enable the use of unique temporary objects at the ODI agent level, so you do not need to go back to each mapping and change them.

That is it for today. See ya!

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 8: ODI)

Posted in Essbase, Java, ODI on February 19, 2021 by radk00

Now it’s time to glue it all together with ODI. ODI is great here because it can work with different technologies without any effort. In our case, we will run the Java code that we displayed in this series using the Java BeanShell technology.

Although ODI is great at executing any kind of technology code out there, it does not offer a good way to write and debug your code while you are developing it. So, I always prefer to create the code in an external Java IDE (like Eclipse), test it, and copy the “Main” portion of it into an ODI procedure.

Another advantage of using ODI is that we can get the connection information from Topology itself and reuse it everywhere within the package using the command on source/target technique, where you define the connection in the Command on Source tab and use the information in the Command on Target tab, as in the sketch below.
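A minimal sketch of the technique (this assumes an ODI procedure step; the getInfo properties below come from the ODI substitution API, but double-check the list for your version, since SRC_PASS in particular may not be available everywhere). In the Command on Source tab, pointing at the data server you want:

    SELECT '<%=odiRef.getInfo("SRC_JAVA_URL")%>'  DB_URL,
           '<%=odiRef.getInfo("SRC_USER_NAME")%>' DB_USER,
           '<%=odiRef.getInfo("SRC_PASS")%>'      DB_PASS
      FROM DUAL

Then, in the Command on Target tab (Java BeanShell, for example), you can reference those values simply as #DB_URL, #DB_USER and #DB_PASS.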

Gluing it all together in ODI is very simple. We may have all the Java code in one procedure with as many steps as we want (depending on what kind of statistics we want to get; in our case, three). Then we may have another procedure that holds the pivot queries that we use to transform and load the data into our DW tables. Finally, we may even create our own metrics based on the knowledge that we have of Essbase. Below is one example of metrics that we may derive from the stats that we just gathered.

When we put it all in an ODI package, it will look like this. In this example, we also added a send email component just to inform the users about the job completion.

That’s it folks! Next post will be the last one of this series. Stay tuned!

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 7: Essbase DW)

Posted in Essbase, Java, ODI on February 17, 2021 by radk00

Before we glue it all together in ODI, let us organize the data that we just got from Essbase. In the last post we saw that we gathered all the information we needed inside a generic stage table. Although a generic table is great from an extract perspective, we may decide to split, organize, and load this information into some kind of “historical DW tables”, which will help us analyze the data better over time.

For this series of posts, we have three different kinds of information: DB statistics, file statistics and outline statistics. Their structures are very different, so we will generate one historical table for each kind of statistic.

Since we created a generic table that holds all extracted information in rows, we now need to PIVOT the data into columns and load it into the historical tables. This can be easily achieved with a SQL statement like the one below:
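Here is a hedged sketch (table and property names are illustrative, matching the generic stage table from Part 5):

    INSERT INTO DW_DB_STATISTICS (APP_NAME, CUBE_NAME, NUM_DIMS, NUM_BLOCKS, LOAD_DATE)
    SELECT APP_NAME, CUBE_NAME, NUM_DIMS, NUM_BLOCKS, SYSDATE
      FROM (SELECT APP_NAME, CUBE_NAME, STAT_KEY, STAT_VALUE
              FROM STG_ESSBASE_STATS
             WHERE STAT_TYPE = 'DB')
     PIVOT (MAX(STAT_VALUE)
            FOR STAT_KEY IN ('Number of dimensions'     AS NUM_DIMS,
                             'Number of existing blocks' AS NUM_BLOCKS));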

First, we define the columns to be pivoted and apply an aggregation function to the data column (SUM, AVG, MIN, MAX, COUNT…). Then we specify the values to be pivoted, which must be constants in the “IN” clause. Finally, this data is loaded into the DW table using a Control Append approach, since we want to keep adding the current statistics to a historical table for analysis over time.

If we follow this approach for all three metrics that we retrieved from Essbase, we will end up with three DW tables, like the ones below:

File statistics:

Outline statistics:

DB statistics:

That is all for today! See you!

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 6: Java codes)

Posted in Essbase, Java, ODI on February 9, 2021 by radk00

Hi all, let us demonstrate some examples of Java code that you may use to retrieve statistics out of Essbase. Since we want to store those stats in an Oracle table, let us begin with an example of how to connect to an Oracle DB, which is simple: we just need the DB URL, username and password. Below is how we may create the connection. We also have a prepared statement that we will use to do the inserts into our Oracle table.

Since we have a generic table, we can have just one prepared statement and use it to insert all kinds of stats there.
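A minimal sketch of that connection and the generic prepared statement (the URL, credentials and table layout are illustrative):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class OracleConnSketch {
        public static void main(String[] args) throws Exception {
            // Connect to the Oracle DB that holds the generic stage table
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbserver:1521/ORCL", "stats_user", "password");

            // One generic insert, reused for every kind of statistic
            PreparedStatement stmt = conn.prepareStatement(
                    "INSERT INTO STG_ESSBASE_STATS "
                  + "(APP_NAME, CUBE_NAME, STAT_TYPE, STAT_KEY, STAT_VALUE, LOAD_DATE) "
                  + "VALUES (?, ?, ?, ?, ?, SYSDATE)");

            // Example row: one property of one cube, stored as key/value
            stmt.setString(1, "Sample");
            stmt.setString(2, "Basic");
            stmt.setString(3, "DB");
            stmt.setString(4, "Number of dimensions");
            stmt.setString(5, "10");
            stmt.executeUpdate();

            stmt.close();
            conn.close();
        }
    }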

Connecting to Essbase is also easy; the only additional information is that you also need to pass the Provider Services URL, which may be “Embedded” or point to a standalone Provider Services server. Here we are basically signing in to the provider service and then selecting the OLAP server that we want to use.
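A hedged sketch of the sign-on (the server name and Provider Services URL are illustrative; the classes come from the Essbase Java API):

    import com.essbase.api.datasource.IEssOlapServer;
    import com.essbase.api.domain.IEssDomain;
    import com.essbase.api.session.IEssbase;

    public class EssbaseConnSketch {
        public static void main(String[] args) throws Exception {
            // Create the API instance and sign on through Provider Services
            IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
            IEssDomain dom = ess.signOn("admin", "password", false, null,
                    "http://epmserver:9000/aps/JAPI");

            // Select the OLAP server we want to work with
            IEssOlapServer olapSvr = dom.getOlapServer("EssbaseServer");
            olapSvr.connect();

            // ... retrieve statistics here ...

            olapSvr.disconnect();
            ess.signOff();
        }
    }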

Now let us see the first code to get some cube stats. First, we get all the cubes from all the apps and, for each cube, we get its properties (an array in a key/value format). This information is added to the prepared statement and executed (inserted) into the DB.

The result will be something like this:

Let us jump to a second example. In Java we can issue any MaxL command using the “IEssMaxlSession” class. The result set contains columns and rows, similar to what we get in EAS. We then need to loop through the rows and get the columns that we need. This information is also added to the prepared statement and executed (inserted) into the DB.

The result will be something like below.

The last example is how we can get stats from the Essbase outline. We can get member information using IEssMemberSelection with a custom query or find the member directly in the outline using find member. The result contains a set of members that we may loop over and analyze their properties. In this case we decided to categorize the results by their storage type.

Again, the result will be something like the one below.

That is it for today. Next post I will show you how you can glue it all together with ODI. See you soon!

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 5: Automating using Java Essbase API)

Posted in Essbase, Java, ODI on February 8, 2021 by radk00

Hi all! Let us talk now about how we can automate this stats gathering using the Java Essbase API. Java is the key technology here since it can easily connect to and manipulate Essbase through its API. It can also connect to an Oracle database to store our results, run OS commands and more, all in one single piece of code. Java is also great since it may be easily deployed to ODI using procedures and scheduled using the ODI Operator. All in all, combining ODI and Java creates a powerful and seamless integration, going beyond the database boundaries.

Let us begin with some Java Essbase API basics. The main goal is to develop one single piece of code that will connect to Essbase, retrieve the statistics information and load it into the Oracle database.

The Essbase API is very similar to what we see in EAS, in the sense that the structure of the classes follows the same architecture as an Essbase server (Server->App->Cube->Otl), which makes it easy to find what you are looking for in the API documentation.

Since we will store this information in an Oracle table, we will also need to know a little bit about the Oracle Java API (JDBC), but luckily this one is straightforward.

With those two sets of APIs, we are good to retrieve all the information that we need from Essbase. Each stat has its own number of columns and metrics, so if we created one table for each kind of structure it would be very tricky to maintain and harder to create any kind of generic code around. The best way is to have just one table with the properties in rows instead of columns; this way we have just one structure for all kinds of information, no matter how many columns a stat returns, and we may create generic code around it. Our final table would look like the one below.
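Something along these lines (column names and sizes are illustrative):

    CREATE TABLE STG_ESSBASE_STATS (
       APP_NAME    VARCHAR2(30),    -- Essbase application
       CUBE_NAME   VARCHAR2(30),    -- cube (database) name
       STAT_TYPE   VARCHAR2(20),    -- DB, FILE or OUTLINE statistics
       STAT_KEY    VARCHAR2(100),   -- property name, one per row
       STAT_VALUE  VARCHAR2(4000),  -- property value stored as text
       LOAD_DATE   DATE DEFAULT SYSDATE
    );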

That is it for this post. Next one I will share some examples on how to connect to Oracle DB, Essbase and how to retrieve some stats out of it. Stay tuned!

Is my ODI version certified for that technology?

Posted in ODI, ODI Architecture, Versions on January 14, 2021 by radk00

Hi all, today’s post is a quick one, but I hear this question very often: is ODI version XXXXX certified for technology version XXXXX? It is very easy to check, since Oracle keeps it all in one place, a page called “Oracle Fusion Middleware Supported System Configurations”, but sometimes people get a little bit confused about it.

Once you reach the website, select your ODI version (remember, ODI is inside Oracle Fusion Middleware). My ODI is 12.2.1.4, so let’s check it out:

There will be multiple tabs there. If you want to check, for example, on which systems your ODI Agent is certified to run, you may click on the System tab:

It gives us a lot of information regarding which operating system, version and Java version are supported for your agent. If you want to check which DB versions are certified, just click on the Database tab:

One last important thing that is always checked is which technologies and versions are supported for source-target ETL. This can be easily checked on the “ODI Source-Target” tab:

That is it, folks. A very simple but useful post. See you soon!

Generate JSON objects with ODI in a very easy way

Posted in ODI, Tips and Tricks on November 9, 2020 by radk00

Hi all. I came across a requirement to create JSON objects from a set of Oracle tables. We have ODI, so it was natural that the solution would be created there. However, working with JSON objects in ODI is not that easy. Those who have already worked with the “Complex File” technology know what I am talking about: too much setup, and any misconfiguration will cause an error that is generally very hard to troubleshoot.

In the past, I did several mappings that would read from Complex Files (including JSON), but this time it was an outbound process, so I would need to create the JSON objects, not read from them. I tried to search the internet, but nothing was clear. I was not sure if the Complex File technology would work to create outbound files, and I was not in the mood to play with XSD files this time, so I needed to find some other solution.

Talking to a friend/co-worker of mine, he asked if we could not leverage Oracle’s JSON_OBJECT function somehow in ODI. At first, I did not know what that function was about, so I researched it:

The SQL/JSON function JSON_OBJECT takes as its input one or more property key-value pairs. It returns a JSON object that contains an object member for each of those key-value pairs.

It looked very interesting. So, I gave it a try in SQL Developer. It worked very well:
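Something along these lines (the EMPLOYEES table and its columns are just an illustration, not the actual case):

    SELECT JSON_OBJECT('id'    VALUE employee_id,
                       'name'  VALUE first_name || ' ' || last_name,
                       'email' VALUE email) AS json_row
      FROM employees;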

Basically, you may add any number of columns to the function and it will create a valid JSON object out of them. Pretty neat, right? Now it was just a matter of adding it to ODI, which thankfully is very easy. First, I created a model in the File technology:

It contains only one column, with 4000 characters. Then I created a mapping and mapped all the required columns to the JSON_OBJECT function, like this:

That’s it. Pretty simple. When we run the mapping, we have an outbound file like this:

Now you may send the outbound file to any application that needs to consume JSON objects. One thing you may need to consider is that you will probably get into trouble if your JSON objects end up being larger than 4000 characters each. I did not test it, but either Oracle or ODI will probably complain about it.

I hope you liked it. This is not a “traditional” way to create JSON objects, but for sure it is the easiest one! See you next time!

Oracle SQL for EPM Tips and Tricks S01EP14

Posted in ODI, Oracle, Oracle Database, Performance, SQL, Tips and Tricks on October 15, 2020 by RZGiampaoli

Hey guys, how are you? Continuing our SQL series (S01EP13), today I’ll share a very handy little query that I use very often to check for data duplication. In fact, this would be an upgraded version of ODI’s PK check.

An upgraded version because in ODI, if you enable the PK check and it finds duplicates, it eliminates both rows. With the code I’ll show you, you choose whether to keep the newest or the oldest duplicate, and only one of them will be eliminated.

I have a test table with these values:

If I want to check for duplicated PKs, I can just run a query like the one below:
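(A sketch; TEST_TABLE and PK_COLUMN here are placeholders for your own table and key.)

    SELECT *
      FROM TEST_TABLE A
     WHERE A.ROWID > (SELECT MIN(B.ROWID)
                        FROM TEST_TABLE B
                       WHERE A.PK_COLUMN = B.PK_COLUMN);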

The idea here is that we have two queries. The first one checks whether the ROWID of each row is bigger or smaller (your choice) than the MIN or MAX ROWID (depending on your previous choice) returned by the second subquery, joined on any columns you want to check.

In this case, we wanted to check only if the PK column had duplicated values, but we could check any other column by just replacing it in the join. In fact, we could have any number of columns in the join, and that would check for duplication across all the columns you put there.

Then you can keep the first occurrence by using > with MIN, or the last one by using < with MAX, and you can also choose which columns to compare in the where clause.

One important thing to mention is that this query is meant to work as a delete, because it returns everything except the row to be kept. What I mean is, if you have more than one duplicate, it will bring back, in this case, all the rows that have a ROWID bigger than the one selected by the first query:

So if I have multiple duplicates, the query will return everything that needs to be deleted, and the only row remaining will be the first one inserted (3, Chuck, Giampaoli).
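And since the query returns exactly what needs to go away, turning it into the actual cleanup (keeping the oldest row per key) is just a matter of swapping the SELECT for a DELETE:

    DELETE FROM TEST_TABLE A
     WHERE A.ROWID > (SELECT MIN(B.ROWID)
                        FROM TEST_TABLE B
                       WHERE A.PK_COLUMN = B.PK_COLUMN);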

I hope you enjoy this little trick and see you soon.

Oracle SQL for EPM Tips and Tricks S01EP13

Posted in ACE, Data Warehouse, Hacking, Hyperion Planning, ODI, Oracle, Oracle 11.2.0, Oracle Database, SQL on April 1, 2020 by RZGiampaoli

Hey guys, how are you? Today let’s take a look at the opposite of the S01EP12 situation; in fact, we’ll use the same example again to show how we can convert a string into a list of values in an easy and dynamic way, starting with this query here:

I’ll transform this query into a WITH clause and use REGEXP to turn the string back into a list of values. This is very useful when we extract metadata from Essbase, for example, because Essbase exports the UDAs as a comma-separated list. Of course, this has many uses other than this one, but let’s keep this one in mind.

Now what we need to do is split the string, by commas in this case. The idea is to count how many values we have in the row and split the string by that amount.

The idea here is to use REGEXP_COUNT to count how many words we have between the commas and then use it to multiply the rows with CONNECT BY LEVEL. For example, if we have three words, the CONNECT BY will create three copies of the same row, one with LEVEL = 1, another with LEVEL = 2 and the last one with LEVEL = 3.

With that, we just need to use REGEXP_SUBSTR to extract the words based on the LEVEL: REGEXP_SUBSTR(STR, '[^,]+', 1, LEVEL), where LEVEL will be 1 for the first row, 2 for the second and 3 for the third one, as in the sketch below.
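Putting it all together for a single input row (the UDA list here is made up):

    WITH SRC AS (SELECT 'UDA1,UDA2,UDA3' AS STR FROM DUAL)
    SELECT LEVEL AS N,
           REGEXP_SUBSTR(STR, '[^,]+', 1, LEVEL) AS WORD
      FROM SRC
    CONNECT BY LEVEL <= REGEXP_COUNT(STR, '[^,]+');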

I hope this can be useful and see you soon.