How to use your existing ODI on premise to seamlessly integrate PBCS (Part 6: Database Design and EPM Automate)

Posted in PBCS on March 29, 2021 by Rodrigo Radtke de Souza

Hi all! Before jumping straight to ODI, let us take a moment to talk about what our database design will look like as the “middle tier” between our sources and PBCS. We will split our design into four parts:

  • Sources: Can be any kind of source (Oracle, SQL Server, Teradata, File system, FTP…)
  • Stage Area: Where we’ll transform the data
  • DW: Will have all the information in the right format that PBCS expects
  • File System: The data/metadata from the DW will be exported in the right format/layout and then zipped

While source and stage areas are common across several architectures out there, we need to take a moment to talk about the DW layer, where we have the DW table, which will contain the data that we want to load to PBCS, and a metadata validation table. Before we can load data into PBCS, we need to make sure that there is no invalid member in the file, and therefore we create a job to export the metadata from PBCS first. All existing metadata will be stored in a metadata table with the following format:

Generally, we build this table partitioned by App_Name or Plan_Type so we can retrieve its information faster, but it all depends on the size and the number of your applications. With this approach, we can validate all data in the DW tables against the metadata table and remove or fix invalid records before sending the data to PBCS, as in the sketch below.
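
Just to make this more concrete, here is a rough sketch of what that metadata table and validation check could look like; all table and column names below are illustrative assumptions, not the actual design:

  -- Illustrative sketch only: names are assumptions, not the original design.
  CREATE TABLE PBCS_METADATA (
      APP_NAME      VARCHAR2(80),
      PLAN_TYPE     VARCHAR2(80),
      DIMENSION     VARCHAR2(80),
      PARENT_MEMBER VARCHAR2(80),
      MEMBER_NAME   VARCHAR2(80),
      ALIAS         VARCHAR2(240),
      DATA_STORAGE  VARCHAR2(30)
  )
  PARTITION BY LIST (APP_NAME)
  (
      PARTITION P_FINPLAN VALUES ('FINPLAN'),
      PARTITION P_OTHERS  VALUES (DEFAULT)
  );

  -- Flag DW rows whose Account member does not exist in the exported metadata.
  SELECT d.*
    FROM DW_PBCS_DATA d
    LEFT JOIN PBCS_METADATA m
      ON m.DIMENSION   = 'Account'
     AND m.MEMBER_NAME = d.ACCOUNT
   WHERE m.MEMBER_NAME IS NULL;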

The “auxiliary” tool that we will use to send/extract data between our DW and PBCS is EPM Automate. The EPM Automate utility will act as a bridge between ODI and PBCS as it enables Service Administrators to automate many repeatable tasks including (but not limited to) the following:

  • Import and export metadata and data
  • Refresh the application
  • Run business rules
  • Upload files, list files and delete files from PBCS
  • Copy data from one database to another
  • Run a Data Management batch rule
  • Export and import application and artifact snapshots
  • Import pre-mapped balance data into Oracle Account Reconciliation Cloud
  • Import currency rates, pre-mapped transactions, and profiles into Oracle Account Reconciliation Cloud
  • Copy profiles to a period to initiate the reconciliation process
  • Deploy the calculation cube of an Oracle Hyperion Profitability and Cost Management Cloud application
  • Clear, copy, and delete Point of Views in Profitability and Cost Management applications
  • Export and import template in Profitability and Cost Management applications
  • Replay Oracle Smart View for Office load on a service instance to enable performance testing under heavy load

In the next post we will show an example of how to load data using EPM Automate with ODI. Stay tuned!


How to use your existing ODI on premise to seamlessly integrate PBCS (Part 5: Import Metadata Jobs)

Posted in Uncategorized on March 26, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the series on how to integrate PBCS seamlessly using your existing ODI environment (Part 4 here), today we’ll talk about Import Metadata jobs.

As you can imagine, importing metadata is also simple in PBCS; we just need to pay attention to the file format and that’s it.

To import metadata, we need to create an Import Metadata job:

As usual, we first need to select our inbox, then the name of the file for each dimension and the delimiter for each dimension, and if you want to clear the members before loading the new ones, you just need to check Clear Members.

That’s it: all dimensions will be loaded at once, but we need separate files, one for each dimension. These files can also be zipped, and PBCS will automatically unzip them for us.

Now the important part: the file format. These are all the properties that PBCS expects when loading metadata.

That’s it. With that, we have finally finished all the setup we need in PBCS for our ODI jobs to work. One important thing that I need to point out is that Oracle will update PBCS with new versions, and the file format can change over time. If that happens, you’ll also need to update your ODI jobs.

This has happened to me on more than one project. It is not a big deal, but be aware that if your job starts to fail, this may be what is happening.

I hope you guys enjoy it, stay safe and see you soon.

How to use your existing ODI on premise to seamlessly integrate PBCS (Part 4: Outbound Jobs)

Posted in Uncategorized on March 22, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the series on how to integrate PBCS seamlessly using your existing ODI environment (Part 3 here), today we’ll talk about outbound jobs.

In the same way as the inbound jobs, we need to create jobs to extract both data and metadata for ODI to consume: the data to populate our DW and the metadata to use for validation.

So, extracting data from PBCS is also easy. First, we need to choose the outbox location to enable Save as Job (as with the inbound job, Local only lets you run it once).

For the outbound job we need to choose the plan type, which means we’ll need at least one job for each plan type. We also need to choose the delimiter (I always like the pipe character because it is easy to see and is not used in any command), and if you use Smart Lists you can choose whether you want to export the labels or the names.

And finally, you set the POV to be exported. You can use Essbase substitution variables here if you want to, and as you can see, the export will have the same format as the Planning import: accounts on the rows, periods on the columns, and the POV (plus the plan type).

You can change the format if you wish, but I advise maintaining consistency between the jobs for the sake of the dynamic components.

After running this job, PBCS will generate a zip file in its outbox; we just need to go there and download it.

For the metadata export the idea is the same, just a little bit simpler: we only need to select our outbox, the dimensions we want to export and the delimiter, and that’s it. PBCS will create one zip file per dimension in our outbox.

That’s it for today. I hope you guys enjoy it, stay safe and see you soon.

How to use your existing ODI on premise to seamlessly integrate PBCS (Part 3: Inbound Jobs)

Posted in Cloud, EPM Automate, ODI, PBCS, Uncategorized on March 18, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the series on how to integrate PBCS seamlessly using your existing ODI environment (Part 2 here), today we’ll talk about inbound jobs.

We already know that we’ll need four types of jobs for our integration, so let’s see what we need to do to create an inbound job:

The first thing we need to do is to set the location as inbox (or outbox in the case of an extract). This is what enables the Save as Job button. If you choose Local, you can only run it once.

Second, the source type. We have two choices there: Essbase and Planning. Essbase has the same format that we are used to from Essbase exports; the problem is that if we select Essbase we’ll need one job per plan type.

Planning, on the other hand, has a special format that is very handy for creating dynamic objects in ODI, as we can see in the table below:

As we can see, the planning format has a column for the account, then the periods, a POV column and the data load cube name (plan type).

Having the periods on the columns is already a nice thing since the load will be way faster and more optimized: we’ll have one row for the entire year.

Of course, this will only be true when we are talking about forecast, because for actuals we normally load just one month at a time… even so, I prefer this format over having one column for the period and another for the data.

The POV is also nice because, no matter how many dimensions we have in a plan type, everything will be in the same column, so the integration is simple: we just need to concatenate all columns except Account into one column. I recommend using regular expressions and LISTAGG to do so (the best function ever for dynamic components), as in the sketch below.
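
As a hedged sketch of that idea (the table and column names are placeholders I made up), LISTAGG over the data dictionary can build the POV concatenation expression dynamically:

  -- Build the POV expression from the source table's own column list,
  -- skipping Account, the cube column and the period columns (names assumed).
  SELECT LISTAGG(COLUMN_NAME, q'[||'|'||]')
           WITHIN GROUP (ORDER BY COLUMN_ID) AS POV_EXPRESSION
    FROM ALL_TAB_COLUMNS
   WHERE TABLE_NAME = 'DW_PBCS_DATA'
     AND COLUMN_NAME NOT IN ('ACCOUNT', 'CUBE_NAME')
     AND NOT REGEXP_LIKE(COLUMN_NAME,
             '^(JAN|FEB|MAR|APR|MAY|JUN|JUL|AUG|SEP|OCT|NOV|DEC)$');
  -- Returns something like ENTITY||'|'||PRODUCT||'|'||VERSION, which an ODI
  -- variable can inject into a generic mapping or procedure.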

And in the end, each row tells PBCS which plan type the data will be loaded into, which is very nice: you can load all your plan types at once. One job, one file, and everything can be done by one generic component in ODI.

After that, we need to choose the delimiter used in the file. In this case, I chose the pipe character because it’s very hard to find this character in the middle of any metadata.

Finally, we just need to inform the name of the file. That’s all we need to create a job that we can call using EPM Automate anytime we need it.

ODI Hidden Gems – Unique temporary object names

Posted in Gems, ODI, Tips and Tricks on March 11, 2021 by Rodrigo Radtke de Souza

Hi all, I was not going to write about this one because I thought that this “hidden” gem was already known to every single ODI 12 developer out there, but I still get questions on why some specific data loads fail when they run in parallel but work fine when they run serially. Most of the time, those failures are related to how ODI handles the temporary objects that it creates to do ETL (like the C$, I$ and E$ tables).

Let us see one example, which is the default in ODI. I have created one very simple mapping that has one source and one target table.

This mapping loads data between two databases that reside in different data servers, so it will need to create a C$ table to be able to transfer the data. If we look at the ODI Operator, we will notice the following:

The C$ table that it created is named as C$_0SECTIONTYPE. By default, ODI will create this name based on the source component that the data was generated from, so in this case it was a table called SECTIONTYPE. The “0” in front of it is an incremental number that would increase if you had another source with the same name in the mapping. For example, if you had SECTIONTYPE mapped twice as source tables, one would be loaded as C$_0SECTIONTYPE and the other C$_1SECTIONTYPE. ODI does that so we do not have a clash between names within the same ODI mapping.

However, what would happen if you tried to run the same mapping, or another mapping that also contains SECTIONTYPE as a source, at the same time? As you may imagine, one mapping would interfere with the other, since both C$ tables would be called C$_0SECTIONTYPE and both mappings would be trying to load/read/drop it at the same time, which would cause a failure (in a good scenario) or wrong data (in a bad scenario).

To avoid this kind of issue, ODI 10/11 developers got very creative in the past and would create Java variables and tweak some ODI KMs to make the temp table names dynamic. ODI 12, however, introduces something way simpler to handle this kind of situation, but it is not the default and it is kind of “hidden”. Go back to the mapping, click on the Physical tab, and scroll all the way down. You will notice an unchecked check box that says “Use Unique Temporary Object Names”. If you check it and run the mapping again, you will see the difference.

ODI now created a table named C$_0SECTIONTYPEAIHLNQMAVPK7Q1FR66UM225DF3, which is totally unique, and it will never clash with another mapping running in parallel. But then, another question arises: should I go to each mapping to check this option, if I want all to be unique? Well, the answer is no. There is another “hidden” gem that you can use.

Go to ODI Topology and double click your ODI agent, then go to Properties.

Luckily, you may enable the use of unique temporary object names at the ODI agent level, so you do not need to go back to each mapping and change them.

That is it for today. See ya!

How to use your existing ODI on premise to seamlessly integrate PBCS (Part 2: PBCS)

Posted in Uncategorized on March 10, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the series on how to integrate PBCS seamlessly using your existing ODI environment (Part 1 here), today we’ll talk about PBCS itself.

For us to export or import data from PBCS, we need to prepare our PBCS environment to receive and send all the data we need. Let’s take a look at this.

PBCS has two very different interfaces:

  • The regular interface, the same as Planning’s
  • The simplified interface (which Planning also has), but in PBCS it’s way more useful

The regular interface is almost exactly like the Planning one, but the important thing is that we cannot do everything we need to configure the environment on it; we need to use the simplified interface.

The simplified interface has some new features, like being optimized for tablets and smartphones, but the main feature we are looking for is the inbox/outbox. The inbox/outbox is the only point of contact with the external world: everything that needs to be loaded must be uploaded to the inbox first, and everything that is extracted from PBCS will be available in the outbox to be downloaded.

Knowing this, we’ll need to set up the following for our design:

  • Data and Metadata load
    • Upload the file with the data to the inbox.
    • Create a job to import the data.
    • Execute the job.
  • Data and Metadata Export:
    • Create an export job (for data export it is also possible to create a business rule that exports the data directly to the inbox folder: /u03/lcm/)
    • Download the files
    • Unzip and import the file with ODI

For the data/metadata load, we first need to upload a file to the inbox, then we need to have an import data/metadata job created, and finally we need to run the job to load it.

The file can be in .zip format, and PBCS will automatically unzip it. This is very good since we may be moving some big files around.

For the data/metadata extract, we first need to create an export job and run it (you can do the data export using business rules, the same old DATAEXPORT from Essbase, and the path to export to would be /u02/lcm/), then download the file and finally use ODI to unzip it and load it into our tables.

But what are jobs? Jobs are basically templates with information about what you want to do. For example, a data export job will contain the POV of the data you want to export, and every time you run the job, that POV will be exported.

In the console menu we have everything we need to create and monitor our jobs. All data jobs are created by clicking the Actions menu and choosing the Export or Import options, and to check the job results you can go to the Inbox/Outbox Explorer.

And that’s it for today. I hope you guys enjoy it, and in the next post we’ll create all the jobs we need. Stay safe and see you soon.

How to use your existing ODI on premise to seamlessly integrate PBCS (Part 1: Solution)

Posted in Uncategorized on March 9, 2021 by RZGiampaoli

Hey guys, how are you? Today we’re going to talk about how to integrate PBCS seamlessly using your existing ODI environment.

I said PBCS, but this approach can be used for any cloud application available out there (just change the way you connect/upload data and the API that needs to be used to integrate). Other than that, everything else is valid.

Let’s think a little bit about PBCS. PBCS is a closed box that can only be accessed through HTML, SFTP, the REST API or EPM Automate. So for integration we basically have two options, the REST API and EPM Automate, since everything else is way too manual to be called integration.

Both work in almost the same way, so to keep it simple we’ll be using EPM Automate here. In our case, we have an on-premises ODI environment designed like this:

  • We have a lot of different sources that are loaded in our stage area.
  • In the stage area we do whatever transformation we need in the data before we load it to the DW schema
  • During the load into our DW we validate all the data POV against the metadata in Planning or Essbase.

From everything described here, the only two changes we need to make are the source of the metadata for validation and the target of the data load.

For the metadata, instead of reading the Planning repository or extracting the metadata from Essbase, we’ll export the metadata from PBCS and load it into a table, and this is what we’ll use to validate the data before the load.

This step is very important because we don’t want to get errors during the data load and slow down the process we are creating. Also, it gives the business a fallout report that they can work from to fix the invalid members in PBCS.
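
As a rough, hedged sketch of that validation (table and column names below are placeholders of mine, not the actual model), the fallout could be captured with something like this:

  -- Record stage rows whose Entity member is missing from the metadata extract.
  INSERT INTO DW_PBCS_FALLOUT (RUN_DATE, DIMENSION, MEMBER_NAME, RECORD_COUNT)
  SELECT SYSDATE, 'Entity', d.ENTITY, COUNT(*)
    FROM STG_PBCS_DATA d
   WHERE NOT EXISTS (SELECT 1
                       FROM PBCS_METADATA m
                      WHERE m.DIMENSION   = 'Entity'
                        AND m.MEMBER_NAME = d.ENTITY)
   GROUP BY d.ENTITY;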

For the data load, instead of loading the data directly into Planning, we’ll export it in the right format to a txt file, zip it, and use EPM Automate to load it into PBCS.

Remember, if you choose the REST API to do so, the only thing you need to do differently is a little bit of programming, but the logic will be the same.

Then in the end we’ll have a design like this:

As I said, we can see from this schema that any cloud app can be integrated in much the same way. We can keep everything we have today; we just need to figure out the bridge between our on-premises DB and our target app.

If the target app is VPN-enabled, everything gets way easier since, for ODI, it will be just another source; in fact, we could even have DB links between our environments.
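
Just to illustrate that last point (all object names here are made up for the example), a database link makes the remote database look like any other ODI source:

  -- Hypothetical link to a VPN-reachable target database.
  CREATE DATABASE LINK TARGET_APP_LINK
    CONNECT TO integration_user IDENTIFIED BY "change_me"
    USING 'TARGET_APP_TNS';

  -- From ODI's point of view this is just another table to read from.
  SELECT COUNT(*) FROM remote_schema.some_table@TARGET_APP_LINK;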

So keep your mind open to the approach here more than to the specific content we are talking about (PBCS).

I hope you guys enjoy this quick start to our series, and see you soon.

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 9: Analyzes)

Posted in Uncategorized on February 23, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the Essbase Statistics DW: How to automatically administrate Essbase using ODI series (Part 8 here), today we’ll talk about the analyses we can do with all the data we have so far.

With all this information in our DW, we can do a lot of things. We could automatically set the caches using ODI and MaxL or ESSCMD, but changing settings in Essbase is more complicated than just an update.

We need to restart the applications, we need to take into consideration the design of each cube, and we need the feel and knowledge of the administrators, but it can be done.

In fact, we can create thresholds to see if a cache is 20% off from what it is supposed to be and then apply the changes at night. We can also start to record the sessions (be aware that those tables get very big very quickly) and analyze the retrieve/calculation times to see whether performance is getting better or not, and then send an alert in case it is not after an automatic change, or something like that.
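
As a minimal sketch of such a threshold check, assuming a DW table and columns like the ones below (the names are assumptions of mine):

  -- Cubes whose index cache is more than 20% away from the recommended value.
  SELECT APP_NAME,
         DB_NAME,
         INDEX_CACHE_CURRENT_KB,
         INDEX_CACHE_RECOMMENDED_KB
    FROM DW_ESSBASE_DB_STATS
   WHERE SNAPSHOT_DATE = TRUNC(SYSDATE)
     AND ABS(INDEX_CACHE_CURRENT_KB - INDEX_CACHE_RECOMMENDED_KB)
         > 0.2 * INDEX_CACHE_RECOMMENDED_KB;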

But what we can really do is provide precise information about each cube. We send a daily email to each administrator with the current stats for each cube, the current setting of each cache and the recommended size, and we also send a table with more information about that application.

As an attachment, we can add an Excel file with the raw data for further analysis.

This helps the administrators a lot in getting a better understanding of our cubes and in deciding on and planning better maintenance strategies.

Of course, we could have a dashboard (and this would be the best approach) with all the statistics there, showing the growth of each cube, projections, alerts for when the cubes will reach the maximum cache size; in short, anything that could be needed.
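
Something as simple as the query below could feed that dashboard; table and column names are illustrative assumptions, not the actual model:

  -- Month-over-month growth of the page file per cube, for trending/projection.
  SELECT APP_NAME,
         DB_NAME,
         SNAPSHOT_DATE,
         PAG_FILE_SIZE_MB,
         PAG_FILE_SIZE_MB
           - LAG(PAG_FILE_SIZE_MB) OVER (PARTITION BY APP_NAME, DB_NAME
                                         ORDER BY SNAPSHOT_DATE) AS GROWTH_MB
    FROM DW_ESSBASE_FILE_STATS
   ORDER BY APP_NAME, DB_NAME, SNAPSHOT_DATE;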

The important thing here is to realize how useful this DW would be to keep our Essbase cubes healthy. And keep in mind that there are a lot of other stats that can be gathered from Essbase that can do a lot for us.

For example, we can get the dimension settings and number of members and use ODI (or any language, like Java) to manipulate the outline and test the best sparse/dense settings for our cubes.

I hope you guys enjoy our series, be safe and see you soon.

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 8: ODI)

Posted in Essbase, Java, ODI on February 19, 2021 by Rodrigo Radtke de Souza

Now it’s time to glue it all together with ODI. ODI is great here because it can work with different technologies without any effort. In our case, we will run the Java code that we showed in this series using the Java BeanShell technology.

Although ODI is great to execute any kind of technology code out there, it does not have a good way for you to write and debug your code while you are developing it. So, I always prefer to create the code in an external Java IDE (like Eclipse), test it and copy the “Main” portion of it into an ODI Procedure.

Another advantage of using ODI is that we can get the connection information from Topology itself and reuse it everywhere within the package using the command-on-source/target technique, where you define the connection in the Command on Source tab and use that information in the Command on Target tab.

Gluing it all together in ODI is very simple. We may have all the Java code in one procedure with as many steps as we want (depending on what kind of statistics we want to get; in our case, three). Then we may have another procedure that holds the pivot queries we use to transform and load the data into our DW tables. Finally, we may even create our own metrics based on the knowledge that we have of Essbase. Below is one example of a metric that we may derive from the stats that we just gathered.
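
The screenshot with the example metric is not reproduced here, but as one hedged illustration (table and column names are my own assumptions), the theoretical block size of a BSO cube can be derived from the outline stats as the product of the stored dense members times 8 bytes:

  -- Product of stored dense members (EXP(SUM(LN(x))) trick) times 8 bytes.
  -- Assumes STORED_MEMBERS > 0 for every dense dimension.
  SELECT APP_NAME,
         DB_NAME,
         ROUND(EXP(SUM(LN(STORED_MEMBERS))) * 8 / 1024, 2) AS BLOCK_SIZE_KB
    FROM DW_ESSBASE_OUTLINE_STATS
   WHERE DIM_TYPE = 'DENSE'
   GROUP BY APP_NAME, DB_NAME;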

When we put it all in an ODI package, it will look like this. In this example, we also added a send email component just to inform the users about the job completion.

That’s it folks! Next post will be the last one of this series. Stay tuned!

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 7: Essbase DW)

Posted in Essbase, Java, ODI on February 17, 2021 by Rodrigo Radtke de Souza

Before we glue it all together in ODI, let us organize the data that we just got from Essbase. In the last post we saw that we gathered all the information we needed inside a generic stage table. Although a generic table is great from an extract perspective, we may decide to split, organize, and load this information into some kind of “historical DW tables”, which will help us analyze the data better over time.

For this series of posts, we have three different kinds of information: DB statistics, file statistics and outline statistics. Their structures are very different, so we will generate one historical table for each kind of statistic.

Since we created a generic table to hold all extracted information in rows, now we need to PIVOT the data into columns and load it into the historical tables. This can be easily achieved with a SQL like the one below:
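
The original query screenshot is not reproduced here, but a rough sketch of the idea, with all table, column and statistic names as illustrative assumptions, could look like this:

  -- Pivot the generic stage rows into the DB-statistics DW table.
  INSERT INTO DW_ESSBASE_DB_STATS
         (APP_NAME, DB_NAME, SNAPSHOT_DATE,
          EXISTING_BLOCKS, BLOCK_SIZE, BLOCK_DENSITY)
  SELECT APP_NAME, DB_NAME, SYSDATE,
         EXISTING_BLOCKS, BLOCK_SIZE, BLOCK_DENSITY
    FROM (SELECT APP_NAME, DB_NAME, STAT_NAME, STAT_VALUE
            FROM STG_ESSBASE_STATS)
   PIVOT (MAX(STAT_VALUE)
          FOR STAT_NAME IN ('Number of existing blocks' AS EXISTING_BLOCKS,
                            'Block size'                AS BLOCK_SIZE,
                            'Block density'             AS BLOCK_DENSITY));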

First, we define the columns to be pivoted and use a consolidation function on the data column, like SUM, AVG, MIN, MAX or COUNT. Then we specify the data to be pivoted, and it must be a constant in the “IN” clause. Finally, the data is loaded into the DW table using a Control Append approach, since we want to keep adding the current statistics to a historical table for analysis over time.

If we follow this approach for all three metrics that we retrieved from Essbase, we will end up with three DW tables, like the ones below:

File statistics:

Outline statistics:

DB statistics:

That is all for today! See you!