Archive for the EPM Category

Is my ODI version certified for that technology?

Posted in ODI, ODI Architecture, Versions on January 14, 2021 by Rodrigo Radtke de Souza

Hi all, today’s post is a quick one, but I hear this question very often: is ODI version XXXXX certified for technology version XXXXX? It is very easy to check, since Oracle keeps it all in one place, the “Oracle Fusion Middleware Supported System Configurations” page, but sometimes people get a little bit confused by it.

Once you reach the website, select your ODI version (remember, ODI is inside Oracle Fusion Middleware). My ODI is 12.2.1.4, so let’s check it out:

There will be multiple tabs there. If you want to check, for example, on which systems your ODI Agent is certified to run, you may click on the System tab:

It gives us a lot of information about which operating systems and Java versions are supported for your agent. If you want to check which DB versions are certified, just click on the Database tab:

One last important thing that is always checked is which technologies and versions are supported for source-target ETL. This can be easily checked on the “ODI Source-Target” tab:

That’s it folks. A very simple, but useful post. See you soon!

Oracle SQL for EPM Tips and Tricks S01EP08

Posted in Connect By, DEVEPM, Dimensions, EPM, Oracle, Oracle 11.2.0, Oracle 11.2.0.4, Oracle Database, Performance, REGEXP, Tips and Tricks on November 26, 2019 by RZGiampaoli

Hey guys, how are you? Today I’ll expand a little bit more on the REGEXP theme, just to show how easy and powerful these functions are, not to mention how dynamic.

In post S01EP04 I explained how to split a string into different columns. The thing is, what if you don’t want to split it into different columns? What if you want to have the string as a list in the same column?

We can also do that, with a very nice trick using REGEXP_COUNT. Let’s take a look at it.

In the previous post we had a query like this:
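It was along these lines (I’m sketching it from memory here, with DUAL standing in for the real table, so the names are just illustrative):

SELECT REGEXP_SUBSTR(PATH, '[^|]+', 1, 1) AS GEN1,
       REGEXP_SUBSTR(PATH, '[^|]+', 1, 2) AS GEN2,
       REGEXP_SUBSTR(PATH, '[^|]+', 1, 3) AS GEN3,
       REGEXP_SUBSTR(PATH, '[^|]+', 1, 4) AS GEN4
  FROM (SELECT 'ACT|BS|Assets|Cash' AS PATH FROM DUAL);

Each call to REGEXP_SUBSTR grabs the Nth occurrence of “anything that is not a pipe”, so each generation lands in its own column.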

As I said there, with REGEXP it is very simple to “walk” through the string and get any part of it you want. And it is also simple to transform a string into a list of values. To do that we just need to add one extra CONNECT BY and the REGEXP_COUNT function, like this:
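Here is a minimal sketch of the idea, again with DUAL standing in for the real table so it can be run as is:

SELECT REGEXP_SUBSTR(PATH, '[^|]+', 1, LEVEL) AS MEMBER_NAME
  FROM (SELECT 'ACT|BS|Assets|Cash' AS PATH FROM DUAL)
CONNECT BY LEVEL <= REGEXP_COUNT(PATH, '[^|]+');

(If you run this against a table with many rows instead of DUAL, you also need to tie the CONNECT BY back to each source row, for example with AND PRIOR PATH = PATH AND PRIOR SYS_GUID() IS NOT NULL, otherwise the rows will cross-multiply.)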

What that CONNECT BY LEVEL <= REGEXP_COUNT(PATH, ‘[^|]+’) is doing is basically generating one extra level for each value delimited by “|” in our string; in other words, it dynamically “walks” through the string looking for “|” and passes the LEVEL at which it finds each piece to the REGEXP_SUBSTR above. The result is:

As you can see, REGEXP_COUNT returns the number of values separated by “|” in the string, CONNECT BY LEVEL multiplies that row by that number, and REGEXP_SUBSTR “walks” the string using the LEVEL of each row to extract the list of members from it.

This is very useful for manipulating strings when you don’t know in advance how many members you’ll need to extract from them, like UDA members for example, which come all together in one string when you extract them from ASO cubes.

I hope this is a useful trick for you guys and see you soon.

Oracle SQL for EPM Tips and Tricks S01EP07

Posted in ACE, DEVEPM, EPM, Oracle, RANK, SQL on August 15, 2019 by RZGiampaoli

Continuing our Oracle SQL for EPM series, today we’ll start to talk about analytic functions and how we can use them for more than “just” analytics.

To start with, let’s talk about RANK(). As the name suggests, RANK() is used to rank our data based on something. It’s very useful to find out which data is more relevant than the rest. Let’s see an example:

Here we have a small table with 2 currencies and a few products. Let’s first start with the basic use of RANK() and see which product generated more income:
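The query was something like this (T_INCOME, PRODUCT, CURRENCY and INCOME are hypothetical names, just to illustrate):

SELECT PRODUCT,
       CURRENCY,
       INCOME,
       RANK() OVER (ORDER BY INCOME) AS RANK_INCOME
  FROM T_INCOME;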

The basic syntax is RANK() OVER (ORDER BY COLUMN). Basically what you are saying to Oracle is: rank my data based on a column (or multiple columns). Since I just ordered by the data, the RANK() values get duplicated every time Oracle finds the same value. This is because we have 2 currencies and they are both in USD, so the same value appears twice.

To fix that we can do 2 things: either we include the currency inside the ORDER BY, or we use a more advanced feature of RANK(), which is the PARTITION BY inside the OVER clause.

Let’s see how it works:
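Something like this, on the same hypothetical table:

SELECT PRODUCT,
       CURRENCY,
       INCOME,
       RANK() OVER (ORDER BY CURRENCY, INCOME) AS RANK_INCOME
  FROM T_INCOME;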

If I just add another column to the ORDER BY, it’ll basically create the rank based on the order of these 2 columns. It’s the same as doing an ORDER BY and then following the order of the data that returns. In this case, you can see that product PR235 got rank 1 for Functional data and rank 11 for USD, even though both have the same value. By the way, you can also see that the ranks are in the opposite order from what we would like to have. This was intentional, to show you how the rank is produced. To fix it we just need to put a DESC in the ORDER BY clause, like we would do in a normal ORDER BY.

Ok then let’s see the more advanced way to write this query:
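Again a sketch on the same hypothetical table, now with the DESC already in place:

SELECT PRODUCT,
       CURRENCY,
       INCOME,
       RANK() OVER (PARTITION BY CURRENCY ORDER BY INCOME DESC) AS RANK_INCOME
  FROM T_INCOME;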

Instead of inserting new columns in the ORDER BY, we can use PARTITION BY. The results here are the same, but this can be used in other ways as well, and I would say it is the best way to use RANK(), since it makes much clearer what you want to do.

The PARTITION BY does exactly what the name says: it partitions the data by the content of one or more columns. In fact, the PARTITION BY clause can be used in most of the analytic functions, like MAX, SUM, MIN, AVG…, so it’s very powerful, and the best thing is that, if you use it, you don’t need a GROUP BY (we’ll see that in the future).
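Just to illustrate the idea with the same hypothetical table: an analytic SUM gives you the total by currency on every row, with no GROUP BY in sight:

SELECT PRODUCT,
       CURRENCY,
       INCOME,
       SUM(INCOME) OVER (PARTITION BY CURRENCY) AS TOTAL_BY_CURRENCY
  FROM T_INCOME;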

Now, as I said before, we can have other uses for RANK than just ranking data. Let’s say that you have this table without the CURRENCIES column:

Without the CURRENCIES column we end up with duplicated data in the table, right? In this case we could just do a DISTINCT and use the data as is, but let’s say you want to create the CURRENCIES column based on the data we have, and the rule would be: the first row you find is USD and the second (if it exists) would be Functional. We can use RANK() for that too:
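A sketch of how that could look (same hypothetical table, now without the CURRENCIES column):

SELECT PRODUCT,
       INCOME,
       DECODE(RANK() OVER (PARTITION BY PRODUCT ORDER BY ROWNUM),
              1, 'USD',
              2, 'Functional',
              'NA') AS CURRENCIES
  FROM T_INCOME;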

Since the data here is the same for the same product, the only thing that could differentiate the rows was the ROWNUM (or the ROWID, which would be better for making sure which one really was the first one, but harder to see in an example). I used it to create a rank where the row with the lowest ROWNUM gets rank 1, the second one gets rank 2, and with this information I just did a DECODE to turn 1 into USD and 2 into Functional (plus an NA in case we have more than 2 duplicated rows).

This can be used in exactly the same way if you have a metadata table without the data storage information and you want to create it. Then the first member you find (through our friend CONNECT BY PRIOR) will be the prototype (Store, Never Share or Dynamic Calc and Store) and the others would be Shared members.

Of course there are way more ways to use this function, and we’ll see more of them together with the other analytic functions that we are going to cover here.

See you soon guys.

ORACLE SQL for EPM tips and tricks S01EP05!

Posted in ACE, Connect By, DEVEPM, Dimensions, EPM, ETL, Oracle, Oracle 11.2.0, Oracle 11.2.0.4, Oracle Database, Query, REGEXP, SQL, Tips and Tricks on May 3, 2019 by RZGiampaoli

Continuing the Oracle SQL for EPM series, today we’ll see another usage of CONNECT BY. I’m talking a lot about CONNECT BY because it is a very useful clause and we should use it a lot in the analytic space.

Let’s imagine that we are working with a Planning application and we want to figure out in a query what the region of each cost center is and to which top level each cost center belongs. In this app we have the Entity dimension with Support Geography as an attribute.

The Support Geography attribute has the region at generation 2 of the hierarchy. How can we do that? Well, CONNECT BY to the rescue.

First of all we need to get the entire physical geography hierarchy. To do that we’ll do a query like this:
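A rough sketch of it (the repository column names here are from memory and may differ between Planning versions, so treat them as illustrative):

SELECT SG.ATTR_MEM_ID                  AS ID,
       O.OBJECT_NAME                   AS MEMBER_NAME,
       CONNECT_BY_ROOT O.OBJECT_NAME   AS REGION           -- (3)
  FROM (SELECT AM.ATTR_MEM_ID                              -- (1) just Support Geography
          FROM HSP_ATTRIBUTE_MEMBER AM
          JOIN HSP_OBJECT D
            ON D.OBJECT_ID = AM.ATTR_ID
         WHERE D.OBJECT_NAME = 'Support Geography') SG
  JOIN HSP_OBJECT O
    ON O.OBJECT_ID = SG.ATTR_MEM_ID
CONNECT BY PRIOR O.OBJECT_ID = O.PARENT_ID                 -- (2)
START WITH O.PARENT_ID = (SELECT OBJECT_ID
                            FROM HSP_OBJECT
                           WHERE OBJECT_NAME = 'Support Geography');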

This is a simple query that joins the HSP_ATTRIBUTE_MEMBER table (this table contains the members of all attribute dimensions), which is why we have that sub-query to select just the Support Geography members (1).

Then we join this table with HSP_OBJECT, which contains the metadata of everything in Planning: forms, dimensions, flows, everything. We do that to get the names of the members. And finally we do a CONNECT BY to rebuild the hierarchy (2).

With the hierarchy rebuilt, we can use CONNECT_BY_ROOT to figure out to which top level each member belongs (3).

With this query we know the leaf members of the Support Geography dimension (ID) and to which region each member belongs. Now we need to do the same for the Entity dimension.
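Again a rough sketch (same disclaimer about the repository column names, and I’m assuming the dimension root member is called Entity):

SELECT O.OBJECT_ID                              AS ID,
       O.OBJECT_NAME                            AS COST_CENTER,
       CONNECT_BY_ROOT O.OBJECT_NAME            AS TOP_LEVEL,
       SYS_CONNECT_BY_PATH(O.OBJECT_NAME, '|')  AS ENT_PATH
  FROM (SELECT OBJECT_ID, OBJECT_NAME, PARENT_ID
          FROM HSP_OBJECT
         WHERE OBJECT_TYPE = 33) O                         -- (1) 33 = Entity
CONNECT BY PRIOR O.OBJECT_ID = O.PARENT_ID
START WITH O.OBJECT_NAME = 'Entity';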

With this query we are filtering OBJECT_TYPE = 33, which means Entity (1) (if you want to know all the possible object types, you can query the HSP_OBJECT_TYPE table to check them out), then we do a CONNECT BY to rebuild the hierarchy, and we use CONNECT_BY_ROOT to get the parents as well as SYS_CONNECT_BY_PATH to get the path of the hierarchy.

Now we just need to join everything in the same query to get all the cost centers, to which top level each one belongs, and to which region they belong as well.

Another thing that I like a lot about the WITH clause is that it is very easy to create nested queries. In this case I created 2 different queries, each one with a WITH and a SELECT based on that WITH. Now all I need to do is put everything under the same WITH:

We can see that I just took the SELECTs that were under each WITH clause and created a nested WITH out of steps 1 and 2. Instead of 2 WITH queries I now have just 1 WITH with 4 clauses under it, 2 of them nested ones based on the previous one (the CONNECT BY over the filtered query).

All we need to do is put everything together by joining ENT_HIER and SG_HIER using the HSP_MEMBER_TO_ATTRIBUTE table. This table is basically the map between the Entity dimension and the attribute dimensions, in this case Support Geography.
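Putting it all together, the final shape could be something like this (again, all repository column names are illustrative and from memory):

WITH
SG_MEMBERS AS (                                   -- step 1: filter Support Geography first
  SELECT O.OBJECT_ID, O.OBJECT_NAME, O.PARENT_ID
    FROM HSP_OBJECT O
   WHERE O.OBJECT_ID IN (SELECT AM.ATTR_MEM_ID
                           FROM HSP_ATTRIBUTE_MEMBER AM
                           JOIN HSP_OBJECT D
                             ON D.OBJECT_ID = AM.ATTR_ID
                          WHERE D.OBJECT_NAME = 'Support Geography')
),
SG_HIER AS (                                      -- step 2: connect by on the filtered set
  SELECT OBJECT_ID                    AS ID,
         OBJECT_NAME                  AS MEMBER_NAME,
         CONNECT_BY_ROOT OBJECT_NAME  AS REGION
    FROM SG_MEMBERS
  CONNECT BY PRIOR OBJECT_ID = PARENT_ID
  START WITH PARENT_ID = (SELECT OBJECT_ID
                            FROM HSP_OBJECT
                           WHERE OBJECT_NAME = 'Support Geography')
),
ENT_MEMBERS AS (                                  -- step 3: filter the Entity dimension
  SELECT OBJECT_ID, OBJECT_NAME, PARENT_ID
    FROM HSP_OBJECT
   WHERE OBJECT_TYPE = 33
),
ENT_HIER AS (                                     -- step 4: connect by on the filtered set
  SELECT OBJECT_ID                               AS ID,
         OBJECT_NAME                             AS COST_CENTER,
         CONNECT_BY_ROOT OBJECT_NAME             AS TOP_LEVEL,
         SYS_CONNECT_BY_PATH(OBJECT_NAME, '|')   AS ENT_PATH
    FROM ENT_MEMBERS
  CONNECT BY PRIOR OBJECT_ID = PARENT_ID
  START WITH OBJECT_NAME = 'Entity'
)
SELECT E.COST_CENTER,
       E.TOP_LEVEL,
       E.ENT_PATH,
       S.REGION
  FROM ENT_HIER E
  JOIN HSP_MEMBER_TO_ATTRIBUTE MA
    ON MA.MEMBER_ID = E.ID
  JOIN SG_HIER S
    ON S.ID = MA.ATTR_MEM_ID;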

The result is the cost center, the path of that member in the hierarchy (on which we can use REGEXP to extract any level of the hierarchy), and finally the region that cost center belongs to.

You may be thinking why we need all that. Well, this is a query I used to join with the HSP_GROUP table to get the groups and the members from each region and then create the security dynamically for each user. That means I have a Planning application with entities holding cost centers from different regions, and I’m using the attribute dimension to generate the security.

If a user has access only to AMER data, he’ll only see the AMER cost centers. This is just one example of what we can achieve using CONNECT BY.

I hope you guys enjoy it. Next time we’ll talk about another very, very useful function that I really love.

See you soon guys.

ORACLE SQL for EPM tips and tricks S01EP02!

Posted in ACE, Connect By, EPM, Oracle Database, Performance, Query, SQL, Tips and Tricks, WITH Clause on March 21, 2019 by RZGiampaoli

Hey guys, how are you? Let’s continue the SQL for EPM series. Today I’ll continue to talk about WITH, with a small bonus of CONNECT BY :). Let’s start.

A lot of people use CONNECT BY on a daily basis, but as far as I have seen, most of them don’t know how to use it properly. I’ve already lost count of the number of people complaining about performance issues with CONNECT BY.

The thing is, CONNECT BY works a little differently from everything else in Oracle. We can say that CONNECT BY has 2 stages, and we’ll see why I’m saying that with this example. Let’s get back to our metadata table and do a CONNECT BY to extract the Balance Sheet hierarchy from the Juno application:

As we can see, inside this table we have more than one application and more than one hierarchy for each application. That’s ok, we just need to filter it in our SQL, right?

If we filter the APP_NAME and the HIER_NAME we’ll get all accounts for that application, and this will generate 12,622 rows. By the way, this table has all metadata from all our applications and we always filter by APP_NAME and HIER_NAME to select what we want (the table is also partitioned and sub-partitioned by these 2 columns). It’s important to know that, without filtering anything, this table has 2,260,372 rows in total.

Ok, now, if we want to get just the BS hierarchy, we just need to do the CONNECT BY, right?
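Assuming a metadata table more or less like ours (TBL_METADATA, with MEMBER, PARENT_MEMBER, APP_NAME and HIER_NAME as illustrative column names), the “obvious” way would be:

SELECT MEMBER, PARENT_MEMBER
  FROM TBL_METADATA
 WHERE APP_NAME  = 'JUNO'
   AND HIER_NAME = 'BS'
START WITH PARENT_MEMBER = 'BS'
CONNECT BY PRIOR MEMBER = PARENT_MEMBER;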

That works… perfect… or not? Well, in fact, this is the wrong way to use CONNECT BY, because of what I said before: the 2 stages.

As you can see, this query took 25 seconds just to return the first 50 rows. In an integration this will take way more time; in fact, if you join this table to a data table to do a SUM at the BS level, it will take ages to return.

The reason is that, for the CONNECT BY, Oracle first does everything that is after the words CONNECT BY and after the words START WITH, and then, and only then, it applies what is in the WHERE condition. That means it first did the CONNECT BY on those 2,260,372 rows (and they are all repeated) and, after all that processing, it filtered what we wanted, the APP_NAME and the HIER_NAME. So the right way to use it is:
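Using the same illustrative names, the filters move from the WHERE into the START WITH and CONNECT BY, so Oracle prunes while it walks the hierarchy:

SELECT MEMBER, PARENT_MEMBER
  FROM TBL_METADATA
START WITH APP_NAME = 'JUNO'
       AND HIER_NAME = 'BS'
       AND PARENT_MEMBER = 'BS'
CONNECT BY PRIOR MEMBER = PARENT_MEMBER
       AND APP_NAME  = 'JUNO'
       AND HIER_NAME = 'BS';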

Now it looks way better: 0.375 seconds to do exactly the same thing as before, and the only thing I did was to move our filters to the right place. Now Oracle is filtering and doing the CONNECT BY at the same time.

Now, if you do a SYS_CONNECT_BY_PATH and want to get just the leaves (to have the complete path the hierarchy traverses), you can filter the leaves in the WHERE clause (and it needs to be there; otherwise you won’t have the entire hierarchy during the CONNECT BY). This is how:
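Still with the same illustrative table: the hierarchy filters stay in the START WITH/CONNECT BY, while the leaf filter goes in the WHERE clause, which is applied after the hierarchy is built:

SELECT CONNECT_BY_ROOT MEMBER            AS ROOT_MEMBER,
       SYS_CONNECT_BY_PATH(MEMBER, '|')  AS MEMBER_PATH
  FROM TBL_METADATA
 WHERE CONNECT_BY_ISLEAF = 1
START WITH APP_NAME = 'JUNO'
       AND HIER_NAME = 'BS'
       AND PARENT_MEMBER = 'BS'
CONNECT BY PRIOR MEMBER = PARENT_MEMBER
       AND APP_NAME  = 'JUNO'
       AND HIER_NAME = 'BS';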

Now you see that the CONNECT BY filtered what needed to be filtered during the CONNECT BY execution and, afterwards, it filtered just the leaves (using CONNECT_BY_ISLEAF, which returns whether a member is a leaf or not).

Also, I used CONNECT_BY_ROOT to generate the root member used in this query (BS) and SYS_CONNECT_BY_PATH to generate the entire path of the metadata (very useful for transforming parent/child tables into generation tables using this technique and a REGEXP [we’ll see this in another post]).

Ok, now that the “bonus” is written, let’s talk about the WITH that was the main subject here. Even with the CONNECT BY written the right way, with the filters in the right place, we can still improve the performance using WITH.

That’s right, the idea is to prepare our subset of data using WITH before we ask Oracle to do the CONNECT BY, leaving the CONNECT BY itself as simple as possible. Let’s take a look:
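A sketch of it, with the same illustrative names as before:

WITH BS_HIER AS (
  SELECT MEMBER, PARENT_MEMBER
    FROM TBL_METADATA
   WHERE APP_NAME  = 'JUNO'
     AND HIER_NAME = 'BS'
)
SELECT CONNECT_BY_ROOT MEMBER            AS ROOT_MEMBER,
       SYS_CONNECT_BY_PATH(MEMBER, '|')  AS MEMBER_PATH,
       MEMBER
  FROM BS_HIER
START WITH PARENT_MEMBER = 'BS'
CONNECT BY PRIOR MEMBER = PARENT_MEMBER;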

This is by far the best way to use a CONNECT BY clause. Instead of using WITH you could use a sub-query, but I think this way is easier and more organized as well. Also, I know the time difference doesn’t look too big between the previous example and this one, but when you join this with data and start to SUM everything, you’ll see a huge difference between this method and the previous one.

Also, sometimes Oracle gets lost with the previous method, making everything slower, but with the WITH method that never happens, so I advise you to start using it.

I hope you guys enjoy this little tip and see you next time.

BI Community Lvl up! Enter Analytics Community

Posted in ACE, Career, DEVEPM, EPM, ODTUG, Oracle on February 22, 2019 by RZGiampaoli

Hey guys how are you?

Today I have very exciting news for all of us. ODTUG just announced the re-branding of the BI Community to the Analytics Community.
The change is not in name alone; it’s an effort to create a more dynamic and inclusive community to better serve the needs of ODTUG members. What does this change mean? It means we encompass more than just BI. It means if you are interested in any of the tools that are used to analyze data and turn it into information, we are here for you! Have a question? Want to share knowledge you gained from a project? Connect with other like-minded analysts who explore data in myriad ways, both on premises and in the cloud. Join the Analytics Community to share your passion for all things analytics, whether you are using OAC, OBIEE, ML/AI, R, Python, Essbase, DV, Big Data, Data Warehousing, or other platforms. This page is your portal to all things Analytics: a blog feed, a list of upcoming Analytics-related events, and a look at the Analytics content located in the Technical Database. Check out the most recent blogs from industry experts. Like what you read? Click on the title to visit the author’s blog. Would you like to see your blog featured here? Drop a note to Haleigh@odtug.com. Let’s explore analytics together!

Check out the Twitter, LinkedIn and Facebook pages to learn more, join us, and help us create a better Analytics Community.

KScope 18 Speaker Award

Posted in ACE, Career, DEVEPM, EPM, Kscope, Kscope 18, ODI, ODI Architecture, ODTUG, PBCS on September 17, 2018 by RZGiampaoli

Hey guys how are you?

It has been a while since the last time I wrote anything here… and surprise, surprise, it’s because I’m crazy busy working on a project that was sized small but turned out huge while the size didn’t change… 🙂 never happened before heheheh 😉

This is just a small post to tell you how grateful and happy we are for receiving the EPM Data Integration Speaker Award at Kscope 18 with the presentation How to Use Your ODI On-Premise to Seamlessly Integrate PBCS.

We started this blog in 2012 and have been presenting at Kscope since 2013, and it has been very rewarding, not only because we became Oracle ACEs because of it, but because we learn a lot with every single post or presentation.

When you do a presentation you need to stop thinking about a solution for one specific project and start thinking about a solution that can be used in all projects. This alone is a challenge, but the amount of things we learn from it is a great deal. We can easily say that our code has improved a lot since we began this blog in 2012, and it’s in great part because of this blog and our presentations.

So we thank all of you that read our blog (even if we don’t post as much as we would like), everybody that goes to Kscope and decides to watch our presentations, and ODTUG, which provides this bilateral learning platform.

Thank you all for supporting us and see you soon.

KScope 18! It’s a wrap.

Posted in ACE, DEVEPM, EPM, Kscope, Kscope 18, PBCS, Uncategorized on June 21, 2018 by RZGiampaoli

That’s it guys, one more year of KScope finished successfully.

This year was a big one for us, since we had 3 sessions, one lunch and learn panel and one lip-sync battle… that we lost… but it was a lot of fun (way better than I thought it would be).

The sessions were great and we are very proud to have received the Top Speaker Award for the EPM Data Integration track with the session How to Use Your ODI On-Premise to Seamlessly Integrate PBCS.

This means a lot to us, since we are always worried about our speeches because of our marvelous English and our subtle accent (I sound like a famous robot from the future: “Come with me if you want to live…”), so we always try to compensate with the content.

And this year I think we made it. We’ll try very hard to keep the content this interesting. We always try, but sometimes what is interesting to us is not interesting to others. It would be very nice if you guys left comments with things you would like to see in our presentations or blogs.

Thank you very much to all the people that attended our sessions, and we look forward to seeing you next year!

Thanks

ODI 12c Standalone Agent Install for an ODI 11g guy

Posted in InfraStructure, Install, ODI, ODI 11g, ODI 12c, ODI Architecture on July 17, 2017 by Rodrigo Radtke de Souza

Hi everybody! Today’s post is about installing an ODI 12c standalone agent. This is not a “new” topic and the steps to perform it can also be found at the Oracle site; however, it caught me a little bit “off guard” when I was requested to install one, and the reason is that it changed considerably compared to ODI11g (and yeah, we still work A LOT with ODI11g, so installing an ODI12c agent was “new” for us).

Prior to version 12, the ODI agent was configured by simply editing a file called odiparams.bat (odiparams.sh in Linux), which contained all the necessary agent configuration parameters. It was a simple step, where you would enter the ODI master/work repository configuration, DB/ODI connection users and so on. After that, you would simply run the agent program and that was it, very short and easy to do. However, in ODI 12 it changed considerably: now we need to go through two wizard setups, one to create the necessary pre-requisite DB schema for “Common Infrastructure Services” and the other one to configure the ODI standalone agent for us.

This change added some extra complexity to an architecture that was (talking exclusively about the ODI standalone agent here) very simple to set up in the old days. Although Oracle provides wizards to minimize this effort, nothing was easier than simply configuring a parameter file and running a Java program. But enough grumbling, let’s see how we may accomplish this task in ODI 12.

The first wizard that we need to run is the Repository Creation Utility (RCU), located at ORACLE_HOME/oracle_common/bin/rcu.bat. Before we run it, we must understand what RCU is and what it can do for us. As its name suggests, it is a utility that may be used to create any repository component required for Oracle Fusion Middleware products, including the ODI Master/Work repository.

In our project, we did not create the ODI Master/Work repository with RCU; instead we got two empty Oracle DB schemas and installed ODI directly there. The reason why we did not use RCU in this situation is that RCU forces you to create one single Oracle DB schema that stores both the ODI Master and Work repositories, and this is not a good approach when dealing with large environments. We think that Oracle’s rationale on this subject was to simplify certain ODI installs by unifying everything in a single place, but again, this removes some of ODI’s architectural flexibility and complicates the use of complex architectures in the future, like using multiple Work repositories attached to one Master.

So, if we already have the ODI Master/Work repositories created, why do we still need RCU? Because, from ODI 12 on, we need a third Oracle DB schema that will be used to store the “Common Infrastructure Services” tables, which are required for the ODI standalone agent, and the only way to create these tables is by using the RCU utility.

Now that we have set our expectations around RCU, let’s run it. The first screen is just a welcome screen explaining what RCU is about, so just click Next.

Now let’s select “Create Repository” and “System Load and Product Load”. Just notice that you will be asked for a DBA user in the next steps, since this DBA user will be used to create the necessary database objects (including the DB schema itself) in the new “Common Infrastructure Services” schema. Click Next.

Add the database and DBA information and click Next.

The installer will check your information and, if everything is ok, all tasks will be green. Select OK to proceed.

The next screen is where we select which components we want RCU to install. We may notice that RCU is able to create several schemas for different components, from ODI to WebLogic. Since we already have our Master and Work repositories created, we just need to select “AS Common Schemas”/”Common Infrastructure Services”. Note that RCU will create this schema using whatever is entered in the “Create new prefix” option plus an “_STB” suffix. Click Next.

The installer will check the pre-requisites and, if everything is ok, a green check will appear. Click OK.

The next screen is where you define the password that will be used on the newly created DB schema. Add a password and click Next.

Define the default and temporary tablespaces that will be used by the new schema and click Next.

If the tablespaces do not exist, they will be created for you. Click OK.

The installer will check once more if everything is okay and will also create the necessary tablespaces. Click OK.

On the next page we get a summary of what the installer will do. If everything looks correct, click Create to create the necessary DB objects.

Check the Completion Summary, click Close and that’s it! You have successfully created the “Common Infrastructure Services” schema, which is a pre-requisite for the ODI agent install.

The next step is to run the wizard that will configure the ODI standalone agent for us. Run the config program at ORACLE_HOME/oracle_common/common/bin/config.cmd. In the first screen let’s create a new domain. The domain folder is where the ODI agent batch programs, such as the start/stop agent scripts, will reside. Select a meaningful folder and click Next.

In the next screen select “Oracle Data Integrator – Standalone Agent – 12.2.1.2.6 [odi]” and click Next. This step will also install some basic standalone components required for the ODI agent.

Select a valid JDK location and click Next.

Since we did not create our Master and Work repositories using RCU, we won’t be able to use the “RCU Data” option for auto configuration here. It is not a big deal, since we may select “Manual Configuration” and click Next.

Here we need to input all the information related to two schemas: the ODI Master and the “Common Infrastructure Services”. The way this screen works is tricky and confusing, since there are options that may be typed for all schemas at once. The best way to do it without any mistake is to select one of them, add all its information, then uncheck it, check the other one and add all the information again. Click Next.

The installer will check the information that was added here and, if it is okay, two green marks will be shown in the Status column. Click Next.

The next screen is used to define our ODI agent name. Create a meaningful name here, since this is what the ODI users will select to run their ETL processes on. Click Next.

Add the server address, the port and an ODI user/password that has “Supervisor” access. On the preferred datasource option, leave it as odiMasterRepository and click Next.

Although we are not going to use our ODI standalone agent with a Node Manager object, which would be controlled by WebLogic, we still need to select a type for it and create a new credential. Add any name and password for it (don’t worry, you will not use it for the ODI standalone agent) and click Next.

Review the install summary and, if everything is ok, just click Create.

Check all the steps as they turn into green checks and, once completed, click Next.

That’s the end of the configuration! You have successfully completed the ODI standalone agent setup and it is ready to run.

In order to run the ODI agent, open a CMD prompt, navigate to your base domain folder and run the ODI agent start program with its name as an input argument: agent.cmd -NAME=DEV_AGENT. Wait a little bit for it to load, and when its status gets to “started” it is good to go.

Now that the ODI agent is up and running, we may go to ODI Topology/Agent and double-click the ODI agent that we have created. Now we may click on the Test button and see what happens. If everything is correct, you will see an information window saying that the ODI agent test was successful!

Congratulations, now you have an ODI12c standalone agent configured. As you can see, we now have some extra steps to do compared to ODI11g. I hope this post helps you get prepared for this new kind of install.

Thanks, see ya!

ODICS is here!!!

Posted in New Features, ODI, ODI Architecture, ODICS on February 13, 2017 by Rodrigo Radtke de Souza

Hi all, a quick but interesting post today! Oracle has just announced its Oracle Data Integrator Cloud Service (ODICS)! You may read about it here and here. We do not have much information about it yet (if it is a complete ODI solution, how it works, if it is similar to what we did in our ODI Cloud article), but we are hoping to get those answers in the “Oracle Data Integration PM Webcast – Introducing Oracle Data Integrator Cloud Service (ODICS)” that will happen on Thursday, February 16, 2017 at 11:00 am Eastern Standard Time (GMT-05:00). We encourage all of you to join the webcast and have a first look at what ODICS looks like.

——- EDIT on Feb 23 ——-

Hi all, we’ve watched the ODICS webinar and, in the end, we were right: ODICS is very similar to what we did in our ODI Cloud article. The only difference is that ODICS is installed directly on JCS instead of on a DBCS machine as we did in the article.

In summary, ODICS is nothing more, nothing less than the current ODI 12c installed on a JCS machine that is maintained by Oracle. There is only one restriction (which may not be a big issue for some projects): the ODI agent needs to be running in the cloud, so you may not deploy it on premises. You may watch Oracle’s ODICS webinar here. You may also learn more about ODICS in Christophe’s blog post.

That’s it folks, exciting times are ahead of us!