Archive for ODI Expert

DEV EPM 2014 in review

Posted in EPM, Kscope 14, ODI on December 30, 2014 by Rodrigo Radtke de Souza

Hi all readers! Another year has gone by and it was a great (if not the greatest) year for DEVEPM! We did a lot of cool stuff this year, like the ODI certification, the ODTUG ODI Expert Panel, speaking at Kscope14, changing the ODI EPM KMs to work on ODI 12c, the ODI petition for EPM KMs support, an OTN ArchBeat publication and a lot of other EPM related topics.

I would like to thank each one of you for accessing/reading/sharing our blog posts! I hope that we could positively impact your lives somehow with any of our posts. That's the reason why we blog: we want to share our knowledge back to the community that has always helped us in difficult times!

And 2015 will be no different! We will work even harder to bring you new content about the EPM world! Thanks again and see you soon!!!!

Follow us on Twitter: @RodrigoRadtke @RZGiampaoli @DEVEPM

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 22,000 times in 2014. If it were a concert at Sydney Opera House, it would take about 8 sold-out performances for that many people to see it.

Click here to see the complete report.


ODI Substitution Tags demystified

Posted in ODI, ODI Architecture on September 16, 2014 by Rodrigo Radtke de Souza

Hi all, I bet every ODI developer has already gotten confused when working with ODI substitution tags. Every time I need to work with them, I have to repeat all my previous tests because I forget which tag is the right one to use (even after many years working with ODI…). That's one of the reasons why I decided to write this post: so I can read my own post for future reference (memory, memory, where are you???). Also because it is nice to share our experiences with others, so they can benefit from them (at least I hope so 🙂 ) if they face similar issues.

For all those that are not familiar with them, what are ODI substitution tags? Substitution tags are those characters that you find when you open a Knowledge Module or a procedure in ODI, like these: <%>, <?>, <@> and <$>. Here is one example:

[Screenshot: substitution tags inside an ODI Knowledge Module step]

Those tags are part of the ODI Substitution API and they are used to produce dynamic code based on several things like ODI variables, procedure options, data models, the interface diagram and so on. Basically the Substitution API is the greatest development power in ODI (that's why I love ODI so much), since it can produce dynamic code based on the existing information around it. Oracle has good documentation about it here: Oracle Doc
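To give a feel for it, here is a hedged sketch of a typical IKM-style insert step (the getTable/getColList/getFrom/getFilter calls are real Substitution API methods, but the step itself is illustrative, not copied from a shipped KM):

insert into <%=odiRef.getTable("L", "TARG_NAME", "A")%>
  (<%=odiRef.getColList("", "[COL_NAME]", ", ", "", "")%>)
select <%=odiRef.getColList("", "[EXPRESSION]", ", ", "", "")%>
from <%=odiRef.getFrom()%>
where (1=1) <%=odiRef.getFilter()%>

At generation time, ODI replaces each tag with the target table name, the column list and the mappings of whatever interface is running the KM.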

Although Oracle talks about the Substitution API in their documentation, it says nothing about the tags themselves, and their examples only use the most common tag, <%>. But in reality we have four types of tags, each one behaving in its own way. Basically, each one of them is executed in a specific order (a parse phase), which affects the results more than you could imagine.

I'll explain each one of them using a real situation that happened to me some time ago. I was working on a project that contained several different servers (ODI agents) with different operating systems on them. I was developing a dynamic component that had to change its behavior based on the OS where it was going to be executed: if the server OS was Linux I would execute one particular command, and if it was Windows, another one. To test which OS the component was running on, I used the following Substitution API command with the following tag (the only one that I knew back then):

<%= System.getProperty("os.name") %>

It didn't work as I expected. This command always returned the OS name of the machine from which I was launching the execution (the ODI client), not the one where the command was going to be executed (the ODI agent). It puzzled me, so I started to study which options I could use… that's when I learned about the <?>, <@> and, later on, <$> tags. Each one of them has a different effect on ODI code generation because each one of them has its own parse phase! This is a very simple concept and it truly affects the way ODI generates its dynamic code.

So let me show you what I did to test all those options. First consider that my ODI client is on a Windows 2003 OS and the ODI agent is on a Windows Server 2008 R2 OS (the Linux server was not available to me anymore, but Windows 2008 will suffice for this example, since we just need two different OSs). I created the following procedure:

[Screenshot: ODI procedure with one Jython step per substitution tag]

It is a simple procedure with Jython code that will "raise" the name of the OS as an error. I also marked "Ignore Errors", so I could add more steps to this procedure. I did the same for all substitution tags:

[Screenshot: the four procedure steps, one per tag]
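In plain text, the four steps look roughly like this (a sketch of what the screenshot shows; each step is a one-line Jython command with "Ignore Errors" checked):

raise '<%= System.getProperty("os.name") %>'
raise '<?= System.getProperty("os.name") ?>'
raise '<$= System.getProperty("os.name") $>'
raise '<@= System.getProperty("os.name") @>'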

Then you execute the procedure, making sure that it uses the ODI agent (which resides on a different OS):

[Screenshot: execution dialog with the ODI agent selected]

Let's begin with <%>. After we execute the procedure and go to the "Code" tab, we have the following:

[Screenshot: Code tab for the <%> step]

Very interesting, and it matches my previous experience. The <%> tag is returning the ODI client OS, which indicates that the Substitution API is generating the command when it is sent to the agent, but before it gets to the agent! Obviously, if we go to the "Definition" tab, we will see the execution result of this command, which is "Windows 2003":

[Screenshot: Definition tab showing "Windows 2003"]

Let's see what happens using the <?> substitution tag:

[Screenshot: Code tab for the <?> step]

Nice! This is exactly what I wanted in the first place: get the ODI agent OS and decide which command to execute based on it. So ODI is generating the code on the agent server, before the command is sent to the Operator and before ODI variables are substituted (what, Operator, variables??? More on those topics in a few moments…). If we go to the "Definition" tab, we will see the execution result, which is "Windows Server 2008 R2":

[Screenshot: Definition tab showing "Windows Server 2008 R2"]

Ok, let’s see <@>:

[Screenshot: Code tab for the <@> step]

Humm… more interesting stuff, and that's why I mentioned the Operator/ODI variables before. This code is being generated on the agent, but after the command appears in the Operator and after ODI variables are substituted. When we go to the "Definition" tab, we get the execution result, "Windows Server 2008 R2" again:

[Screenshot: Definition tab showing "Windows Server 2008 R2"]

The <@> tag is great for evaluating an ODI variable's content and, depending on that value, doing something different in your code, like the example below:

[Screenshot: interface filter using the <@> tag with #V_COND]

The interface filter in this example will change according to the #V_COND variable. If #V_COND is 1, it will filter all records with BANK_ACCOUNT_TYPE = '1'. If #V_COND is different from 1, it will load all records.
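The pattern behind such a filter is roughly this (a hedged sketch; the exact expression is in the screenshot above, and it works because <@> runs after #V_COND has been substituted):

<@ if ("#V_COND".equals("1")) { @>
BANK_ACCOUNT_TYPE = '1'
<@ } else { @>
1 = 1
<@ } @>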

Those three examples show us that each tag executes in a different parse phase and each one has precedence over the others. And now you must be thinking: does it mean that we can mix and match those tags and create even more dynamic code??? Oh yes, sure! Here you can see great examples from the Sonra guys:

Sonra Part 1

Sonra Part 2

They show us how we can create "loops" using one tag and apply the result to another tag that gets executed right after it, which gives us an extremely dynamic tool for development. I already told you that I love ODI, right?

Ok, but what about <$>? I left this one for the end because this tag was added later to the game (it seems to be available from version 11.1.1.6 on). Its parse phase sits between <?> and <@>, which gives us the unique ability of generating the code on the agent server before the command is sent to the Operator, but AFTER ODI variables are substituted. Ok, this is crazy stuff 🙂 Why do we need it? The key here is that the code will already contain all ODI variables evaluated (meaning their corresponding refresh/set values), and it will be displayed in the Operator already parsed with the correct ODI variable logic, which makes your code much more readable for the Operator users. Besides that, having a fourth parse phase is great for doing even crazier dynamic self-constructed code (long live dynamic coding!!!). Here is "the same" example using <$> in an ODI interface, from the Oracle ODI blog (the place where I learned about the existence of <$>):

Oracle Blog

In our OS example, the <$> result is no different from what we got with <?>:

[Screenshot: Code tab for the <$> step]

And in the "Definition" tab:

[Screenshot: Definition tab showing "Windows Server 2008 R2"]

That's it folks. To sum up, here is the list of substitution tags in their parse phase order:

  • <%>: generates the command when it is sent to the agent, but BEFORE it gets to the agent;
  • <?>: generates the code on the agent server, BEFORE the command is sent to the Operator and BEFORE ODI variables are substituted;
  • <$> (available from 11.1.1.6 on): generates the code on the agent server, BEFORE the command is sent to the Operator but AFTER ODI variables are substituted;
  • <@>: generates the code on the agent server, AFTER the command appears in the Operator and AFTER ODI variables are substituted.
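As a last hedged illustration of the phase order, the two OS-check tags can even be combined in a single Jython step (based on the results above, <%> resolves on the generating side and <?> on the agent):

raise 'generated at: <%= System.getProperty("os.name") %> / executed at: <?= System.getProperty("os.name") ?>'

In the environment used in this post, that raises "generated at: Windows 2003 / executed at: Windows Server 2008 R2".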

Hope you have enjoyed it! See you next time!

Really using ODI 12c for Data Integration with Oracle Hyperion Planning/Essbase

Posted in ODI Architecture, Hyperion Planning, EPM, ODI 12c, ODI, Kscope 14, Hacking, Hyperion Essbase on July 18, 2014 by RZGiampaoli

At Kscope14, Oracle announced that ODI 12c would not have support for Planning/Essbase and HFM. As you can imagine, this bomb made a lot of noise in the EPM world.

Because we did not like the ODI 12c interface, we did not care too much about this announcement, but after Oracle posted this: https://blogs.oracle.com/dataintegration/entry/using_odi_12c_for_data, we heard a lot of users complaining about it.

We also thought a little about our dear users and imagined what our beautiful ODI environment, which orchestrates the entire EPM environment, would look like if we had to create a lot of extra steps to export data from tables to files and use CMD commands to load it into Planning/Essbase/HFM.

So we decided to run some tests to see if we were able to use the ODI 11 KMs with minimal changes (in order to increase acceptance of this method) and do our part to help the EPM world.

After a couple of hours (it was more like 6 hours…) we figured out how to make the KMs from ODI 11 work with minimal changes in ODI 12. Let us stop talking and start showing what needs to change to make ODI 12 compatible with EPM.

Technologies

We do not need to import or change anything in the Technologies tree because Oracle left all the technologies for the EPM tools there.

Jars

We do not need to change or copy any JAR because Oracle was kind enough to keep them all there. The only thing we need to do is import the EPM KMs from ODI 11 into ODI 12.

KMs

We have seven KMs that we need to import from ODI 11.

For Essbase:

RKM Hyperion Essbase

LKM Hyperion Essbase DATA to SQL

LKM Hyperion Essbase METADATA to SQL

IKM SQL to Hyperion Essbase (DATA)

IKM SQL to Hyperion Essbase (METADATA)

For Planning:

RKM Hyperion Planning

IKM SQL to Hyperion Planning

After we import the KMs, we need to change a few things in them.

RKMs

In the RKMs we see the first change Oracle made to the way ODI works. In version 11, ODI used the <%=odiRef.getModel("ID")%> substitution method to get the ID of the model we would like to reverse. If we take a look at the master repository in ODI 12, we'll see a new column in the table:

[Screenshot: master repository table with the new GLOBAL_ID column]

This is the first thing we need to update in the RKMs. ODI 11 used the numeric ID, and if you try to use it as is, a validation Oracle created makes the step fail (it was a good error message, because we could figure out the problem just by reading it):

[Screenshot: GLOBAL_ID validation error]

So, basically, we need to update the KMs to use the new GLOBAL_ID.

RKM Hyperion Essbase

Step Reset MetaData:

We need to update the code:

SnpsReverseResetTable -MODEL=<%=odiRef.getModel("ID")%>

To

SnpsReverseResetTable -MODEL=<%=odiRef.getModel("GLOBAL_ID")%>

Step Start the Reverse:

We need to update the code to the new standard and also enclose the substitution method in double quotes, because it now returns a string instead of a numeric value.

imod = <%=snpRef.getModel("ID")%>

To

imod = "<%=snpRef.getModel("GLOBAL_ID")%>"

Step Set MetaData:

We only need to update it to GLOBAL_ID.

SnpsReverseSetMetaData -MODEL=<%=snpRef.getModel("ID")%>

To

SnpsReverseSetMetaData -MODEL=<%=snpRef.getModel("GLOBAL_ID")%>

RKM Hyperion Planning

For the Planning KM we just need to update the code in exactly the same way as we did in the Essbase KM.

Step Reset MetaData:

SnpsReverseResetTable -MODEL=<%=odiRef.getModel("ID")%>

To

SnpsReverseResetTable -MODEL=<%=odiRef.getModel("GLOBAL_ID")%>

Step Start the Reverse:

imod = <%=snpRef.getModel("ID")%>

To

imod = "<%=snpRef.getModel("GLOBAL_ID")%>"

Step Set MetaData:

SnpsReverseSetMetaData -MODEL=<%=snpRef.getModel("ID")%>

To

SnpsReverseSetMetaData -MODEL=<%=snpRef.getModel("GLOBAL_ID")%>

After this, just create a model and reverse something to test it:

[Screenshot: RKM execution]

The results:

[Screenshots: Planning reverse and Essbase reverse results]

With this, the RKM part is ready and we can start creating interfaces. Sweet.

IKMs

The IKMs were the easiest part. Oracle created some categories to group similar KMs, and we just need to choose one of the three possibilities to make the KM show up in the Mappings (the new Interface object of ODI 12):

[Screenshot: Integration Type selection]

We chose "Control Append", since the EPM IKMs do not have any type of integration strategy, but it really does not matter which one you choose (I just like things organized).

IKM SQL to Hyperion Essbase (DATA)

We need to double click the KM and select the integration type:

[Screenshot: KM integration type selection]

IKM SQL to Hyperion Essbase (METADATA)

For the metadata KM we need to do the same thing: add the integration type (we chose "Control Append" again).

IKM SQL to Hyperion Planning

Same thing for the Planning IKM: add the integration type (we chose "Control Append").

[Screenshot: IKM executions]

We are done with the IKM part.

LKMs

LKM Hyperion Essbase DATA to SQL

There is no change in this KM. Works as is. I like it.

LKM Hyperion Essbase METADATA to SQL

This was the tricky one, and not because it's too different from the others. In fact, if it was not for a BUG in the API, it would not need any changes at all to make it work (on Monday we will open an SR with Oracle about this bug).

The thing is: in the step "Begin Essbase Metadata Extract", we have a line with the following command:

stagCols = '<%=snpRef.getColList("", "[CX_COL_NAME]=[COL_NAME] ", ", ", "", "")%>'

This command basically creates a mapping between the temporary table and Essbase to start the extract. Essbase and its API are case sensitive about this information, and the [COL_NAME] pattern, in this version, seems to be bugged or to have changed behavior (which I do not believe is the case, as I will explain later).

In this version, instead of bringing something like this:

stagCols = 'PARENTNAME=ParentName, MEMBERNAME=MemberName, ALIAS=Alias'

It brings something like this:

stagCols = 'PARENTNAME=PARENTNAME, MEMBERNAME=MEMBERNAME, ALIAS=ALIAS'

The pattern always returns uppercase, even when you change the execution to the Stage or Target areas, and because of this we got an error saying that the member PARENTNAME is not valid for dimension XXXXX (this does not happen in ODI 11 if you set the execution to the Stage or Target areas).

Anyway, we started to test the behavior and found out that in ODI 12 it is impossible (for now) to create an Oracle table like this:

[Screenshot: Oracle model test with two columns differing only in case]

The [COL_NAME] pattern always returns uppercase no matter what, and it removes the double quotes from the expression. This is why we think it is a bug: it removes a functionality from the Oracle DB as well.

As a quick workaround (yes, we know it's ugly, but Oracle will fix this in no time and we'll be able to get rid of it) we changed the code from:

stagCols = '<%=snpRef.getColList("", "[CX_COL_NAME]=[COL_NAME] ", ", ", "", "")%>'

To

stagCols = '<%=snpRef.getColList("", "[CX_COL_NAME]=[COL_DESC] ", ", ", "", "")%>'

After this, we just need to copy each column name into its description in the model attributes:

[Screenshot: model details]

Double clicking a column opens its properties, and in the Description tab we only need to copy the name of the column into its description, because [COL_DESC] returns exactly what is written there (with one exception: quotes and double quotes are turned into spaces, but we will not use them here).

[Screenshot: description workaround]

This workaround does the trick:

[Screenshot: LKM executions]

Yes, we know we could use a lot of other methods to do this, Flexfields for example, or even make some code changes in the KM, but the idea here is to show that it is possible to use ODI 12 with minor changes. And with this we can use ODI 12 in our EPM environment just like we do with version 11.

We didn't test HFM here because we don't have it installed in our infrastructure, but the amount of change needed for HFM is probably the same as for Essbase and Planning.

We hope you like this, and please give us feedback if you implement this solution. As far as we have tested it, it should work exactly like ODI 11.

See you next time.

——————Edited on 07/22/2014——————-

About the case-sensitivity issue of [COL_NAME].

We installed ODI 12.1.2.3 today and repeated the tests we did on the previous version.

The [COL_NAME] pattern is now working for IKMs, which means the issue of creating an Oracle table with two columns with the same name but different case is solved.

But [COL_NAME] for LKMs still doesn't work. All LKMs have the same behavior, and we can see that in the logs when we try to replicate the IKM behavior in an LKM.

[Screenshots: LKM test and IKM test logs]

As we can see, the behavior was fixed in the IKMs but not in the LKMs. This happens because KMs are executed in different ways depending on their type (IKM, LKM, CKM), as we can see here:

http://docs.oracle.com/middleware/1212/odi/ODIKD/odiref_reference.htm#CIADFCFJ

For now, please stick with our workaround until Oracle fixes this in the LKMs.

See you next time.

10 Important Things to Improve ODI Integrations with Hyperion Planning Part 9 (Generic Data Load)

Posted in EPM, Hyperion Planning, ODI, ODI Architecture on April 25, 2014 by Rodrigo Radtke de Souza

Hi all, today let's see how we can create a dynamic data load component for any number of Hyperion Planning applications. As you know, all data related to Hyperion Planning actually resides in an Essbase database, and ODI has a Knowledge Module that loads data into it called "IKM SQL to Hyperion Essbase (DATA)". In this post we will tweak this Knowledge Module, allowing it to read the Planning metadata repository and dynamically build its own data store mappings and definitions depending on which Hyperion Planning application it will load data to. But first let's take a look at how the original "IKM SQL to Hyperion Essbase (DATA)" works:

[Screenshot: original "IKM SQL to Hyperion Essbase (DATA)" steps]

As you can see, it is a fairly simple KM with two main steps: "Prepare for loading" and "Load data into Essbase". If you open the "Prepare for loading" step, you will see some Jython code there getting all the configurations and options to be used in your data load. Since our objective here is to make it dynamic enough to load any Hyperion Planning application, we should focus on the settings below:

[Screenshot: "Prepare for loading" connection settings]

These settings get all the connection information related to the target data store of the Interface ODI component. If you have already worked with Hyperion Planning and ODI, you know that the target data store contains all dimensions from a particular Hyperion Planning application plus a column named "Data" that contains the value for that particular intersection. This data store belongs to a data model, which is connected to a logical schema that is tied (by a context) to a certain physical schema containing the application server/user/password/application/database.

In this example we will consider that we have five Hyperion Planning applications, all residing on the same server and using the same Essbase username and password; if your architecture is different, feel free to customize this information as well. In order to make this KM dynamic, let's add two more options to it, called "APPLICATION" and "CUBE", and modify the application and database information code to get their values from the options instead of the target data store, like this:

[Screenshot: modified "Prepare for loading" code reading the new options]
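In text form, the change is roughly this (a hedged sketch; the original getInfo lines are quoted from memory of the 11g KM, so check them against your copy):

# Before: application/database came from the target data store's physical schema
#   application = "<%=snpRef.getInfo("DEST_CATALOG")%>"
#   database = "<%=snpRef.getInfo("DEST_SCHEMA")%>"
# After: they come from the two new KM options, which will carry ODI variables
application = "<%=snpRef.getOption("APPLICATION")%>"
database = "<%=snpRef.getOption("CUBE")%>"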

These two options will contain two ODI variables that indicate which application/cube you are loading. Keep those two variables in mind, as we will talk more about them later in the post. Now, if you open the "Load data into Essbase" step, you will see that this one is responsible for actually fetching the data from a table based on the source and target data store columns and mappings, as below:

[Screenshot: original "Load data into Essbase" code]

If we want it to load any Hyperion application, we need to change it to not rely on the target data store (which is static information) to know which dimensions to load for a specific application. If you are a reader of this blog, you probably already know where the answer to this problem lies, right? If you thought of the Hyperion Planning metadata repository, you are right! Let's change this KM to read the metadata repository, figure out which dimensions that application has, and load data accordingly. First let's create another option in this KM called "PLANNING_SCHEMA", which will contain the actual database schema name where the Hyperion Planning application repository resides in Oracle. Then we will add the following query to the source tab of the "Load data into Essbase" step:

[Screenshot: source tab query building the #MAPPING column]
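The query in the screenshot is shaped roughly like this (a hedged sketch, not the exact code; it assumes the Planning repository marks dimensions in HSP_OBJECT with OBJECT_TYPE = 2):

SELECT LISTAGG(MAPPING, ',') WITHIN GROUP (ORDER BY 1) MAPPING
FROM (SELECT '"' || O.OBJECT_NAME || '" "' || O.OBJECT_NAME || '"' AS MAPPING
        FROM <%=odiRef.getOption("PLANNING_SCHEMA")%>.HSP_OBJECT O
       WHERE O.OBJECT_TYPE = 2)

It returns a single row whose MAPPING column holds the comma-separated mapping list for that application's dimensions.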

Before we continue: the LISTAGG function only exists in Oracle DB 11.2 or newer. If your DB is an older version, please try the WM_CONCAT function instead. It is undocumented, but it works very well. You need to change the SQL projection from:

LISTAGG(MAPPING, ',') WITHIN GROUP (ORDER BY 1) MAPPING

To

WM_CONCAT(MAPPING) MAPPING

You will also need to create, and add to this source tab, a logical schema that points to the database containing the Hyperion Planning repositories. One very important thing about this technique: it only works when you have all the Hyperion Planning application repositories in a single database, because even though the schema name changes dynamically (giving us the ability to load any number of applications with a single component), the connection to the metadata database is a single one. Of course, you can further change your process to accept dynamic topology information, as seen in our previous post here, but in this example we will keep things simpler to demonstrate the technique. After you add this query to the source tab, it is just a matter of changing the target code to something like this:

[Screenshot: modified target code using the #MAPPING column]

Our source query will retrieve a single row (with a column referenced as #MAPPING) containing the exact list of dimensions that one specific Planning application has (based on the "PLANNING_SCHEMA" option), and will pass it to the target tab, where it will be used to retrieve the data. But you may be thinking now… what about the source table for this data load? It may have different formats, different names, different sources… it will not work in a dynamic load component… and you are right, it won't. That's why we need something extra to make this component work: an INBOUND_GENERIC table.

INBOUND_GENERIC is a table that contains all possible columns (each column being a Planning dimension) for all the Planning applications that you need to load. For example: if you need to load five applications and together they have 15 distinct dimensions, you will have an INBOUND_GENERIC table with 15 columns (one for each dimension name) plus three auxiliary columns: APP_NAME (the Hyperion Planning application name), CUBE (the cube within that application that you will load data to) and INTERFACE_NAME (the name of the job that is loading the data). These three columns give you the flexibility to hold, in the same table, information about different applications, cubes and even inbound jobs! For example, if you have only one application but several inbound jobs, you may still have one single Essbase load component for your entire architecture, saving time on maintenance and new development. A good performance trick here is to partition this table by INTERFACE_NAME, so you can easily truncate the partition before any new inbound job runs (and even run inbound jobs in parallel).
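A hedged sketch of what INBOUND_GENERIC could look like (the dimension columns are illustrative; yours will be the distinct dimensions across all your applications):

CREATE TABLE INBOUND_GENERIC (
    APP_NAME       VARCHAR2(80),  -- Hyperion Planning application name
    CUBE           VARCHAR2(80),  -- plan type/cube within the application
    INTERFACE_NAME VARCHAR2(80),  -- inbound job that loaded the row
    ACCOUNT        VARCHAR2(80),  -- one column per distinct dimension...
    ENTITY         VARCHAR2(80),
    PERIOD         VARCHAR2(80),
    -- ...remaining distinct dimensions across all applications...
    DATA           NUMBER
)
PARTITION BY LIST (INTERFACE_NAME)
   (PARTITION P_JOB1 VALUES ('JOB1'),  -- one partition per inbound job
    PARTITION P_JOB2 VALUES ('JOB2'));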

You may also be thinking: ok, I have 10 inbound jobs for 3 different applications. I can create this INBOUND_GENERIC table, load all data into it and have it loaded by this generic Essbase load component, but I'll still need to create 10 inbound interfaces to load the INBOUND_GENERIC table, right? Yes, you will, but it is worth it. Here are some good points of this architecture:

  • You will have one single component that loads data into Essbase, so any future maintenance is done in just one place, not in several;
  • Since it is modular, it can be added to any other inbound interface easily;
  • ODI constraints work great in this architecture. Think about it: you may have one constraint per column and validate it against the Planning metadata repository to check that all data is correct before loading to Essbase, which means no more "Unknown Member" errors (see our post about it here).

After those changes, you should end up with an interface more or less like this:

[Screenshot: final interface using the generic KM]

It doesn't matter what you do in your mapping here, because the KM will build the columns based on the Planning metadata repository. It is also important to notice that no transformation is done in this component, so be sure to do your ETL when you load the INBOUND_GENERIC table. In short, this component assumes that the data is already validated, transformed and ready to be consumed by Essbase. The filter here identifies which application/cube/interface you are currently loading.

Now it is just a matter of adding this interface to an ODI scenario and adding the following ODI input variables to it:

  • APP_NAME: Planning application name;
  • CUBE: Planning cube/database name;
  • PLANNING_SCHEMA: Oracle schema name which contains the Planning Metadata installation;

Then you may loop this scenario over any number of applications, as you can see in our post about it here.

Thanks everyone! I hope you have enjoyed it!

DEVEPM at ODTUG Expert Panel – ODI

Posted in EPM, ODI on March 25, 2014 by Rodrigo Radtke de Souza

Hi all!

This has been a great year for DEVEPM! First we had one of our submissions accepted for Kscope14. Now we have been invited to be part of the second "ODTUG Expert Panel – ODI", which will happen on Tuesday, April 29, 2014, 3:00 PM – 4:30 PM EDT. This is a free webinar offered by ODTUG (check out more great webinars here) and it will be a unique opportunity to talk with some of the ODI experts about anything related to ODI, so don't miss it!

The following are some potential topics, but audience participation will shape the scope of the panel:

  • What are the top features of ODI 12c and can they be used in a typical data warehouse implementation?
  • Describe the migration from OWB to ODI 12c as it might occur in practice. Anything to watch out for?
  • What are some examples of customizations that you have made in ODI and why?
  • Interested in taking the ODI certification exam? We can talk about what was in the test, how to prepare for it, what to study, etc.
  • Using ODI to improve the Hyperion Planning/Essbase capabilities.

Cool, huh? Go here to register today and enjoy it!

See ya!

Execution Variables trick for old versions of ODI

Posted in ODI, ODI Architecture on February 28, 2014 by Rodrigo Radtke de Souza

Hi all, today I'm going to show a very simple trick that is very useful for everyone using ODI version 11.1.1.5 or below. How many times have you grabbed some code from the Operator to debug and needed to keep replacing all those ODI variables with their correct values in order to troubleshoot something? If you work with ODI development, you probably do it every day. Depending on the case, it can be very time consuming, because you need to get all the input variables, execute all the SQL from refresh variables and so on. Luckily for those on ODI 11.1.1.6 and beyond, Oracle has added the variable values to the Operator steps, so no need to go hunting variables anymore. But what about the older versions?

There are a lot of alternatives out there, and I used one in particular for a long time: a simple Jython procedure with an option where I add all the ODI variables that I want to show, and a step with a "raise" Jython command with "Ignore Errors" selected:

[Screenshot: procedure option listing the variables]

[Screenshot: Jython step raising the variable list]
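In plain text, that old procedure is essentially one Jython step (a sketch; the option name VARIABLE_LIST is illustrative, and its value would be a hand-typed list like "VAR1: #VAR1, VAR2: #VAR2"):

raise '<%=odiRef.getOption("VARIABLE_LIST")%>'

The option text is injected at the <%> phase, and the #VAR references inside it are then substituted with the variable values before the step runs.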

It works pretty well: when you execute it inside a package, all the variable values that you added to the option show up in the Operator. But it is always a pain in large projects, because you may have 20 variables or so and you need to copy and paste their names one by one into the procedure's option. Still, it was way better than having nothing at all.

Tired of this copy and paste work, a coworker (thanks Eduardo for the great idea and the SQL!) came up with the following idea: why don't we query the ODI repository metadata to get all the variables used in the ODI scenario so far and "raise" their values? A simple but great idea, because then it is just a matter of placing the procedure anywhere in the ODI package and it will show all the variable values at that point.

It's very easy to accomplish. First you need to have a topology set up for the ODI work repository; then you need to create a procedure with two steps. The first step has the ODI work repository logical schema set in the "Command on Source" tab:

[Screenshot: Command on Source tab]

And the following Java BeanShell code written in the "Command on Target" tab:

[Screenshot: Command on Target tab]

SELECT DISTINCT VAR
  FROM (SELECT LEVEL, VAR_NAME,
               SYS_CONNECT_BY_PATH(VAR_NAME || ': #' || VAR_NAME, ' \n ') AS VAR
          FROM (SELECT ROWNUM AS CURR,
                       ROWNUM - 1 AS PREV,
                       SSS.VAR_NAME,
                       SST.DEF_TXT,
                       SSTL.TASK_END,
                       SSTL.TASK_BEG
                  FROM " + odiRef.getSchemaName("ODI_WORK_REPOSITORY", "D") + ".SNP_SESS_TASK SST,
                       " + odiRef.getSchemaName("ODI_WORK_REPOSITORY", "D") + ".SNP_SESS_TASK_LOG SSTL,
                       " + odiRef.getSchemaName("ODI_WORK_REPOSITORY", "D") + ".SNP_SESS_STEP SSS
                 WHERE 1 = 1
                   AND SST.TASK_TYPE = 'V'
                   AND SSS.STEP_TYPE IN ('V', 'VS', 'VE', 'VD')
                   AND SSTL.NNO = SSS.NNO
                   AND SSTL.NB_RUN = SSS.NB_RUN
                   AND SST.SESS_NO = SSTL.SESS_NO
                   AND SST.SESS_NO = SSS.SESS_NO
                   AND SST.SCEN_TASK_NO = SSTL.SCEN_TASK_NO
                   AND SSS.SESS_NO = " + odiRef.getSession("SESS_NO") + ")
       CONNECT BY PREV = PRIOR CURR
         START WITH CURR = 1
         ORDER BY LEVEL DESC)
 WHERE ROWNUM = 1

In short, the query above goes against the ODI repository and returns one line containing all the ODI variables used in the ODI scenario up to that point. The same thing can be accomplished in several different ways (for example using Oracle's LISTAGG feature on a newer database) and you may tweak it to return more valuable information if needed. With this information stored in the "odivars" variable, it is just a matter of "raising" it in the second step:

[Screenshot: second step raising odivars]
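A hedged sketch of the wiring (the exact code is in the screenshots; it assumes the agent's shared BeanShell interpreter keeps odivars visible to the <@= @> tag of the next step):

# Step 1, Command on Target (Java BeanShell): store the query's VAR column
#   String odivars = "#VAR";
# Step 2 (Jython, "Ignore Errors" checked): raise it so the Operator displays it
raise '<@=odivars@>'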

Let's see how it works. Imagine the ODI scenario below:

[Screenshot: sample ODI package]

When we execute it and take a look at the ODI Operator, this is what we get in the first procedure execution:

[Screenshot: first execution showing two variables]

It shows the values "123" and "A" for the first two variables. When we look at the second procedure execution, we get all three variables:

[Screenshot: second execution showing three variables]

As you can see, it works with all kinds of variables (refresh, set, declare, global, project, etc.), and if you want even more freedom, you may add this code to the very first steps of your Knowledge Modules, so you will know all the variable values used in your integration steps without needing to drag and drop a procedure into the ODI package.

That’s it folks. I hope it is useful for you. Thanks again Eduardo for the idea! See ya!

[1Z0-482] – Oracle Data Integrator 11g Certified Implementation Essentials

Posted in ODI, ODI Certification on February 18, 2014 by Rodrigo Radtke de Souza

Hi all,

Last week I got the response from Oracle about the ODI certification exam that I took back in December, when it was still in beta. I passed with a 91% score! It was a very good exam and I'll try to share some thoughts about it here on the blog.

As I said before, I took the beta exam (named 1Z1-482; 1Z1 indicates beta, 1Z0 the production exam), which is a little different from the regular Oracle exams. Here are some of the differences:

  • Beta exams are pre-production exams used by Oracle to evaluate new exam questions with the participation of a live audience;
  • Data gathered during beta testing is used to select the best questions for the production exam;
  • Beta exams are offered at a deeply discounted price (I took it for $50; now it is $245);
  • Candidates usually have 2.5 to 3.5 hours to complete the exam, instead of 2 hours;
  • Candidates answer 120-220 questions, instead of 71;
  • Passing a beta exam counts the same as passing a production exam.

As you can see, as a beta tester you have some extra work to do, since you have to answer many more questions in a longer exam, but the good thing is that it is way cheaper. After you have taken the exam, Oracle analyzes all the beta exams, and from those 120-220 questions only 71 get into the official exam; those are the ones you are scored against. So my 91% was related to the final 71 questions, not to all 124 questions that I answered.

But you should not worry about this beta thing anymore, since the certification is already in production, so let's talk about what is in the exam. I found it very good, with several questions about everyday ODI usage, with one VERY BIG exception: the EDQ (Enterprise Data Quality) questions. There were a lot of questions about it and they were really a pain… But before talking about them, let's go through each exam topic, so I can explain each one better. You can find more information about the exam and all the related topics on Oracle's page:

ODI Certification information

Introduction to Oracle Data Integrator (ODI), ODI Architecture Overview, ODI Models and Datastores, ODI Project Development

I'll talk about all those topics together because they cover the day to day ODI operations, so if you work with ODI, you will be fine on them. I really liked that a lot of questions focused on these topics; for me, they are the ones that make the most sense to be in the exam, since they are what you actually work with. I'll not go into details here, because I assume that those who want to become certified already have hands-on experience with the tool, and since there is a small number of questions on each topic, you can expect everything from simple questions about ODI variables and interfaces to more complex ones, like which temporary tables are created (and where) in a specific situation. But again, if you work with ODI, you will be more than fine here.

Managing ODI Execution

Although it may be considered a daily operation, I wanted to talk about this topic separately because it kind of surprised me with the big number of questions about Load Plans. If you are an older ODI developer like me, you know that Load Plans are a feature released in ODI 11g, so they may be considered pretty new. I know a lot of places and projects where they don't even know what a Load Plan is, because they get by with ODI scenarios alone. So here is my tip: you need to know everything about Load Plans: how to create parallel executions, how to restart them using all the restart options, how to edit Load Plans and so on. Even if you already work with Load Plans, study them. There are always those options that you generally don't use but need to know for the exam.

Integrating Oracle GoldenGate with ODI

OGG is a great product to integrate with ODI for real-time integrations, and Oracle wants to be sure that their specialists know how to use the two together. There were some questions about CDC in general and some more specific ones about CDC using OGG. CDC is a concept, and ODI has other ways to implement it besides OGG, so you need to be aware of all of them. You need to know what journals and subscribers are, how to properly set up journalizing, which types exist, which structures ODI creates in both the source and target databases, how to Extend a Window, how to Lock Subscribers, which ODI tools are used to work with CDC and so on. Good material about CDC can be found in the Oracle documentation:

ODI CDC

For CDC with OGG you need to worry more about the details. You need to know how the Knowledge Module works and what its parameters are, and you will even need to know the basic OGG commands to start the Extract and Replicat processes. Oracle has great documentation about the ODI + OGG integration. If you have a chance to actually build a small environment with OGG and ODI, I strongly recommend it, so you can see all the details, the architecture and so on with your own eyes.

ODI OGG

Doing more with ODI

These may be considered the "advanced" topics of the ODI certification. They are the ones that most ODI developers do not work with in every project they are engaged in (big data is a good example), since they are very specific. I'll go through each subtopic to explain it better:

SDK

This is a complicated topic because there is a lot of content that you may study and learn, but only a few questions are in the exam. Below is the link to the ODI SDK Java API with all its classes, functions and so on. Of course you will not need to know all of it by heart, but you need to know the basics. My recommendation is to read about the main classes of the API and create some examples on your own, just to get the hang of it. The ODI Experts site has a lot of SDK examples, so grab some of them and try them out.

SDK API

ODI Web-Based Components

There were some questions about the ODI Console, but they were fairly simple if you already work with it. If you want to know more about it, you may read Oracle's documentation:

ODI Console

The role of ODI in Big Data

This topic was the one I actually had to study for. Although I had already read about big data, I had never taken a look at how ODI works with it. So here you need to know what big data, Hadoop and Hive are and, most importantly, which KMs and adapters ODI has to manipulate/load/extract big data. There is an OBIEE/ODI/Hadoop series on the Rittman Mead blog that I strongly recommend:

ODI Big Data

There is some good information here as well:

ODI Big Data 2

The role of ODI in Business Intelligence integration

Have you heard of data lineage using ODI and OBIEE? If not, you should, because there are a few questions about it in the exam. Study its architecture, the lineage options and how to use it. Oracle has good documentation about it here:

ODI Data Lineage

Oracle Enterprise Data Quality (EDQ) Profiling, Oracle Enterprise Data Quality Batch Processing

Here is the big villain of the ODI certification exam, in my opinion: EDQ. A lot of questions about EDQ appeared in the beta exam, and I really pray that Oracle removed some of them when they created the production exam. Honestly, how many of you have already worked with it? How many production systems are currently using EDQ? Very, very few… I personally don't know anyone. EDQ is a data quality tool that identifies, categorizes and even does some ETL to fix your data, but hey, ODI can also do that using constraints. Of course, EDQ is intended to be used by the business folks who are the "owners" of the data, so EDQ provides some nice dashboards and web access. But believe me, almost 100% of the time, business users are interested in their business, not IT. IT folks will take care of any errors they find in the data and provide the business what they want, and if IT owns this responsibility, they will use ODI for it, not EDQ. But I understand Oracle's move: it has an entire set of great integration tools, and EDQ is the one responsible for data quality. Since few are actually using it, they added it to the exam to increase its visibility and try to get some movement around it.

Ok, let me stop whining about it and start telling you what I did to study this tool. First I read about it on the Rittman Mead blog, which contains a lot of good information:

Introduction to EDQ

ODI and EDQ integration

In EDQ's case, reading will not be enough; at least in my case it wasn't. I truly recommend you download it, install it and play with it a little. You will need to know about the EDQ architecture, its modules, how to do basic profiling with its pre-built processors, how EDQ and ODI talk to each other and so on. This is the hardest part of the exam for sure, and there is no easy way to study it.

That's it folks. I hope these tips help you achieve the ODI certification. It is a good challenge, but it is worth it!

ODI 11G Repository Book

Posted in ODI, ODI Architecture on December 18, 2013 by Rodrigo Radtke de Souza

Hi all!

I bet all of you have already been in a situation where you needed to retrieve information about your developed objects in ODI. For example: how many interfaces does project X have? Which model folders have models shared between projects? Which objects were changed during this week, and who changed them last? And so on. This information is easily retrieved if you know how ODI metadata is structured in its Work and Master repositories.
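For instance, here is a hedged sketch of the "how many interfaces does project X have?" query against an 11g work repository (table and column names from memory; verify them against the book or your own repository):

SELECT P.PROJECT_NAME, COUNT(*) AS INTERFACES
  FROM SNP_PROJECT P
  JOIN SNP_FOLDER F ON F.I_PROJECT = P.I_PROJECT
  JOIN SNP_POP I ON I.I_FOLDER = F.I_FOLDER  -- SNP_POP holds the interfaces
 GROUP BY P.PROJECT_NAME;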

Luckily for us, Sonra has published a book on the ODI repository. It is free, but in order to get it you need to leave a comment about it. I have already taken a look at the models of some subject areas here, so I already know that the book contains great and useful information about the repositories.

As soon as I receive my copy I'll write a brief review of it, but I'm pretty sure it will be a must-have for those who like to adventure through the ODI repository tables.

See you soon!

Edited on 12/19/2013

I got my copy of the book yesterday and I was right: it is a must-have! Great work from Sonra! The book contains models of several subject areas and very useful queries for daily/clean-up tasks, like: unused physical schemas, unused logical schemas, the actual mapping of an interface, the option values selected for the KMs of all interfaces, and a lot more! Another great thing is the ODI screenshots paired with the repository table that holds the information you are seeing (for example, when you open a Data Server tab, the repository table behind it is SNP_CONNECT). It's not 100% complete, as I missed some tables from the Security module and Load Plans, but it covers 95% of your everyday work! If you did not get it yet, go get it ASAP! Thanks again Sonra!

ODI 12c: excellent for OGG, bad for EPM

Posted in EPM, ODI, ODI 12c, ODI Architecture on November 13, 2013 by Rodrigo Radtke de Souza

Hi all,

Yesterday Oracle held the webcast "Introducing Version 12c for Oracle Data Integration". Since our first impression was not so good, we decided to attend this webcast to see if we could find some new information that would change our minds. Everything was fine while the Oracle representatives were talking about the nice new features, especially for Oracle GoldenGate, which is more integrated with ODI than ever and actually looks really nice. But then one question from the audience caught our eyes:

Can I integrate this with Hyperion and Discoverer?

“ODI 12c is integrated with OBIEE at the metadata level, we provide capabilities such as report to source data lineage. Hyperion connectivity to applications such as Planning or Financial Management will come in future versions of ODI 12c”

ODI 12c does not support Hyperion. What? What do you mean? At first I thought something was wrong. Maybe the person who answered the question did not get it or misunderstood it, so I asked a question:

In one of the answers, you said the following: “Hyperion connectivity to applications such as Planning or Financial Management will come in future versions of ODI 12c.” Does it mean that ODI 12c does not have any more Hyperion Planning KMs for example?

“We are planning on adding support for Hyperion applications in a future release of ODI 12c.”

I couldn't believe what I was reading. They made great enhancements to ODI to better support GoldenGate, big data, parallel execution, the debugger, security, BUT they removed Hyperion support from ODI 12c. They just removed it. In a last attempt to see if they were not misunderstanding me, I asked almost the same question again:

We work with ODI 11g integrating several Hyperion products, such as Essbase, Planning and HFM. Does it mean that we are not able to migrate to ODI 12c for now?

“We will be adding support for Hyperion applications in future releases of ODI 12c. For now you can still continue to work with ODI 11g when Hyperion connectivity is a requirement.”

Well, that was it. Oracle stated, for everyone present, that ODI 12c does not fit EPM users. They removed a key functionality from their main ETL product. Do you work with Hyperion? Please, just sit in the corner and wait for some future release. There is no more to say about it. Uninstalling it right now and pretending that Oracle did not launch ODI 12c.