Archive for the ODI Mapping Category

ODI 12c new features: Dimension and Cubes! Part 4 (Loading using Surrogate Keys)

Posted in Dimensions, ETL, ODI 12c, ODI Architecture, ODI Mapping, Oracle, Tips and Tricks with tags , , , , , on December 16, 2016 by RZGiampaoli

Hi guys, how are you?

Today we’ll continue the Dimensions and Cubes series (Part 1, Part 2 and Part 3 here) and we’ll see how to load data using surrogate keys.

After all the settings done in the last post, the only thing left is to create the interfaces and map everything. For the surrogate key version, the interface and the mapping are exactly the same as for the no-surrogate version (as we saw in the previous posts), for both dimensions and facts, which is very nice.

times-surrogate-interface

The interesting part here is what ODI does behind the scenes. In the no-surrogate version, ODI created one mapping for each hierarchy and in the end merged everything together into one table.

no-surrogate-time-operator

For the surrogate key version, ODI also generates one mapping for each hierarchy, but the main difference is that after each one it merges it with the others. This happens because it needs to get the surrogate key for each level.

time-surrogate-operator

For each level ODI automatically generates an insert into that level’s stage table, checking that the row does not already exist in the target table (it does that to decrease the amount of data for the merge step, since the merge would insert or update everything and take more time than necessary).

After the stage table is loaded, the next step is to merge the stage table into the target table, and for that ODI just creates a MERGE: when matched it updates the descriptions or attributes, and when not matched it inserts the new rows, using the sequence to generate the SK.

In the next level of the hierarchy ODI repeats the process, but joining the Year with the Quarter. ODI keeps doing this for each level mapped until the last one, where instead of having a merge with matched and not-matched branches, it just does a merge with the matched branch (since it knows everything will already be there).
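
Just to make the idea concrete, here is a rough sketch of the two steps ODI generates for the Year level (the table, column and sequence names are made up for illustration; this is not the exact code ODI produces):

-- 1) Load the level stage table only with rows that are not in the target yet
INSERT INTO STG_YEAR (ID_YEAR, YEAR_DESC)
SELECT SRC.ID_YEAR, SRC.YEAR_DESC
  FROM SRC_TIME SRC
 WHERE NOT EXISTS (SELECT 1
                     FROM S_TIME TGT
                    WHERE TGT.ID_YEAR   = SRC.ID_YEAR
                      AND TGT.YEAR_DESC = SRC.YEAR_DESC);

-- 2) Merge the stage into the target: update on match, insert with a new SK on no match
MERGE INTO S_TIME TGT
USING STG_YEAR STG
   ON (TGT.ID_YEAR = STG.ID_YEAR)
WHEN MATCHED THEN
  UPDATE SET TGT.YEAR_DESC = STG.YEAR_DESC
WHEN NOT MATCHED THEN
  INSERT (ID_YEAR, YEAR_DESC, YEAR_SK)
  VALUES (STG.ID_YEAR, STG.YEAR_DESC, S_TIME_SEQ.NEXTVAL);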

The results will be this:

time-surrogate-table-results

It’s nice that ODI already creates the dimension with an aggregated fact in mind: we can see that it has some rows with just the year, others with the year and quarter, and the last ones with all the information.

One thing to notice is that the PK is the same as the Month SK. This is because ODI is ready to create SCD type 2 (we’ll do another post to show how it works).

For the fact, the mapping will still be the same as the No-surrogate version and again the difference will be in the results.

fact-surrogate-interface

We can see that in the operator ODI does something really neat this time.

fact-surrogate-operator

MERGE INTO EPM_HPT_ODI_RUN.S_FACT FACT_SURROGATE1_FACT_SURROGATE USING
(SELECT TIME_SURROGATE_FACT_SURROGAT_1.MONTH_SK AS ID_TIME ,
PRODUCT_SURROGATE_FACT_SURRO_1.PRODUCT_SK AS ID_PRODUCTS ,
REGIONS_SURROGATE_FACT_SURRO_1.CITY_SK AS ID_REGIONS ,
SRC_ERP.SALES AS METRIC
FROM ((EPM_HPT_ODI_RUN.SRC_ERP SRC_ERP
LEFT OUTER JOIN
(SELECT TIME_SURROGATE_FACT_SURROGATE.ID_MONTH AS ID_MONTH ,
TIME_SURROGATE_FACT_SURROGATE.MONTH_SK AS MONTH_SK ,
TIME_SURROGATE_FACT_SURROGATE.TIME_PK AS TIME_PK
FROM EPM_HPT_ODI_RUN.S_TIME TIME_SURROGATE_FACT_SURROGATE
WHERE ((TIME_SURROGATE_FACT_SURROGATE.TIME_PK = TIME_SURROGATE_FACT_SURROGATE.MONTH_SK)
AND (TIME_SURROGATE_FACT_SURROGATE.MONTH_SK IS NOT NULL) )
) TIME_SURROGATE_FACT_SURROGAT_1
ON (SRC_ERP.ID_MONTH = TIME_SURROGATE_FACT_SURROGAT_1.ID_MONTH) )
LEFT OUTER JOIN
(SELECT PRODUCT_SURROGATE_FACT_SURROGA.ID_PRODUCT AS ID_PRODUCT ,
PRODUCT_SURROGATE_FACT_SURROGA.PRODUCT_SK AS PRODUCT_SK ,
PRODUCT_SURROGATE_FACT_SURROGA.PRODUCTS_PK AS PRODUCTS_PK
FROM EPM_HPT_ODI_RUN.S_PRODUCTS PRODUCT_SURROGATE_FACT_SURROGA
WHERE ((PRODUCT_SURROGATE_FACT_SURROGA.PRODUCTS_PK = PRODUCT_SURROGATE_FACT_SURROGA.PRODUCT_SK)
AND (PRODUCT_SURROGATE_FACT_SURROGA.PRODUCT_SK IS NOT NULL) )
) PRODUCT_SURROGATE_FACT_SURRO_1
ON (SRC_ERP.ID_PRODUCT = PRODUCT_SURROGATE_FACT_SURRO_1.ID_PRODUCT) )
LEFT OUTER JOIN
(SELECT REGIONS_SURROGATE_FACT_SURROGA.ID_CITY AS ID_CITY ,
REGIONS_SURROGATE_FACT_SURROGA.CITY_SK AS CITY_SK ,
REGIONS_SURROGATE_FACT_SURROGA.REGIONS_PK AS REGIONS_PK
FROM EPM_HPT_ODI_RUN.S_REGIONS REGIONS_SURROGATE_FACT_SURROGA
WHERE ((REGIONS_SURROGATE_FACT_SURROGA.REGIONS_PK = REGIONS_SURROGATE_FACT_SURROGA.CITY_SK)
AND (REGIONS_SURROGATE_FACT_SURROGA.CITY_SK IS NOT NULL) )
) REGIONS_SURROGATE_FACT_SURRO_1
ON (SRC_ERP.ID_CITY = REGIONS_SURROGATE_FACT_SURRO_1.ID_CITY)
) MERGE_SUBQUERY ON ( FACT_SURROGATE1_FACT_SURROGATE.ID_TIME = MERGE_SUBQUERY.ID_TIME AND FACT_SURROGATE1_FACT_SURROGATE.ID_PRODUCTS = MERGE_SUBQUERY.ID_PRODUCTS AND FACT_SURROGATE1_FACT_SURROGATE.ID_REGIONS = MERGE_SUBQUERY.ID_REGIONS )
WHEN NOT MATCHED THEN
INSERT
(
ID_TIME ,
ID_PRODUCTS ,
ID_REGIONS ,
METRIC
)
VALUES
(
MERGE_SUBQUERY.ID_TIME ,
MERGE_SUBQUERY.ID_PRODUCTS ,
MERGE_SUBQUERY.ID_REGIONS ,
MERGE_SUBQUERY.METRIC
)
WHEN MATCHED THEN
UPDATE SET METRIC = MERGE_SUBQUERY.METRIC

ODI automatically joins all our dimensions at level zero (since the dimensions also hold the higher-level rows for the aggregated fact) to get the surrogate key information and use it in the fact table. This is very nice because in large DWs we’ll have tons of dimensions, and mapping/joining everything manually is very time consuming. The final result is this:

fact-surrgoate-sql-results

A perfect DW created using surrogate keys; in other words, instead of having the dimensions’ PKs in the fact table we have the SKs (which were generated by a sequence in the dimensions).

To sum up, we think that if you are going to create simple dimensions and simple facts (without surrogate keys or SCD type 2), it’s still nice to use this new feature since it’s a good way to document and standardize your DW; but if we measure by development time it’s not worth it, since it’s very time consuming for a simple DW.

Now, if you want to create a DW using surrogate keys or SCD type 2, we found this new feature extremely useful, both for documentation and standardization and because it is a lot faster than doing it manually.

Thanks and see you soon.


ODI 12c new features: Dimension and Cubes! Part 1 (Settings)…

Posted in ACE, Configuration, Cubes, Dimensions, ETL, New Features, ODI, ODI 12c, ODI Architecture, ODI Mapping, Tips and Tricks with tags , , , , , , , , on August 19, 2016 by RZGiampaoli

Today we’ll talk a little bit about the new feature introduced in ODI 12.2.1.1.0, Dimension and Cubes!

As everybody already knows, Oracle is slowly merging OWB into ODI, and in each release we can see a new feature from OWB arriving in ODI. This time it was the Dimensions and Cubes feature.

This feature helps you create a DW based on a configuration that you define. Basically there is a new component in ODI that helps you define the datastores to be mapped. Also, after you create all the dimensions (which is the most time consuming part of the process), the cube or fact table creation and mapping is a lot easier than doing it manually.

Right now there is just one type of dimension available (star schema, level-based dimension), but in the future other kinds will be supported, like snowflake and others.

OK, let’s start. There are two ways to build a star dimension in ODI: with natural keys (where the natural key is stored in the fact table) and with surrogate keys (where the surrogate key is stored in the fact table). In this post we’ll cover how to create a DW using the natural key process, since the surrogate key one is buggy (the interface fails on saving the surrogate key) and we have opened an SR with Oracle to get it fixed. As soon as we have the fix we’ll cover that too here on the blog.

In the Designer tab we can now see that we have a new tab called Dimensions and Cubes.

1-Dimension and Cubes

Opening that tab you will find a blank area; you need to click the button in the “Dimensions and Cubes” tab, and then you can create a new DM or DW.

2-DW creation

By the way, here’s the first small bug. For some reason, when you write the name you want, ODI does not automatically fill the code field (as it always does for all the other objects in ODI), so you need to manually insert a code there. Remember, no spaces and no special characters.

After that we can expand it and see the Dimension and the Cube node.

3-DW creation

Right-click on those and we can create a new Dimension or Cube. As everybody knows, the dimensions come first, since we need them to maintain the data integrity of the cube.

4-Dimension Definition

Here you can give the dimension any name you want. You also have a Pattern Name (which has just one option for now), and in the side tabs we have all the possible options for the Dimension, Levels and Hierarchies, which we’ll cover later.

There are two more options here: the Datastore, which is the target dimension datastore where all the metadata will flow to, and the Surrogate Key Sequence, which you need to set in case you want to create a dimension using surrogate keys (we’ll cover this later since we have a bug here).

In our case we’ll have three dimensions and one cube (Time, Products, Regions and Fact). Both the source and the target tables were generated by me with dummy data, just for this post. If you want to replicate this example, the scripts are here:

No surrogate Script

Let’s create the Time dimension. Click on “Levels” in the left side tabs and you will see a big screen with three big sections: Levels, Level Attributes and Parent Level References.

5-Level Canvas

Let’s begin with the level configuration. Clicking in the Plus Sign button will create a Level.

6-Level Creation

I always like to rename the level to something more meaningful, like “Year”, but you can keep the default if you like. By default the target datastore comes automatically mapped, since you defined it in the previous screen. The only thing left here is to define the “Staging Datastore”.

This is something we didn’t understand: why it was made this way, since ODI could create it automatically based on the definitions we had in the previous step, or even from the interface configuration.

Anyway, what we need to do is create the stage tables for each level, and for that there are a few approaches we can take here (a DDL sketch of the second one follows the list):

  1. We can create another table exactly like the target table (it needs to be a new table because of the way ODI integrates the data; we’ll cover that later).
  2. We can create, in this case, three tables: one for Year (same layout as the source table), one for Quarter (same layout as the source plus all the columns from the Year table) and one for Month (same layout as the source plus the Quarter and Year columns).
  3. Or we can duplicate the source or the target datastores and make the changes above (as in the second approach).
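
Just as an illustration of the second approach, the stage tables could look something like this (hypothetical column names; the point is that each level’s stage table repeats the columns of its parent levels):

CREATE TABLE STG_YEAR (
  ID_YEAR       NUMBER,
  YEAR_DESC     VARCHAR2(30)
);

CREATE TABLE STG_QUARTER (
  ID_QUARTER    NUMBER,
  QUARTER_DESC  VARCHAR2(30),
  ID_YEAR       NUMBER,        -- Year columns repeated
  YEAR_DESC     VARCHAR2(30)
);

CREATE TABLE STG_MONTH (
  ID_MONTH      NUMBER,
  MONTH_DESC    VARCHAR2(30),
  ID_QUARTER    NUMBER,        -- Quarter columns repeated
  QUARTER_DESC  VARCHAR2(30),
  ID_YEAR       NUMBER,        -- Year columns repeated
  YEAR_DESC     VARCHAR2(30)
);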

With the stage datastores created (manually or by reversing), we just need to click the “…” button and choose them from the list. Now we just need to repeat the step two more times for the other levels:

7-Level Canvas mapps

After we associate the source datastores and the stage datastores, it’s time to create the attributes and IDs for each level. For this you just need to click on the Year level and then on the plus sign button below:

8-Level attibutes config

Here we need to create all the attributes for this level and the natural key for that level as well. (We have the option to create slowly changing dimensions here, but this will be covered in a future post!)

For each attribute you need to click the plus sign, fill in the name of the attribute, set the data type (yes, it is not set automatically…) and select the stage attribute (click the “…” button and select it).

After all the attributes and IDs, we need to click on the plus sign below to set the natural key of that level. Just select it from the available list.

After that, we just need to repeat this for the other two levels that we’ll have in this dimension.

With this done, the last step for this tab is to create the relationship between each level and its parent level. For this, highlight each level again; in this case we’ll work from the bottom up, so let’s start by clicking on the Month level and then on the plus sign button below. Here we just need to say that the Month level’s parent reference will be Quarter. To set this we just need to select the Quarter level from the drop box and select the foreign key from the drop box as well. Do that again for the Quarter level and reference it to the Year level. We don’t need to create any reference for the Year level since it has no parent.

9 Parent Level References

As you can see, after the level configuration, everything you need to do is click buttons and select from drop boxes or from the “…” screen (other than renaming the default values if you like).
Last but not least, we need to click on the Hierarchies tab on the left so we can create a new hierarchy.

This is a fun one. We can create multiple hierarchies inside the target table, as well as skip levels and some other features that we’ll cover in another post. For now let’s stay with a single hierarchy.

10-hierarchy

Here we just need to create the hierarchy by clicking the plus sign button, give the hierarchy a name, then click the plus button below and add all the levels for the hierarchy. The order doesn’t matter; the idea is that you can have multiple hierarchies with different levels in each one. For example, we could have a hierarchy called Full_Time with Year->Quarter->Month and another hierarchy called Small_Time with just Year->Month. ODI would know, based on the configuration we did, how to handle the data. Nice.

We can also set skip-level for each level we defined.

We are done with the dimension settings. I know it’s a lot of settings and some of you may be thinking (as we did) that this is a lot more work than creating it manually, but believe me, after you get used to it you can do it in a reasonable time, and the cube part makes it worth it.

Now we just need to repeat the process for the other two dimensions, and then we can finally start the cube settings:

11-Cube

To start, it’s the same as the dimensions: right-click the Cubes node and select New.

12-Cube definition

In this screen we need to give the cube a name, select a pattern name (same as the dimensions, just one option here for now) and bind it to the target datastore.
After that we just need to click on the Detail tab in the left menu and start configuring our fact table.

12-Cube config

As I said in the beginning, here’s where the use of this component pays off. To configure a cube we just need to click the plus sign button and add all the dimensions we have, in this case our three dimensions. Then we just need to select the level at which we want to join our fact table with each dimension and bind the keys from the fact and that dimension.

Last but not least, we just need to create, with the plus sign, the measures that the fact table will have. Same as the attributes in the dimensions: name of the measure, data type and the column that will receive the data.

And that’s it. We are all set to move to the Mappings. Since this is already a huge post, I’ll stop this one now and will start a new post just for the Mappings, since I want to analyze how ODI builds the queries and loads the data there.

Hope you guys enjoy this post and see you soon.

10 Important Things to Improve ODI Integrations with Hyperion Planning Part 8 (Building Planning DataStore)

Posted in EPM, Hyperion Planning, ODI, ODI Architecture, ODI Architecture, ODI Mapping on March 10, 2014 by RZGiampaoli

In order to create a process that is able to load any application and dimension using one single ODI interface, we need to make some code changes to the KM that is responsible for loading metadata into Hyperion Planning. First, we need to understand the ODI concept of a KM. A KM is a set of instructions that takes the information from the source and target data stores of an ODI interface and constructs a SQL command based on those data stores. In a nutshell, the ODI KM is a code generator based on the information that you set in the interfaces, data stores, topology and so on.

As we know, the default Hyperion integration KM is able to load only one application and dimension at a time because it needs a target data store for each dimension in each application. If we take a deeper look at the KM to see what it does behind the scenes, we will see something like this:
KM Behind the Scenes
Figure 1 – KM behind the scenes.

Basically what the KM does is translate the Planning application data store into a SQL query, and as we know, we get this data store by reversing a Planning application inside ODI. Fair enough, but this also means that if we could somehow get the same information that ODI uses to reverse this application dimension into a data store, we could easily end up with the same SQL created from that data store information. As we already showed before, we have the Planning application repository itself, where all the information about a Hyperion application is stored. We only need to read this information to get the same information provided by the ODI data store.

Knowing this, the only thing left is to change the default KM according to our needs, and for this we need to make three changes to it:

  • Make the application name that it is going to be loaded dynamic;
  • Make the dimension name that is going to be loaded dynamic;
  • Change the way that the KM builds its SQL command that will load metadata to Hyperion Planning. Currently it builds its SQL command based on the source and target data stores and the interface mappings;

Default KM Behind the Scenes
Figure 2 – Default KM behind the scenes.

In Figure 2 we can see how a default Planning integration KM works. Basically it has two main steps: “Prepare for loading” and “Load data into planning”. The first one is responsible for setting all the information regarding connections, log paths, load options and so on. The second step is responsible for retrieving all source data based on the interface mapping and the source/target data stores and loading it into Planning. In our case, the application and dimension names reside in the first step and the SQL command resides in the second step, so we already know where we need to change the code.

But we need to analyze further to know exactly what we need to change. ODI gets the application name from the “<%=snpRef.getInfo(“DEST_CATALOG”)%>” API function, which returns the application name based on the destination target store: it is connected to a logical schema that finally resolves into a physical schema containing the application name itself. If we change it to an ODI variable, we will be able to encapsulate this interface in an ODI package and loop it, passing the application name as a parameter, making it independent of the target data store topology information and giving us the ability to load any Hyperion Planning application using one single interface.

The dimension name follows the same logic: ODI gets it from the “<%=snpRef.getTargetTable(“RES_NAME”)%>” API function, which returns the resource name of the target data store, which in this case is the dimension name itself. Again, if we change it to an ODI variable we will be able to encapsulate this interface in an ODI package and loop it, passing the dimension name as a parameter, making it independent of the target data store resource name and enabling us to load any dimension with one interface.

The third part is the most complex one. ODI data stores for Planning applications are so different from one dimension to another that they require one data store object for each dimension. In Figure 2 we can see that ODI relies on the “odiRef.getColList” API command to return all the mappings done in the target dimension data store, which has the correct format required to load that dimension’s metadata into Planning.

So the big question is: how can we change the “Load data into planning” step to use dynamic SQL that creates dynamic interface mappings to load any application/dimension? The answer is to rely again on the “Command on Source/Target” concept and on the Planning repository metadata.

Instead of getting the mapping information from the ODI data store object, we can query the Planning repository to get the same mapping for any dimension and application being loaded. The result of this query is a formatted mapping, identical to what ODI would have generated if we used the default Planning development, but with the big advantage of being entirely dynamic for any application and dimension.

Dynamic KM behind the scenes
Figure 3 – Dynamic KM behind the scenes.

In Figure 3 we can see an example using an attribute dimension. The command on source queries HSP_OBJECT and HSP_ATTRIBUTE_DIM of a given application (defined by the #SCHEMA_APP variable) to retrieve information about one attribute dimension (defined by the #DIMENSION variable). Those variables are passed from an external ODI package that will be used to loop over all the applications and dimensions that we want to load.

Dimension Datastore Information

Table 1 – Dimensions Data Store information.

If we take a closer look at all the different data stores that a Planning application can have, we will see a pattern in the information that we need to send to Planning to load metadata, depending on the dimension, as we can see in Table 1.

The logic to create the dynamic mapping columns is exactly the same as the one used to create the inbound and extract tables. The only difference is that for the inbound and extract tables we need to put all the columns together, while for the KM mapping we need to take the right information from the application repository depending on the selected dimension. This information will help us create the necessary mapping, containing the right source columns and the right aliases for those columns, which tell Planning what each metadata column stands for.

Since our metadata tie-out table contains standard columns for all dimensions, we don’t need to worry about adjustments when we change to another dimension, and since our source metadata table already has the metadata in the correct Planning format, we don’t even need any kind of transformation here; it is just a matter of reading from the metadata source table and loading it directly into Planning.

In the Figure 3 example we use SRC_MEMBER, SRC_PARENT and SRC_ALIAS as the mapping columns, and of the Planning aliases the only dynamic one is the member name alias, which identifies the dimension name. To get this information we need to query the application repository, looking for the attribute dimensions in HSP_ATTRIBUTE_DIM and for their names in the HSP_OBJECT table; finally we can use the OBJECT_NAME column to get the dimension name alias.

Executing this query we get a one-line mapping string that is passed as a parameter (#MODEL) from “Command on Source” to “Command on Target” and enables ODI to load metadata for that specific dimension/application. If we execute this interface and look at the query created in ODI Operator, we will see that the result is the same as what a default KM would create, but with the big advantage of being entirely dynamic. Following this logic, we would only need to change the values of the #SCHEMA_APP and #DIMENSION variables to get another application/dimension loaded.
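
To give an idea of what that “Command on Source” query could look like for an attribute dimension, here is a simplified sketch (the HSP_ATTRIBUTE_DIM join column and the exact alias format are assumptions for illustration; the real query in Figure 3 is more complete):

SELECT 'SRC_MEMBER "' || O.OBJECT_NAME || '", '  ||   -- member alias = dimension name from HSP_OBJECT
       'SRC_PARENT "Parent", '                   ||
       'SRC_ALIAS "Alias: Default"' AS MODEL          -- the #MODEL string passed to the Command on Target
  FROM #SCHEMA_APP.HSP_OBJECT O
  JOIN #SCHEMA_APP.HSP_ATTRIBUTE_DIM A
    ON A.ATTR_ID = O.OBJECT_ID                        -- assumed join column
 WHERE O.OBJECT_NAME = '#DIMENSION'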

Of course we need to work a little more to get the mapping for the other dimensions, such as Account or Entity, but the idea is always the same: query the application repository to get the data store information depending on the dimension/application selected.

Dimensions Mapping Information
Table 2 – Dimensions Mapping information.

In Table 2 we can see all the possible mapping combinations that we can have in a Planning application for the main Planning dimensions, and we notice that some of the information is dynamic (dependent on the Planning repository) and some is fixed. To put everything together in one single query, here are some tips (a small sketch follows the list):

  • The majority of the columns are fixed and can be obtained with a simple “select ‘Any string’ from dual”;
  • The easiest way to create this SQL is to create separate SQLs for each different kind of information and put everything together using UNION statements;
  • Split the final query into small queries to get the different categories presented in Table 2;
  • Use the MULTI_CURRENCY column in the HSP_SYSTEMCFG table to find out whether that application is a multi-currency one or not;
  • For the aggregation and plan type mapping we need to get the name of the plan type itself, and for this we use the HSP_PLAN_TYPE table;
  • When the query is ready, you need to add a filter clause to restrict it to the dimension that the information belongs to.
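
Putting the tips above together, the skeleton of the query could look something like this (a simplified sketch; the exact column aliases, the MULTI_CURRENCY values and the HSP_PLAN_TYPE column names are assumptions, and the real query still needs the dimension filter mentioned in the last tip):

SELECT 'SRC_DATASTORAGE "Data Storage"' AS MAP_PIECE FROM DUAL    -- fixed piece from dual
UNION ALL
SELECT 'SRC_CURRENCY "Currency"'                                  -- only if the app is multi-currency
  FROM #SCHEMA_APP.HSP_SYSTEMCFG
 WHERE MULTI_CURRENCY = 1
UNION ALL
SELECT 'SRC_AGG_' || P.PLAN_TYPE || ' "Aggregation (' || P.TYPE_NAME || ')"'
  FROM #SCHEMA_APP.HSP_PLAN_TYPE P                                -- one aggregation piece per plan type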

With the query ready, the only missing step is to insert it into the “Command on Source” tab inside the Planning IKM and pass the string generated by it to the “Command on Target” tab, as we can see in Figure 3.

This ends all the preparation that we need before learning how to build an ODI interface that will dynamically load metadata into any number of Planning applications.

Thank you and see you in the next post.

ODI 12c First impressions

Posted in EPM, New Features, ODI 12c, ODI Architecture, ODI Architecture, ODI Mapping with tags , , , on October 26, 2013 by RZGiampaoli

When I started working with ODI it was version 10, and back then we had only a few bugs. The UI was good (we could change expressions without having to take the focus out of the field to save the changes, for example), everything worked well, we could write variable names in upper or lower case, and the Metadata Navigator worked very well; that was one of the things that made users choose ODI over Informatica PowerCenter, because they had an easy way to run their interfaces at will, among other good things. It was a very stable version of ODI. Good times.

Then version 11 came out. Well, the first thing we noticed was the UI, and the huge amount of bugs that came with it, most of them in the interface screen. In version 11, if you try to delete a filter, all the other filters disappear (but they are still there; if you close and re-open the interface they come back); if you change an expression and don’t remove the focus from the field it won’t commit the changes; if you delete a datastore and put a new one in (because some model changed, for example) you have a good chance of not being able to save the interface because of some bizarre error, and you need to do the operation over and over; variable names must be upper case for some odd reason; and other things. Another big loss was the Metadata Navigator, which was replaced by ODI Console, a worse version with so many bugs that we had to stop using it: lack of security (everybody could see everything), all executions ran as SUPERVISOR, we couldn’t see the load plans (the only place where the security worked), we couldn’t see the variables, and lots of other things.

BUT despite that, the functionality for the DEV team was almost the same.

Now we have a new version of ODI: 12c. OK, these are only our first impressions and we could be doing something very wrong (and I pray that we are). But when a piece of software changes version and two specialists take more than 30 minutes trying to figure out why, how, or what they need to do to sum a column in an interface, or should I say mapping (yes, this is the new name; I liked it, and it’s one of the few things that made sense to me in this new version), something is very wrong with it.

OK, let’s start from the beginning. When I started to work with data marts, the first tool that I used was OWB, and some time later, when I started to use ODI for some integrations, I really missed some things from OWB. It makes sense to take these two tools and merge them together. From OWB we had a cool mapping process that made it easy to understand what a transformation was doing, multiple targets, and a few other things that I missed in ODI. BUT ODI has the Agent, which allows us to connect anywhere without the need to create a heterogeneous service, a dblink or anything like that; it has more flexibility (and when I say more you can read infinitely more); we don’t need to deploy the mapping as a procedure in an Oracle database to integrate something, which makes development testing super-fast; and we have a lot of components. Well, ODI is so much better in these aspects that the few things I missed didn’t bother me at all.

So in this new version they tried to merge the two tools. What looks good on paper (I mean blogs and documentation) looks terrible in ODI.

We installed it this weekend to see what changed in this new version, and we saw a very different workspace for the interfaces, I mean for the mappings. This simple ODI UI…

ODI 11g Interface UI

ODI 11g Interface UI

Turned into this:

ODI 12c Interface UI

ODI 12c Interface UI

Hmm, looks good, right? Well, yes and no. I’m working on a 60” full-screen TV and I need to drag the screen left and right, up and down to make everything visible; poor devs who have a screen smaller than mine.

OK, but this is only layout; everything else should be better, right? Well, unless we were doing something very wrong, they added a lot more complexity to solve issues that we were able to solve very quickly since version 10.

First of all, in all the other versions, if you want to sum something in ODI you just need to take the expression in the target datastore and put a SUM() function around it. ODI would do the GROUP BY for you and everything would be OK.
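
Behind the scenes this was just the usual aggregation SQL; roughly speaking (with made-up table and column names), ODI would generate something like this, putting every non-aggregated target column in the GROUP BY for you:

SELECT SRC.ID_PRODUCT,
       SRC.ID_REGION,
       SUM(SRC.SALES) AS SALES      -- the SUM() you typed on the target column
  FROM SRC_SALES SRC
 GROUP BY SRC.ID_PRODUCT,
          SRC.ID_REGION             -- added automatically by ODI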

ODI 11g Sum

ODI 11g Sum

In the new version you need to drag in an object called Aggregate, map all the columns that you want through it (like in OWB), change the options of this object, and in the end put the same SUM() expression in this object instead of in the target datastore.

ODI 12c Sum Part 1

ODI 12c Sum Part 1

ODI 12c Sum Part 2

ODI 12c Sum Part 2

ODI 12c Sum Part 3

ODI 12c Sum Part 3

If you try to put the expression in as before (<=11g), it won’t create the GROUP BY and you won’t be able to run the interface, because it will simply fail…

Well, at least in OWB, when you use the Aggregate object it aggregates the columns that you define without the need to write the SUM() function. Why did they add this new complexity? OK, you can execute this SUM() somewhere other than the source or the target, but still…

We have some other components that we need to use in the interface right now.

ODI 12c Mapping Components

ODI 12c Mapping Components

The Dataset is used when you want more than one dataset in the source (we already had it in 11g; the difference now is that you need more screen space to manage it, but the bright side is that you won’t forget to change anything because you missed the datastore tab like in the old version [yes, I did that a lot]).

ODI 11g Datastore

ODI 11g Datastore

ODI 12c Datastore

ODI 12c Datastore

The Distinct component does not need to be explained; the only thing I have to say is that in the old versions you only needed to flag a simple check box, and now we need to add a Distinct component to the flow, drag all the columns to it and then drag those columns again to the target. A complete waste of time.

ODI 11g distinct

ODI 11g distinct

ODI 12c Distinct

ODI 12c Distinct

The Expression… well, almost the same as the Aggregate. Now, instead of just writing any expression in the target datastore, you may add this object to the flow; BUT it still works if you just write the expression on the target. So why do we need this additional object???

ODI 11g Expression

ODI 11g Expression

ODI 12c Expression

ODI 12c Expression

For the filter, join and lookup table nothing changed.

ODI 12c Lookup

ODI 12c Lookup

The Set is to define the type of union you can have between the datasets; same as before, but now it’s in the mapping too.

ODI 12c Set

ODI 12c Set

We now have a Sort component, so we may stop doing “SQL injection” or “KM changes” for a simple ORDER BY (of course I liked this one).

ODI 12c Sort

ODI 12c Sort

And the Split component. This one is what I missed most from OWB. It allows us to say something like: if DIMENSION is Account, all the data goes to DIM_ACCOUNT; if DIMENSION = ENTITY, then DIM_ENTITY; and the rest goes to DIM_OTHERS, for example.

This is a cool thing, but it is easily done using our command on source and target in a procedure (see the post 10 Important Things to Improve ODI Integrations With Hyperion Planning Part 2).
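
In plain Oracle SQL the split is basically a multi-table insert; a sketch like this (table and column names made up for the example) gives the same result as the component or the procedure approach:

INSERT FIRST
  WHEN DIMENSION = 'ACCOUNT' THEN
    INTO DIM_ACCOUNT (MEMBER_NAME, PARENT_NAME) VALUES (MEMBER_NAME, PARENT_NAME)
  WHEN DIMENSION = 'ENTITY' THEN
    INTO DIM_ENTITY (MEMBER_NAME, PARENT_NAME) VALUES (MEMBER_NAME, PARENT_NAME)
  ELSE
    INTO DIM_OTHERS (DIMENSION, MEMBER_NAME, PARENT_NAME) VALUES (DIMENSION, MEMBER_NAME, PARENT_NAME)
SELECT DIMENSION, MEMBER_NAME, PARENT_NAME
  FROM SRC_METADATA;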

ODI 12c Split

ODI 12c Split

As we can see, a lot of things were added in this version, but all these things make it unusable. Really, in the old versions I already tried not to use interfaces unless absolutely necessary, because they are time consuming, inflexible and hard to maintain, and there’s nothing you can’t do in a procedure, which is a lot better and faster to create than interfaces. In fact, I only use an interface when I want to use the CKM to apply some constraints for data quality; nothing I can’t do in a procedure, but that is for sure easier to achieve using interfaces. Apart from that, for everything else I prefer to use a procedure, mainly because I can get rid of the models. They look good, but for me they are the true villain of ODI: models are hard-coded information, and I hate hard code.

In summary, in this new version things that were relatively simple to use are now a nightmare to create. Of course things become more visual, but developers will pay a very high price for that cool look. Oracle just added foolish complexity to things that were simple and worked very well. Do you want a final example? In prior versions of ODI, you would import all the KMs that you needed for that specific project and you would pick one of them from the combo box on the Flow tab. If you needed to change it later, you would just pick another one from that same list and that’s it:

KMs 11g

ODI 11g KM Selection

In version 12, first you need to be on the Logical tab of the Mapping object, click on the target table to get its focus, expand the “Target” properties on the right panel and select its target “Integration Type”. This type will filter which KMs you will be able to see in the Physical tab:

IntegrationType1

ODI 12c KM Selection 1

In the Physical tab, click again on the target table, expand “Integration Knowledge Module” and select one of the KMs of that type that you filtered in the previous tab:

IntegrationType2

ODI 12c KM Selection 2

And what happens if you want to change the KM? If it is of a different type, first you need to go to the Logical tab, change the type, go to Physical, and select another KM. OK, they have categorized the KMs and this is a good thing, but why didn’t they add the Integration Type on the same tab as the KM selector??? Now we need to go back and forth without any apparent good reason, and if you are in doubt about which KM to select and want to read their descriptions to see which one best fits your needs, then you are totally screwed.

But there are two really cool things about this 12c version. First one: a debugger! Finally they added a debugger to ODI! This feature was a long-awaited one, because it was simply terrible to debug things in ODI. Now you can execute the code step by step, take a look at the variables’ content for that session and even query uncommitted data through the transactions:

debug_odi

ODI 12c Debug

Second cool thing: Roles in the Security module! Again, another long-awaited simple feature that did not exist until now. Roles are similar to groups, where the security added to a Role is replicated to all users that belong to that Role. This is great, because in the old days Security configuration was madness, with a lot of manual work. Finally we will have a better Security framework to work with.

Roles_ODI

ODI 12c Roles

Well, there’s still a lot to see in this new version, but the first look wasn’t pretty. I haven’t uninstalled it yet. Let’s see if we can find anything good that justifies the living hell that the interfaces (mappings) have turned out to be.

If any of you learn something different or get a different impression of this new version, please let us know, because I still can’t believe that these changes happened and that this is the way Oracle wants us to work from now on. (By the way, the UI for procedures is different too, and for now I won’t say whether I liked it or not, because normally we need some time to get used to it [but I didn’t like it :)].)

This weekend we’ll try a migration and leave our impressions here.

See you guys!