Archive for the Tips and Tricks Category

Fragmented and Aggregated tables in OBIEE using ODI Part 1/5: Creating the tables

Posted in OBIEE, ODI, ODI 12c, Oracle, Oracle 11.2.0, Oracle Database, Performance, SQL, Tips and Tricks on February 7, 2020 by RZGiampaoli

Hey guys, how are you?

I normally don't talk about OBIEE, despite the fact that I work a lot with it, because there are already a lot of people talking about it. But recently I worked on a project where I decided to use fragmented tables and aggregated tables at the same time, and I found that there isn't much specific information about this approach and, of course, close to zero information about using ODI to enable it.

Since this will be kind of big, I decided to split it into 5 parts:

Creating the tables: I’ll explain the advantages and how you can create the fragmented tables and also how to further improve performance using partitioning and sub partitioning.

Managing the partitions of the Fragmented/Aggregated tables: Here you’ll see how to manage all partitions using ODI.

Populating the Fragmented tables using ODI: Here you'll find how to create generic components to load all the fragmented tables we need.

Populating the Aggregated tables using ODI: Here you'll find how to create generic components to load all the aggregated tables we need.

Setting up the OBIEE Repository: Finally, we'll set up the OBIEE repository to make use of all these tables.

To start the series Fragmented and Aggregated tables in OBIEE, today we are talking about how to create the tables for our project.

In this project I had data from 4 different source systems, and each source system had a different amount of data: Dell had 20 million rows per month, EMC 30 million, DTC 5 million and STAT another 5 million.

The business wanted to have 15 quarters of data stored (45 months), so a simple calculation gives us: (20,000,000 + 30,000,000 + 5,000,000 + 5,000,000) * 45 = 2,700,000,000 rows.

That's a lot of data even for a partitioned table, so I decided to split these tables in some way.

Analyzing the data, I saw that these tables contain 3 different account hierarchies and the users only analyze one at a time: BS, Income and PL2. I decided to split the tables by hierarchy, and just with that I had 3 fragmented tables: JAT_JE_DETAIL_BS, JAT_JE_DETAIL_INCOME and JAT_JE_DETAIL_PL2.

That's nice: just with that I split those 2.7 billion rows in 3. The problem now is that I have 4 different source systems, and even if I partitioned the tables by source system, EMC is as big as the other 3 sources put together, and that would make that source slower to retrieve than the others.

With that, I decided to split the tables in 2 again: one set for EMC and another for the other sources. That distributes the data evenly: 20,000,000 + 5,000,000 + 5,000,000 in the D tables and 30,000,000 in the E tables (D and E are how I called them).

So now we have 6 fragmented tables, right? 3 for EMC data (one for each account hierarchy) and another 3 for Dell, DTC and STAT data.

Now that we have a nice and even data distribution, we need to decide how we are going to create the partitions and sub-partitions for these tables.

For the partitions, I always like to use PERIOD_ID because it makes sense in most cases (unless we are creating a Forecast app, in which case the users will want to slice by scenario and not by a single period).

So for the tables JAT_JE_DETAIL_D_BS_FACT, JAT_JE_DETAIL_D_INCOME_FACT and JAT_JE_DETAIL_D_PL2_FACT we are going to partition by period. But what kind of partition are we going to use? Well, in this case these tables contain Journal information at day level, and the users want to drill from the month down to the Journal rows of that month. That means we'll always work inside a month, so the best way to go is a RANGE partition.

Range partitions, as the name already says, use a range of values to define each partition. In our case each partition will be one month. The range can be built on a DATE column or a NUMBER column; the second one is our case, since it is easier to manipulate the periods using a number than a date.

(The period for this project is not a regular calendar month; for example, the month of February FY20 starts on Jan/30 and goes until Feb/2).

For the sub-partitions it's an easy call: we'll use the Source System, since we have 3 different ones in these tables (Dell, DTC and STAT), and the partition type will be LIST since we have a simple list of values. Also, for sub-partitions it is always easier to use values that don't keep growing, because of the complexity of adding new sub-partition values to a table.
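Just to make this concrete, here is a minimal DDL sketch of what one of the D tables could look like. The column list, partition/sub-partition names and the PERIOD_ID boundary are illustrative assumptions; only the table name, the partition columns and the partition types come from the design above.

  -- Minimal sketch: RANGE on PERIOD_ID, LIST sub-partitions by source system
  CREATE TABLE JAT_JE_DETAIL_D_BS_FACT (
    PERIOD_ID         NUMBER(9)    NOT NULL,  -- day-level id, e.g. YYYYQMMDD
    SOURCE_SYSTEM_ID  VARCHAR2(10) NOT NULL,  -- DELL, DTC or STAT
    ACCOUNT_ID        NUMBER,
    AMOUNT            NUMBER
  )
  PARTITION BY RANGE (PERIOD_ID)
  SUBPARTITION BY LIST (SOURCE_SYSTEM_ID)
  SUBPARTITION TEMPLATE (
    SUBPARTITION SP_DELL VALUES ('DELL'),
    SUBPARTITION SP_DTC  VALUES ('DTC'),
    SUBPARTITION SP_STAT VALUES ('STAT')
  )
  (
    -- one partition per (fiscal) month; the boundary value here is just a placeholder
    PARTITION P_FEB_FY20 VALUES LESS THAN (202020300)
  );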

What I mean is that to add a new partition you just need one ALTER TABLE ... ADD PARTITION statement, and you can add as many partitions as you need. But for a sub-partition it is different. When you create the sub-partitions, Oracle creates a sub-partition template, and that template is used every time you create a new partition.

In our case, for example, our sub-partition template would be DELL, DTC and STAT. That means that when you create another partition, it will be created with these 3 sub-partitions automatically. So if you want to add another sub-partition value, you need to do 2 things instead of just one (as in the partition case).

You need to change the sub-partition template definition to add the new sub-partition, and you also need to add your new sub-partition to every single existing partition (it is not updated automatically). After that, all new partitions will contain all your sub-partitions.

If you don't update the template, every new partition you create will be missing the new value you want.
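To illustrate the difference, here is a hedged sketch of the DDL involved (partition/sub-partition names, the hypothetical new source system 'NEW' and the boundary values are assumptions):

  -- Adding a new month is a single statement:
  ALTER TABLE JAT_JE_DETAIL_D_BS_FACT
    ADD PARTITION P_MAR_FY20 VALUES LESS THAN (202020400);

  -- Adding a new source system value takes two steps:
  -- 1) update the sub-partition template so FUTURE partitions get it automatically
  ALTER TABLE JAT_JE_DETAIL_D_BS_FACT
    SET SUBPARTITION TEMPLATE (
      SUBPARTITION SP_DELL VALUES ('DELL'),
      SUBPARTITION SP_DTC  VALUES ('DTC'),
      SUBPARTITION SP_STAT VALUES ('STAT'),
      SUBPARTITION SP_NEW  VALUES ('NEW')
    );

  -- 2) add the new sub-partition to every EXISTING partition (repeat per partition)
  ALTER TABLE JAT_JE_DETAIL_D_BS_FACT
    MODIFY PARTITION P_FEB_FY20
    ADD SUBPARTITION P_FEB_FY20_SP_NEW VALUES ('NEW');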

Enough theory; let's proceed with the other set of tables: JAT_JE_DETAIL_E_BS_FACT, JAT_JE_DETAIL_E_INCOME_FACT and JAT_JE_DETAIL_E_PL2_FACT. For these tables we'll keep the partitions as a RANGE on PERIOD_ID, but it doesn't make sense to sub-partition by source system, because here we have only one source system.

And now is the time to explain another thing about partitions. For Oracle to prune partitions, you need to filter on the column that is used to create the partition. For example, if we create a partition using the column PERIOD_ID, we must filter the data by PERIOD_ID, and ideally we should always have a filter on both the partition and sub-partition columns, because if you don't, Oracle will do a full table scan anyway. So from a data-retrieval point of view, the choice of partition columns is always made based on the filter requirements, which means we need to choose columns that we can force the user to filter on in OBIEE. From an ETL point of view, the best choice is to have partitions and sub-partitions on columns that we can use to truncate before a load.

For example, in our first set of tables I can easily and quickly truncate a period and a source system before I load the data back into the tables. That's the beauty of partitions: I can drop, truncate and create them on demand, as in the sketch below.
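A hedged sketch of that kind of maintenance (partition and sub-partition names are assumptions carried over from the earlier DDL sketch):

  -- Reload just one source system for one month: wipe only that slice...
  ALTER TABLE JAT_JE_DETAIL_D_BS_FACT
    TRUNCATE SUBPARTITION P_FEB_FY20_SP_DELL;

  -- ...or wipe the whole month across all sources before reloading it
  ALTER TABLE JAT_JE_DETAIL_D_BS_FACT
    TRUNCATE PARTITION P_FEB_FY20;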

That said, you can imagine that sometimes what is good for reporting is not good for the DW load, but that is not our case today.

With this new piece of information in mind, since the users are forced in the OBIEE prompts to choose a Gen 2 Account, this will be the sub-partition column for our second set of tables, also with the LIST partition type.

Before we proceed, let's see how everything is set so far:

Table Name                      | Partition          | Sub Partition
JAT_JE_DETAIL_D_BS_FACT         | PERIOD_ID (Range)  | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_INCOME_FACT     | PERIOD_ID (Range)  | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_PL2_FACT        | PERIOD_ID (Range)  | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_E_BS_FACT         | PERIOD_ID (Range)  | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_INCOME_FACT     | PERIOD_ID (Range)  | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_PL2_FACT        | PERIOD_ID (Range)  | ACCOUNT_GEN2_ID (List)

Perfect. Now the only thing missing is the aggregated tables.

Aggregated tables are tables that use a dimension drill level to pre-aggregate the values they contain. It's an easy way to improve performance during a drill operation.

The most common aggregated table is for the Period dimension, and that is the one we are going to build here as well. In our case, as mentioned before, the 6 tables we have now contain day-level data, and I want to have the data aggregated by Month and by Quarter as well.

That means we'll need one more table for each of these 6 tables for monthly-level data, and one more table for each of them for quarterly-level data. These tables will have the same partition and sub-partition definitions with only one difference: instead of a Range partition we'll use a List partition, because we don't have a range anymore, we are aggregating that range of days into a single month (or quarter).
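As a hedged illustration, a monthly aggregate table could be declared like this (columns and names are assumptions; the point is the LIST partition on PERIOD_ID):

  CREATE TABLE JAT_JE_DETAIL_D_M_BS_FACT (
    PERIOD_ID         NUMBER(7)    NOT NULL,  -- month-level id, e.g. YYYYQMM
    SOURCE_SYSTEM_ID  VARCHAR2(10) NOT NULL,
    ACCOUNT_ID        NUMBER,
    AMOUNT            NUMBER
  )
  PARTITION BY LIST (PERIOD_ID)
  SUBPARTITION BY LIST (SOURCE_SYSTEM_ID)
  SUBPARTITION TEMPLATE (
    SUBPARTITION SP_DELL VALUES ('DELL'),
    SUBPARTITION SP_DTC  VALUES ('DTC'),
    SUBPARTITION SP_STAT VALUES ('STAT')
  )
  (
    PARTITION P_FEB_FY20 VALUES (2020202)  -- one discrete month id per partition
  );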

Let's see how the tables need to be set:

Table Name                      | Partition          | Sub Partition
JAT_JE_DETAIL_D_BS_FACT         | PERIOD_ID (Range)  | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_INCOME_FACT     | PERIOD_ID (Range)  | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_PL2_FACT        | PERIOD_ID (Range)  | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_M_BS_FACT       | PERIOD_ID (List)   | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_M_INCOME_FACT   | PERIOD_ID (List)   | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_M_PL2_FACT      | PERIOD_ID (List)   | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_Q_BS_FACT       | PERIOD_ID (List)   | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_Q_INCOME_FACT   | PERIOD_ID (List)   | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_Q_PL2_FACT      | PERIOD_ID (List)   | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_E_BS_FACT         | PERIOD_ID (Range)  | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_INCOME_FACT     | PERIOD_ID (Range)  | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_PL2_FACT        | PERIOD_ID (Range)  | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_M_BS_FACT       | PERIOD_ID (List)   | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_M_INCOME_FACT   | PERIOD_ID (List)   | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_M_PL2_FACT      | PERIOD_ID (List)   | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_Q_BS_FACT       | PERIOD_ID (List)   | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_Q_INCOME_FACT   | PERIOD_ID (List)   | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_Q_PL2_FACT      | PERIOD_ID (List)   | ACCOUNT_GEN2_ID (List)

Great, now we have 18 tables: 6 fragmented tables for daily-level data, 6 fragmented tables for monthly aggregated data (M) and 6 fragmented tables for quarterly aggregated data (Q).

The last thing about the tables is that we need a matching Period dimension for each aggregation level. Since we have 3 different levels of aggregation, we need 3 Period dimensions.

OBIEE doesn't allow us to join the same table using columns that are not ID columns. So what we need is one PERIOD dimension (DIM_PERIOD) that contains the drill from Year to Day (its Period ID will be a YYYYQMMDD number), another PERIOD Month dimension (DIM_PERIOD_MONTH) that contains the drill from Year to Month (Period ID will be a YYYYQMM number), and one last PERIOD Quarter dimension (DIM_PERIOD_QUARTER) that contains the drill from Year to Quarter (Period ID will be a YYYYQ number).

Of course, the aggregated FACT tables will also have a matching Period ID at each level (and we need that to aggregate the data correctly), along the lines of the sketch below.
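Just to illustrate the idea (this is a sketch with assumed column names, not the actual ODI-generated load), rolling the day-level PERIOD_ID (YYYYQMMDD) up to the month-level ID (YYYYQMM) could look like this:

  INSERT INTO JAT_JE_DETAIL_D_M_BS_FACT (PERIOD_ID, SOURCE_SYSTEM_ID, ACCOUNT_ID, AMOUNT)
  SELECT TRUNC(PERIOD_ID / 100) AS PERIOD_ID,   -- YYYYQMMDD -> YYYYQMM
         SOURCE_SYSTEM_ID,
         ACCOUNT_ID,
         SUM(AMOUNT)
    FROM JAT_JE_DETAIL_D_BS_FACT
   GROUP BY TRUNC(PERIOD_ID / 100), SOURCE_SYSTEM_ID, ACCOUNT_ID;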

The last thing to mention is that the detail fact table (the one at day level) contains a lot of Journal details like Journal number, Description, Header ID and others. For the detail table this is ok, but when we design the aggregated tables, only IDs should be kept there for maximum aggregation. If the table contains descriptions, nothing will be aggregated, right?

Don't worry, OBIEE can handle this kind of situation very well.

That's it for now. I hope you guys enjoy it and find it helpful.

See you soon.


ODI Hidden Gems – Static validations

Posted in Gems, ODI, Tips and Tricks on December 20, 2019 by Rodrigo Radtke de Souza

Hi all!

This post is based on a question that I answered in the Data Integration community forum some time back. The feature is powerful, but it is also somewhat "hidden". The question was like this (I have edited it for the purpose of this post):

I have one mapping that will load one target table (TARGET) from two different sources (SOURCE_A and SOURCE_B). The target table TARGET contains ITEM_KEY, ATT1, ATT2, ATT3 and so on. One source (SOURCE_A) contains ITEM_ID, ATT1, ATT2, ATT3 values and we are using a lookup to a dimension table (T_ITEM) to get the ITEM_KEY for the ITEM_ID. The second source (SOURCE_B) contains CC_ID, ATT1, ATT2, ATT3, and there is a reference table (T_CC_ITEM_REF) which contains the mapping between CC_ID and ITEM_ID that we use to look up the dimension table (T_ITEM) to get the ITEM_KEY for the ITEM_ID.

Validating the first source is straightforward, as we have declared a constraint checked by the CKM to log foreign key error records, which get logged as 'Join Error between TARGET and T_ITEM' in E$. For the second source, there is a "middle" table in the mapping, so we must join SOURCE_B to T_CC_ITEM_REF (reference table) and to T_ITEM (dimension table). I can still create a constraint and log foreign key errors between TARGET and T_ITEM, but is there a way that I can also be more specific and capture any join error between T_CC_ITEM_REF (reference table) and T_ITEM (dimension table)? E.g., if T_CC_ITEM_REF is missing a mapping record, or if any mapping points to a nonexistent T_ITEM row?

This is a very good example of "indirect join" validation, where you want to validate some source data that is used in your mappings, but those tables are not directly associated with your target table. In these cases you cannot validate them directly in a regular mapping, but ODI has a very nice feature called Static validation, which allows you to run any validation against any data model at any time, so you can catch all those errors before trying to load your target table.

Before getting directly into how to solve this issue, let's step back and briefly see some options that ODI has to offer regarding data validation. If you go to ODI models and expand any of them, you will see that you may add "Constraints" to them. Generally, we add those constraints to the target table, so data gets validated during the load data flow, before hitting the actual target table. I said "generally" because we have situations (like the one we are talking about now) where we want to validate the source/mapping tables even before we try to load the targets.

1

The first one is a "Key", which resembles a "Primary Key". You add all the columns that belong to your table's primary key and ODI will make sure to log all records that are not unique based on this key.

The second one is "Reference", which resembles a "Foreign Key". You add the column and table names that belong to a foreign key relationship and ODI will automatically log all records that violate that relationship. If we go back to the above question, this works fine to validate our TARGET table against T_ITEM (the dimension) because there is a direct link between them. ODI will log all source rows that do not have a valid "join" to the T_ITEM table before loading them to the target (so the target has only "valid" records regarding that FK).

The third one is called "Condition" and it is the most flexible of all ODI constraints. Basically, it is free-form text where you may add any kind of SQL statement that you want to check. You just need to remember that you always write a statement that checks for TRUE values; for example, you test whether a column value exists, "is in" another table or whether the values are greater or less than a specific value. When you execute your mapping, you will see that ODI "negates" your condition by adding a NOT before it, so it checks for all the records that are not TRUE (in other words, FALSE) and logs them in the E$ table. Pretty neat stuff.
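Roughly speaking (this is an illustration of the idea, not the actual CKM-generated code; the table, column and error-table names are assumptions), if you define a condition like AMOUNT >= 0 on a datastore, the check step boils down to inserting into the E$ table everything for which the condition is NOT true:

  -- Illustrative sketch only
  INSERT INTO E$_SOURCE_TABLE (ERR_MESS, AMOUNT, ITEM_ID)
  SELECT 'Amount must not be negative', AMOUNT, ITEM_ID
    FROM SOURCE_TABLE
   WHERE NOT (AMOUNT >= 0);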

Going back to our example, let's divide our problem into two pieces. First, let's validate whether T_CC_ITEM_REF contains any kind of bad data, meaning we will check whether it contains any mapping row that references a non-existing T_ITEM row. This can be achieved by going to T_CC_ITEM_REF and creating a New Reference.

2

There you point the “Parent Table” (in this case, T_ITEM):

3

In attributes, add the two columns that are used in the join condition:

4

Save it. Go to the parent Model of this datastore and check its Control tab. You will need to select which CKM will be used to run the Static validation (you cannot leave it blank).

5

Create a new package and “drag and drop” the model to the package. It will look like this:

6

This icon indicates that, once you run the package, ODI will run a Static validation on this datastore using the CKM that we set in its parent Model. Once we run it, this is what we get in Operator:

7

The warning sign on the scenario execution tells us that some error was logged in the E$ table. When we go check it, we can see the following:

8

It means that ITEM_ID 50 does not exist in T_ITEM, so any source record with CC_ID equal to 5 would be dropped due to an invalid mapping row.

Now, what if we want to check whether all CC_IDs from the source have a valid mapping record? This is where the "Condition" constraint comes in handy, since it is very flexible and allows us to write virtually any kind of SQL logic in it. Let's go to SOURCE_B and create a Condition on it:

9

Add a name to it and select a type. An Oracle Data Integrator Condition is a condition that exists only in the model and does not exist in the database. A Database Condition is a condition that is defined in the database and has been reverse-engineered. In our case, let's pick ODI Condition. Write the SQL statement that you want to be true. In this case, I want every source CC_ID to be not null and also to exist in the join between my mapping and dimension tables. Another cool thing about conditions is that you may write custom messages for them, so it gets clearer to the users what that validation means. A hedged sketch of such a condition is shown below.
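A sketch of what the condition body could look like (the join column between T_CC_ITEM_REF and T_ITEM is assumed to be ITEM_ID; the table and column names come from the question above):

  SOURCE_B.CC_ID IS NOT NULL
  AND SOURCE_B.CC_ID IN (SELECT REF.CC_ID
                           FROM T_CC_ITEM_REF REF
                           JOIN T_ITEM ITM
                             ON ITM.ITEM_ID = REF.ITEM_ID)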

10

Similarly to the previous example, drag and drop the model into the package and run it. This is what you get:

11

12

The error is telling us that there is a source record (CC_ID 3) that does not have a mapping record in T_CC_ITEM_REF. You may add both static validations to your package before you load your target and have a complete picture of what data is missing in your data flow.

Hope you liked the post. See ya!

Oracle SQL for EPM Tips and Tricks S01EP08

Posted in Connect By, DEVEPM, Dimensions, EPM, Oracle, Oracle 11.2.0, Oracle 11.2.0.4, Oracle Database, Performance, REGEXP, Tips and Tricks on November 26, 2019 by RZGiampaoli

Hey guys, how are you? Today I'll expand a little bit more on the REGEXP theme, just to show how easy and powerful these functions are, not to mention how dynamic.

In the last post, S01EP04, I explained how to split a string into different columns. The thing is, what if you don't want to split it into different columns? What if you just want to have the string as a list of rows in the same column?

We can also do that with a very nice trick using REGEXP_COUNT. Let's take a look at it.

In the previous post we had a query like this:

As I said there, with REGEXP it is very simple to "walk" through the string and get any part of it you want. It is also simple to transform a string into a list of values. To do that, we just need to add one extra CONNECT BY and the REGEXP_COUNT function, like this:
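Since the original query screenshot is not reproduced here, below is a hedged recreation of the idea; the PATH string is just sample data and the column names are illustrative:

  WITH SAMPLE AS (SELECT 'ACCOUNT|TOTAL_ACCOUNT|INCOME|NET_INCOME' AS PATH FROM DUAL)
  SELECT LEVEL AS MEMBER_POSITION,
         REGEXP_SUBSTR(PATH, '[^|]+', 1, LEVEL) AS MEMBER
    FROM SAMPLE
  CONNECT BY LEVEL <= REGEXP_COUNT(PATH, '[^|]+');

This returns one row per "|"-delimited member of the string, in order.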

What CONNECT BY LEVEL <= REGEXP_COUNT(PATH, '[^|]+') does is basically generate one row per delimited value it finds in the string; in other words, it dynamically "walks" through the string looking for the "|" separators and passes the current LEVEL to the REGEXP_SUBSTR above. The result is:

As you can see, REGEXP_COUNT returns the number of delimited values it finds in the string, CONNECT BY LEVEL generates that many rows for the string, and REGEXP_SUBSTR "walks" the string using the LEVEL of each row to extract the list of members from it.

This is very useful for manipulating strings when you don't know how many members you'll need to extract from them, like UDA members for example, which, when you extract them from ASO cubes, come all together in one string.

I hope this is a useful trick for you guys and see you soon.

ODI Hidden Gems – Target Load Order

Posted in Gems, ODI 12c, Tips and Tricks on October 14, 2019 by Rodrigo Radtke de Souza

Hi all!

Today's gem is related to a very nice surprise that all ODI developers had when they were migrating from ODI 11 to ODI 12: the ability to load multiple target tables with the same ODI mapping object. You may have a very large mapping and "split" (with the Split object) the result set into multiple targets, or you may just copy the same result to different target tables, mapping different fields in each of the targets. This post is not about what you can do when loading multiple targets (which is quite a lot thanks to this feature in ODI 12c), but about how to control the order in which those target tables are loaded.

Let’s start with the following example:

1

If you execute this mapping, this is what it will look like in Operator:

2

You can see that it loaded the tables in an order that was decided by ODI, probably in the same order that you dragged and dropped the models into the mapping. Now imagine that you want to have control over the order and need to load Table B before table A. The classical example where you would use that is when you retrieve a source dataset and you want to derive both the dimension and fact information out of it. In this case, you would want to load the dimension table first and the fact table second.

Luckily, ODI allows us to change the order, but it's not too intuitive. In the Logical tab, you need to click on any area that does not contain an object (any white area will do). This will display the "Target Load Order" option:

8

Click on the "gear" icon on the far right and a menu will pop up:

4

Now you may configure it to have any order that you want, let’s say table B, C and then A. If you save and run the mapping now, this is what you get:

5

6

That’s it folks! See ya!

ODI Hidden Gems – Begin/End Mapping Command

Posted in ODI 12c, Tips and Tricks on October 7, 2019 by Rodrigo Radtke de Souza

Hi all,

Today’s short post is about a simple, but very powerful feature that often is overlooked: Begin/End Mapping Command. These options are in the Physical tab and, as their name suggests, they may issue any kind of command before a mapping begins and/or after it finishes.

1

Pay close attention to the detail that they may execute ANY command from ANY technology that ODI can handle, and that's why this is so powerful. You may run anything from Oracle DML statements to a piece of Java code, OS commands and so on. This gives you a lot of flexibility.

A very common example of where we may use these is to "track" a mapping in a separate log table. Although ODI Operator contains all the log information, sometimes we get a requirement to track all the executions of a particular mapping, so people know for sure when it ran and that the logs will not be purged from the Operator by accident. Let's see how we may accomplish logging the start and end times of an execution.

Let's start with "Begin". First you select which technology and logical schema the command refers to. In this case, we will insert the name of the mapping, the time that it started, and the session number that it was assigned in ODI. A hedged sketch of such a command is below.
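As a sketch (the LOG_INFO columns are assumptions; only the table name is the one queried later in this post), a Begin Mapping Command on an Oracle technology could be something like:

  INSERT INTO LOG_INFO (MAPPING_NAME, SESSION_NO, START_TIME)
  VALUES ('<%=odiRef.getSession("SESS_NAME")%>',   -- session/mapping name from ODI
          '<%=odiRef.getSession("SESS_NO")%>',     -- ODI session number
          SYSDATE)

The End Mapping Command would be a similar INSERT (or an UPDATE of the same row) stamping the end time.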

2

Let’s do the same with “End”:

3

Let’s run the mapping. When we go to Operator, we may see that two new tasks were created, one before and another one after the main mapping:

4

We may double click it to see the code that was executed:

5

If we query the LOG_INFO table, we will see two entries, one for begin and another one for end:

6

This was a very short example, as you may do way more than that. You may send emails to alert that a critical mapping has completed, zip and move a file right after it was loaded by the mapping, run an OS bat file that will prepare your environment before a data load and so on. These two options are a great way to keep all these "small" pieces of code inside the ODI mapping object itself and rely less on small ODI procedures.

See ya!

ODI Hidden Gems – Temporary Indexes

Posted in ODI, ODI 12c, Tips and Tricks on September 3, 2019 by Rodrigo Radtke de Souza

Hi all!

Today's gem is indeed a very hidden one. ODI is known (unfortunately) to be "not intuitive" most of the time, and I think that's because we have many options that are scattered across too many objects in the UI, which leads you to keep clicking on objects until you find what you need. To make things a little bit harder, you have the logical and physical tabs and each one of them has its own representation of the objects, so some of the options will be in the logical tab and some others in the physical tab. Let's talk today about the "Temporary Indexes" option that exists on some objects in the Physical tab of a mapping.

Very often you will load data from places that may not have an "index" concept, like files for example. You may create a mapping that loads any number of big files, puts them in a staging area and filters/joins them together to do some ETL. You may reach situations where those joins are not performing well since you don't have an "index" on the files. This is where ODI may help you with Temporary Index creation. Let's see this example (it's very basic, but you will get the idea):

1

You join both files and load to a target table. When we execute this mapping, this is what we get:

2

Basically, you are loading both files to C$ tables and then you are joining those C$ tables when loading to the target.

7

If you think that this join would benefit from an index, you may configure ODI to create temporary indexes on that join. To do that, you will need to go to the Physical tab, click the "join" object and go to "Properties" as below. Once you know where the option is, it seems pretty easy and obvious; however, if you are not familiar with ODI, you will need some practice to actually figure out that you need to go to the Physical tab and then click on the specific object to get some unique properties of that object:

3

Change Index Type to one of the possible values (Bitmap, Non-Unique, Unique) and run the mapping again. You will see the index creation step right after it loads the data to C$, along the lines of the sketch below:
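The generated statement is something like this (illustrative only: ODI generates the actual C$ table and index names, and the indexed column depends on your join; the names here are assumptions):

  CREATE BITMAP INDEX C$_FILE_A_IDX ON C$_0FILE_A (CUSTOMER_ID);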

4

This technique also works for filter objects. If we add a filter on any of the file columns and think that it would be beneficial to have an index on it, the steps to create it are the same as before:

5

When executing, we can see all the index creation steps there:

6

If you go to the Oracle documentation, there is a note regarding Temporary Index creation:

  • The creation of temporary indexes may be a time-consuming operation in the overall flow. Oracle recommends reviewing execution statistics and comparing the execution time saved by the indexes to the time spent creating them.

It’s a very valid point. Most of the time we don’t need to create any temporary index, but if you end up in one of those cases that you need to create them, now you know where to find this option.

Thanks!

 

ODI Hidden Gems – Setting custom Warnings and Errors

Posted in ODI, Tips and Tricks on August 7, 2019 by Rodrigo Radtke de Souza

Hi all! ODI developers often create their own ODI procedures that contain specific logic or technologies. Since it's custom code, we want to be sure that it is efficient, but also that it is easy to read and shows errors/warnings when something goes wrong. Let's see an example of what can be done in ODI regarding this topic.

I’ve created a simple procedure and put it in a package. If I execute it and something goes wrong, the entire procedure gives an error and it is propagated to the main scenario:

1

Now imagine that this procedure is not that critical and that we could ignore the errors when they happen. We just need to open the ODI procedure and click the "Ignore Errors" option.

2

When we execute again, this is what happens:

3

My procedure step finished with a warning, which is good, but the main package finished with success, which may be bad. If someone looks at Operator only at the package level, they may never check why that internal step has a warning, since the parent is "green". If the warning is somehow important and we want to propagate it to the parent step, we first need to understand what ODI checks to set a parent step to warning.

For ODI to cascade the warning to its parent, one of its child steps needs to have the "Nº of Errors" greater than zero. In our case, when the error was triggered and set to ignore, the "Nº of Errors" in the step was never changed (it remained 0), as you can see below:

4

Fortunately, it's very easy for us to manipulate all those record statistics numbers in ODI. You can use one of the five methods below (one for each statistic):

  • setNbInsert()
  • setNbUpdate()
  • setNbDelete()
  • setNbErrors()
  • setNbRows()

You may create a task in a procedure with Jython technology and just add odiRef.setNbErrors(1) to set the number of errors of that step to 1. To make it easier to understand, let's remove the Ignore Errors option from our example and create a new procedure just to set the error number, as below:

5

6

When we run our package now, we can see the following:

7

Our procedure threw an error, we caught it with a red arrow in the flow and then set the number of errors to 1. ODI understands that, when the error number is greater than 0, it must set the parent icon to a Warning. Depending on the code/technology in your procedure, you may even include the setNbErrors call inside your own code, so you don't need a separate procedure for that.

8

Thanks all! See you soon!

 

 

 

ODI Hidden Gems – Exception Handling – Timeout(s)

Posted in ODI, Tips and Tricks on July 26, 2019 by Rodrigo Radtke de Souza

Hi all,

Today’s hidden gem is the “Exception Handling – Timeout(s)” option which is located at the Load Plan steps:

1

There are certain situations where we have a very strict load window and cannot go over a certain time limit. If that happens, the data load should abort before something bad happens. Luckily, ODI Load Plans have a very easy mechanism to handle timeout situations, and most people are not aware of it. In every load plan step, we may add a timeout value, which is the maximum time (in seconds) that this step may take before it is aborted by the Load Plan. When a timeout is reached, the step is marked in error and the Exception step (if defined) is executed.

It seems simple, but it can be very powerful, since this setting may be applied to any parent step (even the root step). In that case, we have a safeguard against a potential long-running/overlapping situation for the entire load plan. For example, if you want to be sure that the entire load finishes within 8 hours, just add a timeout value of 28,800 seconds to the root step and it will stop if it reaches this value. In daily execution load plans, you may set it to 86,400 seconds (24 hours), so it does not overlap with the next daily execution. In the following screen, I set a 10-second timeout, and this is the error that is triggered when the step reaches the timeout value.

2

That's it folks, see ya!

ODI Hidden Gems – Log Steps in the Journal

Posted in ODI, Tips and Tricks on July 11, 2019 by Rodrigo Radtke de Souza

Hi all,

Today we will talk about the "Log Steps in the Journal" option. This one resides in the "Advanced" tab of every step inside an ODI package, as you can see below. Every step in a package appears in the execution log while it is being executed, but we may define whether the step should be kept in the journal after its execution finishes. The available options are:

  • Never: the step is deleted from the journal.
  • Always: the step is always kept in the journal.
  • Errors: the step is kept in the journal only if it failed. Otherwise, it is deleted.

1

Although it seems a very simple option, it has some PROS and CONS. Let's talk about the PROS first. Imagine that the above scenario loops the same procedures 10 times. You would end up with a log like this:

2

You may want to keep the log of all those procedure executions, so you know what they did (like how many rows they inserted/deleted/updated). However, all the steps related to the loop variable are kind of useless, since they are only used to control the "loop" over the steps. In this case, if you wish to keep a cleaner log, you may set both variable steps to "Never", like below:

3

Now, when you execute this package, you will have a much cleaner log:

4

However, this option comes with a CON: I have already seen several situations where people were trying to debug an execution in Operator and could not understand how the values were being assigned, or were not sure how some things were being populated, because they did not see any step in Operator related to that. After some time lost wondering about it and then double-checking the same package in development, they would realize that someone had set that step to never log (sometimes even by accident). So, anytime you are trying to debug something in Operator and it seems weird or has missing pieces, please make sure to look at the development package as well, since some steps may be set to never log.

That’s it folks. A quick post today. See ya!

ODI Hidden Gems – Degree of Parallelism for Target (DOP)

Posted in ODI, Tips and Tricks on July 5, 2019 by Rodrigo Radtke de Souza

Hi all,

If you read our posts, you know that we like to write "series" of them. We think it's good motivation for us to focus on a topic and keep writing about it. So, let me begin with a brand-new series called "ODI Hidden Gems". We will be talking about those small configurations, checkboxes and settings that most people just ignore or don't even know exist, but that can be of great value.

ODI is a great tool; it has a lot of options, and anyone may survive without knowing all of them. However, there are some that shine and give you better data load performance, development usability and so on. Without further delay, let's talk about Degree of Parallelism for Target (DOP).

1

This setting resides on the ODI Data Server component within Topology, together with Array Fetch Size and Batch Update Size, and all three are often misunderstood. The first thing to notice is that each of those settings applies to either the SOURCE, TEMP (C$) or TARGET database (and this is where the confusion happens).

  • Array Fetch Size: This setting is only used when the data server is used as a SOURCE. When reading large volumes of data from a data server, Oracle Data Integrator fetches successive batches of records. This value is the number of rows (records read) requested by Oracle Data Integrator on each communication with the data server.
  • Batch Update Size: This setting is only used when the data server is used as a TARGET. When writing large volumes of data into a data server, Oracle Data Integrator pushes successive batches of records. This value is the number of rows (records written) in a single Oracle Data Integrator INSERT command.
  • Degree of Parallelism for Target: Although the name suggests TARGET, this setting is only used for the TEMP (C$) part when the data server is used as a TARGET. It indicates the number of threads allowed for a loading task, in other words, for the C$ population from the source database to the target database. The default value is 1 and the maximum number of threads allowed is 99.

So, when you want to optimize all three parameters, you will probably change two different data servers (source and C$/target) and not only one data server, as most people try to do. Also, when we talk about ODI Data Server DOP, which is the number of parallel threads, we are talking exclusively about the C$ piece of the integration, so it's not related to the final SOURCE/TARGET tables at all.

Let me give you one example to make it clearer. If you are wondering where I'm getting the following details from, notice that every time you have a "Load Data" step from Server A to Server B, ODI creates a "Details" tab at the Operator task level with a lot of useful information. This is another free hidden gem.

Source: SQL Server, TABLE_A has 2,261,393 rows
Target: Oracle, TABLE_B will be loaded from TABLE_A.
Topology: Target Data Server is set to DOP 1.

3

We can see that it took 75 seconds to load this data, but the wait time on the source connection was 27 seconds. It means that ODI was able to get data quickly from the source database, but it needed to wait for the target thread to become available so it could send more data in. Also, the target DOP is 1, so only one thread worked to load this data into the C$ table.

Source: SQL Server, TABLE_A has 2,261,393 rows
Target: Oracle, TABLE_B will be loaded from TABLE_A.
Topology: Target Data Server is set to DOP 16.

5

Now we see some gain. The wait time is 0.610 seconds on the source, and the target threads were able to load all the rows into the C$ table in 35 seconds, running 16 threads in parallel. You may try to reduce these load times even further by changing Array Fetch Size (in the source Data Server) and Batch Update Size (in the target Data Server), but those two settings I'll leave for another Hidden Gem post.

See you later!