ODI Hidden Gems – Exception Handling – Timeout(s)

Posted in ODI, Tips and Tricks on July 26, 2019 by radk00

Hi all,

Today's hidden gem is the "Exception Handling – Timeout(s)" option, which is located in the Load Plan steps:

[Screenshot: "Exception Handling – Timeout(s)" option on a Load Plan step]

There are certain situations where we may have a very strict load window and cannot go over a certain time limit. If that limit is reached, the data load should abort before something bad happens. Luckily, ODI Load Plans have a very easy mechanism to handle timeout situations, and most people are not aware of it. In every load plan step, we may add a timeout value, which is the maximum time (in seconds) that this step may take before it is aborted by the Load Plan. When a timeout is reached, the step is marked in error and the Exception step (if defined) is executed.

It seems simple, but it can be very powerful, since this setting may be applied to any parent step (even the root step). That way, we have a safeguard against a potential long-running/overlapping situation for the entire load plan. For example, if you want to be sure that the entire load finishes within 8 hours, just add a timeout value of 28800 seconds to the root step and it will abort if it reaches that limit. In daily execution load plans, you may set it to 86400 seconds (24 hours), so one run does not overlap with the next day's execution. In the following screen, I set a 10-second timeout, and this is the error that is triggered when the timeout value is reached.

[Screenshot: timeout error triggered after the 10-second limit]

That's it folks, see ya!

ODI Hidden Gems – Log Steps in the Journal

Posted in ODI, Tips and Tricks on July 11, 2019 by radk00

Hi all,

Today we will talk about the "Log Steps in the Journal" option. This one resides in the "Advanced" tab of every step inside an ODI package, as you can see below. Every step in a package appears in the execution log while being executed, but we may define whether the step should be kept in the journal after its execution is finished or not. The available options are:

  • Never: the step is deleted from the journal.
  • Always: the step is always kept in the journal.
  • Errors: the step is kept in the journal only if it failed. Otherwise, it is deleted.

[Screenshot: "Log Steps in the Journal" option on the step's Advanced tab]

Although it seems a very simple option, it has some notable pros and cons. Let's talk about the pros first. Imagine that the above scenario was going to loop the same procedures 10 times. You would end up with a log like this:

[Screenshot: execution log showing every loop iteration, including the variable steps]

You may want to keep the log of all those procedure executions, so you know what they did (like how many rows they inserted/deleted/updated). However, all the steps related to the loop variable are kind of useless, since they are only used to control the "loop" over the steps. In this case, if you wish to keep a cleaner log, you may set both variable steps to "Never", like below:

[Screenshot: the loop variable steps set to "Never"]

Now, when you execute this package, you will have a much cleaner log:

[Screenshot: the resulting cleaner execution log]

However, this option comes with a downside: I have already seen several situations where people were trying to debug an execution in Operator and could not understand how the values were being assigned, or how some things were being populated, because they did not see any related step in Operator. After some time lost wondering about it and double checking the same package in development, they would realize that someone had set that step to never log (sometimes even by accident). So, anytime you are trying to debug something in Operator and it seems weird or has missing pieces, please make sure to look at the development package as well, since some steps may be set to never log.

That’s it folks. A quick post today. See ya!

ODI Hidden Gems – Degree of Parallelism for Target (DOP)

Posted in ODI, Tips and Tricks on July 5, 2019 by radk00

Hi all,

If you read our posts, you know that we like to write "series" of them. We think it's a good motivation for us to focus on a topic and keep writing about it. So, let me begin a brand-new series called "ODI Hidden Gems". We will be talking about those small configurations, check boxes and settings that most people just ignore or don't even know exist, but that can be of great value.

ODI is a great tool with a lot of options, and anyone may survive without knowing all of them; however, there are some that may shine and give you better data load performance, better development usability and so on. Without further delay, let's talk about Degree of Parallelism for Target (DOP).

[Screenshot: Data Server editor in Topology showing Array Fetch Size, Batch Update Size and Degree of Parallelism for Target]

This setting resides on the ODI Data Server component within Topology, together with Array Fetch Size and Batch Update Size, and all three are often misunderstood. The first thing to notice is that each of those settings applies to either the SOURCE, the TEMP (C$) or the TARGET database (and this is where the confusion happens).

  • Array Fetch Size: This setting is only used when the data server is used as a SOURCE. When reading large volumes of data from a data server, Oracle Data Integrator fetches successive batches of records. This value is the number of rows (records read) requested by Oracle Data Integrator on each communication with the data server.
  • Batch Update Size: This setting is only used when the data server is used as a TARGET. When writing large volumes of data into a data server, Oracle Data Integrator pushes successive batches of records. This value is the number of rows (records written) in a single Oracle Data Integrator INSERT command.
  • Degree of Parallelism for Target: Although the name suggests TARGET, this setting is only used for the TEMP (C$) part when the data server is used as a TARGET. It indicates the number of threads allowed for a loading task, in other words, for the C$ population from the source database to the target database. The default value is 1 and the maximum number of threads allowed is 99.

So, when you want to optimize all three parameters, you will probably change them on two different data servers (source and C$/target), not only on one data server, as most people try to do. Also, when we talk about the ODI Data Server DOP, which is the number of parallel threads, we are talking exclusively about the C$ piece of the integration, so it's not related to the SOURCE/TARGET databases themselves at all.

Let me give you one example to make it clearer. If you are wondering where I'm getting the following details from, notice that every time you have a "Load Data" step from Server A to Server B, ODI creates a "Details" tab at the Operator Task Level with a lot of useful information. This is another free Hidden Gem.

Source: SQL Server, TABLE_A has 2,261,393 rows
Target: Oracle, TABLE_B will be loaded from TABLE_A.
Topology: Target Data Server is set to DOP 1.

[Screenshot: Operator Details tab for the DOP 1 load]

We can see that it took 75 seconds to load this data, and the wait time on the source connection was 27 seconds. It means that ODI was able to get data quickly from the source database, but it needed to wait for the target thread to be available so it could send more data in. Also, the target DOP is one, so only one thread worked to load this data into the C$ table.

Source: SQL Server, TABLE_A has 2,261,393 rows
Target: Oracle, TABLE_B will be loaded from TABLE_A.
Topology: Target Data Server is set to DOP 16.

[Screenshot: Operator Details tab for the DOP 16 load]

Now we see some gain. The wait time was 0.610 seconds on the source, and the target threads were able to load all the rows into the C$ table in 35 seconds, running 16 threads in parallel. You may even try to reduce this load time further by changing Array Fetch Size (on the source Data Server) and Batch Update Size (on the target Data Server), but those two settings I'll leave for another Hidden Gem post.

See you later!

ORACLE SQL for EPM tips and tricks S01EP06!

Posted in Oracle, Tips and Tricks on May 6, 2019 by radk00

Hi all,

Today's post is about two cool Oracle analytic functions that are powerful and awesome, but that few people use: LEAD and LAG. The LEAD function lets you query more than one row in a table at a time without having to join the table to itself; it returns values from the next row in the table. LAG does the same thing but returns the previous row. It may sound weird when you just read the descriptions, so let's look at some real examples.

Imagine that we have the following data:

[Sample data: ACCOUNT, PERIOD and DATA columns]

I need to write a query that shows, in a single row, the percentage that the DATA column increased over the periods. For example, for PERIOD 2 I need to show one row with the previous and current period values and how much it increased over the period. In a lot of places I see people just querying the same table twice, joining by its key columns (in this case ACCOUNT and PERIOD) and then doing the percentage calculation. However, we don't need to go through all this trouble, since it is very easy to accomplish the same result using the LAG function, as shown below:

SELECT ACCOUNT,
       PERIOD,
       LAG(DATA, 1) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD) AS PREVIOUS_DATA,
       DATA AS PERIOD_DATA,
       ROUND(DATA / LAG(DATA, 1) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD) * 100, 2) AS PERCENTAGE
  FROM T$_LEAD_LAG

[Query result: ACCOUNT, PERIOD, PREVIOUS_DATA, PERIOD_DATA and PERCENTAGE columns]

LAG and LEAD syntax are basically the same:

LAG/LEAD ( expression [, offset [, default] ] )

OVER ( [ query_partition_clause ] order_by_clause )

In our example, I'm querying the table only once and I'm "LAGing" 1 previous row, partitioned by ACCOUNT and ordered by PERIOD. So, for each distinct account value, Oracle will order the rows by period and we can access the previous row's values as normal columns. We may do this as many times as we want, for example if we want a two-month comparison:

SELECT ACCOUNT,
       PERIOD,
       DATA AS PERIOD_DATA,
       LAG(DATA, 1) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD) AS PREVIOUS_PERIOD_DATA,
       LAG(DATA, 2) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD) AS PREVIOUS_TWO_PERIODS_DATA,
       ROUND((DATA - LAG(DATA, 1) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD))
             / LAG(DATA, 1) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD) * 100, 2) AS PERCENTAGE_PREVIOUS_PERIOD,
       ROUND((DATA - LAG(DATA, 2) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD))
             / LAG(DATA, 2) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD) * 100, 2) AS PERCENTAGE_PREVIOUS_TWO_PERIODS
  FROM T$_LEAD_LAG

[Query result: one- and two-period comparison columns]

LEAD can also be used to check data accuracy between "linked" rows, which is often needed in tables that contain SCD (Slowly Changing Dimension) behavior. Take the example below:

[Sample data: SCD records with contiguous effective start/end dates]

In an SCD table, we have the effective start and end dates for each one of the records that belong to the same key. These dates create a "link" between the records, where one effective date starts where another effective date ends. The above picture is an example where all records look good, with each effective date ending and starting correctly. Now see the example below:

[Sample data: SCD records with a one-day gap between effective dates]

We can see that there is a one-day gap between 10/08 and 11/08, which may cause a lot of trouble if the application tries to see which record was effective right between those two days (it would return null).

In order to search for those kinds of gaps between the records, we may write a simple and elegant LEAD statement that will find all records that have a "gap" between them. The statement would look like this:

WITH ALL_ AS (
    SELECT RECORD_KEY,
           TO_DATE(EFF_START_DATE, 'mm/dd/yyyy hh24:mi:ss') EFF_START_DATE,
           TO_DATE(EFF_END_DATE, 'mm/dd/yyyy hh24:mi:ss') EFF_END_DATE,
           CURRENT_FLAG,
           LEAD(TO_DATE(EFF_START_DATE, 'mm/dd/yyyy hh24:mi:ss'), 1)
               OVER (PARTITION BY RECORD_KEY
                     ORDER BY TO_DATE(EFF_START_DATE, 'mm/dd/yyyy hh24:mi:ss')) AS NEXT_START
      FROM T$_POST
)
SELECT RECORD_KEY,
       TO_CHAR(EFF_START_DATE, 'mm/dd/yyyy hh24:mi:ss') EFF_START_DATE,
       TO_CHAR(EFF_END_DATE, 'mm/dd/yyyy hh24:mi:ss') EFF_END_DATE,
       TO_CHAR(NEXT_START, 'mm/dd/yyyy hh24:mi:ss') NEXT_START
  FROM ALL_
 WHERE (NEXT_START - EFF_END_DATE) * 24 * 60 * 60 > 0

[Query result: record with a gap between EFF_END_DATE and NEXT_START]

The result shows which records have a "gap" between their effective end date and the next effective start date. In this case I had to write the SQL using a WITH clause, because we cannot use "window" functions directly in the WHERE clause. If we try to do it, we will get an ORA-30483 error:

[Screenshot: ORA-30483 error message]
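For illustration, here is a minimal sketch (reusing the same T$_POST table and columns from above) of the kind of statement that raises it:

SELECT RECORD_KEY
  FROM T$_POST
-- ORA-30483: window functions are not allowed here
 WHERE LEAD(TO_DATE(EFF_START_DATE, 'mm/dd/yyyy hh24:mi:ss'), 1)
           OVER (PARTITION BY RECORD_KEY
                 ORDER BY TO_DATE(EFF_START_DATE, 'mm/dd/yyyy hh24:mi:ss'))
       > TO_DATE(EFF_END_DATE, 'mm/dd/yyyy hh24:mi:ss')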

Thanks all! I hope you have liked it! See you soon!

ORACLE SQL for EPM tips and tricks S01EP05!

Posted in ACE, Connect By, DEVEPM, Dimensions, EPM, ETL, Oracle, Oracle 11.2.0, Oracle 11.2.0.4, Oracle Database, Query, REGEXP, SQL, Tips and Tricks on May 3, 2019 by RZGiampaoli

Continuing the Oracle SQL for EPM series, today we'll see another usage of Connect By. I'm talking a lot about Connect By because it is a very useful function and we should use it a lot in the analytic space.

Let's imagine that we are working with a Planning application and we want to figure out, in a query, the region of each cost center and the top level to which each cost center belongs. In this app we have the Entity dimension with Support Geography as an attribute.

The Support Geography attribute has the region at generation 2 of its hierarchy. How can we do that? Well, Connect By to the rescue.

First of all, we need to get the entire Support Geography hierarchy. To do that, we'll write a query like this:

This is a simple query that joins the HSP_ATTRIBUTE_MEMBER table (this table contains all the attributes from all attribute dimensions), which is why we have that sub-query to select just the Support Geography members (1).

Then we join this table with HSP_OBJECT, which contains the metadata for everything in Planning: forms, dimensions, flows, everything. We do that to get the names of the members. And finally we do a Connect By to rebuild the hierarchy (2).

With the hierarchy rebuilt, we can use CONNECT_BY_ROOT to figure out to which top level each member belongs (3).
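Since the query screenshots are not reproduced here, below is a minimal sketch of the idea. I'm assuming the usual HSP_OBJECT columns (OBJECT_ID, OBJECT_NAME, PARENT_ID), and instead of joining HSP_ATTRIBUTE_MEMBER as described above, this simplified version restricts the hierarchy by starting at the Support Geography root, so treat it as an outline rather than the original query:

SELECT O.OBJECT_ID                    AS ID,
       O.OBJECT_NAME                  AS MEMBER_NAME,
       CONNECT_BY_ROOT O.OBJECT_NAME  AS REGION     -- (3) generation-2 ancestor
  FROM HSP_OBJECT O
 WHERE CONNECT_BY_ISLEAF = 1                        -- keep only the leaf members
CONNECT BY PRIOR O.OBJECT_ID = O.PARENT_ID          -- (2) rebuild the hierarchy
START WITH O.PARENT_ID = (SELECT OBJECT_ID          -- start at generation 2 (the regions)
                            FROM HSP_OBJECT
                           WHERE OBJECT_NAME = 'Support Geography')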

With this query we know the leaf members of the Support Geography dimension (ID) and to which region each member belongs. Now we need to do the same for the Entity dimension.

In this query we are filtering on OBJECT_TYPE = 33, which means Entity (1) (if you want to know all the possible object types, you can query the HSP_OBJECT_TYPE table to check them out). Then we do a Connect By to rebuild the hierarchy, use CONNECT_BY_ROOT to get the parents, and use SYS_CONNECT_BY_PATH to get the path of the hierarchy.
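Again, a hedged sketch of what the Entity query could look like, with the same assumed HSP_OBJECT columns (the START WITH condition is just one way to pick up the top of the Entity hierarchy):

WITH ENT AS (
    SELECT OBJECT_ID, OBJECT_NAME, PARENT_ID
      FROM HSP_OBJECT
     WHERE OBJECT_TYPE = 33                                   -- (1) 33 = Entity
)
SELECT OBJECT_ID                              AS ENTITY_ID,
       OBJECT_NAME                            AS COST_CENTER,
       CONNECT_BY_ROOT OBJECT_NAME            AS TOP_LEVEL,   -- parent via CONNECT_BY_ROOT
       SYS_CONNECT_BY_PATH(OBJECT_NAME, '|')  AS HIER_PATH    -- full path of the hierarchy
  FROM ENT
CONNECT BY PRIOR OBJECT_ID = PARENT_ID                        -- rebuild the hierarchy
START WITH PARENT_ID NOT IN (SELECT OBJECT_ID FROM ENT)       -- roots: parent is not an Entity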

Now we just need to join everything in the same query to get all the cost centers, the top level each one belongs to, and the region each one is part of.

Another thing that I like a lot about the WITH clause is that it is very easy to create nested queries. In this case I created 2 different queries, each one with a WITH and a SELECT based on that WITH. Now all I need to do is put everything under the same WITH:

Now I just took the SELECTs that were under each WITH clause and created a nested WITH out of them, steps 1 and 2. Instead of 2 separate WITH queries, I have just 1 WITH with 4 clauses under it, 2 of them nested ones based on the previous clause (the Connect By over the filtered query).

All we need to do is put everything together by joining ENT_HIER and SG_HIER using the HSP_MEMBER_TO_ATTRIBUTE table. This table is basically the map between the Entity dimension and the attribute dimensions, in this case Support Geography.
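Putting it all together, a sketch of the final shape (one WITH, nested clauses, joined through HSP_MEMBER_TO_ATTRIBUTE; the MEMBER_ID/ATTR_MEM_ID mapping columns are my assumption, so verify them in your repository):

WITH ENT AS (
    SELECT OBJECT_ID, OBJECT_NAME, PARENT_ID
      FROM HSP_OBJECT
     WHERE OBJECT_TYPE = 33                                  -- Entity members only
),
ENT_HIER AS (                                                -- nested: Connect By over ENT
    SELECT OBJECT_ID                              AS ENTITY_ID,
           OBJECT_NAME                            AS COST_CENTER,
           CONNECT_BY_ROOT OBJECT_NAME            AS TOP_LEVEL,
           SYS_CONNECT_BY_PATH(OBJECT_NAME, '|')  AS HIER_PATH
      FROM ENT
     WHERE CONNECT_BY_ISLEAF = 1
    CONNECT BY PRIOR OBJECT_ID = PARENT_ID
    START WITH PARENT_ID NOT IN (SELECT OBJECT_ID FROM ENT)
),
SG_HIER AS (                                                 -- Support Geography regions
    SELECT O.OBJECT_ID                    AS ID,
           CONNECT_BY_ROOT O.OBJECT_NAME  AS REGION
      FROM HSP_OBJECT O
     WHERE CONNECT_BY_ISLEAF = 1
    CONNECT BY PRIOR O.OBJECT_ID = O.PARENT_ID
    START WITH O.PARENT_ID = (SELECT OBJECT_ID
                                FROM HSP_OBJECT
                               WHERE OBJECT_NAME = 'Support Geography')
)
SELECT E.COST_CENTER,
       E.HIER_PATH,
       S.REGION
  FROM ENT_HIER E
  JOIN HSP_MEMBER_TO_ATTRIBUTE MA ON MA.MEMBER_ID = E.ENTITY_ID  -- entity-to-attribute map
  JOIN SG_HIER S                  ON S.ID         = MA.ATTR_MEM_ID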

The result is the cost center, the path of that member in the hierarchy (from which we can use REGEXP to extract any level of the hierarchy) and, finally, the region that cost center belongs to.

You may wonder why we need all that. Well, this is a query I used to join with the HSP_GROUP table to get the groups and the members from each region and then create the security dynamically for each user. That means I have a Planning application with entities that hold cost centers from different regions, and I'm using the attribute dimension to generate the security.

If a user has access only to AMER data, he'll only see the AMER cost centers. This is just one example of what we can achieve using Connect By.

I hope you guys enjoyed it. Next time we'll talk about another very, very useful function that I really love.

See you soon guys.

DEVEPM in the land of the free!

Posted in Career, ODI on April 29, 2019 by radk00

Hi all,

Today's quick post is not a technical one; it is just to share some good news with you. I (Rodrigo) have recently moved from Brazil to the USA and I'm very excited about it! This means new challenges, new opportunities and probably some new content for the blog 😊

I have accepted an offer to work for Innive Inc, an IT consulting company that you may read more about on their site. I'll still work a lot with data integration, but I'll probably move away a little bit from EPM. However, it doesn't mean that I won't play around and do some tests when new EPM stuff comes out 😉

Thanks everyone! And don’t forget that we will be at Kscope19 this year, so if you want to talk to us there, please feel free to send us a message!

See ya!

ORACLE SQL for EPM tips and tricks S01EP04!

Posted in ACE, Connect By, Oracle, Oracle Database, Performance, REGEXP, SQL, Tips and Tricks on April 9, 2019 by RZGiampaoli

Continuing the Oracle SQL for EPM series, today we'll extend the "Connect By" post a little bit to see some neat things we can do when we mix regular expressions with Connect By.

First of all, let's explain the basics of this approach. We can use Connect By and LEVEL to generate data in Oracle, like this:
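The query screenshot is not shown here, but this is the classic Oracle row-generator pattern; for example:

-- generates 5 rows (1, 2, 3, 4, 5) out of thin air
SELECT LEVEL AS N
  FROM DUAL
CONNECT BY LEVEL <= 5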

This is very useful when you need to generate 2 rows for each row you have in a query: you cross join your source query with a Connect By Level <= 2 query (no join condition needed), duplicating everything in the source, and then use a DECODE or CASE to say "case when level = 1 then x, when level = 2 then y...". A minimal sketch of that pattern follows below.
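In this sketch, the table and column names (SRC_BALANCES, ACCOUNT, DEBIT_VALUE, CREDIT_VALUE) are made up for illustration:

-- each row of SRC_BALANCES comes out twice: once as a debit row, once as a credit row
SELECT S.ACCOUNT,
       G.LVL,
       CASE WHEN G.LVL = 1 THEN S.DEBIT_VALUE    -- level 1 -> x
            ELSE S.CREDIT_VALUE                  -- level 2 -> y
       END AS VALUE_COL
  FROM SRC_BALANCES S
 CROSS JOIN (SELECT LEVEL AS LVL FROM DUAL CONNECT BY LEVEL <= 2) G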

But for us this is just the basic concept that we need to understand for our trick. Let's get back to our metadata table, the one with a parent/child relationship and all the metadata from all apps and all dimensions that we have. Let's say we need to create a generation-based (flattened) table from that parent/child table. Some people do one union for each level in the source table (too much work), others put filters all over the place. I don't: I just use Connect By with regular expressions. Let's take a look.

As we can see, we have 17 levels or generations for the account dimension. Quite big. First of all, let's do a Connect By path to flatten the entire hierarchy into one row per member and remove the parent/child relationship.

Now, as we can see, if we just do a Connect By Prior with SYS_CONNECT_BY_PATH, the query will generate 1 row for each level of data. We don't want that, unless you want the ability to input data at the upper levels of the hierarchy, which is usually not the case. What we want is the entire path for each leaf member, so we need to filter just the leaves in our query. In my case my table already has a column that says whether a member has children or not, but I'll proceed as if I didn't have it.

If we don't have the leaf information, we can generate it by using CONNECT_BY_ISLEAF, which basically returns 0 if the row is a parent and 1 if it is a leaf. Again, we filter on this outside the Connect By Prior, and now we have just the leaves and the entire hierarchy above them. The next step is to split this path back into columns.
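A sketch of how that flattening query could look (METADATA_TABLE, MEMBER_ID, MEMBER_NAME and PARENT_ID are placeholders for whatever your parent/child table uses):

SELECT *
  FROM (SELECT MEMBER_ID,
               CONNECT_BY_ISLEAF                      AS IS_LEAF,
               SYS_CONNECT_BY_PATH(MEMBER_NAME, '|')  AS MEMBER_PATH
          FROM METADATA_TABLE
        CONNECT BY PRIOR MEMBER_ID = PARENT_ID
        START WITH PARENT_ID IS NULL)                 -- top of the hierarchy
 WHERE IS_LEAF = 1                                    -- filter the leaves outside the Connect By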

REGEXP_SUBSTR is the perfect tool to split this string into new columns. We can use the following expression: REGEXP_SUBSTR(Column, '[^|]+', 1, 1). Basically, the regexp uses what is inside the brackets to split the string the way you want: [^|]+ matches a run of characters up to the next | delimiter, and the (1, 1) parameters mean start at position 1 and return the first occurrence, i.e., the first word in the path. Then I have another regexp where I change the 1, 1 to 1, 2 to get the second word, and I keep doing this until I get to the 17th generation (as we saw in the beginning).

Basically, if you have 17 generations you need 17 columns in the table, so you need 17 REGEXP_SUBSTR calls, one for each column; you just change the last parameters from 1, 1 to 1, 2, 1, 3... up to 1, 17, as in the sketch below.
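A sketch with the first three generations (the pattern just repeats up to (1, 17); FLATTENED_HIER stands for the output of the previous sketch):

SELECT REGEXP_SUBSTR(MEMBER_PATH, '[^|]+', 1, 1) AS GEN1,   -- 1st word of the path
       REGEXP_SUBSTR(MEMBER_PATH, '[^|]+', 1, 2) AS GEN2,   -- 2nd word
       REGEXP_SUBSTR(MEMBER_PATH, '[^|]+', 1, 3) AS GEN3    -- 3rd word
       -- ... and so on, up to REGEXP_SUBSTR(MEMBER_PATH, '[^|]+', 1, 17) AS GEN17
  FROM FLATTENED_HIER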

Here's where ODI comes in handy: if you use ODI, you just need to do a select on the source to figure out how many generations you will have, then dynamically generate the 17 REGEXP_SUBSTR calls and pass them to the target. It's very easy, since the code itself is always the same and the only thing that changes is the occurrence parameter, which is sequential.

I hope this can be as useful for you as it is for me, and see you soon.