ORACLE SQL for EPM tips and tricks S01EP06!

Posted in Oracle, Tips and Tricks on May 6, 2019 by radk00

Hi all,

Today’s post is about two cool Oracle analytic functions that are powerful and awesome but rarely used: LEAD and LAG. The LEAD function lets you query more than one row of a table at a time without having to join the table to itself; it returns values from the next row. LAG does the same thing, but returns values from the previous row. It may sound strange when you just read the descriptions, so let’s look at some real examples.

Imagine that we have the following data:

[image: sample data with ACCOUNT, PERIOD and DATA columns]
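The column names can be inferred from the query below; here is a minimal sketch of what the sample table might look like (the values are made up for illustration):

CREATE TABLE T$_LEAD_LAG (ACCOUNT VARCHAR2(30), PERIOD NUMBER, DATA NUMBER);

INSERT INTO T$_LEAD_LAG VALUES ('ACCOUNT 1', 1, 100);
INSERT INTO T$_LEAD_LAG VALUES ('ACCOUNT 1', 2, 120);
INSERT INTO T$_LEAD_LAG VALUES ('ACCOUNT 1', 3, 150);
INSERT INTO T$_LEAD_LAG VALUES ('ACCOUNT 2', 1, 50);
INSERT INTO T$_LEAD_LAG VALUES ('ACCOUNT 2', 2, 60);
COMMIT;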

I need to write a query that shows, in a single row, the percentage by which the DATA column increased over the periods. For example, for PERIOD 2 I need one row with the previous and current period values and how much the value changed over the period. I see a lot of people simply querying the same table twice, joining it by its key columns (in this case ACCOUNT and PERIOD) and then doing the percentage calculation. However, we don’t need to go through all that trouble, since it is very easy to accomplish the same result using the LAG function, as shown below:

SELECT ACCOUNT,
       PERIOD,
       LAG(DATA, 1) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD) AS PREVIOUS_DATA,
       DATA AS PERIOD_DATA,
       ROUND(DATA / LAG(DATA, 1) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD) * 100, 2) AS PERCENTAGE
FROM   T$_LEAD_LAG

[image: query result with PREVIOUS_DATA, PERIOD_DATA and PERCENTAGE per period]

The LAG and LEAD syntax is basically the same:

LAG/LEAD ( expression [, offset [, default] ] ) OVER ( [ query_partition_clause ] order_by_clause )
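As a side note, the optional default argument is handy for avoiding NULLs on the first row of each partition. A quick sketch against the same table, returning 0 when there is no previous period:

SELECT ACCOUNT,
       PERIOD,
       LAG(DATA, 1, 0) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD) AS PREVIOUS_DATA,
       DATA AS PERIOD_DATA
FROM   T$_LEAD_LAG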

In our example, I’m querying the table only once and “LAGing” by one previous row, partitioned by ACCOUNT and ordered by PERIOD. So, for each distinct ACCOUNT value, Oracle orders the rows by PERIOD and we can access the previous row’s values as if they were normal columns. We may do this as many times as we want, for example if we want a two-month comparison:

SELECT ACCOUNT,
       PERIOD,
       DATA AS PERIOD_DATA,
       LAG(DATA, 1) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD) AS PREVIOUS_PERIOD_DATA,
       LAG(DATA, 2) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD) AS PREVIOUS_TWO_PERIODS_DATA,
       ROUND((DATA - LAG(DATA, 1) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD)) / LAG(DATA, 1) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD) * 100, 2) AS PERCENTAGE_PREVIOUS_PERIOD,
       ROUND((DATA - LAG(DATA, 2) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD)) / LAG(DATA, 2) OVER (PARTITION BY ACCOUNT ORDER BY PERIOD) * 100, 2) AS PERCENTAGE_PREVIOUS_TWO_PERIODS
FROM   T$_LEAD_LAG

[image: query result with the one- and two-period comparisons]

Another use for LEAD is checking data consistency between “linked” rows, which is common in tables with SCD (Slowly Changing Dimension) behavior. Let’s take the example below:

[image: SCD records with correctly linked effective dates]

In an SCD table, we have the effective start and end dates for each of the records that belong to the same key. These dates create a “link” between the records, where one record’s effective period starts where another’s ends. The picture above is an example where all records look good, with each effective date ending and starting correctly. Now see the example below:

[image: SCD records with a one-day gap between effective dates]

We can see that there is a one-day gap between 10/08 and 11/08, which may cause a lot of trouble if the application tries to find which record was effective between those two dates (it would return null).
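To make the problem concrete, here is a purely hypothetical illustration of such a table (these are not the values from the original example):

RECORD_KEY   EFF_START_DATE        EFF_END_DATE          CURRENT_FLAG
KEY_1        01/01/2019 00:00:00   10/08/2019 23:59:59   N
KEY_1        10/10/2019 00:00:00   12/31/2099 23:59:59   Y    <-- starts one day late, leaving 10/09 uncovered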

In order to search for those kinds of gaps between the records, we can write a simple and elegant LEAD statement that finds all records that have a “gap” between them. The statement looks like this:

WITH ALL_ AS (
    SELECT RECORD_KEY
         , TO_DATE(EFF_START_DATE,'mm/dd/yyyy hh24:mi:ss') EFF_START_DATE
         , TO_DATE(EFF_END_DATE,'mm/dd/yyyy hh24:mi:ss') EFF_END_DATE
         , CURRENT_FLAG
         , LEAD (TO_DATE(EFF_START_DATE,'mm/dd/yyyy hh24:mi:ss'),1) OVER (PARTITION BY RECORD_KEY ORDER BY TO_DATE(EFF_START_DATE,'mm/dd/yyyy hh24:mi:ss')) AS NEXT_START
    FROM   T$_POST
)
SELECT RECORD_KEY
     , TO_CHAR(EFF_START_DATE,'mm/dd/yyyy hh24:mi:ss') EFF_START_DATE
     , TO_CHAR(EFF_END_DATE,'mm/dd/yyyy hh24:mi:ss') EFF_END_DATE
     , TO_CHAR(NEXT_START,'mm/dd/yyyy hh24:mi:ss') NEXT_START
FROM   ALL_
WHERE  (NEXT_START - EFF_END_DATE)*24*60*60 > 0

[image: query result showing the record with the gap]

The result shows which records have a “gap” between their effective end date and the next record’s effective start date. In this case I had to write the SQL using a WITH clause, because we cannot use “window” functions directly in the WHERE clause. If we try to do it, we get an ORA-30483 error:

[image: ORA-30483 error message]
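For reference, this is roughly the shape of statement that raises the error (just a sketch; analytic functions are not allowed in the WHERE clause):

SELECT RECORD_KEY, EFF_START_DATE, EFF_END_DATE
FROM   T$_POST
WHERE  LEAD(TO_DATE(EFF_START_DATE,'mm/dd/yyyy hh24:mi:ss'),1)
         OVER (PARTITION BY RECORD_KEY ORDER BY TO_DATE(EFF_START_DATE,'mm/dd/yyyy hh24:mi:ss'))
       > TO_DATE(EFF_END_DATE,'mm/dd/yyyy hh24:mi:ss')   -- raises ORA-30483: window functions are not allowed here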

Thanks all! I hope you liked it! See you soon!


ORACLE SQL for EPM tips and tricks S01EP05!

Posted in ACE, Connect By, DEVEPM, Dimensions, EPM, ETL, Oracle, Oracle 11.2.0, Oracle 11.2.0.4, Oracle Database, Query, REGEXP, SQL, Tips and Tricks on May 3, 2019 by RZGiampaoli

Continuing the Oracle SQL for EPM series, today we’ll see another use of Connect By. I talk a lot about Connect By because it is a very useful feature and we should use it a lot in the analytic space.

Let’s imagine that we are working with a Planning application and we want to figure out, in a query, which region each cost center belongs to and which top-level member it rolls up to. In this app we have the Entity dimension with Support Geography as an attribute.

Support Geography has the region at generation 2 of its hierarchy. How can we do that? Well, Connect By to the rescue.

First of all, we need to get the entire Support Geography hierarchy, with a query along the lines described below.

This is a simple query that starts from the HSP_ATTRIBUTE_MEMBER table (this table contains all the attributes from all attribute dimensions), which is why we have that sub-query to select just the Support Geography members (1).

Then we join this table with HSP_OBJECT, which contains the metadata for everything in Planning (forms, dimensions, task flows, everything), to get the names of the members. And finally we do a Connect By to rebuild the hierarchy (2).

With the hierarchy rebuilt, we can use Connect By Root to figure out which top-level member each member belongs to (3).
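A hedged sketch of the kind of query described above (the repository column names OBJECT_ID, OBJECT_NAME, PARENT_ID, ATTR_ID and ATTR_MEM_ID are assumptions that may vary between Planning versions, and 'Support Geography' stands in for the actual attribute dimension name):

SELECT O.OBJECT_ID,
       O.OBJECT_NAME                 AS SG_MEMBER,
       CONNECT_BY_ROOT O.OBJECT_NAME AS REGION               -- (3) top level (generation 2) of each member
FROM   HSP_OBJECT O                                          -- (2) gives us the member names
WHERE  O.OBJECT_ID IN (SELECT AM.ATTR_MEM_ID                 -- (1) only the Support Geography members
                       FROM   HSP_ATTRIBUTE_MEMBER AM
                              JOIN HSP_OBJECT AD ON AD.OBJECT_ID = AM.ATTR_ID
                       WHERE  AD.OBJECT_NAME = 'Support Geography')
CONNECT BY PRIOR O.OBJECT_ID = O.PARENT_ID                   -- rebuild the hierarchy top-down
START WITH O.PARENT_ID = (SELECT OBJECT_ID FROM HSP_OBJECT
                          WHERE  OBJECT_NAME = 'Support Geography')   -- start at generation 2 (the regions)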

With this query we know the leaf members of the Support Geography dimension (by ID) and which region each member belongs to. Now we need to do the same for the Entity dimension.

In this second query we filter on OBJECT_TYPE = 33, which means Entity (1) (if you want to know the possible object types, you can query the HSP_OBJECT_TYPE table to check them out). Then we do a Connect By to rebuild the hierarchy, use Connect By Root to get the parents, and use Sys_Connect_By_Path to get the path of the hierarchy.
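A similar sketch for the Entity side (again, the column names and the 'Entity' root name are assumptions):

SELECT O.OBJECT_ID,
       O.OBJECT_NAME                           AS COST_CENTER,
       CONNECT_BY_ROOT O.OBJECT_NAME           AS TOP_ENTITY,    -- Connect By Root for the parent
       SYS_CONNECT_BY_PATH(O.OBJECT_NAME,'|')  AS ENTITY_PATH    -- Sys_Connect_By_Path for the full path
FROM   HSP_OBJECT O
WHERE  O.OBJECT_TYPE = 33                                        -- (1) 33 = Entity member (see HSP_OBJECT_TYPE)
CONNECT BY PRIOR O.OBJECT_ID = O.PARENT_ID
START WITH O.PARENT_ID = (SELECT OBJECT_ID FROM HSP_OBJECT
                          WHERE  OBJECT_NAME = 'Entity')         -- start at the top-level entities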

Now we just need to join everything in the same query to get all the cost centers, which top-level member each one belongs to, and which region they are part of as well.

Another thing that I like a lot about the WITH clause is that it is very easy to create nested queries. In this case I created two different queries, each one with a WITH and a SELECT based on that WITH. Now all I need to do is put everything under the same WITH:

We can see that I took the SELECT that was under each WITH clause and turned it into a nested WITH, creating steps 1 and 2. Instead of two WITH queries, I now have a single WITH containing four clauses, two of them nested on top of the previous one (the Connect By built over the filtered query).

All we need to do now is put everything together by joining ENT_HIER and SG_HIER using the HSP_MEMBER_TO_ATTRIBUTE table. This table is basically the map between the Entity dimension and the attribute dimensions, in this case Support Geography.
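Putting it together, a condensed sketch of the single WITH described above, joining the two hierarchies through HSP_MEMBER_TO_ATTRIBUTE (again, column names are assumptions):

WITH SG_FILTER AS (
        SELECT O.OBJECT_ID, O.OBJECT_NAME, O.PARENT_ID
        FROM   HSP_OBJECT O
        WHERE  O.OBJECT_ID IN (SELECT AM.ATTR_MEM_ID
                               FROM   HSP_ATTRIBUTE_MEMBER AM
                                      JOIN HSP_OBJECT AD ON AD.OBJECT_ID = AM.ATTR_ID
                               WHERE  AD.OBJECT_NAME = 'Support Geography')),
     SG_HIER AS (                                             -- nested: Connect By over the filtered query
        SELECT OBJECT_ID, CONNECT_BY_ROOT OBJECT_NAME AS REGION
        FROM   SG_FILTER
        CONNECT BY PRIOR OBJECT_ID = PARENT_ID
        START WITH PARENT_ID NOT IN (SELECT OBJECT_ID FROM SG_FILTER)),
     ENT_FILTER AS (
        SELECT OBJECT_ID, OBJECT_NAME, PARENT_ID
        FROM   HSP_OBJECT
        WHERE  OBJECT_TYPE = 33),
     ENT_HIER AS (                                            -- nested: Connect By over the filtered query
        SELECT OBJECT_ID,
               OBJECT_NAME                          AS COST_CENTER,
               SYS_CONNECT_BY_PATH(OBJECT_NAME,'|') AS ENTITY_PATH
        FROM   ENT_FILTER
        CONNECT BY PRIOR OBJECT_ID = PARENT_ID
        START WITH PARENT_ID NOT IN (SELECT OBJECT_ID FROM ENT_FILTER))
SELECT E.COST_CENTER,
       E.ENTITY_PATH,
       S.REGION
FROM   ENT_HIER E
       JOIN HSP_MEMBER_TO_ATTRIBUTE MA ON MA.MEMBER_ID = E.OBJECT_ID     -- map Entity member to its attribute member
       JOIN SG_HIER                 S  ON S.OBJECT_ID  = MA.ATTR_MEM_ID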

The result is the cost center, the path of that member in the hierarchy (from which we can use REGEXP to extract any level of the hierarchy) and, finally, the region that the cost center belongs to.

You may wonder why we need to do all that. Well, this is a query I used to join with the HSP_GROUP table to get the groups and the members from each region and then create the security dynamically for each user. That means I have a Planning application with entities that contain cost centers from different regions, and I’m using the attribute dimension to generate the security.

If a user has access only to AMER data, they’ll only see the AMER cost centers. This is just one example of what we can achieve using Connect By.

I hope you guys enjoyed it. Next time we’ll talk about another very, very useful function that I really love.

See you soon guys.

DEVEPM in the land of the free!

Posted in Career, ODI on April 29, 2019 by radk00

Hi all,

Today’s quick post is not a technical one; it is just to share some good news with you. I (Rodrigo) have recently moved from Brazil to the USA and I’m very excited about it! This means new challenges, new opportunities and probably some new content for the blog 😊

I have accepted an offer to work for Innive Inc, an IT consulting company that you can read more about on their site. I’ll still work a lot with data integration, but I’ll probably move away a little bit from EPM. However, that doesn’t mean I won’t play around and do some tests when new EPM stuff comes out 😉

Thanks everyone! And don’t forget that we will be at Kscope19 this year, so if you want to talk to us there, please feel free to send us a message!

See ya!

ORACLE SQL for EPM tips and tricks S01EP04!

Posted in ACE, Connect By, Oracle, Oracle Database, Performance, REGEXP, SQL, Tips and Tricks on April 9, 2019 by RZGiampaoli

Continuing the Oracle SQL for EPM series, today we’ll extend the “Connect By” post a little bit to see some neat things we can do when we mix regular expressions with Connect By.

First of all, let’s start by explaining the basics of this approach. We can use Connect By and Level to generate data in Oracle, like this:
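A minimal sketch of the row-generation trick (nothing here is specific to the original example):

SELECT LEVEL AS GEN
FROM   DUAL
CONNECT BY LEVEL <= 2      -- generates 2 rows (GEN = 1 and GEN = 2) out of a single-row table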

This is very useful when you need to generate two rows for each row you have in a query: you do a Connect By Level <= 2 and cross it with the source query (no join condition), duplicating everything in the source query. Then you can use a DECODE or CASE to say: case when level = 1 then x, when level = 2 then y, and so on.
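And a sketch of the duplication idea, reusing the T$_LEAD_LAG table from a previous post just as a stand-in source (the CASE logic here is only an example):

SELECT S.ACCOUNT,
       S.PERIOD,
       CASE WHEN L.LVL = 1 THEN S.DATA       -- first copy keeps the original value
            WHEN L.LVL = 2 THEN S.DATA * -1  -- second copy could hold, for example, a reversing entry
       END AS DATA
FROM   T$_LEAD_LAG S
       CROSS JOIN (SELECT LEVEL AS LVL FROM DUAL CONNECT BY LEVEL <= 2) L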

But for us this is just the basic concept that we need to understand for our trick. Let’s get back to our metadata table, the one with the parent-child relationship and all the metadata from all apps and all dimensions that we have. Let’s say we need to create a generation-style (hierarchical) table based on a parent-child table. Some people do one UNION for each level in the source table (too much work), others put filters all over the place. I don’t: I just use Connect By with regular expressions. Let’s take a look.

As we can see, we have 17 levels, or generations, for the Account dimension. Quite big. First of all, let’s do a Connect By with Sys_Connect_By_Path to flatten the entire hierarchy into one row and remove the parent-child relationship.

Now, as we can see, if we just do a Connect By Prior with Sys_Connect_By_Path, the query generates one row for each level of data. We don’t want that, unless you want the ability to input data at the upper levels of the hierarchy, which is usually not the case. What we want is the entire path for each leaf member, so we need to filter just the leaves in our query. In my case my table already has a column that says whether a member has children or not, but I’ll proceed as if I didn’t have it.

If we don’t have the leaf information, we can generate it using Connect_By_IsLeaf, which basically returns 0 if the member is a parent and 1 if it is a leaf. Again, we filter on this outside of the Connect By Prior, and now we have just the leaves with the entire hierarchy above them. The next step is to split this path back into columns.
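A sketch of this flatten-and-filter step against a hypothetical parent-child table T$_METADATA(PARENT_MEMBER, MEMBER):

SELECT MEMBER_PATH
FROM  (SELECT SYS_CONNECT_BY_PATH(MEMBER,'|') AS MEMBER_PATH,
              CONNECT_BY_ISLEAF               AS IS_LEAF      -- 1 = leaf member, 0 = parent
       FROM   T$_METADATA
       CONNECT BY PRIOR MEMBER = PARENT_MEMBER
       START WITH PARENT_MEMBER IS NULL)
WHERE  IS_LEAF = 1                                            -- keep only the leaves, each with its full ancestry in the path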

REGEXP_SUBSTR is the perfect tool to split the string we have into new columns. As you can see, we can use the following expression: REGEXP_SUBSTR(Column, '[^|]+', 1, 1). Basically, the pattern [^|]+ matches everything that is not the pipe delimiter, the first numeric parameter is the position to start from and the second is the occurrence. So 1, 1 returns the first element of the path, changing it to 1, 2 returns the second element, and we keep doing this until we get to generation 17 (as we saw in the beginning).

Basically, if you have 17 generations you need 17 columns in the table, and therefore 17 REGEXP_SUBSTR expressions, one for each column; you just change the occurrence from 1, 1 to 1, 2, 1, 3… and so on, up to 1, 17.
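Extending the same sketch, the REGEXP_SUBSTR calls that break the path back into generation columns would look like this (only the occurrence parameter changes from one column to the next):

SELECT REGEXP_SUBSTR(MEMBER_PATH,'[^|]+',1,1) AS GEN1,
       REGEXP_SUBSTR(MEMBER_PATH,'[^|]+',1,2) AS GEN2,
       REGEXP_SUBSTR(MEMBER_PATH,'[^|]+',1,3) AS GEN3
       -- ...and so on, one REGEXP_SUBSTR per generation, up to 1,17
FROM  (SELECT SYS_CONNECT_BY_PATH(MEMBER,'|') AS MEMBER_PATH,
              CONNECT_BY_ISLEAF               AS IS_LEAF
       FROM   T$_METADATA
       CONNECT BY PRIOR MEMBER = PARENT_MEMBER
       START WITH PARENT_MEMBER IS NULL)
WHERE  IS_LEAF = 1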

Here’s where ODI comes in handy: if you use ODI, you just need to run a SELECT against the source to figure out how many generations you have, generate the 17 REGEXP_SUBSTR expressions dynamically and pass them to the target dynamically. It’s very easy, since the code itself is always the same and the only thing that changes is the occurrence parameter, which is sequential.

I hope this is as useful for you as it is for me. See you soon.

Playing with ODI and Groovy – Part 4 – Exporting/Importing ODI Scenarios with SDK

Posted in GROOVY, Java, ODI SDK on April 9, 2019 by radk00

Hi all, I’m back with the continuation of this Groovy and ODI series. In the last post we saw how to find the different scenarios between two environments. Today we will look at how to export those different scenarios from our source repository and how to import them into our target repository. It is a two-step operation: first we export the different ODI objects from the source repository as XML files into a folder, and then we import those XML files into the target repository.
Our code is very similar to the one from post 3, but we will need to enhance it a little bit. The first thing we have to change is the function that creates the list of objects. In the previous post we were just adding the names of the scenarios to the list. Now we need to store the objects themselves in the list, since we will need the ODI object (the scenario) in order to export it.

// stores the ODI objects themselves (not just their names) in the list, sorted by name
def listObjects (odiInstance,odiClass,listOfObjects) {
	odiObjects = odiInstance.getTransactionalEntityManager().getFinder(odiClass).findAll().sort{it.name}
	for (Object odiSingleObject: odiObjects)
		listOfObjects.add(odiSingleObject)
}

Also, we will need to create a variable that will indicate the path where the objects will be temporarily exported.

exportPath = "C:\\Odi"

One important thing that we need to change is how we compare the objects. In the previous post we were simply comparing them as if they were strings, which was OK for our purpose there. However, now we cannot simply compare the Java objects, because they will be different even if they represent the same scenario name/version. They are considered “different” because they come from different environments and, logically, they represent different ODI entities.

diffScenarios = []
	// a source scenario is migrated only when no target scenario has the same name and version
	// (and, in this example, only when its name starts with TEST)
	for (Object odiSingleObject: sourceScenarios)
		if (targetScenarios.find {targetScen -> targetScen.getName() == odiSingleObject.getName() && targetScen.getVersion() == odiSingleObject.getVersion()} == null)
			if (odiSingleObject.getName().startsWith('TEST'))
				diffScenarios.add(odiSingleObject)

I’m basically doing three tests to see if the source scenario will be migrated or not: first I compare its name, then its version and finally whether its name starts with TEST (this last step is not needed if you want the complete scenario list). Next I just print the names and versions of the scenarios that will be exported/imported:

println("List of ODI Scenarios that will be migrated")
		for (Object singObject: diffScenarios)
			println(singObject.getName() + "_" + singObject.getVersion())

Now comes the new code:

encode = new EncodingOptions();		// default XML encoding options
transSource = sourceOdiInstance.getTransactionManager().getTransaction(new DefaultTransactionDefinition());
	exportService = new ExportServiceImpl(sourceOdiInstance);
	// export each scenario in the list as an XML file under exportPath
	for (Object singObject: diffScenarios)
		exportService.exportToXml(singObject, exportPath, true, false, encode)

Exporting objects with the ODI SDK is very straightforward: you inform the scenario you want to export (in our case, each object stored in the diffScenarios list), the path where the object will be exported and the encoding options to be used. In this case, I just went ahead with the default encoding options.

Importing objects is also easy but, similarly to a database, you need to explicitly commit your actions to make them effective in the target repository. Also, for the sake of simplicity, we will import all new scenarios under “root”, but we could explicitly say under which ODI object we want them imported:

tm = targetOdiInstance.getTransactionManager()
	transTarget = tm.getTransaction(new DefaultTransactionDefinition());
	importService = new ImportServiceImpl(targetOdiInstance);
	// import each exported XML file in SYNONYM INSERT/UPDATE mode
	for (Object singObject: diffScenarios)
	{
		println(exportPath+"\\SCEN_"+singObject.getName() + "_Version_" + singObject.getVersion()+".xml")
		importService.importObjectFromXml(ImportServiceImpl.IMPORT_MODE_SYNONYM_INSERT_UPDATE,exportPath+"\\SCEN_"+singObject.getName() + "_Version_" + singObject.getVersion()+".xml", true, null, true)
	}
	tm.commit(transTarget)	// like a database, the import must be committed to take effect in the target repository

Once you run the job, you will get the following:

[image: execution output listing the scenarios that were migrated]

Our target repository already had TEST2, which is why it’s not in the list. When the user connects to the target repository, he will see the following:

[image: the imported scenarios in the target repository]

That’s it for today, folks! Hope you liked it! See you soon! The code for part 4 can be found here.

ODI “Command on Source” buffered behavior

Posted in ODI, Tips and Tricks on March 28, 2019 by radk00

Hi all! This post was created based on a friend’s question to me a couple of days ago. He asked me the following:

  • I know that we can create a procedure with a SQL statement in “Command on Source” that returns N rows and triggers an OS command in the “Command on Target” tab for each of those rows, passing the results as parameters. My process takes a while for each row, so I was wondering: could I insert more rows into the table that is being read on “Command on Source” while it is still executing, so it would pick up the new rows as well in the same execution?

In other words, he wanted to trigger the ODI procedure once and keep “feeding” the “Command on Source” table many times, so all his OS commands would get executed in one procedure run. Instinctively I said no, because ODI needs to somehow “buffer” the “Command on Source” results (which may come from a SQL statement over several tables) and then start to run the “Command on Target” commands. He agreed with me and moved on. However, it stayed in my head: what would happen if the source table somehow changed while the ODI procedure was running? What if more rows were inserted, or if the table was truncated or deleted? I did some tests to make sure I had given him the right answer.

I created a procedure that contains only an ODI sleep command in the Target that will be executed for each row that comes from the source, like this:

[image: the Command on Target tab with the ODI sleep command]

Then in the Source I added the following:

[image: the Command on Source query]
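A hypothetical setup for reproducing the test (the actual table and column names in the screenshots may differ):

CREATE TABLE T$_SLEEP_TEST (ROW_ID NUMBER);

INSERT INTO T$_SLEEP_TEST
SELECT LEVEL FROM DUAL CONNECT BY LEVEL <= 20;   -- 20 source rows = 20 Command on Target executions
COMMIT;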

I populated this table with 20 records and I executed the proc. As expected, it took 20 seconds to complete:

[image: execution completing in 20 seconds]

Then I did the following test: I executed the procedure and right away inserted 20 more rows into the source table. As I thought, the procedure ended again in 20 seconds, not 40, which means that ODI really does buffer the results before executing the Command on Target:

[image: execution still completing in 20 seconds after inserting 20 more rows]

But 20 is a small number; maybe ODI was able to buffer all of it right away and that’s why it worked. I looked in Topology and the Array Fetch Size for this connection was set to 250:

[image: Topology data server showing Array Fetch Size = 250]

I did another test with 1,000 rows, decreasing the wait time from 1000 to 100 to see how it would go. Both executions (with 1,000 rows, and then adding 1,000 more rows in between) ended in 104 seconds (the four extra seconds are probably a delay due to network, ODI usage and so on):

[image: both 1,000-row executions completing in 104 seconds]

So I changed my approach and tried something different: what if I add many rows (100,000), change the delay to ‘1’ and, in the meantime, truncate the table? The result kind of surprised me:

[image: execution failing after the table was truncated]

TRUNCATE is a DDL command, and that’s why the error says that the object no longer exists, although it is still there in the database (it does not exist for that session, as it was modified by a DDL command in another session). This was not exactly what I wanted to test, and yet it did surprise me, because I was expecting that ODI would have already buffered all the results (the error happened after only 51 seconds) and would not be sensitive to DDL commands. However, this may indicate that ODI does not buffer everything at once, and when it tried to read the table again, it was already truncated. So, let’s do another test.

In the next test I doubled the row amount to 200,000, kept the delay at ‘1’ and ran the procedure. While it was running, I deleted the rows instead of truncating the table. The delete ran fine, taking 18 seconds (less than the 51 seconds of the truncate test), and no errors happened this time. So it seems that ODI buffered all the results (which were doubled) within those 18 seconds.

[image: the delete of 200,000 rows completing in 18 seconds]

After deleting and committing, the procedure continued to run and finished in around 200 seconds, as expected.

[image: the procedure finishing in around 200 seconds]

I did one more test to see if the time it took to fail in the truncate test would increase if I increased the number of rows. I inserted 200K rows again (double the amount of the previous truncate test), ran the process and truncated the table. It failed after the same 51 seconds. So I believe that, although ODI can buffer all the results in less than 51 seconds, Oracle somehow tells the process that the table was changed by a DDL command and sends a “stop” signal to the connection after a certain amount of time. I don’t see any other explanation for this behavior.

I could run some other tests, especially to see how large the ODI buffer actually is, but for now I’m OK with the results. To summarize, we figured out that:

  • ODI does buffer the results of the “Command on Source/Target” and we cannot modify them once the procedure starts;
  • Although the results are buffered, the ODI procedure may fail if DDL commands are issued against the source tables;
  • DML commands do not seem to affect the buffered results, as expected;

If any of you have done tests like this, please share with us! This kind of thing is never documented, so we need to keep testing to see how it works behind the scenes.

Thanks! See you soon!

ORACLE SQL for EPM tips and tricks S01EP03!

Posted in Query, SQL, Tips and Tricks on March 26, 2019 by radk00

Hi all! Continuing the Oracle SQL for EPM series, today’s post is quite simple, but it may consume an extreme amount of time when we are asked to troubleshoot “why these numbers do not match” types of scenarios. It’s related to UNION and UNION ALL operations. Let me describe what happened to me in one of those situations.

The client had a table with several columns that calculated some metrics related to their business. It was a “cumulative” type of table, where metrics were aggregated on top of each previous period’s numbers. To keep it short, let’s use the following example:

[image: sample cumulative data by account and period]

So, for Feb-19, the SUM would be 150 for Account 1 and 60 for Account 2. Next month, he would get the following:

[image: the data including the following month]

His logic was summing the March period for Account 1 correctly (30) and adding it to the previous 150. However, since Account 2 had no rows in March, his SQL was not reporting Account 2 in March at all. To make the calculations easier, he decided to add a “dummy” metric of 0 for all existing accounts, so his logic would calculate correctly even if the record did not exist for that period. Something like this:

[image: the data with the “dummy” zero metrics added]

The process would still give the correct value of 30 for Account 1 in March and 0 for Account 2, which would then be summed against the previous periods. It all worked fine, until one day someone complained that the numbers could not be right and that some numbers were missing. When I checked the code, I quickly realized his mistake: he combined his periodic metrics and his “dummy” metrics using a UNION in Oracle. But why was it giving the wrong numbers? Oracle explains:

  • UNION combines the results of two queries, which eliminates duplicate selected rows. The UNION operator returns only distinct rows that appear in either result.

Let’s picture the problem. His logic worked fine for Feb and Mar, but in Apr, something like this happened:

[image: the April data before the UNION]

If you sum the Apr period for Account 1, the number should be 80, but he was getting only 60, as below:

[image: the UNION result showing only 60 for Account 1]

This is due to UNION’s behavior: it runs an implicit DISTINCT on the combined dataset, which in this case eliminates good data. I went ahead and changed the UNION to UNION ALL, about which Oracle states:

  • The UNION operator returns only distinct rows that appear in either result, while the UNION ALL operator returns all rows. The UNION ALL operator does not eliminate duplicate selected rows.

The result is the following:

[image: the UNION ALL result with 80 for Account 1 and 0 for Account 2]

Now it looks correct: 80 for Account 1 and 0 for Account 2.
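If you want to reproduce the pitfall quickly, here is a minimal sketch with made-up values; the two identical rows collapse into one under UNION, but both survive under UNION ALL:

SELECT 'ACCOUNT 1' AS ACCOUNT, 'Apr-19' AS PERIOD, 30 AS METRIC FROM DUAL
UNION
SELECT 'ACCOUNT 1', 'Apr-19', 30 FROM DUAL;        -- 1 row back: the duplicate is silently dropped

SELECT 'ACCOUNT 1' AS ACCOUNT, 'Apr-19' AS PERIOD, 30 AS METRIC FROM DUAL
UNION ALL
SELECT 'ACCOUNT 1', 'Apr-19', 30 FROM DUAL;        -- 2 rows back: summing them gives the expected total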

That’s it, folks! Simple things may give us enormous headaches and wrong numbers, so please always double-check when you see a UNION in a query! It may be implicitly omitting some good data.

See ya!