Archive for the Performance Category

Oracle SQL for EPM Tips and Tricks S01EP14

Posted in ODI, Oracle, Oracle Database, Performance, SQL, Tips and Tricks on October 15, 2020 by RZGiampaoli

Hey guys, how are you? Continuing our SQL series (S01EP13), today I'll share a very handy little query that I use very often to check for data duplication. In fact, this would be an upgraded version of ODI's PK check.

An upgraded version because in ODI, if you enable the PK check and it finds duplication, it eliminates both rows. With the code I'll show you, you can choose whether to keep the most recently created duplicate or the oldest one, and only the other one will be eliminated.

I have a test table with these values:

If I want to check for duplicate PK, I can just run this query here:

The idea here is that we have two queries. The outer one checks whether each row's ROWID is bigger or smaller (your choice) than the MIN or MAX ROWID (depending on that choice) returned by the subquery, joined on whatever columns you want to check.
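A minimal sketch of that pattern, assuming a hypothetical TEST_TABLE with a PK_COLUMN (keeping the oldest row by comparing against the MIN ROWID):

-- Returns every row whose ROWID is greater than the smallest ROWID
-- found for the same PK value, i.e. all the "extra" copies.
SELECT a.*
  FROM test_table a
 WHERE a.ROWID > (SELECT MIN(b.ROWID)
                    FROM test_table b
                   WHERE b.pk_column = a.pk_column);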

In this case, we only wanted to check whether the PK column had duplicated values, but we could check any other column by simply replacing it in the join. In fact, we could have any number of columns in the join, and that would check for duplications across all the columns you put there.

Then you can select the first one by using > with MIN, or the last one by using < with MAX, and you can also select which columns you want to check in the join.

One important thing to mention is that this query is meant to work as a delete, because it will keep whatever was not returned by the select. What I mean is, if you have more than one duplicate, it will bring back, in this case, all the rows whose ROWID is > than the one selected in the subquery:

Then if I have multiple duplicates, the query will return everything that needs to be deleted, and the only row remaining will be the first one inserted (3, Chuck, Giampaoli).
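Turning that into an actual delete is just a matter of swapping the SELECT for a DELETE; a hedged sketch with the same hypothetical names:

-- Removes every duplicate except the first row inserted for each PK value.
DELETE FROM test_table a
 WHERE a.ROWID > (SELECT MIN(b.ROWID)
                    FROM test_table b
                   WHERE b.pk_column = a.pk_column);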

I hope you enjoy this little trick and see you soon.

Oracle SQL for EPM Tips and Tricks S01EP10

Posted in ACE, Hacking, Oracle, Oracle Database, Performance, SQL, Tips and Tricks on February 26, 2020 by RZGiampaoli

Hey guys how are you?

Today a quick tip that I think is very useful. From time to time the business asks us to validate whether a table has data or not before we load it. It's fair, especially if you use a truncate-and-insert approach.

The problem is that sometimes the table/view they are asking about has millions of rows, and there's no safer way to validate whether a table has data than querying it.

I just fixed a case where an interface had a validation that basically counted 3 different tables that together had 40 million rows per period. This validation was taking around 1,000 seconds to happen.

The data load that happens before that took 1,200 seconds. So, basically, the validation process was taking as much time as the load process.

After some changes, the query now validates the 3 tables in 0.3 seconds. Way better than before. Basically I just used 3 things:

The hint /*+ FIRST_ROWS(1) */, which makes Oracle prepare the best plan to query just one row (in my case, since I used 1 as the parameter).

The filter ROWNUM = 1 to make sure Oracle returns just 1 row. If we don't use that, the hint can make everything very slow, because Oracle will plan for just one row but, without the filter, it will bring back more (while using the best plan possible for 1 row).

And UNION ALL instead of UNION, because there's a huge difference between them. When you use UNION, Oracle compares the sets of data to make sure you have unique rows afterwards. UNION ALL, on the other hand, just brings back everything each set returns, without any extra processing to validate anything. UNION ALL is always faster than UNION.

In the end I have a query like this:
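A hedged sketch of what that query can look like (the table names and the PERIOD_ID filter are hypothetical; each branch returns at most one row, almost instantly, if the table has data for that period):

SELECT /*+ FIRST_ROWS(1) */ 'TABLE_A' AS table_name
  FROM table_a
 WHERE period_id = 202001
   AND ROWNUM = 1
UNION ALL
SELECT /*+ FIRST_ROWS(1) */ 'TABLE_B'
  FROM table_b
 WHERE period_id = 202001
   AND ROWNUM = 1
UNION ALL
SELECT /*+ FIRST_ROWS(1) */ 'TABLE_C'
  FROM table_c
 WHERE period_id = 202001
   AND ROWNUM = 1;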

As you can see, the query is very simple, and for this example I just selected the name of the table, so if it comes back we know the table is not empty for that period. We can take other approaches, like counting the rows returned and validating that the result is 3, for example, or any other logic we need can be implemented on top of this query.

I hope this is helpful for you guys and see you in the next post.

Oracle SQL for EPM Tips and Tricks S01EP09

Posted in ACE, Hyperion Planning, Oracle, Oracle Database, Performance, Tips and Tricks on February 19, 2020 by RZGiampaoli

Hey Guys how are you?

Today's post is a quick tip for you guys who like/need to query the Planning repository.

The Planning repository stores both the Plan Type and the Consolidation operator in a very particular way. In fact, this is true for a lot of other things like security, form properties, etc., but I'll focus on these two, which are the most often used, and the solution is the same for all of them anyway.

If we take a look at the HSP_PLAN_TYPE table, we'll have something similar to this (depending on how many plan types you have in your app).

As we can see, Planning stores in this table all the plan types that were created when we set up the application. In my case I have 4 plan types, and we can have up to 5 BSO plan types in a Planning app. Now, if we join HSP_OBJECT and HSP_MEMBER, filtering OBJECT_TYPE = 2, we can take a look at all the dimensions we set up in the repository.

USED_IN is the column that tells Planning in which plan types each member exists. The interesting thing here is that you don't see the PLAN_TYPE ID that you would expect, right? That is because a member can exist in more than one plan type, and if we used the PLAN_TYPE ID directly, we would need one row for each plan type.

Instead, we have just one row, but we still have the ability to tell Planning where that member should exist, and we do that by summing the PLAN_TYPE IDs together. In the example above, the Account dimension exists in all 4 plan types (1+2+4+8 = 15). The Products dimension exists in only one plan type (1), and by the number you can tell it is the Pnl plan type.

You see, the idea here is to check whether a PLAN_TYPE ID exists inside the number we have in the USED_IN column. Another example is the Employee dimension, which has USED_IN set to 8. The only combination that fits here is 8 itself (1+2+4 = 7, 1+8 = 9…), meaning the plan type is WrkForce.

I think the most common way to figure out whether a number is contained inside another number is to use MOD.

  • CASE WHEN MOD(USED_IN,2)>=1 THEN 1 ELSE 0 END PT_1
  • CASE WHEN MOD(USED_IN,4)>=2 THEN 1 ELSE 0 END PT_2
  • CASE WHEN MOD(USED_IN,8)>=4 THEN 1 ELSE 0 END PT_3
  • CASE WHEN MOD(USED_IN,16)>=8 THEN 1 ELSE 0 END PT_4
  • CASE WHEN MOD(USED_IN,32)>=16 THEN 1 ELSE 0 END PT_5

The Oracle MOD(N,M) is used to return the remainder of a dividend divided by a divisor where:

N: the dividend.
M: the divisor.

Then, in our case, we need to test whether the USED_IN number contains the PLAN_TYPE ID, and for that we MOD it by the rolling sum of the plan type IDs up to that point, plus 1. To make it easier, I'll put that in numbers:

  • N = USED_IN = 31 (max possible number)
  • M = PLAN_TYPE ID 1 (Pnl, the one I want to test) + 1 = 2
  • MOD(31, 2) = 1
  • 31/2 = 15, remainder = 1
  • MOD = 1

What that is telling us is that if the MOD is = 1, Plan Type 1 exists in that number. I ran a simulation just to show when Plan Type 1 does not exist in the USED_IN:

As we can see, Plan Type 1 only exists in the odd results (as expected), which means it doesn't exist in any combination of the other 4 plan types (2, 4, 2+4=6, 8, 8+2=10, 8+4=12, 8+2+4=14, 16, 16+2=18, 16+4=20, 16+2+4=22, 16+8=24, 16+8+2=26, 16+8+4=28, 16+8+2+4=30).

The same is true for the other plan types; you can try them out using MOD. Now, this works well, but there's a much easier and cleaner way to do exactly the same thing using the BITAND function.

The BITAND function treats its inputs and its output as vectors of bits; the output is the bitwise AND of the inputs. Basically it performs the steps below.

  • Converts the inputs into binary.
  • Performs a standard bitwise AND operation on these two strings.
  • Converts the binary result back into decimal and returns the value.

Ok, it sounds more complicated now, but the good news is that it's simpler to use than it sounds. The main difference from the MOD approach is that the CASE around MOD returns 0 or 1, while BITAND returns the value you asked for when it is present. Expanding my previous test using BITAND:

As you can see, since BITAND returns the number you asked for instead of 0 or 1, it makes it possible for us to join HSP_PLAN_TYPE with HSP_MEMBER using USED_IN and PLAN_TYPE inside the BITAND function as the join condition:
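A hedged sketch of that join (the exact repository column names can vary between Planning versions, so treat this as an outline rather than a copy-paste query):

-- One row per member per plan type it belongs to.
SELECT o.OBJECT_NAME AS member_name,
       pt.TYPE_NAME  AS plan_type
  FROM HSP_MEMBER    m
  JOIN HSP_OBJECT    o  ON o.OBJECT_ID = m.MEMBER_ID
  JOIN HSP_PLAN_TYPE pt ON BITAND(m.USED_IN, pt.PLAN_TYPE) = pt.PLAN_TYPE;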

As you can see, this is a far better way to split the members by plan type. Now we can see that the Products dimension exists only in the Pnl plan type and that Entity exists in 4 different plan types. We don't need to worry about any mathematical formula to create all our MODs; we just need to join our plan type table using BITAND of USED_IN by PLAN_TYPE.

The Consolidation operator is another place where you can use exactly the same thing. Instead of using something like this:

  • DECODE(MOD(CONSOL_OP,8),'0','+','1','-','2','*','3','/','4','%','5','~','6','Never',CONSOL_OP) CON1
  • DECODE(MOD(TRUNC(CONSOL_OP/8),8),'0','+','1','-','2','*','3','/','4','%','5','~','6','Never',CONSOL_OP) CON2
  • DECODE(MOD(TRUNC(CONSOL_OP/64),8),'0','+','1','-','2','*','3','/','4','%','5','~','6','Never',CONSOL_OP) CON3
  • DECODE(MOD(TRUNC(CONSOL_OP/512),8),'0','+','1','-','2','*','3','/','4','%','5','~','6','Never',CONSOL_OP) CON4
  • DECODE(MOD(TRUNC(CONSOL_OP/4096),8),'0','+','1','-','2','*','3','/','4','%','5','~','6','Never',CONSOL_OP) CON5

You can create a parameter table with the values for each kind of consolidation and use BITAND with that info:

I hope this will be useful for you guys, and see you soon.

Fragmented and Aggregated tables in OBIEE using ODI Part 2/5: Managing the partitions of Fragmented/Aggregated tables

Posted in OBIEE, ODI, ODI Architecture, Oracle, Oracle Database, Performance, SQL, Tips and Tricks on February 10, 2020 by RZGiampaoli

Hey guys, how are you?

Continuing the series Fragmented and Aggregated tables in OBIEE, today we are talking about how to manage the partitions of the Fragmented/Aggregated tables using ODI.

Just to make it easier for you to navigate in this series, here are its parts:

Creating the tables: I’ll explain the advantages and how you can create the fragmented tables and also how to further improve performance using partitioning and sub partitioning.

Managing the partitions of the Fragmented/Aggregated tables: Here you’ll see how to manage all partitions using ODI.

Populating the Fragmented tables using ODI: Here you’ll find how to create generic components to load all fragmented tables we need to.

Populating the Aggregated tables using ODI: Here you’ll find how to create generic components to load all Aggregated tables we need to.

Setting the OBIEE Repository: Finally, we'll set up the OBIEE repository to make use of all these tables.

In my previous post we designed 18 tables and their partitions and sub-partitions. Now what we need to do is make ODI manage these partitions for us. As I said before, the sub-partitions will be static, and if we need to create one more sub-partition, we'll do that manually. That's always the best choice, because managing sub-partitions adds a layer of complexity that is not needed.

With that said, if you do need to manage the sub-partitions, you can use exactly the same approach I'll show here, just adapting it to the needs of the sub-partition creation and the sub-partition template update.

Let’s start analyzing what needs to be done. For each table we need to issue this command to the database:

For List partitions:

               ALTER TABLE table_name ADD PARTITION partition_name VALUES (value[, value]…);

For Range Partitions:

               ALTER TABLE table_name ADD PARTITION partition_name VALUES LESS THAN (value[, value]…);

It sounds hard, but in fact it is very simple, and it will work with any SQL you have. To do this we will use an ODI procedure with our ALTER TABLE statement in the Command on Target tab and a SELECT in the Command on Source tab to pass the information we need to create the partitions, such as the table names, the partition names and the partition values.

For each row that the SELECT on source returns, the statement in the Command on target will be executed, and we can pass the values from the SELECT in the Command on Source to change the behavior of the statement in the Command on target.

This is something very important that we need to understand about ODI. ODI is not an ETL tool but a code-generator tool that orchestrates the database and a lot of other tools, if you will.

What I mean by that is, until ODI sends the code to the database to execute, everything ODI does is generate code. If we use the ODI API, ODI will resolve it before sending to the database, just as it replaces variables with their values, meaning a variable like #SQL in the Command on Target tab could contain an entire dynamically generated INSERT that ODI would substitute before sending it to the database.

And this is why ODI is so powerful: because we can manipulate strings and make everything dynamic.

Then our query in the source should return something like this:

The query will change depending on the design of your application, so I don't think sharing my exact query would help. One tip I can give you is to always compare the name of the partition you want to create with the content of the PARTITION_NAME column in the ALL_TAB_PARTITIONS view, to see if it already exists and prevent an error in ODI. In fact you have 2 choices: either do this, or flag the step as Ignore Error so it ends with a warning. I prefer to compare and end the step without warnings.
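A hedged sketch of what that Command on Source query can look like (the configuration tables and column names are entirely hypothetical; the point is to return one row per missing partition, skipping the ones already present in ALL_TAB_PARTITIONS):

-- One row per table/period combination that still needs a partition.
SELECT t.table_name   AS TABLE_NM,
       p.partition_nm AS PARTITION_NM,
       p.max_period   AS MAX_PERIOD
  FROM partition_config p
 CROSS JOIN table_list   t
 WHERE NOT EXISTS (SELECT 1
                     FROM ALL_TAB_PARTITIONS atp
                    WHERE atp.TABLE_NAME     = t.table_name
                      AND atp.PARTITION_NAME = p.partition_nm);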

Another thing to notice is that for FY20 Feb, my PARTITION_VALUE (MAX_PERIOD) is 1 day after the last day of the month. It's hard to see here because the calendar used by the client is all different, but if you have a normal calendar and the mask you use for your PERIOD_ID is YYYYMMDD, for 2020/Feb your partition value needs to be 20200230 (Feb ends 02/29/2020).

That's because this is a RANGE partition, and for RANGE partitions we define the partition value as VALUES LESS THAN, which means that everything under 20200230 will be FEB unless we have a smaller partition (JAN) below it. But, for example, if your smallest partition is 2018/Jan, everything below that period will be inserted into the 2018/Jan partition, so you need to be careful about when you start creating partitions.

Another important thing to mention is that you cannot insert a partition smaller than an existing partition, only bigger ones. That means that if you start with 2018/Jan, it's impossible to create a 2017 partition; the only way is to drop the table and recreate it starting from 2017. A RANGE partition needs to be sequential and always growing.

Having these values returning from the SELECT on the Command on Source tab, the only thing left is to use these variables in the statement in the Command on Target tab, like this:

ALTER TABLE #TABLE_NM ADD PARTITION #PARTITION_NM VALUES LESS THAN (#MAX_PERIOD);

Pay attention to the values of the partitions. In this case the values are numeric, which is why I'm not using quotes. If the values were strings, we would need to enclose the variable in quotes.

And that's it, this is all we need to do to manage partitions using ODI. Because the SELECT on the source returns all tables with their partition names and values, and we pass that information to the target statement, the target is looped for each row in the source, creating the partitions you need for each table.

We are done for the fragmented tables. Now we need to manage the partitions for the aggregated tables. The approach is exactly the same; the only things that change are the content of the query in the Command on Source tab and the fact that the partition will now be a LIST partition instead of a RANGE partition.

We could have done just one query returning all values, including whether it was a RANGE or a LIST partition, but for the sake of simplicity it's always better to split the scenarios and not over-complicate something simple.

For the month level we need the query in the Source to return:

And the statement in the Command on Target should be like this:

ALTER TABLE #TABLE_NM ADD PARTITION #PARTITION_NM VALUES (#MAX_PERIOD);

That’s all that needs to be done. For the quarter level the results are similar:

And the statement in the Command on Target should be like this:

ALTER TABLE #TABLE_NM ADD PARTITION #PARTITION_NM VALUES (#MAX_PERIOD);

As you can see, it's very simple to manipulate and manage the database with ODI. We just need to create a query with the information we want in the source and replace the placeholders in the target statement with that information; that's all we need to do.

Another important thing that we need in our procedure, in case you want to go with the truncate/insert approach, is to truncate the partitions before the load. This is also managed by ODI and works in exactly the same way as the partition creation, only a bit simpler since we don't need the partition values:

And the statement in the Command on Target should be like this:

ALTER TABLE #TABLE_NM TRUNCATE PARTITION #PARTITION_NM;

For month level:

And the statement in the Command on Target should be like this:

ALTER TABLE #TABLE_NM TRUNCATE PARTITION #PARTITION_NM;

And for the Quarter Level:

And the statement in the Command on Target should be like this:

ALTER TABLE #TABLE_NM TRUNCATE PARTITION #PARTITION_NM;

Also, this could be done with just one query, but again, simplicity always works better than complexity.

This is all we need to start loading data. One last thing we could do, if it is your case, is to create a purge process to drop old partitions. If your business requires only 5 years of data, it's a good idea to drop older partitions. The approach is exactly the same.

In fact, the results of the Command on Source are exactly the same as the ones we need for the TRUNCATE PARTITION steps; the only difference is that you need to have a logic in place to find the old partitions. Other than that, the only thing that changes is the statement on the target, which should be:

ALTER TABLE #TABLE_NM DROP PARTITION #PARTITION_NM;

And that's it, we just created a procedure to manage the partitions for all our 18 tables, including creating, truncating and dropping old partitions.

In the next part of this series we'll talk about populating the Fragmented tables.

I hope you find this helpful and see you soon.

Fragmented and Aggregated tables in OBIEE using ODI Part 1/5: Creating the tables

Posted in OBIEE, ODI, ODI 12c, Oracle, Oracle 11.2.0, Oracle Database, Performance, SQL, Tips and Tricks on February 7, 2020 by RZGiampaoli

Hey guys, how are you?

I normally don't talk about OBIEE, despite the fact that I work a lot with it, because there are a lot of people talking about it. But recently I worked on a project where I decided to use fragmented tables and aggregated tables at the same time, and I found that we don't have much specific information regarding this matter and, of course, zero information about using ODI to enable this design.

Then, since this will be kind of big, I decided to split it into 5 parts:

Creating the tables: I’ll explain the advantages and how you can create the fragmented tables and also how to further improve performance using partitioning and sub partitioning.

Managing the partitions of the Fragmented/Aggregated tables: Here you’ll see how to manage all partitions using ODI.

Populating the Fragmented tables using ODI: Here you’ll find how to create generic components to load all fragmented tables we need to.

Populating the Aggregated tables using ODI: Here you’ll find how to create generic components to load all Aggregated tables we need to.

Setting the OBIEE Repository: Finally, we'll set up the OBIEE repository to make use of all these tables.

To start the series Fragmented and Aggregated tables in OBIEE, today we are talking about how to create the tables for our project.

In this project I had data from 4 different source systems, and each source system had a different amount of data. Dell data had 20 million rows per month, EMC data 30 million, DTC data 5 million and STAT data another 5 million.

The business wanted 15 quarters of data stored, so if we do a simple calculation we get: (20,000,000 + 30,000,000 + 5,000,000 + 5,000,000) * 45 months = 2,700,000,000 rows.

That's a lot of data, even for a partitioned table. So I decided to split these tables in some way.

Analyzing the data, I saw that these tables contain 3 different account hierarchies and the users only analyze one at a time: BS, Income and PL2. I decided to split the tables by that. Just with that I now had 3 fragmented tables: JAT_JE_DETAIL_BS, JAT_JE_DETAIL_INCOME and JAT_JE_DETAIL_PL2.

That's nice: just with that I split those 2.7 billion rows in 3. The problem now is that I have 4 different source systems, and even if I partitioned the tables by source system, EMC is as big as the other 3 sources put together, and that would make that source slower to retrieve than the others.

So I decided to split the tables by 2 again, one set for EMC and another for the other sources. That makes the data distribution even: 20,000,000 + 5,000,000 + 5,000,000 in the D tables and 30,000,000 in the E tables (D and E is how I called them).

With that we now have 6 fragmented tables, right? 3 for EMC data (one for each account hierarchy) and another 3 tables for Dell, DTC and STAT data.

Now that we have a nice and even data distribution, we need to decide how we're going to create the partitions and sub-partitions for these tables.

For the partitions I always like to use PERIOD_ID, because it makes sense in most cases (unless we are creating a Forecast app; in that case the users will want to see data by scenario and not by period).

So for the tables JAT_JE_DETAIL_D_BS_FACT, JAT_JE_DETAIL_D_INCOME_FACT and JAT_JE_DETAIL_D_PL2_FACT we are going to partition by period. But what kind of partition are we going to use? Well, in this case these tables contain Journal information at day level, and the users want to drill from the month down to the Journal rows of that month. That means we'll always be working inside a month, so the best way to go is a RANGE partition.

Range partitions, as the name already says, use a range of values to create the partition area. In our case it will be one month. The range can be built on a DATE column or a NUMBER column, and the second one is our case, since it is easier to manipulate the periods using a number than a date.

(The period for this project is not a regular month; for example, the month of February FY20 starts on Jan/30 and goes until Feb/2).

For the sub-partitions it's an easy call: we're going to use the source system, since we have 3 different ones in these tables (Dell, DTC and STAT), and the partition type will be LIST since we have a simple list of values. Also, for sub-partitions it is always easier to have values that don't grow over time, because of the complexity of adding new sub-partition values to a table.

What I mean is that to add a new partition you just need one ALTER TABLE ADD PARTITION statement, and you can add any number of partitions you need. But for a sub-partition it's different. When you create the sub-partitions, Oracle creates a sub-partition template, and that template is used every time you create a new partition.

In our case, for example, our sub-partition template would be DELL, DTC and STAT. That means that when you create another partition, it will be created with these 3 sub-partitions automatically. So if you want to add another value to the sub-partitions, you need to do 2 things instead of just one (as in the partition case).

You need to change the sub-partition template definition to add the new sub-partition, and you need to add the new sub-partition to every single already existing partition (it is not updated automatically). After that, all new partitions will contain all your sub-partitions.

If you don't update the template, every time you create a partition you won't have the new value you want.
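To make this concrete, here is a hedged sketch of the DDL involved (the column list and partition values are illustrative, not the real project tables):

-- RANGE partition on PERIOD_ID with a LIST sub-partition template on SOURCE_SYSTEM_ID.
CREATE TABLE JAT_JE_DETAIL_D_BS_FACT (
    PERIOD_ID        NUMBER(8)    NOT NULL,
    SOURCE_SYSTEM_ID VARCHAR2(10) NOT NULL,
    ACCOUNT_ID       NUMBER,
    AMOUNT           NUMBER
)
PARTITION BY RANGE (PERIOD_ID)
SUBPARTITION BY LIST (SOURCE_SYSTEM_ID)
SUBPARTITION TEMPLATE (
    SUBPARTITION SP_DELL VALUES ('DELL'),
    SUBPARTITION SP_DTC  VALUES ('DTC'),
    SUBPARTITION SP_STAT VALUES ('STAT')
)
(
    PARTITION P_FY20_JAN VALUES LESS THAN (20200130)
);

-- Adding a new source later takes two steps: update the template (affects only
-- future partitions) and add the sub-partition to each existing partition.
ALTER TABLE JAT_JE_DETAIL_D_BS_FACT
  SET SUBPARTITION TEMPLATE (
      SUBPARTITION SP_DELL VALUES ('DELL'),
      SUBPARTITION SP_DTC  VALUES ('DTC'),
      SUBPARTITION SP_STAT VALUES ('STAT'),
      SUBPARTITION SP_NEW  VALUES ('NEW_SRC'));

ALTER TABLE JAT_JE_DETAIL_D_BS_FACT
  MODIFY PARTITION P_FY20_JAN
  ADD SUBPARTITION P_FY20_JAN_SP_NEW VALUES ('NEW_SRC');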

Enough theory, let's proceed with the other set of tables: JAT_JE_DETAIL_E_BS_FACT, JAT_JE_DETAIL_E_INCOME_FACT and JAT_JE_DETAIL_E_PL2_FACT. For these tables we'll keep the partitions as a RANGE on PERIOD_ID, but for the sub-partition it doesn't make sense to use the source system, because we have only one source system.

And now is the time to explain another thing about partitions. For Oracle to use a partition, you need to filter on the column that was used to create it. For example, if we created a partition using the PERIOD_ID column, we must filter the data by PERIOD_ID, and ideally we would always have filters on both the partition and sub-partition columns; if you don't, Oracle will do a full table scan anyway. So, from a data-retrieval point of view, the choice of partition columns is always made based on the filter requirements, meaning we need to choose columns that we can force the user to choose in OBIEE. From an ETL point of view, the best is to have partitions and sub-partitions on columns that we can use to truncate before a load.

For example, in our first set of tables I can easily and quickly truncate a period and a source system before I load it back into the tables. That's the beauty of partitions: I can drop, truncate and create them on demand.

That said, you can imagine that sometimes what is good for the report is not good for the DW, but this is not our case today.

With this new piece of information in mind, the users are forced in the OBIEE prompts to choose a Gen 2 Account, so this will be our sub-partition for the second set of tables, also with partition type LIST.

Before we proceed, let’s just see how everything is set then:

Table Name                  | Partition         | Sub Partition
JAT_JE_DETAIL_D_BS_FACT     | PERIOD_ID (Range) | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_INCOME_FACT | PERIOD_ID (Range) | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_PL2_FACT    | PERIOD_ID (Range) | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_E_BS_FACT     | PERIOD_ID (Range) | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_INCOME_FACT | PERIOD_ID (Range) | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_PL2_FACT    | PERIOD_ID (Range) | ACCOUNT_GEN2_ID (List)

Perfect, now the only thing that is missing here is the aggregate tables.

Aggregated tables are tables that use a dimension drill to pre-aggregate the values they contain. They are an easy way to improve performance during a drill operation.

The most common aggregated table is for the Period dimension, and that's the one we're going to do here as well. In our case, as mentioned before, the 6 tables we have now contain day-level data. I want to have the data aggregated by month and by quarter.

That means we'll need one more table for each of these 6 tables for monthly-level data, and one more for each of them for quarterly-level data. These tables will have the same partition and sub-partition definitions, with only one difference: instead of a RANGE partition we'll use a LIST partition, because we don't have a range anymore; we are aggregating that range of days into one month.

So let's see how the tables need to be set:

Table Name                    | Partition         | Sub Partition
JAT_JE_DETAIL_D_BS_FACT       | PERIOD_ID (Range) | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_INCOME_FACT   | PERIOD_ID (Range) | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_PL2_FACT      | PERIOD_ID (Range) | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_M_BS_FACT     | PERIOD_ID (List)  | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_M_INCOME_FACT | PERIOD_ID (List)  | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_M_PL2_FACT    | PERIOD_ID (List)  | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_Q_BS_FACT     | PERIOD_ID (List)  | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_Q_INCOME_FACT | PERIOD_ID (List)  | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_D_Q_PL2_FACT    | PERIOD_ID (List)  | SOURCE_SYSTEM_ID (List)
JAT_JE_DETAIL_E_BS_FACT       | PERIOD_ID (Range) | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_INCOME_FACT   | PERIOD_ID (Range) | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_PL2_FACT      | PERIOD_ID (Range) | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_M_BS_FACT     | PERIOD_ID (List)  | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_M_INCOME_FACT | PERIOD_ID (List)  | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_M_PL2_FACT    | PERIOD_ID (List)  | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_Q_BS_FACT     | PERIOD_ID (List)  | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_Q_INCOME_FACT | PERIOD_ID (List)  | ACCOUNT_GEN2_ID (List)
JAT_JE_DETAIL_E_Q_PL2_FACT    | PERIOD_ID (List)  | ACCOUNT_GEN2_ID (List)

Great, now we have 18 tables: 6 fragmented tables for daily-level data, 6 fragmented tables for monthly aggregated data (M) and 6 fragmented tables for quarterly aggregated data (Q).

The last thing about the tables is that we need a matching Period dimension (in this case) for each aggregation level. That means that, since we have 3 different levels of aggregation, we need 3 Period dimensions.

OBIEE doesn't allow us to join the same table using columns that are not ID columns. So what we need is one PERIOD dimension (DIM_PERIOD) that contains the drill from Year to Day (the Period ID will be a YYYYQMMDD number), another PERIOD Month dimension (DIM_PERIOD_MONTH) that contains the drill from Year to Month (the Period ID will be a YYYYQMM number) and one last PERIOD Quarter dimension (DIM_PERIOD_QUARTER) that contains the drill from Year to Quarter (the Period ID will be a YYYYQ number).

Of course, the aggregated FACT tables will also have a matching ID (and we need that to aggregate the data correctly).

The last thing to mention is that the detail fact table (the one at the day level) contains a lot of Journal details like Journal number, description, header ID and others. For the detail level this is fine, but when we design the aggregated tables, only IDs are allowed there, for maximum aggregation. If the table contains descriptions, nothing will be aggregated, right?

Don't worry, OBIEE can handle this kind of situation very well.

That's it for now. I hope you guys enjoy it and find it helpful.

See you soon.

Oracle SQL for EPM Tips and Tricks S01EP08

Posted in Connect By, DEVEPM, Dimensions, EPM, Oracle, Oracle 11.2.0, Oracle 11.2.0.4, Oracle Database, Performance, REGEXP, Tips and Tricks on November 26, 2019 by RZGiampaoli

Hey guys, how are you? Today I'll expand a little bit more on the REGEXP theme, just to show how easy and powerful these functions are, not to mention how dynamic.

In the last post, S01EP04, I explained how to split a string into different columns. The thing is, what if you don't want to split it into different columns? What if you just want to have the string as a list of values in the same column?

We can also do that with a very nice trick using REGEXP_COUNT. Let's take a look at it.

In the previous post we had a query like this:

As I said there, using REGEXP it is very simple to "walk" through the string and get any part of it you want. And it's also simple to transform a string into a list of values. To do that we just need to add one extra CONNECT BY and the REGEXP_COUNT function, like this:
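A hedged sketch of the pattern (the sample string and column name are illustrative):

-- Splits a pipe-delimited string into one row per member.
WITH sample AS (
    SELECT 'Total|BS|Assets|Cash' AS PATH FROM dual
)
SELECT LEVEL                                  AS member_position,
       REGEXP_SUBSTR(PATH, '[^|]+', 1, LEVEL) AS member_name
  FROM sample
CONNECT BY LEVEL <= REGEXP_COUNT(PATH, '[^|]+');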

What that CONNECT BY LEVEL <= REGEXP_COUNT(PATH, '[^|]+') is doing is basically generating one level per delimited value it finds in the string; in other words, it is dynamically "walking" through the string looking for the pipe-delimited values and passing the LEVEL at which each one is found to the REGEXP_SUBSTR above. The result is:

As you can see, REGEXP_COUNT returns the number of delimited values it finds in the string, the CONNECT BY LEVEL generates that many rows for the string, and REGEXP_SUBSTR "walks" the string using the LEVEL of each row to extract the list of members from it.

This is very useful to manipulate strings where you don't know how many members you'll want to extract, like UDA members for example, which, when you extract them from ASO cubes, come all together in a single string.

I hope this is a useful trick for you guys and see you soon.

ORACLE SQL for EPM tips and tricks S01EP04!

Posted in ACE, Connect By, Oracle, Oracle Database, Performance, REGEXP, SQL, Tips and Tricks on April 9, 2019 by RZGiampaoli

Continuing the Oracle SQL for EPM series, today we'll extend the "Connect By" post a little bit to see some neat things we can do when we mix regular expressions with CONNECT BY.

First of all, let's start by explaining the basics of this approach. We can use CONNECT BY and LEVEL to generate data in Oracle, like this:

This is very useful when you need to generate 2 rows for each row you have in a query: you can do a CONNECT BY LEVEL <= 2 against a row generator and use it without a join condition, duplicating everything in the source query. Then you can use a DECODE or CASE to say: case when level = 1 then x, when level = 2 then y, and so on, as in the sketch below.
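A hedged sketch of that duplication trick (source_table and its column are hypothetical):

-- Every source row comes back twice, once with lvl = 1 and once with lvl = 2.
SELECT s.some_column,
       CASE gen.lvl WHEN 1 THEN 'x' WHEN 2 THEN 'y' END AS derived_value
  FROM source_table s
 CROSS JOIN (SELECT LEVEL AS lvl FROM dual CONNECT BY LEVEL <= 2) gen;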

But for us this is just the basic concept we need to understand for our trick. Let's get back to our metadata table, the one with the parent/child relationship and all metadata from all apps and all dims that we have. Let's say we need to create a hierarchical (generation) table based on a parent/child table. Some people do one union for each level in the source table (too much work), others put filters all over the place. I don't; I just use CONNECT BY with regular expressions. Let's take a look.

As we can see, we have 17 levels or generations for the Account dimension. Quite big. First of all, let's do a connect by path to flatten the entire hierarchy into one row and remove the parent/child relationship.

Now, as we can see, if we just do a CONNECT BY PRIOR with SYS_CONNECT_BY_PATH, the query generates 1 row for each level of data. We don't want that, unless you want the ability to input data at the upper levels of the hierarchy, which is usually not the case. What we want is to have the entire path for each leaf member, so we need to filter just the leaves in our query. In my case my table already has a column that says whether a member has children or not, but I'll proceed as if I didn't have it.

If we don't have the leaf information, we can generate it using CONNECT_BY_ISLEAF, which basically returns 0 if the member is a parent or 1 if it is a leaf. Again, we filter on this outside the CONNECT BY PRIOR, and now we have just the leaves with the entire hierarchy above them. The next step is to split this path back into columns.

REGEXP_SUBSTR is the perfect tool to split the string we have into new columns. As you can see, we can use the following expression: REGEXP_SUBSTR(Column, '[^|]+', 1, 1). Basically the regexp uses what is inside the [] to find what you want and split it the way you want. In our case I'm looking for the first value between the "|" delimiters (defined by the 1, 1). Then I have another regexp where I change the 1, 1 to 1, 2 to get the second value, and I keep doing this until I get to the 17th generation (as we saw at the beginning).

Basically, if you have 17 generations, you need to have 17 columns in the table, and therefore 17 REGEXPs, one for each column; you just need to change the last parameter: 1, 1, then 1, 2, then 1, 3… up to 1, 17.
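Putting it all together, a hedged sketch of the whole flattening query (table and column names are illustrative, and in a real case you would repeat the REGEXP_SUBSTR up to your number of generations):

-- Flattens a parent/child table into generation columns, keeping only leaf paths.
SELECT REGEXP_SUBSTR(path, '[^|]+', 1, 1) AS gen1,
       REGEXP_SUBSTR(path, '[^|]+', 1, 2) AS gen2,
       REGEXP_SUBSTR(path, '[^|]+', 1, 3) AS gen3
       -- ... up to gen17
  FROM (SELECT SYS_CONNECT_BY_PATH(member_name, '|') AS path,
               CONNECT_BY_ISLEAF                     AS is_leaf
          FROM metadata_table
         START WITH parent_member IS NULL
       CONNECT BY PRIOR member_name = parent_member)
 WHERE is_leaf = 1;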

Here's where ODI comes in handy, because with ODI you just need to do a SELECT in the source to figure out how many generations you have, and then dynamically generate the 17 REGEXPs and pass them to the target. It's very easy, since the code itself is always the same and the only thing that changes is the last parameter, and it's sequential.

I hope this can be as useful for you as it is for me, and see you soon.

ORACLE SQL for EPM tips and tricks S01EP02!

Posted in ACE, Connect By, EPM, Oracle Database, Performance, Query, SQL, Tips and Tricks, WITH Clause on March 21, 2019 by RZGiampaoli

Hey guys, how are you? Let's continue the SQL for EPM series. Today I'll continue to talk about WITH, with a small bonus on Connect By :). Let's start.

A lot of people use Connect By on a daily basis, but as far as I have seen, most of them don't know how to use it properly. I've already lost count of the number of people complaining about performance issues with Connect By.

The thing is, Connect By works a little differently from everything else in Oracle. We can say that Connect By has 2 stages, and we'll see why I'm saying that with this example. Let's get back to our metadata table and do a Connect By to extract the Balance Sheet hierarchy from the Juno application:

As we can see, inside this table we have more than one application and more than one hierarchy for each application. That's ok, we just need to filter it in our SQL, right?

If we filter the APP_NAME and the HIER_NAME we'll get all accounts for that application, and this generates 12,622 rows. By the way, this table has all metadata from all our applications, and we always filter by APP_NAME and HIER_NAME to select what we want (the table is also partitioned and sub-partitioned by these 2 columns). It's important to know that, without filtering anything, this table has:

Ok, now, if we want to get just the BS hierarchy we just need to do the Connect By right?

That works… perfect… or not? Well, in fact, this is the wrong way to use Connect By, because of what I said before: the 2 stages.

As you can see, this query took 25 seconds just to return the first 50 rows. In an integration this will take way more time; in fact, if you join this table to a data table to do a SUM at the BS level, it will take ages to return.

The reason is that, for the Connect By, Oracle first does everything that is after the words CONNECT BY and START WITH, and only then does it apply what is in the WHERE condition. That means it first did the Connect By over those 2,260,372 rows (and they are all repeated) and then, after all that processing, it filtered what we wanted, which is the APP_NAME and the HIER_NAME. So the right way to use it is:
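A hedged sketch of the corrected shape (table, column and filter values are illustrative): the filters move into the START WITH and CONNECT BY clauses, so Oracle prunes the rows while it walks the hierarchy.

SELECT member_name, parent_member
  FROM metadata_table
 START WITH member_name = 'BS'
        AND app_name    = 'JUNO'
        AND hier_name   = 'ACCOUNT'
CONNECT BY PRIOR member_name = parent_member
       AND app_name  = 'JUNO'
       AND hier_name = 'ACCOUNT';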

Now it looks way better: 0.375 seconds to do exactly the same thing as before, and the only thing I did was move our filters to the right place. Now Oracle is filtering and doing the Connect By at the same time.

Now, if you do a SYS_CONNECT_BY_PATH and want to get just the leaves (to have the complete path of the hierarchy), you can filter the leaves in the WHERE clause (and it needs to be there, otherwise you won't have the entire hierarchy during the Connect By). This is how:

Now you can see that the Connect By filtered what needed to be filtered during the Connect By execution and, afterwards, it filtered just the leaves (using CONNECT_BY_ISLEAF, which tells you whether a member is a leaf or not).

Also, I used CONNECT_BY_ROOT to generate the root member used in this query (BS) and SYS_CONNECT_BY_PATH to generate the entire path of the metadata (very useful for transforming parent/child tables into generation tables using this technique and a regexp [we'll see this in another post]).

Ok, now that the "bonus" is written, let's talk about the WITH clause, which was the main subject here. Even with this Connect By written in the right way, with the filters in the right place, we can still improve performance using WITH.

That's right, the idea is to prepare our subset of data using WITH before we ask Oracle to do the Connect By, and leave the Connect By itself as simple as possible. Let's take a look:
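A hedged sketch of that shape (names are illustrative): the WITH block pre-filters the metadata, and the hierarchy walk then runs only against that small subset.

WITH bs_metadata AS (
    SELECT member_name, parent_member
      FROM metadata_table
     WHERE app_name  = 'JUNO'
       AND hier_name = 'ACCOUNT'
)
SELECT member_name,
       SYS_CONNECT_BY_PATH(member_name, '|') AS path
  FROM bs_metadata
 START WITH member_name = 'BS'
CONNECT BY PRIOR member_name = parent_member;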

This is by far the best way to use a Connect By clause. Instead of using WITH you can use a sub-query, but I think this way is easier and better organized as well. Also, I know the time difference doesn't look too big between the previous example and this one, but when you join this with data and start to SUM everything, you'll see a huge difference between this method and the previous one.

Also, sometimes Oracle gets lost with the previous method, making everything slower, but with the WITH method that never happens, so I advise you to start using this.

I hope you guys enjoy this little tip and see you next time.

ORACLE SQL for EPM tips and tricks S01EP01!

Posted in DEVEPM, ETL, Oracle, Oracle Database, Performance, SQL, Tips and Tricks, Uncategorized, WITH Clause on January 21, 2019 by RZGiampaoli

Hey guys, how are you? I decided to start a new series called ORACLE SQL for EPM tips and tricks. The idea here is to show the most useful SQL commands for EPM, how to improve performance, tips, tricks and everything that can be useful from a SQL point of view!

And to start well, I'll show something very old but very useful that I don't see too many people using these days: the "WITH" clause.

I love to use "WITH" in my code. It helps organize the code, helps to optimize it and, more importantly, makes it more efficient.

When you use "WITH", Oracle treats the query inside it as an inline view, or it may be resolved as a temporary table, making it easier and faster for Oracle to access that data if you need it again.

Simply put, every time you need to write a query that uses the same table over and over, it will probably be way more efficient if you use "WITH".

The "WITH" clause works a little bit differently from a regular SQL statement. We can say we split the query in 2: one part is the "WITH" declaration (which behaves like a table) and the other is the SQL that queries the "WITH".

WITH name_of_temp_table_here AS
(
    YOUR QUERY HERE
),
   name_of_temp_table_here2 AS
(
   SELECT *
   FROM name_of_temp_table_here, another_table...
)
SELECT *
FROM name_of_temp_table_here, name_of_temp_table_here2 

In the "WITH" you can have any kind of query you want. You can do joins and group bys, you can have more than one query block in the "WITH", you can use the result of one block in the next one; you can do a lot of things.

But for now, let's take a look at a more real example. Let's say that you have a table, like I do, that contains all metadata from all your applications:

Let's say you want to get the parent of an attribute that is associated with your Entity dimension. You would probably do something like this:
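A hedged sketch of that first attempt (the table and column names are illustrative, not the real repository layout):

-- Self-join: one pass for the Entity members, one for the attribute hierarchy.
SELECT ent.member_name,
       attr.parent_member AS attribute_parent
  FROM metadata_extract ent
  LEFT JOIN metadata_extract attr
    ON attr.member_name = ent.attribute_member
   AND attr.dimension   = 'PHYSICAL_GEOGRAPHY'
 WHERE ent.dimension = 'ENTITY';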

In the "FROM" we call the table 2 times and we join and filter everything we need. Since we don't have attribute associations at all levels, we do a "LEFT JOIN" to make sure all entities come back in the query. If we run an Explain Plan now we'll get something like this:

As you can see, Oracle is querying the METADATA_EXTRACT table twice and each time it’s doing a FULL in one Partition (‘ENTITY’ and ‘PHYSICAL_GEOGRAPHY’ partitions).

Now, if we change the query (and we can do it in different ways, this is just one of them) to a "WITH" clause, we'll have something like this:
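A hedged sketch of the rewritten query, with the same illustrative names: the metadata table is read once into the WITH block, and the join then runs against that result set.

WITH meta AS (
    SELECT member_name, parent_member, attribute_member, dimension
      FROM metadata_extract
     WHERE dimension IN ('ENTITY', 'PHYSICAL_GEOGRAPHY')
)
SELECT ent.member_name,
       attr.parent_member AS attribute_parent
  FROM meta ent
  LEFT JOIN meta attr
    ON attr.member_name = ent.attribute_member
   AND attr.dimension   = 'PHYSICAL_GEOGRAPHY'
 WHERE ent.dimension = 'ENTITY';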

As you can see, we achieved the same results with slightly different code. Now I have all my filters in the "WITH" query, and at the bottom I just call the "WITH" query 2 times and do what needs to be done.

If we run an Explain Plan now we will have:

As you can see, Oracle now queries the METADATA_EXTRACT table just once and then queries the SYS temp table twice. The only problem with this query, the way I wrote it, is that since we are creating a temporary table filtering 2 partitions and then filtering again later, it's basically doing 2 FULL scans on the same TEMP table; even so, it's a few seconds faster than the original query.

But this is just an example of how we can reduce the number of times Oracle needs to query a table. WITH is not a miracle clause or anything like that; it's just another tool we can use to optimize our code, and its performance needs to be evaluated on a case-by-case basis.

And even if the performance doesn't change, I believe using the "WITH" clause makes any query easier to read, to test, to update and to write, since you can divide your huge query into small bits and then join everything in the bottom query.

“WITH” is a huge subject and we’ll be talking more about it in the next post, and this time we’ll be improving performance as well using “WITH” with “CONNECT BY”.

Kscope 17 is approaching fast!!! And we’ll be there!

Posted in ACE, Data Warehouse, Essbase, Hyperion Essbase, Java, Kscope 17, ODI, ODI Architecture, Oracle, Performance, Tips and Tricks, Uncategorized on June 8, 2017 by RZGiampaoli

Hi guys, how are you? We are sorry for being away for so much time, but this year we have a lot of exciting things going on, so let's start with what we'll be doing at Kscope 17!

This year we’ll present 2 sessions:

Essbase Statistics DW: How to Automatically Administrate Essbase Using ODI (Jun 28, 2017, Wednesday Session 12 , 9:45 am – 10:45 am)

In order to have a well-performing Essbase cube, we must keep vigilance and follow up on its growth and its data movements so we can distribute caches and adjust the database parameters accordingly. But this is a very difficult task to achieve, since Essbase statistics are not temporal and only tell you what the cube statistics are at that specific point in time.

This session will present how ODI can be used to create a historical statistics DW containing Essbase cube information, and how to identify trends and patterns, giving us the ability to programmatically tune our Essbase databases automatically.

And…

Data Warehouse 2.0: Master Techniques for EPM Guys (Powered by ODI)  (Jun 26, 2017, Monday Session 2 , 11:45 am – 12:45 pm)

EPM environments are generally supported by a Data Warehouse; however, we often see that those DWs are not optimized for the EPM tools. Over the years, we have witnessed that modeling a DW with the EPM tools in mind may greatly increase the overall architecture performance.

The most common situation found in several projects is that the people who develop the data warehouse do not have great knowledge of EPM tools and vice-versa. This may create a big gap between those two concepts, which may severely impact performance.

This session will show a lot of techniques to model the right Data Warehouse for EPM tools. We will discuss how to improve performance using partitioned tables, create hierarchical queries with "Connect by Prior", the correct way to use multi-period tables for block data load using Pivot/Unpivot and more. And if you want to go even further, we will show you how to leverage all those techniques using ODI, which will create the perfect mix to perform any process between your DW and EPM environments.

In these presentations you can expect a lot of technical content, some very good tips and some very good ideas to improve your EPM environment!

Also, I'll be graduating from this year's leadership program, and this year we'll be all over the place with the K-Team, a special team created to make newcomers feel more welcome and help them get the most out of Kscope.

Also Rodrigo will be at Tuesday Lunch and Learn for the EPM Data Integration track on Cibolo 2/3/4.

And of course we will be around, having fun and gathering new ideas for next year!!!

And last but not least, this year we'll have a friend of ours making his first appearance at Kscope with the presentation OBIEE Going Global! Getting Ready for More Than +140k Users (Jun 26, 2017, Monday Session 4, 3:15 pm – 4:15 pm).

A standard Oracle Business Intelligence (OBIEE) reporting application can hold more or less 1,200 users. This may be a reasonable number of users for the majority of companies out there, but what happens when an IT leader like Dell decides to acquire another IT giant like EMC and all of their combined 140,000-plus users need to have access to an HR OBIEE instance? What does that setup look like? What kind of architecture do we need to support those users in a fast and reliable way?
This session shows the complexity of Dell's OBIEE environment, describing all the processes and steps performed to create such an environment, meeting the most varied needs from business demands and L2 support, always aiming to improve environment stability. This architecture relies on a range of different technologies to support that huge number of end users, such as LDAP & SSL, Kerberos, SSO, BigIP, shared folders using NAS, and WebLogic running in a cluster across 4 application servers.
If the challenge was not hard enough already, all of this setup also needed to consider Dell's legacy OBIEE upgrade from v11.1.1.6.9 to v11.1.1.7.160119, so we will explain the pain points, considerations and orchestration needed to do all of this in parallel.

Thank you guys and see you there!
