Oracle SQL for EPM Tips and Tricks S01EP13

Posted in ACE, Data Warehouse, Hacking, Hyperion Planning, ODI, Oracle, Oracle 11.2.0, Oracle Database, SQL on April 1, 2020 by RZGiampaoli

Hey guys how are you? Today let's take a look at the opposite of the S01EP12 situation; in fact, we'll use the same example again to show how we can convert a string into a list of values in an easy and dynamic way, starting with this query here:

I'll transform this query into a WITH clause and I'll use REGEXP to turn the string back into a list of values. This is very useful when we extract metadata from Essbase, for example, because Essbase exports the UDAs as a list of values. Of course this has many uses other than this one, but let's keep this one in mind.

Now what we need to do is split the string, by comma in this case. The idea is to count the number of commas we have in each row and split the string by that amount.

The idea here is to use REGEXP_COUNT to count how many words we have between the commas and then use that to multiply the rows with CONNECT BY LEVEL. For example, if we have 3 words, the CONNECT BY will create 3 copies of the same row, one with LEVEL = 1, another with LEVEL = 2 and the last one with LEVEL = 3.

With that, we just need to use REGEXP_SUBSTR to extract the words based on the LEVEL: REGEXP_SUBSTR(STR, '[^,]+', 1, LEVEL), where LEVEL will be 1 for the first row, 2 for the second and 3 for the third one.
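
Putting it all together, here's a minimal sketch of the idea (the member name and UDA list below are just hard-coded for illustration):

WITH UDA_LIST AS (
    SELECT 'Product A' AS MEMBER_NAME, 'UDA1,UDA2,UDA3' AS STR FROM DUAL
)
SELECT MEMBER_NAME,
       REGEXP_SUBSTR(STR, '[^,]+', 1, LEVEL) AS UDA
FROM UDA_LIST
-- one output row per word; for more than one input row you also need something like
-- AND PRIOR MEMBER_NAME = MEMBER_NAME AND PRIOR SYS_GUID() IS NOT NULL
CONNECT BY LEVEL <= REGEXP_COUNT(STR, '[^,]+')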

I hope this can be useful and see you soon.

Oracle SQL for EPM Tips and Tricks S01EP12

Posted in ACE, Data Extract, Hacking, Hyperion Planning, ODI, ODI 10g, ODI 11g, ODI 12c, Oracle, Oracle 11.2.0, Oracle 11.2.0.4, Oracle Database, SQL on March 31, 2020 by RZGiampaoli

Hey guys how are you keeping? I hope everybody is healthy and stays that way in these difficult times.

And to make our lives less complicated, here's another tip. Let's talk about how to concatenate stuff in Oracle.

Imagine a simple case: we want to query the Planning repository to get the list of UDAs a member has. We can easily do that by querying the HSP_OBJECT, HSP_MEMBER_TO_UDA and HSP_UDA tables.
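
A sketch of what that query could look like (treat the exact column names as an assumption, they can vary a bit between Planning versions):

SELECT O.OBJECT_NAME AS MEMBER_NAME,
       U.UDA_VALUE
FROM HSP_OBJECT O
JOIN HSP_MEMBER_TO_UDA MU ON MU.MEMBER_ID = O.OBJECT_ID
JOIN HSP_UDA U ON U.UDA_ID = MU.UDA_ID
WHERE O.OBJECT_NAME IN ('Product A', 'Product B', 'Product C')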

I'm filtering just 3 products to make it easier for us to see. The results show that each product has a different number of UDAs, and we never know how many there will be, so the easiest way to concatenate them is to use the LISTAGG command (or WM_CONCAT if you are on a DB version prior to 11.2).

The command is very simple: LISTAGG(Column, Separator) WITHIN GROUP (ORDER BY column). As we can see, the command allows us to select the separator we want (it can be a comma or any string, really) as well as to order the results by another column. Let's take a look at an example.
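
Here's a minimal sketch of it applied to the UDA query above (same assumed column names):

SELECT O.OBJECT_NAME AS MEMBER_NAME,
       LISTAGG(U.UDA_VALUE, ',') WITHIN GROUP (ORDER BY U.UDA_VALUE) AS UDA_LIST
FROM HSP_OBJECT O
JOIN HSP_MEMBER_TO_UDA MU ON MU.MEMBER_ID = O.OBJECT_ID
JOIN HSP_UDA U ON U.UDA_ID = MU.UDA_ID
WHERE O.OBJECT_NAME IN ('Product A', 'Product B', 'Product C')
GROUP BY O.OBJECT_NAME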

As you can see, it easily creates a list split by comma (as specified) for me, and the nice thing about it is that I don't need to do any string treatment if it returns null, or if I have just one string in it, and things like that.

This is an extremely good function and we heavily use it in ODI to generate dynamic code because of its simplicity. For example, we can generate a SQL statement on the fly using the Command on Source and Command on Target:
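
As a rough sketch (the schema name and table pattern below are just placeholders), the Command on Source could build a column list per table straight from the Oracle dictionary:

SELECT TABLE_NAME,
       LISTAGG(COLUMN_NAME, ', ') WITHIN GROUP (ORDER BY COLUMN_ID) AS COLUMN_LIST
FROM ALL_TAB_COLUMNS
WHERE OWNER = 'MY_SCHEMA'      -- placeholder schema
  AND TABLE_NAME LIKE 'FACT_%' -- placeholder table pattern
GROUP BY TABLE_NAME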

With these results we can easily pass this info to the Command on Target to generate a dynamic query, where ODI will replace the columns we got in the target as well as the table name, and will also loop once for each row we have in the source. This is very handy.

And for the ones that are not on Oracle 11.2 or later, we can still do that using WM_CONCAT. It's not as powerful as LISTAGG, but it works pretty well. Let's try the first example again:

I cannot show you the results since WM_CONCAT was decommissioned in 12c (my version), but it works like this: we don't have the option to choose the separator, and to make the string unique and ordered we need to add DISTINCT to the command, WM_CONCAT(DISTINCT column).
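
A sketch of the first example rewritten with WM_CONCAT (same assumed column names; WM_CONCAT is undocumented, so behavior may vary by version):

SELECT O.OBJECT_NAME AS MEMBER_NAME,
       WM_CONCAT(DISTINCT U.UDA_VALUE) AS UDA_LIST
FROM HSP_OBJECT O
JOIN HSP_MEMBER_TO_UDA MU ON MU.MEMBER_ID = O.OBJECT_ID
JOIN HSP_UDA U ON U.UDA_ID = MU.UDA_ID
WHERE O.OBJECT_NAME IN ('Product A', 'Product B', 'Product C')
GROUP BY O.OBJECT_NAME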

I hope that is useful and have a great day.

Oracle SQL for EPM Tips and Tricks S01EP11

Posted in ACE, Data Warehouse, Hacking, ODI, ODI 10g, ODI 11g, ODI 12c, Oracle, Oracle Database on March 25, 2020 by RZGiampaoli

Hey guys how are you?

Today I'll post something that is very simple but very useful, especially when working with ODI.

When we work with partitioned tables, we know that if we filter the table by the partition column, Oracle will use just that partition as the source of data. But what if we are doing an Insert, Update or Merge?

There's another way to explicitly refer to a partition and make sure Oracle will be working inside only that one, and that is by defining it in the FROM clause.

For example, if I want to query the partition "DELL_BALANCES_FY20_FEB" I can write:
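
Something along these lines (the table and column names are just illustrative):

SELECT DISTINCT T.PARTITION_KEY
FROM TBL_BALANCES PARTITION (DELL_BALANCES_FY20_FEB) T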

As we can see, after the table name I specified PARTITION (DELL_BALANCES_FY20_FEB), putting the partition name inside the parentheses (don't specify it as a string), and that makes Oracle read only the rows in that partition, so my DISTINCT of the PARTITION_KEY shows only one result, as expected. (This clause needs to come before the table alias.)

If we are doing an Insert, Update or Merge the idea is the same:
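
A sketch with made-up table and column names, just to show where the PARTITION clause goes:

MERGE INTO TBL_BALANCES PARTITION (DELL_BALANCES_FY20_FEB) TRG
USING (SELECT * FROM STG_BALANCES) SRC -- hypothetical staging table
ON (TRG.BALANCE_ID = SRC.BALANCE_ID)
WHEN MATCHED THEN
  UPDATE SET TRG.AMOUNT = SRC.AMOUNT
WHEN NOT MATCHED THEN
  INSERT (BALANCE_ID, PERIOD_ID, AMOUNT)
  VALUES (SRC.BALANCE_ID, SRC.PERIOD_ID, SRC.AMOUNT)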

This way we can, especially in the MERGE, make sure Oracle will be working in the right partition of the target table.

And it's especially useful with ODI because we always know which partition we want to query or insert data into when we use ODI, so we can always bind Oracle to a specific partition and make sure it stays there.

I hope this is helpful and see you soon.

DEVEPM in Kscope20!

Posted in Essbase, Hacking, Kscope 20, ODI 12c, ODI Architecture, Tips and Tricks on March 2, 2020 by RZGiampaoli

We are delighted to tell everybody that we’ll be at KScope 20 in Boston.

We'll be presenting the session Essbase Statistics DW: How automatically administrate Essbase using ODI in room 304, Level 3, on Wed, Jul 01, 2020 (02:15 PM – 03:15 PM). In this session we'll talk about how to keep the Essbase configuration up to date using ODI.

To have a fast Essbase cube, we must stay vigilant and follow its growth and its data movements, so we can distribute caches and adjust the database parameters accordingly. But this is a very difficult task to achieve since Essbase statistics are not temporal and only tell you what the cube statistics are at that specific point in time.
This session will present how ODI can be used to create a historical DW containing Essbase cube’s information and how to identify trends and patterns, giving us the ability to programmatically tune our Essbase databases automatically.

We hope to see you all there.

Thank you and see you soon!

ODI Hidden Gems – SNP tables: Query to get executed code example

Posted in ODI, ODI 12c, ODI Architecture on February 26, 2020 by radk00

Hi all, today's gem is something extremely useful that I've been using in every single project that I work on. Today's gem is about SNP tables, which are the database tables that are used by ODI to store its metadata. When we install ODI, the installer asks us where we want to create our Master and Work repositories. Each repository contains a set of tables with different kinds of information. From the Oracle documentation:

  • Master Repository: This is a data structure containing information on the topology of the company’s IT resources, on security and on version management of projects and data models. This repository is stored on a relational database accessible in client/server mode from the different Oracle Data Integrator modules.
  • Work Repository: This is a data structure containing information about data models, projects, and their use. This repository is stored on a relational database accessible in client/server mode from the different Oracle Data Integrator modules.

You may think of them as the place that contains all the information about the code that was developed in ODI, all the jobs that were executed, all the source and target tables and so on. These tables may give us answers to questions like: how many mapping objects does project X have? Which are the target mappings for a specific job? How many jobs are executing on a daily basis, how long does each of them take and how much data do they manipulate (insert/delete/update)? All those questions will eventually come to you after some time, and querying the SNP tables will provide you all the answers on what is going on in your ODI projects.

Below is one example of a query that returns a lot of information regarding all the ODI executions that happened in an ODI repository in a given time frame. It gives you the name of the scenarios, versions, when they began and ended, the session status, the order in which they happened and (maybe most important) which code was executed. That last piece of info, together with how much time each step took to execute, may be very useful to analyze which steps are taking longer in your environment and then do something about them.

I won't go over each table and what they mean, but you may take a look at "Doc ID 1903225.1 : Oracle Data Integrator 11g and 12c Repository Description" in Oracle Support for a full list of tables and their descriptions. In the beginning, the number of tables and attributes may look intimidating, but once you start to use them you will see that the data architecture is fairly simple and you may retrieve a lot of good information out of them.

Without further ado, here is the SQL. This one was created over ODI 12.2.1. Please notice that each ODI version may have changes in the repository tables, which may lead you to modify these queries accordingly.

SELECT
SS.SESS_NO,
SS.SCEN_NAME,
SS.SCEN_VERSION,
SS.SESS_NAME,
SS.PARENT_SESS_NO,
SS.SESS_BEG,
SS.SESS_END,
SS.SESS_STATUS,
DECODE(SS.SESS_STATUS,'D','Done','E','Error','M','Warning','Q','Queued','R','Running','W','Waiting',SS.SESS_STATUS) AS SESS_STATUS_DESC,
SSL.NNO,
SSTL.NB_RUN,
SST.TASK_TYPE,
DECODE(SST.TASK_TYPE,'C','Loading','J','Mapping','S','Procedure','V','Variable',SST.TASK_TYPE) AS TASK_TYPE_DESC,
SST.EXE_CHANNEL,
DECODE(SST.EXE_CHANNEL,'B','Oracle Data Integrator Scripting','C','Oracle Data Integrator Connector','J','JDBC','O','Operating System'
,'Q','Queue','S','Oracle Data Integrator Command','T','Topic','U','XML Topic',SST.EXE_CHANNEL) AS EXE_CHANNEL_DESC,
SSTL.SCEN_TASK_NO,
SST.PAR_SCEN_TASK_NO,
SST.TASK_NAME1,
SST.TASK_NAME2,
SST.TASK_NAME3,
SSTL.TASK_DUR,
SSTL.NB_ROW,
SSTL.NB_INS,
SSTL.NB_UPD,
SSTL.NB_DEL,
SSTL.NB_ERR,
SSS.LSCHEMA_NAME
|| '.'
|| SSS.RES_NAME AS TARGET_TABLE,
CASE
WHEN SST.COL_TECH_INT_NAME IS NOT NULL
AND SST.COL_LSCHEMA_NAME IS NOT NULL THEN SST.COL_TECH_INT_NAME
|| '.'
|| SST.COL_LSCHEMA_NAME
ELSE NULL
END AS TARGET_SCHEMA,
SSTL.DEF_TXT AS TARGET_COMMAND,
CASE
WHEN SST.DEF_TECH_INT_NAME IS NOT NULL
AND SST.DEF_LSCHEMA_NAME IS NOT NULL THEN SST.DEF_TECH_INT_NAME
|| '.'
|| SST.DEF_LSCHEMA_NAME
ELSE NULL
END AS SOURCE_SCHEMA,
SSTL.COL_TXT AS SOURCE_COMMAND
FROM
SNP_SESSION SS
INNER JOIN SNP_STEP_LOG SSL ON SS.SESS_NO = SSL.SESS_NO
INNER JOIN SNP_SESS_TASK_LOG SSTL ON SS.SESS_NO = SSTL.SESS_NO
INNER JOIN SNP_SB_TASK SST ON SSTL.SB_NO = SST.SB_NO
AND SSTL.SCEN_TASK_NO = SST.SCEN_TASK_NO
AND SSL.NNO = SSTL.NNO
AND SSTL.NNO = SST.NNO
AND SSL.NB_RUN = SSTL.NB_RUN
LEFT JOIN SNP_SB_STEP SSS ON SST.SB_NO = SSS.SB_NO
AND SST.NNO = SSS.NNO
WHERE
SS.SESS_BEG >= TRUNC(SYSDATE) - 1
ORDER BY
SESS_NO,
NNO,
SCEN_TASK_NO

See ya!

Oracle SQL for EPM Tips and Tricks S01EP10

Posted in ACE, Hacking, Oracle, Oracle Database, Performance, SQL, Tips and Tricks on February 26, 2020 by RZGiampaoli

Hey guys how are you?

Today a quick tip that I think is very useful. From time to time the business asks us to validate whether a table has data or not before we load it. It's fair, especially if you use a truncate and insert approach.

The problem is, sometimes, the table/view they are asking for has millions of rows, and there's no other safe way to validate whether a table has data or not than querying it.

I just fixed a case where an interface had a validation that basically counted 3 different tables that together had 40 million rows per period. This validation was taking around 1,000 seconds to run.

The data load that happens before it took 1,200 seconds, so basically the validation process was taking almost as much time as the load process itself.

After some changes, the query now validates the 3 tables in 0.3 seconds. Way better than before. Basically I just used 3 things:

The hint /*+ FIRST_ROWS(1) */, which makes Oracle prepare the best plan to return just one row (in my case, since I used 1 as the parameter).

The filter ROWNUM = 1 to make sure Oracle returns just 1 row. If we don't use it, the hint can make everything very slow, because Oracle will be planning for just one row but, without the filter, it'll bring back more (using the best plan possible for 1 row).

And UNION ALL instead of UNION, because there's a huge difference between them. When you use UNION, Oracle compares the data sets to make sure you have unique rows afterwards. UNION ALL, on the other hand, just brings back everything each set returns without any extra processing to validate anything. UNION ALL is always faster than UNION.

In the end I have a query like this:
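
A minimal sketch of that validation (the table names, the period column and its value are made up):

SELECT /*+ FIRST_ROWS(1) */ 'TABLE_A' AS TABLE_NAME
FROM TABLE_A
WHERE PERIOD_ID = 20201001 -- hypothetical period filter
  AND ROWNUM = 1
UNION ALL
SELECT /*+ FIRST_ROWS(1) */ 'TABLE_B' AS TABLE_NAME
FROM TABLE_B
WHERE PERIOD_ID = 20201001
  AND ROWNUM = 1
UNION ALL
SELECT /*+ FIRST_ROWS(1) */ 'TABLE_C' AS TABLE_NAME
FROM TABLE_C
WHERE PERIOD_ID = 20201001
  AND ROWNUM = 1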

As you can see, the query is very simple, and for this example I just return the name of each table, so we know the table is not empty for that period. We could take another approach, like summing them all together and validating whether the result is = 3, for example; any other logic we need can be implemented on top of this query.

I hope this is helpful for you guys and see you in the next post.

Oracle SQL for EPM Tips and Tricks S01EP09

Posted in ACE, Hyperion Planning, Oracle, Oracle Database, Performance, Tips and Tricks on February 19, 2020 by RZGiampaoli

Hey Guys how are you?

Today it’ll be a quick tip for you guys that like/need to query the Planning repository.

The Planning repository stores both the Plan Type and the Consolidation in a very particular way. In fact this is true for a lot of other things like security, form properties etc., but I'll focus on these 2, which are the most often used, and the solution is the same for all of them anyway.

If we take a look at the HSP_PLAN_TYPE table we'll have something similar to this (depending on how many plan types you have in your app).

As we can see, Planning stores in this table all the plan types that were created when we set up the application. In my case I have 4 plan types, and we can have up to 5 BSO plan types in a Planning app. Now, if we join HSP_OBJECT and HSP_MEMBER filtering on OBJECT_TYPE = 2, we can take a look at all the dimensions we have in the repository.
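
A sketch of that join (treat the exact column names as an assumption):

SELECT O.OBJECT_NAME AS DIMENSION_NAME,
       M.USED_IN
FROM HSP_OBJECT O
JOIN HSP_MEMBER M ON M.MEMBER_ID = O.OBJECT_ID
WHERE O.OBJECT_TYPE = 2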

The USED_IN column is the column that tells Planning in which plan types that member exists. The interesting thing here is that you don't see the PLAN_TYPE ID you were expecting, right? That is because a member can exist in more than 1 plan type, and if we used the PLAN_TYPE ID directly, we would need one row for each plan type.

Instead, we have just one row, but we also have the ability to tell Planning where that member should exist, and we do that by summing the PLAN_TYPE IDs together. In the example above, the Account dimension exists in all 4 plan types (1+2+4+8 = 15). The Products dimension exists in only one plan type (1), and by the number you can tell that it is the Pnl plan type.

As you have seen, the idea here is to check whether a PLAN_TYPE ID exists inside the number we have in the USED_IN column. Another example is the Employee dimension, which has USED_IN set to 8. The only combination that fits here is 8 itself (1+2+4 = 7, 1+8 = 9…), meaning the plan type is WrkForce.

I think the most used way for us to figure out if a number exists inside another number is to use MOD.

  • CASE WHEN MOD(USED_IN,2)>=1 THEN 1 ELSE 0 END PT_1
  • CASE WHEN MOD(USED_IN,4)>=2 THEN 1 ELSE 0 END PT_2
  • CASE WHEN MOD(USED_IN,8)>=4 THEN 1 ELSE 0 END PT_3
  • CASE WHEN MOD(USED_IN,16)>=8 THEN 1 ELSE 0 END PT_4
  • CASE WHEN MOD(USED_IN,32)>=16 THEN 1 ELSE 0 END PT_5

The Oracle MOD(N,M) is used to return the remainder of a dividend divided by a divisor where:

  • N: Dividend.
  • M: Divisor.

Then, in our case, we need to test whether the USED_IN number contains the PLAN_TYPE ID in it, and for that we need to MOD it by the rolling sum of the plan type IDs + 1. To make it easier, I'll put that in numbers:

  • N = USED_IN = 31 (max number possible)
  • M = PLAN_TYPE ID = 1 (Pnl, what I want to test) + 1 = 2
  • MOD(31, 2) = 1
  • 31/2 = 15, remainder = 1
  • MOD = 1

What that is telling us is that if the MOD is = 1, Plan Type 1 exists in that number. I ran a simulation just to show when Plan Type 1 does not exist in the USED_IN:
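
A quick way to reproduce that simulation is to generate every possible USED_IN value and apply the same MOD test:

SELECT LEVEL AS USED_IN,
       CASE WHEN MOD(LEVEL, 2) >= 1 THEN 1 ELSE 0 END AS HAS_PLAN_TYPE_1
FROM DUAL
CONNECT BY LEVEL <= 31 -- 31 = 1+2+4+8+16, the max possible USED_IN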

As we can see, Plan Type 1 only exists in the odd results (as expected), which means it doesn't exist in any combination of the other 4 plan types (2, 4, 2+4=6, 8, 8+2=10, 8+4=12, 8+2+4=14, 16, 16+2=18, 16+4=20, 16+2+4=22, 16+8=24, 16+8+2=26, 16+8+4=28, 16+8+2+4=30).

The same is true for the other plan types; you can try them out using MOD. Now, this works well, but there's a far easier and cleaner way to do exactly the same thing using the BITAND function.

The BITAND function treats its inputs and its output as vectors of bits; the output is the bitwise AND of the inputs. Basically, it performs the steps below:

  • Converts the inputs into binary.
  • Performs a standard bitwise AND operation on these two strings.
  • Converts the binary result back into decimal and returns the value.

OK, it may sound more complicated, but the good news is that using it is simpler than it sounds. The main difference between this function and the MOD approach is that MOD gives us a boolean-like result, while BITAND returns the value you asked for when it's there. Expanding my previous test using BITAND:

As you can see, with BITAND returning the number you asked for instead of 0 or 1, it becomes possible for us to join HSP_PLAN_TYPE with HSP_MEMBER using the USED_IN and the PLAN_TYPE columns inside the BITAND function as the join condition:
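
A sketch of that join (assuming HSP_PLAN_TYPE stores the plan type ID in a column like PLAN_TYPE and its name in TYPE_NAME; adjust to your repository):

SELECT O.OBJECT_NAME AS DIMENSION_NAME,
       PT.TYPE_NAME  AS PLAN_TYPE_NAME
FROM HSP_OBJECT O
JOIN HSP_MEMBER M     ON M.MEMBER_ID = O.OBJECT_ID
JOIN HSP_PLAN_TYPE PT ON BITAND(M.USED_IN, PT.PLAN_TYPE) = PT.PLAN_TYPE
WHERE O.OBJECT_TYPE = 2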

As you can see, this is a far better way to split the members by plan type. Now we can see that the Products dimension only exists in the Pnl plan type and that Entity exists in 4 different plan types. We don't need to worry about any mathematical formula to create all our MODs; we just need to join our plan type table using the BITAND of USED_IN by PLAN_TYPE.

The Consolidation is another place where you can use the exact same thing. Instead of using something like this:

  • DECODE(MOD(CONSOL_OP,8),'0','+','1','-','2','*','3','/','4','%','5','~','6','Never',CONSOL_OP) CON1
  • DECODE(MOD(TRUNC(CONSOL_OP/8),8),'0','+','1','-','2','*','3','/','4','%','5','~','6','Never',CONSOL_OP) CON2
  • DECODE(MOD(TRUNC(CONSOL_OP/64),8),'0','+','1','-','2','*','3','/','4','%','5','~','6','Never',CONSOL_OP) CON3
  • DECODE(MOD(TRUNC(CONSOL_OP/512),8),'0','+','1','-','2','*','3','/','4','%','5','~','6','Never',CONSOL_OP) CON4
  • DECODE(MOD(TRUNC(CONSOL_OP/4096),8),'0','+','1','-','2','*','3','/','4','%','5','~','6','Never',CONSOL_OP) CON5

You can create a parameter table with the values for each kind of consolidation and use BITAND with that info:

I hope this will be useful for you guys and see you soon.

ODTUG Women in Technology Applications open

Posted in ODTUG on February 14, 2020 by RZGiampaoli

Hey girls how are you?

The applications for the ODTUG #WIT Scholarship program are now open! They are looking for candidates to represent ODTUG and serve as role models & mentors for women in the technology industry. Learn more and apply HERE today.

It’s a very good opportunity to participate in this great community and as a bonus, earn a free pass to KScope 20 🙂

That’s all for today, I hope you guys enjoy.

See you soon!

Fragmented and Aggregated tables in OBIEE using ODI Part 5/5: Setting the OBIEE Repository

Posted in Hacking, OBIEE, ODI, ODI Architecture, Oracle, Oracle Database, Tips and Tricks on February 13, 2020 by RZGiampaoli

Hey guys, how are you?

Finally, we have arrived at the final chapter of the series Fragmented and Aggregated tables in OBIEE, and today we are talking about how to set up the OBIEE repository.

Just to make it easier for you to navigate this series, here are its parts:

Creating the tables: I’ll explain the advantages and how you can create the fragmented tables and also how to further improve performance using partitioning and sub partitioning.

Managing the partitions of the Fragmented/Aggregated tables: Here you’ll see how to manage all partitions using ODI.

Populating the Fragmented tables using ODI: Here you’ll find how to create generic components to load all fragmented tables we need to.

Populating the Aggregated tables using ODI: Here you’ll find how to create generic components to load all Aggregated tables we need to.

Setting the OBIEE Repository: Finally, we're going to set up the OBIEE repository to make use of all the tables.

This post is not intended to be a step-by-step guide on how to create an OBIEE repository for beginners or anything like that. My intent is to show the main things we need to do to make our infrastructure work in OBIEE. Also, I'm working in OBIEE 12c, but this will work the same way in OBIEE 11 too.

Let's start then from the beginning. After we import all the tables into our repository, the first thing we need to do is create the joins between the dimensions and the fact tables.

Right now, we have an important point to discuss about constraints. We can have the tables created with Primary Keys and Foreign Keys if you want, as well as not null and any other constraints you wish. The thing is, these constraints normally impact data load times negatively, and since we are using ODI, we can have ODI handle this kind of thing during the data load.

Instead of having a PK or an FK, we can have Flow Control in ODI checking the metadata before loading it. I always prefer this approach for the simple fact that ODI will generate an E$ table with all fallouts for me automatically, and this is very helpful for debugging.

In my case, I left the tables without any constraints or keys, so the first thing I need to do is join our entire star schema together. Since we have 18 tables, every table needs to be joined to all dimensions in the same way, except for the Period dimensions.

The Period dimensions will tell OBIEE which set of tables it needs to query. If a user does an analysis at the quarter level, with our design, OBIEE must query only the quarterly aggregated tables. That's why we have 3 Period dimensions, one for each level of aggregation.

For the DIM_PERIOD (the detailed dimension), we're going to join it with all the detail fact tables. As you can see, we joined it with the 3 "D" tables (BS, Income, PL2) and with the other 3 "E" tables (same as before).

For the DIM_PERIOD_MONTH, we're going to join it with all the monthly fact tables. As you can see, we joined it with the 3 "D" tables (at the "M" level) and with the other 3 "E" tables (also at the "M" level).

And for the DIM_PERIOD_QUARTER, we're going to join it with all the quarterly fact tables. As you can see, we joined it with the 3 "D" tables (at the "Q" level) and with the other 3 "E" tables (also at the "Q" level).

This is the first step to make OBIEE work with Aggregated tables. The second and last step we need to do is in the Business layer.

After we finish joining everything (if you have all FKs in place, you won't need to do the joins, OBIEE will load them for you), we can start doing our final settings in the Business layer. This layer is where we're going to tell OBIEE how to behave with the aggregated tables as well as the fragmented tables.

First, let's address the Period dimension. We'll drag and drop the most detailed dimension first (DIM_PERIOD) and then we'll drag and drop the other 2 Period dimensions on top of the first one. This will create 3 sources in that logical dimension.

If you click on each source, you'll see that OBIEE automatically maps the columns (by column name, so all columns must have the same name [case sensitive]).

As you can see, OBIEE maps the columns available in each dimension, making the Fiscal Quarter column, for example, have 3 different sources: one for DIM_PERIOD_QUARTER, another for DIM_PERIOD_MONTH and a last one for DIM_PERIOD.

The next thing we need to do is create a dimension hierarchy for the DIM_PERIOD logical table. This is the last step needed for OBIEE to decide which table it'll query depending on the analysis created. As I said before, if the user does an analysis at the quarter level, OBIEE will know, from the DIM_PERIOD dimension and the table sources, that the smallest table to query is DIM_PERIOD_QUARTER, because it is at the beginning of the drill path.

OBIEE knows from the design of the drill path that the Year level has fewer members than the Quarter level, and so on. That's how OBIEE decides which aggregated table it'll query.

The last thing we need to do is in the fact table, and it'll be done at the same time and in the same place where we set the fragmentation content. For the fact tables we'll do the same thing as for the Period: we'll drag any fact table first and then drag all the other 17 tables on top of it, like this:

As you can see, we have all sources under the same logical table and, the same way as with DIM_PERIOD, OBIEE will map all columns to the right source. In my case you can see that the detail sources have more columns than the aggregated sources (as expected).

At this point it's important to point out that OBIEE will always try to use the most aggregated table possible but, if a user does an analysis at quarter level and asks for a column that only exists in the detail table, OBIEE will be forced to query the detail level and ask the database to aggregate the data for us (making the query slower).

Now, we have only one more thing to do for our architecture to work. We need to define which fragmented table OBIEE will access depending on the Source System and the Account hierarchy name. To do that, we'll have to add a very simple setting (which can become very complex if we don't design it well) to the sources in the fact table.

Inside each source we have a tab called "Content", and in that tab we can specify some very important things:

First, we can/need to specify the logical level that will be used for each dimension in relation to the fact table. What I mean by that is, for example, for the detail table, every dimension will be using the detail (leaf) level of the dimensions, as we can see in the image above. For the monthly-level fact table, instead of the leaf level, we'll be using the monthly level of the Period dimension. That's the last piece of configuration for the aggregated tables. With this setting, OBIEE will know that, for that level of the dimension, it should use the fact source that has the logical level set as Month.

The second important thing we need to set in this tab is the fragmentation filter, and for that we have a field called Fragmentation Content. In this section we're going to use one or more dimensions to filter the content. What OBIEE does in this case is, depending on what is selected in the analysis, it'll pick one or more tables to query.

For example, in our case, when the Account HIER_NAME is equal to "BS" we want OBIEE to use only the BS tables; if it is "INCOME", use the INCOME tables; and lastly, if it is "PL2", it needs to use the PL2 tables.

It's nice to know that you don't need to have the column you want to use in the fact tables. For example, the HIER_NAME column is the highest level of the Account hierarchy and we don't have any information regarding it in the fact table. OBIEE just reads the filter and selects the right table.

Another very important point about the fragmentation content is that, in cases where you have more than one option, you need to write out all possible combinations for it to work properly. For example, if we are doing fragmentation with 2 dimensions, like we are doing, and dimension A has the values A, B and C and dimension B has values 1, 2 and 3, and the user can select more than 1 value, you need to do something like this:

(Dimension A = A and Dimension B = 1) or (Dimension A = A and Dimension B = 2) Or…..

You need to have all possible combinations because, with this setting, if you say something like Dimension A in (A, B, C), it will only be valid if the user selects all 3 values in the dashboard. If they select just A and B, this filter will not be used.

Then in our case, for simplicity, I had to create a UDA for the Source System; otherwise I would have had to create all possible combinations between HIER_NAME and Source System. So in my DIM_SOURCE_SYSTEM I have something like this:

As you can see, the UDA splits my Source Systems in the same way I split the data in the tables. In the E tables I have just EMC data and in the D tables I have DELL, DTC and STAT data. This allows me to use a simple filter in the Fragmentation Content, making our lives way easier.

The third important thing is that, in our case, since we can have 2 or more sources in an analysis at the same time (for example, the user can select the Source Systems Dell and EMC), we need to flag the option "This source should be combined with others at this same level".

This will make OBIEE ALWAYS create a UNION ALL between at least one D table and one E table. Even if the user selects just EMC, for example, we'll have the UNION ALL between tables at the same level (Month, for example) with the filter Source System = 'EMC', making the result set return just EMC data.

If we don't flag this option, OBIEE will never query 2 fragmented tables at the same time, and that's not what we want here.

Then basically we have 3 configurations to do in our 18 sources. It looks like a lot, but it's very simple in the end. I created a color code to try to make it easier for us to see all the configurations in our sources. Yellow is the configuration regarding the Source System, green is related to the Account HIER_NAME and red is regarding the level of the aggregated data.

As you can see, we have our 3 configurations combined in our 18 sources.

  • Period Aggregation:
    • For detail Fact table we assign the Leaf level of periods;
    • For Month Fact table we assign the Month level of periods;
    • For Quarter Fact table we assign the Quarter level of periods;
  • Account Fragmentation:
    • For BS Fact table we filter HIER_NAME = ‘BS’;
    • For INCOME Fact table we filter HIER_NAME = ‘INCOME’;
    • For PL2 Fact table we filter HIER_NAME = ‘PL2’;
  • For Source System Fragmentation:
    • For EMC Fact tables (E tables) we filter UDA = ‘E’;
    • For Dell, DTC and STAT Fact tables (D tables) we filter UDA = ‘D’;

And that's all we need to do to configure OBIEE for this architecture. It looks overwhelming but in fact it is very simple and very fast to do, and the performance gains are huge. With this approach I can query 15 quarters of data at the quarter level in 5 seconds. Billions of rows in 5 seconds; that's a lot.

One thing that I would like to mention is that the Business layer is normally where I rename all the columns to something more business friendly. In this case I decided to do a little test: I left all the names the same way they are in the Physical layer and created aliases in the Presentation layer instead. I did that for 2 very simple reasons. One is that it's easier to just drag and drop stuff from the Physical layer to the Business layer if everything has the same name; if things don't match, OBIEE duplicates columns and you need to drag and drop column over column, one by one, which is a lot of work. Second, because I wanted to test whether this approach is better than my old one or not.

I don't have an opinion about that yet. In fact, I could have renamed everything and, if I needed to expand to 36 tables for example, I could rename the columns back, do all the mappings and rename them again, so I'm not sure what the best approach is.

It was way more work to rename stuff in the Presentation layer because the Rename Wizard doesn't create aliases, so I had to manually rename column by column, and that's why I'm still not sure about this approach.

And this is the end of our Fragmented and Aggregated tables in OBIEE using ODI. I hope this is helpful and see you in my next post.

Fragmented and Aggregated tables in OBIEE using ODI Part 4/5: Populating the Aggregated tables

Posted in Hacking, OBIEE, ODI, ODI Architecture, Oracle, Oracle Database, Tips and Tricks, Uncategorized on February 12, 2020 by RZGiampaoli

Hey guys, how are you?

Continuing the series Fragmented and Aggregated tables in OBIEE using ODI, today we are talking about how to populate the aggregated tables using ODI.

Just to make it easier for you to navigate this series, here are its parts:

Creating the tables: I’ll explain the advantages and how you can create the fragmented tables and also how to further improve performance using partitioning and sub partitioning.

Managing the partitions of the Fragmented/Aggregated tables: Here you’ll see how to manage all partitions using ODI.

Populating the Fragmented tables using ODI: Here you’ll find how to create generic components to load all fragmented tables we need to.

Populating the Aggregated tables using ODI: Here you’ll find how to create generic components to load all Aggregated tables we need to.

Setting the OBIEE Repository: Finally, we're going to set up the OBIEE repository to make use of all the tables.

Today we are in the final step before we can work on our OBIEE repository to put all these 18 fact tables together: the data load for our aggregated tables.

The aggregation is in fact a very simple process: we just need to remove all the detailed columns we have in the detail fact table and leave just the ID columns. After that, we just need to reduce the level of the ID column we want to aggregate by and sum all the data at that new level.

In our case we're going to use the PERIOD_ID to do that, because period is the most common choice when we talk about aggregated tables and it serves well in most cases.

And that's why I designed the PERIOD_ID as YYYYQMMDD: it's very simple and easy to manipulate this number to go up or down a period, as well as to do ranges or even transform it back into a date. It's way easier than creating a surrogate key or anything else, even if you want to work with SCDs.

As you have probably already guessed by now, we'll use the Command on Source and Command on Target again to do the aggregations; this way we can have a single piece of code to spread the data throughout our aggregated fact tables.

In the Command on Source for the monthly-level table, we just need a query in the source that returns the name of the detailed table plus the name of the monthly table. Since I designed all tables with a specific name pattern, we can easily manipulate the table name to derive the monthly table from the detail table, like this:

We don't need anything fancy here because in the last post we created a procedure to call the same scenario six times, passing different parameters to it.

One of these parameters is the name of the fact table that needs to be loaded, and this information is stored in the variable #JAT_TABLE_NAME (already replaced by its value in the picture). What I have done there is just split the table name using REGEXP around the fourth and fifth occurrences of '_' and concatenate everything back, adding a _M_ in the middle of it, creating my monthly-level fact table name.
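
A sketch of that Command on Source query, assuming a hypothetical naming pattern where the level indicator sits right after the fourth '_' (adjust the expressions to your own pattern):

SELECT '#JAT_TABLE_NAME' AS JAT_TABLE_NAME,
       REGEXP_SUBSTR('#JAT_TABLE_NAME', '^([^_]+_){4}')          -- everything up to and including the 4th '_'
       || 'M_'                                                   -- inject the monthly level indicator
       || REGEXP_REPLACE('#JAT_TABLE_NAME', '^([^_]+_){4}', '')  -- everything after the 4th '_'
       AS JAT_M_TABLE_NAME
FROM DUAL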

With that, we have the detail table name that the scenario needs to load from and also the monthly-level fact table name that we need to use for that loop. We just need to create a query to aggregate the data, which is very straightforward.

As the query will change depending on the design of the tables, this one is just intended to explain what needs to be done, since the idea is very simple. First we replace the name of the table we wish to insert data into and the name of the table we wish to get the data from with our two variables: #JAT_TABLE_NAME, which we send when we call this scenario, and #JAT_M_TABLE_NAME, which we just created in the Command on Source tab.

With this, for each loop, the scenario will get the data from the right source table and insert it into the right aggregated table. Now we need to aggregate the data, which is also a very simple matter. All we need to do is join the detailed fact table with the period dimension and, because the table is range partitioned, we need to get the first and last dates of that month; that's why we have that MIN and MAX filtered by year and month.

With the right data filtered, we just need to aggregate the data and use the FISCAL_MONTH_ID instead of the PERIOD_ID; this way the data will be aggregated by month. And we are done here.
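
A sketch of what the Command on Target could look like (the measure and dimension columns, and the #YEAR/#MONTH variables, are hypothetical; the real query follows your own table design):

INSERT INTO #JAT_M_TABLE_NAME (FISCAL_MONTH_ID, ACCOUNT_ID, ENTITY_ID, AMOUNT)
SELECT P.FISCAL_MONTH_ID,
       F.ACCOUNT_ID,
       F.ENTITY_ID,
       SUM(F.AMOUNT)
FROM #JAT_TABLE_NAME F
JOIN DIM_PERIOD P ON P.PERIOD_ID = F.PERIOD_ID
-- restrict to the month being loaded so Oracle prunes to the right range partitions
WHERE F.PERIOD_ID BETWEEN (SELECT MIN(PERIOD_ID) FROM DIM_PERIOD WHERE FISCAL_YEAR = #YEAR AND FISCAL_MONTH = #MONTH)
                      AND (SELECT MAX(PERIOD_ID) FROM DIM_PERIOD WHERE FISCAL_YEAR = #YEAR AND FISCAL_MONTH = #MONTH)
GROUP BY P.FISCAL_MONTH_ID, F.ACCOUNT_ID, F.ENTITY_ID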

By the way, instead of using the BETWEEN to get the right range of the partition, we could have filtered the partition itself using explicit partition filtering:

This makes Oracle go straight to that partition. The only thing is that you need to pass the partition name, which in our case is very straightforward, especially because we are creating and managing the partitions ourselves. We could have Oracle create the partitions automatically using INTERVAL partitioning (a topic for another post), but if we do that Oracle will also generate partition names like SYS###### and that will make it harder to filter by partition. Anyway, this is just another option we have.

For the quarter level it’s even easier because we don’t need to worry about range partitions anymore. We just need to have our Command on Source return the Monthly level table name and our Quarterly level table name:

As you can see, the query is the same; the only difference is that we insert a '_Q_' in the middle of our string. And for the insert in the Command on Target tab we just need to, as before, replace the tables using the right variable in each case, join with DIM_PERIOD_MONTH to get the right level of data, filter the monthly-level table using either method and then use the QUARTER_ID to sum the monthly-level data up to the quarterly level, and that's it.

That's all we need to do to populate all the aggregated tables. We finally have all the data populated in our tables and now we can start to create the OBIEE repository. It's worth mentioning that if you summarize everything I said up to this point, we basically had to:

  • Create our tables;
  • Create one procedure to:
    • Create the partitions;
    • Truncate the partitions before we load data;
    • Call and loop the scenario that will load data;
    • Drop the old partitions
  • Create another procedure to:
    • Load detail data;
    • Load Monthly level data;
    • Load Quarterly level data;

What this means is that with just two procedures we can maintain and populate a very complex environment in a very clean way with very few points of failure.

This is a very elegant approach and, as I always say, if it is too complex and/or difficult, you are doing something wrong. It is just a matter of thinking about the design first, everything that needs to be done, and splitting it in a logical and reusable way. If you do that, everything will be simple to achieve using ODI.

I hope this series has been useful so far, and I'll see you in the last chapter.