Archive for Data Warehouse

Oracle SQL for EPM Tips and Tricks S01EP09

Posted in ACE, Hyperion Planning, Oracle, Oracle Database, Performance, Tips and Tricks on February 19, 2020 by RZGiampaoli

Hey guys, how are you?

Today I have a quick tip for those of you who like (or need) to query the Planning repository.

The Planning repository stores both the Plan Type and the Consolidation in a very particular way. In fact, this is true for a lot of other things, like security, form properties, etc., but I'll focus on these two since they are the most often used, and the solution is the same for all of them anyway.

If we take a look at the HSP_PLAN_TYPE table, we'll see something similar to this (depending on how many plan types you have in your app).
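
A quick way to take that look (I'm assuming here that the plan type name column is called TYPE_NAME; double-check the column names in your repository version):

SELECT PLAN_TYPE,
       TYPE_NAME
FROM HSP_PLAN_TYPE
ORDER BY PLAN_TYPE;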

As we can see, Planning stores in this table all the plan types that were created when we set up the application. In my case I have 4 plan types, and we can have up to 5 BSO plan types in a Planning app. Now, if we join HSP_OBJECT and HSP_MEMBER, filtering OBJECT_TYPE = 2, we can take a look at all the dimensions we set in the repository.
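
That join is a short one (a sketch; OBJECT_TYPE = 2 filters the dimension objects, and MEMBER_ID joins back to OBJECT_ID):

SELECT O.OBJECT_NAME AS DIM_NAME,
       M.USED_IN
FROM HSP_OBJECT O,
     HSP_MEMBER M
WHERE O.OBJECT_ID = M.MEMBER_ID
AND   O.OBJECT_TYPE = 2;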

The USED_IN column is the one that tells Planning in which plan types that member will exist. The interesting thing here is that you don't see the PLAN_TYPE ID that you were expecting, right? That is because a member can exist in more than one plan type, and if we used the PLAN_TYPE ID directly, we would need one row for each plan type.

Instead, we have just one row, but we still have the ability to tell Planning where that member should exist, and we do that by summing the PLAN_TYPE IDs together. In the example above, the Account dimension exists in all 4 plan types (1+2+4+8 = 15). The Products dimension exists in only one plan type (1), and by the number you can tell that it is the Pnl plan type.

As you can see, the idea here is to check whether a PLAN_TYPE ID exists inside the number we have in the USED_IN column. Another example is the Employee dimension, which has USED_IN set to 8. The only combination that fits here is 8 itself (1+2+4 = 7, 1+8 = 9…), meaning the plan type is WrkForce.

I think the most common way to figure out whether a number "exists inside" another number is to use MOD:

  • CASE WHEN MOD(USED_IN,2)>=1 THEN 1 ELSE 0 END PT_1
  • CASE WHEN MOD(USED_IN,4)>=2 THEN 1 ELSE 0 END PT_2
  • CASE WHEN MOD(USED_IN,8)>=4 THEN 1 ELSE 0 END PT_3
  • CASE WHEN MOD(USED_IN,16)>=8 THEN 1 ELSE 0 END PT_4
  • CASE WHEN MOD(USED_IN,32)>=16 THEN 1 ELSE 0 END PT_5
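
Putting those CASE expressions together with the dimension query from before, we get something like this:

SELECT O.OBJECT_NAME AS DIM_NAME,
       M.USED_IN,
       CASE WHEN MOD(M.USED_IN,2)>=1 THEN 1 ELSE 0 END PT_1,
       CASE WHEN MOD(M.USED_IN,4)>=2 THEN 1 ELSE 0 END PT_2,
       CASE WHEN MOD(M.USED_IN,8)>=4 THEN 1 ELSE 0 END PT_3,
       CASE WHEN MOD(M.USED_IN,16)>=8 THEN 1 ELSE 0 END PT_4,
       CASE WHEN MOD(M.USED_IN,32)>=16 THEN 1 ELSE 0 END PT_5
FROM HSP_OBJECT O,
     HSP_MEMBER M
WHERE O.OBJECT_ID = M.MEMBER_ID
AND   O.OBJECT_TYPE = 2;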

The Oracle MOD(N,M) function returns the remainder of a dividend divided by a divisor, where N is the dividend and M is the divisor.

Then, in our case, we need to test whether the USED_IN number contains the PLAN_TYPE ID in it, and for that we need to MOD it by the rolling sum of the PLAN_TYPE IDs up to the one being tested, plus 1. To make it easier, I'll put that in numbers:

  • N = USED_IN = 31 (the maximum possible number)
  • M = the PLAN_TYPE ID I want to test (1, Pnl) + 1 = 2
  • 31/2 = 15, remainder = 1
  • MOD(31, 2) = 1

What this is telling us is that if the MOD is = 1, Plan Type 1 exists in that number. I ran a simulation just to show when Plan Type 1 does not exist in the USED_IN:
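
You can reproduce that simulation with a quick CONNECT BY that generates every possible USED_IN value:

SELECT LEVEL AS USED_IN,
       CASE WHEN MOD(LEVEL,2)>=1 THEN 1 ELSE 0 END AS HAS_PT_1
FROM DUAL
CONNECT BY LEVEL <= 31;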

As we can see, Plan Type 1 only exists in the odd results (as expected), which means it does not exist in any possible combination of the other 4 plan types (2, 4, 2+4=6, 8, 8+2=10, 8+4=12, 8+2+4=14, 16, 16+2=18, 16+4=20, 16+2+4=22, 16+8=24, 16+8+2=26, 16+8+4=28, 16+8+2+4=30).

The same is true for the other plan types; you can try them out using MOD. Now, this works well, but there is an easier and cleaner way to do exactly the same thing using the BITAND function.

The BITAND function treats its inputs and its output as vectors of bits; the output is the bitwise AND of the inputs. Basically, it performs the steps below:

  • Converts the inputs into binary.
  • Performs a standard bitwise AND operation on the two binary values.
  • Converts the binary result back into decimal and returns the value.
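
A quick example of those steps in action:

SELECT BITAND(15, 4) AS RESULT
FROM DUAL;
-- 15 is 1111 and 4 is 0100 in binary; 1111 AND 0100 = 0100, which converts back to 4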

OK, it may sound more complicated, but the good news is that using it is simpler than it sounds. The main difference from the MOD approach is that the CASE over MOD returns a flag (0 or 1), while BITAND returns the very value you asked about when it is present. Expanding my previous test using BITAND:

As you can see, with BITAND returning the number you asked for instead of 0 or 1, it becomes possible to join HSP_PLAN_TYPE with HSP_MEMBER, using USED_IN and PLAN_TYPE inside the BITAND function as the join condition:
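
A sketch of that join (again assuming TYPE_NAME holds the plan type name):

SELECT O.OBJECT_NAME AS DIM_NAME,
       PT.TYPE_NAME  AS PLAN_TYPE_NAME
FROM HSP_OBJECT O,
     HSP_MEMBER M,
     HSP_PLAN_TYPE PT
WHERE O.OBJECT_ID = M.MEMBER_ID
AND   O.OBJECT_TYPE = 2
AND   BITAND(M.USED_IN, PT.PLAN_TYPE) = PT.PLAN_TYPE;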

As you can see, this is a far better way to split the members by plan type. Now we can see that the Products dimension only exists in the Pnl plan type and that Entity exists in 4 different plan types. We don't need to worry about any mathematical formula to create all our MODs; we just need to join our plan type table using BITAND of USED_IN by PLAN_TYPE.

The Consolidation is another place where you can use exactly the same thing. Instead of using something like this:

  • DECODE(MOD(CONSOL_OP,8),'0','+','1','-','2','*','3','/','4','%','5','~','6','Never',CONSOL_OP) CON1
  • DECODE(MOD(TRUNC(CONSOL_OP/8),8),'0','+','1','-','2','*','3','/','4','%','5','~','6','Never',CONSOL_OP) CON2
  • DECODE(MOD(TRUNC(CONSOL_OP/64),8),'0','+','1','-','2','*','3','/','4','%','5','~','6','Never',CONSOL_OP) CON3
  • DECODE(MOD(TRUNC(CONSOL_OP/512),8),'0','+','1','-','2','*','3','/','4','%','5','~','6','Never',CONSOL_OP) CON4
  • DECODE(MOD(TRUNC(CONSOL_OP/4096),8),'0','+','1','-','2','*','3','/','4','%','5','~','6','Never',CONSOL_OP) CON5

You can create a parameter table with the values for each kind of consolidation and use BITAND with that info:
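
A sketch of that idea (the CONSOL_PARAM table below is hypothetical, just to illustrate the parameter table; since CONSOL_OP packs one operator per 3-bit slot, we isolate each slot with BITAND and join the result to the parameters):

WITH CONSOL_PARAM AS
(
SELECT 0 AS OP_CODE, '+' AS OP_SYMBOL FROM DUAL UNION ALL
SELECT 1, '-' FROM DUAL UNION ALL
SELECT 2, '*' FROM DUAL UNION ALL
SELECT 3, '/' FROM DUAL UNION ALL
SELECT 4, '%' FROM DUAL UNION ALL
SELECT 5, '~' FROM DUAL UNION ALL
SELECT 6, 'Never' FROM DUAL
)
SELECT O.OBJECT_NAME AS MEMBER_NAME,
       CP.OP_SYMBOL  AS CON1
FROM HSP_OBJECT O,
     HSP_MEMBER M,
     CONSOL_PARAM CP
WHERE O.OBJECT_ID = M.MEMBER_ID
AND   BITAND(M.CONSOL_OP, 7) = CP.OP_CODE;
-- for the next slots, shift first: BITAND(TRUNC(M.CONSOL_OP/8), 7) for CON2, BITAND(TRUNC(M.CONSOL_OP/64), 7) for CON3, and so on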

I hope this will be useful for you guys, and see you soon.


Building dynamic ODI code using Oracle metadata dictionary

Posted in Data Warehouse, ODI, ODI 12c on July 27, 2018 by Rodrigo Radtke de Souza

Hi all! Today's post will be about how ODI can be used to generate any kind of SQL statement using Oracle metadata tables. We always like to say that ODI is way more than just an ETL tool and that people need to start thinking about ODI as a full development platform, where you may create any kind of code that you wish. Today I'll describe how we may create a simple (but dynamic) merge statement between two similar tables using an ODI procedure that reads from the ALL_CONSTRAINTS, ALL_CONS_COLUMNS, and ALL_TAB_COLS Oracle tables to figure out what to do.

This is the scenario that we will be working on: we have several stage tables that are truncated and loaded every day with daily records from a source system. We have another set of tables that are used to store all the historical information, and the process uses the stage tables as sources, merging their data against the historical tables using their primary key. This is very common in a lot of places where we have a stage data layer that stores daily data pulls and then a "base" data layer that stores the historical data. In the scenario that we will describe here, both the source and target sets of tables have very similar structures, with the same column names, data types, and so on.

Using the conventional ODI development process, we would need to create one mapping object for each source->target pair, so if we have 50 sources that need to be merged against 50 targets, we would need to create 50 ODI mappings. Since the source->target tables are similar in this case, we may be smarter and create an ODI process that receives a table name as an input parameter (in this case the target table name) and creates the merge statement against those two tables in a dynamic way using the Oracle metadata dictionary.

For those that are not familiar with the Oracle metadata dictionary, it's nothing more than a set of tables that exist in the Oracle database and contain information about its existing components: which tables exist, which columns they have, what their data types are, and so on. This is a great resource that ODI may read from to build generic code out of its results. Let's see what it looks like with a real example.

Imagine that you have two tables with the following structure:

As you can see, our base table is almost the same as our stage table; the only difference is that it contains two additional columns named INSERT_DTTM and UPDATE_DTTM, which will be used as "control columns" to identify when that data was inserted/updated in our base table. For ODI to figure out which columns are present in which table, we may query ALL_TAB_COLS in Oracle, filtering on the table name, as below:

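Something along these lines (the owner and table name here come from the example in the final SQL statement below):

SELECT COLUMN_NAME,
       DATA_TYPE
FROM ALL_TAB_COLS
WHERE OWNER = 'SCHEMA_A'
AND   TABLE_NAME = 'BASE_TABLE_A'
ORDER BY COLUMN_ID;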

This shows us all the columns that this table contains. Similarly, if we query ALL_CONSTRAINTS and ALL_CONS_COLUMNS, we may get all the table constraints (like the primary key) with all of their associated columns:

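Again, a sketch of that query:

SELECT AC.CONSTRAINT_NAME,
       AC.CONSTRAINT_TYPE,
       ACC.COLUMN_NAME
FROM ALL_CONSTRAINTS AC,
     ALL_CONS_COLUMNS ACC
WHERE AC.OWNER = ACC.OWNER
AND   AC.TABLE_NAME = ACC.TABLE_NAME
AND   AC.CONSTRAINT_NAME = ACC.CONSTRAINT_NAME
AND   AC.OWNER = 'SCHEMA_A'
AND   AC.TABLE_NAME = 'BASE_TABLE_A'
AND   AC.CONSTRAINT_TYPE = 'P';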

With those two sets of data, we may create a SQL statement that builds our dynamic merge statement. To make it easier, I'll show you the final SQL statement now, divided into two pieces, and then I'll explain each of them:

WITH TABLE_PARAMS AS
(
SELECT 
    'BASE_TABLE_A' AS TABLE_NAME,
    'SCHEMA_A' AS TABLE_OWNER
FROM DUAL
),
TABLE_PK AS
(
SELECT
    ACC.OWNER, 
    ACC.TABLE_NAME, 
    ACC.COLUMN_NAME
FROM  ALL_CONSTRAINTS AC, 
      ALL_CONS_COLUMNS ACC,
      TABLE_PARAMS 
WHERE 1=1
AND AC.OWNER = ACC.OWNER
AND AC.TABLE_NAME = ACC.TABLE_NAME
AND AC.OWNER = TABLE_PARAMS.TABLE_OWNER
AND AC.TABLE_NAME = TABLE_PARAMS.TABLE_NAME
AND AC.CONSTRAINT_NAME = ACC.CONSTRAINT_NAME
AND AC.CONSTRAINT_TYPE = 'P'
)
,
MAIN_TAB_COLS AS
(
SELECT 
    ATC.OWNER,
    ATC.TABLE_NAME,
    ATC.COLUMN_NAME
FROM ALL_TAB_COLS ATC,
     TABLE_PARAMS 
WHERE 1=1
AND ATC.TABLE_NAME = TABLE_PARAMS.TABLE_NAME
AND ATC.OWNER = TABLE_PARAMS.TABLE_OWNER
AND ATC.COLUMN_NAME NOT IN ('INSERT_DTTM','UPDATE_DTTM')
AND ATC.COLUMN_NAME NOT IN (SELECT COLUMN_NAME FROM TABLE_PK)
)
SELECT
    MTC.TABLE_NAME AS TARGET_TABLE,
    REPLACE(MTC.TABLE_NAME,'BASE_','STG_') AS SOURCE_TABLE,
    PK_ST_LIST,
    PK_S_LIST||','||(LISTAGG('S.'||MTC.COLUMN_NAME ,',') WITHIN GROUP (ORDER BY MTC.COLUMN_NAME)) || ',SYSDATE,SYSDATE' AS TABLE_S,
    PK_T_LIST||','||(LISTAGG('T.'||MTC.COLUMN_NAME ,',') WITHIN GROUP (ORDER BY MTC.COLUMN_NAME)) || ',T.INSERT_DTTM,T.UPDATE_DTTM' AS TABLE_T,
    LISTAGG ('T.'||MTC.COLUMN_NAME||'=S.'||MTC.COLUMN_NAME , ',')  WITHIN GROUP (ORDER BY MTC.COLUMN_NAME ) AS ST_COLS
FROM MAIN_TAB_COLS MTC, 
    (SELECT 
        TP.OWNER,
        TP.TABLE_NAME,
        LISTAGG ('T.'||TP.COLUMN_NAME||'=S.'||TP.COLUMN_NAME , ' AND ')  WITHIN GROUP (ORDER BY TP.COLUMN_NAME ) PK_ST_LIST,
        LISTAGG ('S.'||TP.COLUMN_NAME, ',')  WITHIN GROUP (ORDER BY TP.COLUMN_NAME ) PK_S_LIST,
        LISTAGG ('T.'||TP.COLUMN_NAME, ',')  WITHIN GROUP (ORDER BY TP.COLUMN_NAME ) PK_T_LIST
    FROM TABLE_PK TP
    GROUP BY 
        TP.OWNER,
        TP.TABLE_NAME
    ) TP
WHERE 1=1
AND MTC.OWNER = TP.OWNER
AND MTC.TABLE_NAME = TP.TABLE_NAME
GROUP BY
    MTC.OWNER,
    MTC.TABLE_NAME,
    PK_ST_LIST,
    PK_S_LIST,
    PK_T_LIST;

The first piece of the SQL contains a WITH clause with three sections:

  • TABLE_PARAMS: used to "receive" the table name parameter that will be sent by ODI;
  • TABLE_PK: used to figure out which is the table's PK, which will be used to do the "merge" in our target table;
  • MAIN_TAB_COLS: used to retrieve all columns that exist in the table, removing any kind of control column (in this case INSERT_DTTM and UPDATE_DTTM) and any column that is already present in the PK columns;

The second piece is the main one, where we use the three subselects from the WITH section to build what we need. In this case, it returns the following columns:

  • TARGET_TABLE: name of the target table that will be merged;
  • SOURCE_TABLE: name of the source table that will be used as the source of the merge process;
  • PK_ST_LIST: PK column list that will be used in the merge process;
  • TABLE_S: column names from the source table;
  • TABLE_T: column names from the target table;
  • ST_COLS: combination of source and target columns for the update part inside the merge;

When we run the SQL for our tables in this example, we get one row per target table, with TARGET_TABLE, SOURCE_TABLE, PK_ST_LIST, TABLE_S, TABLE_T, and ST_COLS already assembled.


Now we have all the information that we need to create a dynamic merge statement for any set of similar tables, but how do we use it in ODI? This is very simple with one of the best features that ODI has (if you read our blog, you know that we just love it): command on source/target. Let's create a procedure and add our SQL statement in the command on source tab.


In our command on target tab, we will add the merge statement itself.

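A sketch of that merge template, built from the columns returned by the command on source (the # syntax assumes ODI's variable substitution of the returned values):

MERGE INTO #TARGET_TABLE T
USING #SOURCE_TABLE S
ON (#PK_ST_LIST)                -- e.g. T.KEY1=S.KEY1 AND T.KEY2=S.KEY2
WHEN MATCHED THEN UPDATE SET
    #ST_COLS,                   -- e.g. T.COL1=S.COL1,T.COL2=S.COL2
    T.UPDATE_DTTM = SYSDATE
WHEN NOT MATCHED THEN INSERT
    (#TABLE_T)                  -- target column list, ending in T.INSERT_DTTM,T.UPDATE_DTTM
VALUES
    (#TABLE_S)                  -- source column list, ending in SYSDATE,SYSDATE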

As you can see, this SQL contains a lot of variables. These variables will be populated at runtime with the return values from the SQL in the command on source. This way, we don't need to worry about creating 50 mappings to do 50 merge processes. Instead, we have one procedure that receives a table name as a parameter and builds the necessary SQL accordingly. Let's see what it looks like in an ODI package.


It's a very simple package that receives a table name as a parameter and then builds/runs the dynamic merge SQL. This package can be called by an external package that runs it N times with different table names (like doing 50 table merges with one single procedure). Of course, this was just one example of a simple merge task, but it shows the main idea of having ODI build the code for you. You may add more tasks to your procedure to create temp tables, gather statistics, and so on. There are almost no limits on what you may do using this kind of technique.

I hope you have enjoyed! See ya!

Kscope 17 is approaching fast!!! And we’ll be there!

Posted in ACE, Data Warehouse, Essbase, Hyperion Essbase, Java, Kscope 17, ODI, ODI Architecture, Oracle, Performance, Tips and Tricks, Uncategorized on June 8, 2017 by RZGiampaoli

Hi guys, how are you? We are sorry for being away for so much time, but this year we have a lot of exciting things going on, so let's start with what we'll be doing at Kscope 17!

This year we’ll present 2 sessions:

Essbase Statistics DW: How to Automatically Administrate Essbase Using ODI (Jun 28, 2017, Wednesday Session 12, 9:45 am – 10:45 am)

In order to have a well-performing Essbase cube, we must keep vigilant, following up on its growth and its data movements so we can distribute caches and adjust the database parameters accordingly. But this is a very difficult task to achieve, since Essbase statistics are not temporal and only tell you what the cube statistics are at that specific moment.

This session will present how ODI can be used to create a historical statistical DW containing the Essbase cubes' information, and how to identify trends and patterns, giving us the ability to programmatically tune our Essbase databases automatically.

And…

Data Warehouse 2.0: Master Techniques for EPM Guys (Powered by ODI) (Jun 26, 2017, Monday Session 2, 11:45 am – 12:45 pm)

EPM environments are generally supported by a Data Warehouse; however, we often see that those DWs are not optimized for the EPM tools. Over the years, we have witnessed that modeling a DW with the EPM tools in mind may greatly increase the overall architecture performance.

The most common situation found in several projects is that the people who develop the data warehouse do not have great knowledge of EPM tools and vice versa. This may create a big gap between those two concepts which may severely impact performance.

This session will show a lot of techniques for modeling the right Data Warehouse for EPM tools. We will discuss how to improve performance using partitioned tables, create hierarchical queries with "Connect by Prior", the correct way to use multi-period tables for block data loads using Pivot/Unpivot, and more. And if you want to go even further, we will show you how to leverage all those techniques using ODI, which creates the perfect mix to perform any process between your DW and EPM environments.

In these presentations you can expect a lot of technical content, some very good tips, and some very good ideas to improve your EPM environment!

Also, I'll be graduating from this year's leadership program, and this year we'll be all over the place with the K-Team, a special team created to make the newcomers feel more welcome and help them get the most out of Kscope.

Also, Rodrigo will be at the Tuesday Lunch and Learn for the EPM Data Integration track in Cibolo 2/3/4.

And of course, we will be around having fun and gathering new ideas for next year!!!

And last but not least, this year we'll have a friend of ours making his first appearance at Kscope with the presentation OBIEE Going Global! Getting Ready for More Than +140k Users (Jun 26, 2017, Monday Session 4, 3:15 pm – 4:15 pm).

A standard Oracle Business Intelligence (OBIEE) reporting application can hold more or less 1,200 users. This may be a reasonable number of users for the majority of companies out there, but what happens when an IT leader like Dell decides to acquire another IT giant like EMC, and all of their combined 140,000-plus users need to have access to an HR OBIEE instance? What does that setup look like? What kind of architecture do we need to support those users in a fast and reliable way?
This session shows the complexity of Dell's OBIEE environment, describing all the processes and steps performed to create such an environment, meeting the most varied needs from business demands and L2 support, always aiming to improve environment stability. This architecture relies on a range of different technologies to support that huge number of end users, such as LDAP, Kerberos, SSO, SSL, BigIP, shared folders using NAS, and WebLogic running in a cluster across 4 application servers.
If the challenge was not hard enough already, all of this setup also needed to consider Dell's legacy OBIEE upgrade from v11.1.1.6.9 to v11.1.1.7.160119, so we will explain what the pain points were and the considerations and orchestration needed to do all of this in parallel.

Thank you guys and see you there!
