Archive for the Data Warehouse Category

Oracle SQL for EPM Tips and Tricks S01EP11

Posted in ACE, Data Warehouse, Hacking, ODI, ODI 10g, ODI 11g, ODI 12c, Oracle, Oracle Database on March 25, 2020 by RZGiampaoli

Hey guys, how are you?

Today I’ll post something that is very simple but very useful, especially when working with ODI.

When we work with a partitioned table, we know that if we filter it by the partition column, Oracle will use just that partition as the source of data. But what if we are doing an INSERT, UPDATE, or MERGE?

There’s another way to explicitly refer to a partition and make sure Oracle works only inside that one: defining it in the FROM clause.

For example, if I want to query the partition “DELL_BALANCES_FY20_FEB”, I can query:
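Something along these lines (the table name DELL_BALANCES and the PARTITION_KEY column are illustrative names standing in for the original screenshot):

-- The partition name goes inside PARTITION (...), not quoted as a string
SELECT DISTINCT T.PARTITION_KEY
FROM DELL_BALANCES PARTITION (DELL_BALANCES_FY20_FEB) T;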

As we can see, after the table name I specified PARTITION (DELL_BALANCES_FY20_FEB), putting the partition name inside the parentheses (don’t specify it as a string). That makes Oracle run the DISTINCT over the rows of that partition only, and my DISTINCT of the PARTITION_KEY returns a single result, as expected. (This clause needs to come before the table alias.)

If we are doing an INSERT, UPDATE, or MERGE, the idea is the same:
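Sketching it with the same illustrative names, a MERGE pinned to the partition could look like this (STG_BALANCES, ACCOUNT_ID, and BALANCE are hypothetical):

MERGE INTO DELL_BALANCES PARTITION (DELL_BALANCES_FY20_FEB) T
USING STG_BALANCES S
ON (T.ACCOUNT_ID = S.ACCOUNT_ID AND T.PARTITION_KEY = S.PARTITION_KEY)
WHEN MATCHED THEN
    UPDATE SET T.BALANCE = S.BALANCE
WHEN NOT MATCHED THEN
    INSERT (T.ACCOUNT_ID, T.PARTITION_KEY, T.BALANCE)
    VALUES (S.ACCOUNT_ID, S.PARTITION_KEY, S.BALANCE);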

This way we can make sure, especially in a MERGE, that Oracle will be working on the right partition of the target table.

And it’s especially useful with ODI because, when we use ODI, we always know which partition we want to query or insert data into, so we can always bind Oracle to a specific partition and make sure it stays there.

I hope this is helpful, and see you soon.

Fragmented and Aggregated tables in OBIEE using ODI Part 3/5: Populating the Fragmented tables

Posted in ACE, Data Warehouse, Hacking, OBIEE, ODI, ODI Architecture, Oracle, Oracle Database, Query, SQL on February 11, 2020 by RZGiampaoli

Hey guys, how are you?

Continuing the series Fragmented and Aggregated tables in OBIEE using ODI, today we are talking about how to populate the fragmented tables using ODI.

To make it easier for you to navigate this series, here are its parts:

Creating the tables: I’ll explain the advantages of the fragmented tables, how you can create them, and how to further improve performance using partitioning and subpartitioning.

Managing the partitions of the Fragmented/Aggregated tables: Here you’ll see how to manage all partitions using ODI.

Populating the Fragmented tables using ODI: Here you’ll find how to create generic components to load all the fragmented tables we need.

Populating the Aggregated tables using ODI: Here you’ll find how to create generic components to load all the aggregated tables we need.

Setting the OBIEE Repository: Finally, we’ll set up the OBIEE repository to make use of all the tables.

In my previous posts we designed our 18 tables and built the partition management process. Now we’ll talk about how to load data into our fragmented tables. As you can imagine, the code will be the same for all 6 fragmented tables; the only difference is which data goes to each table. So what we need ODI to do for us is to dynamically filter and load data that comes from the same query into different tables, based on a rule.

If you are following this series from the beginning, you probably already guessed what we’ll need to do here: the same thing we did for the partition management process. Well, almost the same thing, because this time we’ll do something a little more optimized for the data load, since this is the most time-consuming step.

The limitation of the approach I explained in the previous post is that it loops one statement for each row returned from the Command on Source, in SERIAL, meaning each iteration has to finish before the next one starts. For the partition management that’s fine, because everything there is very fast, but for this load we can have something better.

We don’t need, and don’t want, to wait for a serial execution, because we have one table for each loop we’ll perform. (And even if we had only one table, as long as it has partitions and/or subpartitions we could loop the code at the partition/subpartition level and load everything in parallel. The only case where we can’t load in parallel is a single table without partitions.)

In our case, we’ll loop the load of all 6 tables in parallel. To do that, we use the same approach as before, but instead of having the ALTER TABLE in the Command on Target, we’ll have an OdiStartScen call, which calls a scenario multiple times in PARALLEL. Let’s take a look at the Command on Target:
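A sketch of what it could look like (scenario, project, and variable names here are placeholders; the #-prefixed values are filled from the Command on Source):

OdiStartScen "-SCEN_NAME=LOAD_FRAG_TABLE" "-SCEN_VERSION=-1" "-SYNC_MODE=2" "-SESSION_NAME=LOAD_#TABLE_NM" "-MY_PROJECT.TABLE_NM=#TABLE_NM" "-MY_PROJECT.PERIOD=#PERIOD" "-MY_PROJECT.HIERARCHY=#HIERARCHY"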

As we can see, the Command on Target contains an ODI tool called OdiStartScen, which is used to call a scenario. We have a few parameters there controlling how the scenario is called; in this case, the important one is SYNC_MODE=2, which means it will execute everything in parallel (1 is serial). Everything else is variables that we use to pass information from the Command on Source to the called scenario.

A quick tip before we continue: if you are wondering how on earth I keep all these settings in my head, I don’t, but there’s a very easy way to find them out. Just drop an OdiStartScen from the ODI Toolbox into any part of your package and fill it with the information you need:

In the General tab we set the scenario we want to call, the version of the scenario (-1 is always the latest version you have), the Synchronous/Asynchronous mode (serial or parallel), and the Session name, in case you want a different name showing in your execution (always a good idea, since you are looping the same code with different parameters; in my case I pass as variables the name of the table, the period, and the hierarchy that the scenario will be loading).

In the Additional Variables tab you can set all the variables you need to use inside the scenario (as you can see, in my case I have a lot of variables because of the complexity of my sources; your case will be different).

And after you set everything, you just need to click on the Command tab to get the code that ODI generates based on the settings you chose:

That generated code is what you need to paste into the Command on Target to call the scenario. After that, you can delete the OdiStartScen object from your package and continue the development. Another advantage is that this works both ways: if you have the code and want to create an OdiStartScen step in a package, you can paste it into the Command tab and the General/Additional Variables tabs will be filled in based on the code.

With the target code ready, we just need a query that will pass all the information needed to call the scenario. In my case, I pass a lot of information to the target, as you can see by the number of variables I have. Also, the variable values don’t need to come only from the Command on Source: if you have variables in your package and they are the same for all the loops, you can just use those variables to pass their values to the scenario the Command on Target is calling. The Command on Source only needs the information that will dynamically change in each loop.

In my case I’m passing all kinds of information, like the SESSION_NAME (the name of the session, which we use to create session-specific parameters while reusing the same variables everywhere; more information HERE), but the important one for us is the TABLE_NM.

In my case, the other variables change value depending on which table we need to load in each iteration. To handle that, I insert all the information that needs to be passed to the called scenario into a parameter table (which I like to call ODI_PARAMETERS). This way I can easily query this table to get the specific information I need for each loop I’m going to perform.
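As a rough sketch, assuming an ODI_PARAMETERS table flattened into one row per table to load (the real structure can be whatever suits you; PROCESS_NM and the other columns are hypothetical), the Command on Source could be as simple as:

-- One row per fragmented table to be loaded
SELECT TABLE_NM,
       PERIOD,
       HIERARCHY
FROM ODI_PARAMETERS
WHERE PROCESS_NM = 'LOAD_FRAGMENTED_TABLES'

Each row this query returns triggers one parallel OdiStartScen call with that row’s values.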

With all this set, we just need to create a child scenario and add “Declare Variable” steps for each variable we want to pass to the internal scenario, like this:

As you can see, this is a very simple scenario that contains just one procedure, which will be used to load the data into our tables.

This approach works with interfaces as well; in fact, with anything at all. We can have a main scenario that loops a very complex scenario, which loops another scenario, and so on. This is a very nice way to loop something because we have a lot of control over it: we can change the behavior of each iteration based on the info we get from the Command on Source.

Also, a very good feature is that, if the source query doesn’t return any rows, the Command on Target is not executed at all, so you always have a clean execution. For example, if you have a folder that can contain 0 to X files and you want to create a scenario to load them, the best approach would be to read the folder’s file list (an OS command writing the listing to a file that we then load, or Java, or Python, or…) and store the info in our ODI parameter table. Then we create a procedure to call a scenario containing the interface that loads each file, and finally we just do a SELECT in the Command on Source to get the names of the files in the folder. With that, if we have 0 files, nothing is executed, but if we have 1,000 files, we loop the scenario 1,000 times, each time passing the name of a different file to the internal scenario. A very neat way to do it.

Inside the procedure we just need to create the query needed to load the data, using the variables to filter the right data and load the right table. This is not my case, but if you have a different source for each table you want to load, just insert that information in the parameter table and pass it to the internal scenario as well. Use those variables to replace the table names in the SQL and you are done.
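As an illustrative sketch (table, column, and variable names are hypothetical), the code inside that procedure could look like this, with the ODI variables selecting the target table and filtering the source rows at runtime:

-- #TABLE_NM, #PERIOD and #HIERARCHY are resolved by ODI before execution
INSERT INTO #TABLE_NM
SELECT S.*
FROM STG_BALANCES S
WHERE S.PERIOD = '#PERIOD'
AND S.HIERARCHY_NM = '#HIERARCHY'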

And that’s it. This is all we need to do to load all our 6 fragmented tables. In my next post I’ll be talking about how we can aggregate our data and load our Aggregated tables.

I hope you guys enjoyed it, and see you soon.

Building dynamic ODI code using Oracle metadata dictionary

Posted in Data Warehouse, ODI, ODI 12c on July 27, 2018 by radk00

Hi all! Today’s post will be about how ODI can be used to generate any kind of SQL statement using the Oracle metadata dictionary. We always like to say that ODI is way more than just an ETL tool, and that people need to start thinking of ODI as a full development platform, where you may create any kind of code you wish. Today I’ll describe how we may create a simple (but dynamic) merge statement between two similar tables using an ODI procedure that reads from ALL_CONSTRAINTS, ALL_CONS_COLUMNS, and ALL_TAB_COLS to figure out what to do.

This is the scenario that we will be working on: we have several stage tables that are truncated and loaded every day with daily records from a source system. We have another set of tables that are used to store all the historical information, and the process uses the stage tables as sources, merging their data against the historical tables using their primary key. This is very common in a lot of places where we have a stage data layer that stores daily data pulls and then a “base” data layer that stores the historical data. In the scenario described here, both the source and target sets of tables have very similar structures, with the same column names, data types, and so on.

Using the conventional ODI development process, we would need to create one mapping object for each source->target pair, so if we have 50 sources that need to be merged against 50 targets, we would need to create 50 ODI mappings. Since the source->target pairs are similar in this case, we may be smarter and create an ODI process that receives a table name as an input parameter (in this case the target table name) and builds a merge statement against those two tables dynamically using the Oracle metadata dictionary.

For those that are not familiar with the Oracle metadata dictionary, it’s nothing more than a set of views that exist in the Oracle database and contain information about its existing components: which tables exist, which columns they have, what their data types are, and so on. This is a great resource that ODI may read from to build generic code. Let’s see what it looks like with a real example.

Imagine that you have two tables with the following structure:
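As an illustration, the pair could look like the sketch below; every column except the two control columns is a hypothetical placeholder (the table names match the ones used in the final SQL further down):

CREATE TABLE SCHEMA_A.STG_TABLE_A
(
    KEY_COLUMN  NUMBER,
    ATTRIBUTE_1 VARCHAR2(30),
    METRIC_1    NUMBER
);

-- Same structure as the stage table, plus the two control columns
CREATE TABLE SCHEMA_A.BASE_TABLE_A
(
    KEY_COLUMN  NUMBER,
    ATTRIBUTE_1 VARCHAR2(30),
    METRIC_1    NUMBER,
    INSERT_DTTM DATE,
    UPDATE_DTTM DATE
);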

As you can see, our base table is almost the same as our stage table; the only difference is that it contains 2 additional columns named INSERT_DTTM and UPDATE_DTTM, which are used as “control columns” to identify when the data was inserted/updated in our base table. For ODI to figure out which columns are present in which table, we may query ALL_TAB_COLS in Oracle filtering on the table name, as below:

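In essence the query is just a lookup against the dictionary view, along these lines:

SELECT COLUMN_NAME,
       DATA_TYPE
FROM ALL_TAB_COLS
WHERE OWNER = 'SCHEMA_A'
AND TABLE_NAME = 'BASE_TABLE_A';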

This shows us all the columns that the table contains. Similarly, if we query ALL_CONSTRAINTS and ALL_CONS_COLUMNS, we may get all the table constraints (like the primary key) with all their associated columns:

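A sketch of that lookup (joining the two views on owner, table, and constraint name, and filtering for the primary key):

SELECT AC.CONSTRAINT_NAME,
       AC.CONSTRAINT_TYPE,
       ACC.COLUMN_NAME
FROM ALL_CONSTRAINTS AC,
     ALL_CONS_COLUMNS ACC
WHERE AC.OWNER = ACC.OWNER
AND AC.TABLE_NAME = ACC.TABLE_NAME
AND AC.CONSTRAINT_NAME = ACC.CONSTRAINT_NAME
AND AC.OWNER = 'SCHEMA_A'
AND AC.TABLE_NAME = 'BASE_TABLE_A'
AND AC.CONSTRAINT_TYPE = 'P';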

With those two sets of data, we may create a SQL statement that builds our dynamic merge statement. To make it easier, I’ll show you the final SQL statement now, divided into two pieces, and then I’ll explain each of them:

WITH TABLE_PARAMS AS
(
SELECT 
    'BASE_TABLE_A' AS TABLE_NAME,
    'SCHEMA_A' AS TABLE_OWNER
FROM DUAL
),
TABLE_PK AS
(
SELECT
    ACC.OWNER, 
    ACC.TABLE_NAME, 
    ACC.COLUMN_NAME
FROM  ALL_CONSTRAINTS AC, 
      ALL_CONS_COLUMNS ACC,
      TABLE_PARAMS 
WHERE 1=1
AND AC.OWNER = ACC.OWNER
AND AC.TABLE_NAME = ACC.TABLE_NAME
AND AC.OWNER = TABLE_PARAMS.TABLE_OWNER
AND AC.TABLE_NAME = TABLE_PARAMS.TABLE_NAME
AND AC.CONSTRAINT_NAME = ACC.CONSTRAINT_NAME
AND AC.CONSTRAINT_TYPE = 'P'
)
,
MAIN_TAB_COLS AS
(
SELECT 
    ATC.OWNER,
    ATC.TABLE_NAME,
    ATC.COLUMN_NAME
FROM ALL_TAB_COLS ATC,
     TABLE_PARAMS 
WHERE 1=1
AND ATC.TABLE_NAME = TABLE_PARAMS.TABLE_NAME
AND ATC.OWNER = TABLE_PARAMS.TABLE_OWNER
AND ATC.COLUMN_NAME NOT IN ('INSERT_DTTM','UPDATE_DTTM')
AND ATC.COLUMN_NAME NOT IN (SELECT COLUMN_NAME FROM TABLE_PK)
)
SELECT
    MTC.TABLE_NAME AS TARGET_TABLE,
    REPLACE(MTC.TABLE_NAME,'BASE_','STG_') AS SOURCE_TABLE,
    PK_ST_LIST,
    PK_S_LIST||','||(LISTAGG('S.'||MTC.COLUMN_NAME ,',') WITHIN GROUP (ORDER BY MTC.COLUMN_NAME)) || ',SYSDATE,SYSDATE' AS TABLE_S,
    PK_T_LIST||','||(LISTAGG('T.'||MTC.COLUMN_NAME ,',') WITHIN GROUP (ORDER BY MTC.COLUMN_NAME)) || ',T.INSERT_DTTM,T.UPDATE_DTTM' AS TABLE_T,
    LISTAGG ('T.'||MTC.COLUMN_NAME||'=S.'||MTC.COLUMN_NAME , ',')  WITHIN GROUP (ORDER BY MTC.COLUMN_NAME ) AS ST_COLS
FROM MAIN_TAB_COLS MTC, 
    (SELECT 
        TP.OWNER,
        TP.TABLE_NAME,
        LISTAGG ('T.'||TP.COLUMN_NAME||'=S.'||TP.COLUMN_NAME , ' AND ')  WITHIN GROUP (ORDER BY TP.COLUMN_NAME ) PK_ST_LIST,
        LISTAGG ('S.'||TP.COLUMN_NAME, ',')  WITHIN GROUP (ORDER BY TP.COLUMN_NAME ) PK_S_LIST,
        LISTAGG ('T.'||TP.COLUMN_NAME, ',')  WITHIN GROUP (ORDER BY TP.COLUMN_NAME ) PK_T_LIST
    FROM TABLE_PK TP
    GROUP BY 
        TP.OWNER,
        TP.TABLE_NAME
    ) TP
WHERE 1=1
AND MTC.OWNER = TP.OWNER
AND MTC.TABLE_NAME = TP.TABLE_NAME
GROUP BY
    MTC.OWNER,
    MTC.TABLE_NAME,
    PK_ST_LIST,
    PK_S_LIST,
    PK_T_LIST;

The first piece of the SQL contains a WITH clause with three sections:

  • TABLE_PARAMS: used to “receive” the table name parameter that will be sent by ODI;
  • TABLE_PK: used to figure out the table’s PK, which will be used to do the “merge” against our target table;
  • MAIN_TAB_COLS: used to retrieve all columns that exist in the table, excluding any kind of control column (in this case INSERT_DTTM and UPDATE_DTTM) and any column already present in the PK;

The second piece is the main one where we will use the three sub selects from the WITH section to build what we need. In this case, it will return the following columns:

  • TARGET_TABLE: name of the target table that will be merged;
  • SOURCE_TABLE: name of the source table that will be used as a source of the merge process;
  • PK_ST_LIST: PK column list that will be used in the merge’s ON clause;
  • TABLE_S: column names from the source table;
  • TABLE_T: column names from the target table;
  • ST_COLS: combination of source and target columns for the update part of the merge;

When we run the SQL for our tables in this example, we get a single row containing the target and source table names plus all the column lists needed to assemble the merge.


Now that we have all the information we need to create a dynamic merge statement for any set of similar tables, how do we use it in ODI? It’s very simple with one of the best features that ODI has (if you read our blog, you know that we just love it): Command on Source/Target. Let’s create a procedure and add the SQL statement above in the Command on Source tab.


In our Command on Target tab, we will add the merge code.

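In essence, it is a merge template whose #-prefixed placeholders are replaced at runtime by the columns returned from the Command on Source; a sketch of it could look like this (setting UPDATE_DTTM on update is my reading of the control-column handling):

-- #TARGET_TABLE, #SOURCE_TABLE, #PK_ST_LIST, #ST_COLS, #TABLE_T and #TABLE_S
-- are substituted at runtime with the values returned by the Command on Source
MERGE INTO #TARGET_TABLE T
USING #SOURCE_TABLE S
ON (#PK_ST_LIST)
WHEN MATCHED THEN
    UPDATE SET #ST_COLS,
               T.UPDATE_DTTM = SYSDATE
WHEN NOT MATCHED THEN
    INSERT (#TABLE_T)
    VALUES (#TABLE_S)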

As you can see, this SQL contains a lot of variables. These variables are resolved at runtime with the values returned by the SQL in the Command on Source. This way, we don’t need to worry about creating 50 mappings to do 50 merge processes. Instead, we have one procedure that receives a table name as a parameter and builds the necessary SQL accordingly. Let’s see what it looks like in an ODI package.


As you can see, it’s a very simple package that receives a table name as a parameter and then builds/runs a dynamic merge SQL. This package can be called by an external package that runs it N times with different table names (like doing 50 table merges with one single procedure). Of course, this was just one example of a simple merge task, but it shows the main idea of having ODI build the code for you. You may add more tasks to your procedure to create temp tables, gather statistics, and so on. There are almost no limits on what you can do using this kind of technique.

I hope you have enjoyed! See ya!

Kscope 17 is approaching fast!!! And we’ll be there!

Posted in ACE, Data Warehouse, Essbase, Hyperion Essbase, Java, Kscope 17, ODI, ODI Architecture, Oracle, Performance, Tips and Tricks, Uncategorized on June 8, 2017 by RZGiampaoli

Hi guys, how are you? We are sorry for being away for so long, but this year we have a lot of exciting things going on, so let’s start with what we’ll be doing at Kscope 17!

This year we’ll present 2 sessions:

Essbase Statistics DW: How to Automatically Administrate Essbase Using ODI (Jun 28, 2017, Wednesday Session 12, 9:45 am – 10:45 am)

In order to have a performant Essbase cube, we must keep vigilant, following up on its growth and its data movements so we can distribute caches and adjust the database parameters accordingly. But this is a very difficult task to achieve, since Essbase statistics are not temporal and only tell you what the cube statistics are at that specific moment.

This session will present how ODI can be used to create a historical statistical DW containing the Essbase cubes’ information, and how to identify trends and patterns, giving us the ability to programmatically tune our Essbase databases automatically.

And…

Data Warehouse 2.0: Master Techniques for EPM Guys (Powered by ODI) (Jun 26, 2017, Monday Session 2, 11:45 am – 12:45 pm)

EPM environments are generally supported by a Data Warehouse; however, we often see that those DWs are not optimized for the EPM tools. Over the years, we have witnessed that modeling a DW with the EPM tools in mind may greatly increase the overall architecture performance.

The most common situation found in several projects is that the people who develop the data warehouse do not have great knowledge of EPM tools, and vice versa. This can create a big gap between those two worlds, which may severely impact performance.

This session will show a lot of techniques for modeling the right Data Warehouse for EPM tools. We will discuss how to improve performance using partitioned tables, create hierarchical queries with “CONNECT BY PRIOR”, and use multi-period tables correctly for block data loads using PIVOT/UNPIVOT, and more. And if you want to go even further, we will show you how to leverage all those techniques using ODI, creating the perfect mix to perform any process between your DW and EPM environments.

In these presentations you can expect a lot of technical content, some very good tips, and some very good ideas to improve your EPM environment!

Also, I’ll be graduating from this year’s leadership program, and we’ll be all over the place with the K-Team, a special team created to make newcomers feel more welcome and help them get the most out of Kscope.

Also, Rodrigo will be at the Tuesday Lunch and Learn for the EPM Data Integration track in Cibolo 2/3/4.

And of course we will be around, having fun and gathering new ideas for next year!!!

And last but not least, this year we’ll have a friend of ours making his first appearance at Kscope with the presentation OBIEE Going Global! Getting Ready for More Than +140k Users (Jun 26, 2017, Monday Session 4, 3:15 pm – 4:15 pm).

A standard Oracle Business Intelligence (OBIEE) reporting application can hold more or less 1,200 users. This may be a reasonable number of users for the majority of companies out there, but what happens when an IT leader like Dell decides to acquire another IT giant like EMC, and all of their combined 140,000-plus users need to have access to an HR OBIEE instance? What does that setup look like? What kind of architecture do we need to support those users in a fast and reliable way?
This session shows the complexity of Dell’s OBIEE environment, describing all the processes and steps performed to create such an environment, meeting the most varied needs from business demands and L2 support, always aiming to improve environment stability. This architecture relies on a range of different technologies to support that huge number of end users, such as LDAP, Kerberos, SSO, SSL, BigIP, shared folders using NAS, and WebLogic running in a cluster across 4 application servers.
As if the challenge was not hard enough already, all of this setup also needed to account for the upgrade of Dell’s legacy OBIEE from v11.1.1.6.9 to v11.1.1.7.160119, so we will explain the pain points, considerations, and orchestration needed to do all of this in parallel.

Thank you guys and see you there!
