Archive for the ODI Category

KScope 18 Speaker Award

Posted in ACE, Career, DEVEPM, EPM, Kscope, Kscope 18, ODI, ODI Architecture, ODTUG, PBCS on September 17, 2018 by RZGiampaoli

Hey guys, how are you?

It has been a while since the last time I wrote anything here…. and surprise, surprise, it’s because I’m crazy busy working on a project that was sized small but turned out huge, while the sizing didn’t change…. 🙂 never happened before heheheh 😉

This is just a small post to tell you how grateful and happy we are for receiving the EPM Data Integration Speaker Award at Kscope 18 with the presentation: How to Use Your ODI On-Premise to Seamlessly Integrate PBCS.

We started this blog in 2012 and have been presenting at Kscope since 2013, and it has been very rewarding, not only because we became Oracle ACEs because of it, but because we learn a lot with every single post or presentation.

When you do a presentation, you need to stop thinking about a solution for one specific project and start thinking about a solution that can be used in all projects. This alone is a challenge, but the amount of things we learn from it is a great deal. We can easily say that our code has improved a lot since we began this blog in 2012, and that is in great part because of the blog and our presentations.

So, thank you to all of you who read our blog (even if we don’t post as much as we would like), to everybody that goes to Kscope and decides to watch our presentations, and to ODTUG, which provides this bilateral learning platform.

Thank you all for supporting us, and see you soon.


Building dynamic ODI code using Oracle metadata dictionary

Posted in Data Warehouse, ODI, ODI 12c on July 27, 2018 by radk00

Hi all, today’s post will be about how ODI can be used to generate any kind of SQL statement using the Oracle metadata tables. We always like to say that ODI is way more than just an ETL tool and that people need to start thinking about ODI as a full development platform, where you may create any kind of code that you wish. Today I’ll describe how we may create a simple (but dynamic) merge statement between two similar tables using an ODI procedure that reads from the ALL_CONSTRAINTS, ALL_CONS_COLUMNS and ALL_TAB_COLS Oracle tables to figure out what to do.

This is the scenario that we will be working on: we have several stage tables that are truncated and loaded every day with daily records from a source system. We have another set of tables that are used to store all the historical information, and the process uses the stage tables as sources, merging their data against the historical tables using their primary keys. This is very common in a lot of places, where we have a stage data layer that stores the daily data pulls and a “base” data layer that stores the historical data. In the scenario that we will describe here, both the source and target sets of tables have very similar structures, with the same column names, data types and so on.

Using the conventional ODI development process, we would need to create one mapping object for each pair of source->target tables, so if we have 50 sources that need to be merged against 50 targets, we would need to create 50 ODI mappings. Since the source->target tables are similar in this case, we may be smarter and create an ODI process that receives a table name as an input parameter (in this case the target table name) and builds the merge statement between those two tables dynamically using the Oracle metadata dictionary.

For those that are not familiar with the Oracle metadata dictionary, it’s nothing more than a set of tables that exist in the Oracle database and contain information about its existing components: which tables exist, which columns they have, what their data types are and so on. It is a great resource that ODI may read from and use to build generic code. Let’s see what it looks like with a real example.

Imagine that you have two tables with the following structure:
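For illustration, something like the DDL below (the business columns here are hypothetical; the important parts are that both tables share the same layout and PK, and that the base table has two extra control columns):

CREATE TABLE SCHEMA_A.STG_TABLE_A (
    ACCOUNT_ID  NUMBER       NOT NULL,
    PERIOD_NAME VARCHAR2(10) NOT NULL,
    AMOUNT      NUMBER,
    CONSTRAINT STG_TABLE_A_PK PRIMARY KEY (ACCOUNT_ID, PERIOD_NAME)
);

CREATE TABLE SCHEMA_A.BASE_TABLE_A (
    ACCOUNT_ID  NUMBER       NOT NULL,
    PERIOD_NAME VARCHAR2(10) NOT NULL,
    AMOUNT      NUMBER,
    INSERT_DTTM DATE,
    UPDATE_DTTM DATE,
    CONSTRAINT BASE_TABLE_A_PK PRIMARY KEY (ACCOUNT_ID, PERIOD_NAME)
);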

As you can see, our base table is almost the same as our stage table; the only difference is that it contains two additional columns, named INSERT_DTTM and UPDATE_DTTM, that will be used as “control columns” to identify when that data was inserted/updated in our base table. For ODI to figure out which columns are present in which table, we may query ALL_TAB_COLS in Oracle, filtering by table name.
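A query along these lines (using the base table from our example) does the job:

SELECT
    COLUMN_NAME,
    DATA_TYPE
FROM ALL_TAB_COLS
WHERE OWNER = 'SCHEMA_A'
AND TABLE_NAME = 'BASE_TABLE_A'
ORDER BY COLUMN_ID;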


This shows us all the columns that the table contains. Similarly, if we query ALL_CONSTRAINTS and ALL_CONS_COLUMNS, we may get all the table constraints (like the primary key) with all their associated columns.
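For example (this is essentially what the TABLE_PK piece of the final SQL below does):

SELECT
    ACC.OWNER,
    ACC.TABLE_NAME,
    ACC.COLUMN_NAME
FROM ALL_CONSTRAINTS AC,
     ALL_CONS_COLUMNS ACC
WHERE AC.OWNER = ACC.OWNER
AND AC.TABLE_NAME = ACC.TABLE_NAME
AND AC.CONSTRAINT_NAME = ACC.CONSTRAINT_NAME
AND AC.CONSTRAINT_TYPE = 'P'
AND AC.OWNER = 'SCHEMA_A'
AND AC.TABLE_NAME = 'BASE_TABLE_A';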


With those two sets of data, we may create a SQL statement that builds our dynamic merge statement. To make it easier, I’ll show you the final SQL now, divided in two pieces, and then I’ll explain each of them:

WITH TABLE_PARAMS AS
(
SELECT 
    'BASE_TABLE_A' AS TABLE_NAME,
    'SCHEMA_A' AS TABLE_OWNER
FROM DUAL
),
TABLE_PK AS
(
SELECT
    ACC.OWNER, 
    ACC.TABLE_NAME, 
    ACC.COLUMN_NAME
FROM  ALL_CONSTRAINTS AC, 
      ALL_CONS_COLUMNS ACC,
      TABLE_PARAMS 
WHERE 1=1
AND AC.OWNER = ACC.OWNER
AND AC.TABLE_NAME = ACC.TABLE_NAME
AND AC.OWNER = TABLE_PARAMS.TABLE_OWNER
AND AC.TABLE_NAME = TABLE_PARAMS.TABLE_NAME
AND AC.CONSTRAINT_NAME = ACC.CONSTRAINT_NAME
AND AC.CONSTRAINT_TYPE = 'P'
)
,
MAIN_TAB_COLS AS
(
SELECT 
    ATC.OWNER,
    ATC.TABLE_NAME,
    ATC.COLUMN_NAME
FROM ALL_TAB_COLS ATC,
     TABLE_PARAMS 
WHERE 1=1
AND ATC.TABLE_NAME = TABLE_PARAMS.TABLE_NAME
AND ATC.OWNER = TABLE_PARAMS.TABLE_OWNER
AND ATC.COLUMN_NAME NOT IN ('INSERT_DTTM','UPDATE_DTTM')
AND ATC.COLUMN_NAME NOT IN (SELECT COLUMN_NAME FROM TABLE_PK)
)
SELECT
    MTC.TABLE_NAME AS TARGET_TABLE,
    REPLACE(MTC.TABLE_NAME,'BASE_','STG_') AS SOURCE_TABLE,
    PK_ST_LIST,
    PK_S_LIST||','||(LISTAGG('S.'||MTC.COLUMN_NAME ,',') WITHIN GROUP (ORDER BY MTC.COLUMN_NAME)) || ',SYSDATE,SYSDATE' AS TABLE_S,
    PK_T_LIST||','||(LISTAGG('T.'||MTC.COLUMN_NAME ,',') WITHIN GROUP (ORDER BY MTC.COLUMN_NAME)) || ',T.INSERT_DTTM,T.UPDATE_DTTM' AS TABLE_T,
    LISTAGG ('T.'||MTC.COLUMN_NAME||'=S.'||MTC.COLUMN_NAME , ',')  WITHIN GROUP (ORDER BY MTC.COLUMN_NAME ) AS ST_COLS
FROM MAIN_TAB_COLS MTC, 
    (SELECT 
        TP.OWNER,
        TP.TABLE_NAME,
        LISTAGG ('T.'||TP.COLUMN_NAME||'=S.'||TP.COLUMN_NAME , ' AND ')  WITHIN GROUP (ORDER BY TP.COLUMN_NAME ) PK_ST_LIST,
        LISTAGG ('S.'||TP.COLUMN_NAME, ',')  WITHIN GROUP (ORDER BY TP.COLUMN_NAME ) PK_S_LIST,
        LISTAGG ('T.'||TP.COLUMN_NAME, ',')  WITHIN GROUP (ORDER BY TP.COLUMN_NAME ) PK_T_LIST
    FROM TABLE_PK TP
    GROUP BY 
        TP.OWNER,
        TP.TABLE_NAME
    ) TP
WHERE 1=1
AND MTC.OWNER = TP.OWNER
AND MTC.TABLE_NAME = TP.TABLE_NAME
GROUP BY
    MTC.OWNER,
    MTC.TABLE_NAME,
    PK_ST_LIST,
    PK_S_LIST,
    PK_T_LIST;

The first piece of the SQL contains a WITH clause with three sections:

  • TABLE_PARAMS: used to “receive” the table name parameter that will be sent by ODI;
  • TABLE_PK: used to figure out the table PK that will be used to merge into our target table;
  • MAIN_TAB_COLS: used to retrieve all columns that exist in the table, removing any kind of control column (in this case INSERT_DTTM and UPDATE_DTTM) and any column that is already present in the PK list;

The second piece is the main one, where we use the three subselects from the WITH section to build what we need. In this case, it returns the following columns:

  • TARGET_TABLE: name of the target table that will be merged;
  • SOURCE_TABLE: name of the source table that will be used as the source of the merge process;
  • PK_ST_LIST: the PK column list that will be used in the merge’s ON clause;
  • TABLE_S: column names from the source table;
  • TABLE_T: column names from the target table;
  • ST_COLS: the source/target column pairs used by the update part of the merge;

When we run the SQL for the tables in our example, we get back a single row containing everything that we need: the source and target table names plus all the generated column lists.


Now we have all the information that we need to create a dynamic merge statement for any set of similar tables, but how do we use it in ODI? This is very simple, with one of the best features that ODI has (if you read our blog, you know that we just love it): command on source/target. Let’s create a procedure and add our SQL statement in the command on source tab.


In the command on target tab, we will add the code that performs the actual merge.
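In essence, it is a merge statement in which every table/column reference is an ODI variable. A minimal sketch of it looks like this (the #ALIAS placeholders are filled at runtime with the values returned by the command on source; the exact code may vary a bit per project):

MERGE INTO #TARGET_TABLE T
USING #SOURCE_TABLE S
ON (#PK_ST_LIST)                 -- T.PK_COL = S.PK_COL AND ...
WHEN MATCHED THEN UPDATE SET
    #ST_COLS,                    -- T.COL = S.COL, ...
    T.UPDATE_DTTM = SYSDATE
WHEN NOT MATCHED THEN INSERT
    (#TABLE_T)                   -- T.PK_COL, ..., T.INSERT_DTTM, T.UPDATE_DTTM
VALUES
    (#TABLE_S)                   -- S.PK_COL, ..., SYSDATE, SYSDATE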


As you can see, this SQL contains a lot of variables. These variables are filled at runtime with the return values from the SQL in the command on source. This way, we don’t need to worry about creating 50 mappings to do 50 merge processes. Instead, we have one procedure that receives a table name as a parameter and builds the necessary SQL accordingly. Let’s see what it looks like in an ODI package.


It’s a very simple package that receives a table name as a parameter and then builds/runs the dynamic merge SQL. This package can be called by an external package that runs it N times with different table names (like doing 50 table merges with one single procedure). Of course, this was just an example of a simple merge task, but it shows the main idea of having ODI build the code for you. You may add more tasks to your procedure to create temp tables, run gather statistics and so on. There are almost no limits on what you may do using this kind of technique.
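For instance, a second task in the same procedure could refresh the optimizer statistics of the target table right after the merge. A minimal sketch, reusing the #TARGET_TABLE value from the command on source (the owner is hardcoded here just for illustration):

BEGIN
    -- Gather fresh optimizer statistics on the table we just merged into
    DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'SCHEMA_A',
        tabname => '#TARGET_TABLE');
END;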

I hope you have enjoyed! See ya!

DEVEPM will be at Kscope18!

Posted in ACE, DEVEPM, Essbase, Kscope 18, ODI, ODTUG on March 14, 2018 by radk00

Hi all, how are you doing? With some delay (same as last year, I was on vacation 🙂 ) we are very happy to announce that DEVEPM will once again be at Kscope! We are very honored to be selected to present at the best EPM conference in the world! We got two presentations in, so here is what we are going to present at Kscope18:

Dynamic Metadata Integrations for Multiple ASO Applications

Metadata management is a challenge in complex environments, since metadata is always changing over time. If the metadata process is not built in a robust and dynamic way, we end up with a lot of rework every time the business changes.

One of the best ways to maintain metadata on ASO cubes is using ODI, which gives us great flexibility to create truly complex enterprise ETL processes. But if we use ODI in its standard way, we end up with multiple similar ODI objects, since each ASO application/dimension is tied to a particular ODI data store. In other words, the higher the number of ASO applications, the higher the number of ODI objects, increasing the possible failure points and the code rework if something changes, which makes us lose a lot of time, money and trust in those systems.

This case study describes how to implement a smart EPM environment that uses Oracle Data Integrator with Oracle Essbase and takes full advantage of their potential. This session will show how to create dynamic processes that change automatically for any number of Essbase applications, allowing metadata maintenance that meets the business needs with low development costs.

Incredible ODI tips to work with Hyperion tools that you ever wanted to know

ODI is an incredible and flexible development tool that goes beyond simple data integration. But most of its development power comes from outside-of-the-box ideas.

  • Have you ever wanted to dynamically run any number of “OS” commands using a single ODI component?
  • Have you ever wanted to have only one datastore and loop through different sources without the need for different ODI contexts?
  • Have you ever wanted to have only one interface and loop through any number of ODI objects with a lot of control?
  • Have you ever needed a “third command tab” in your procedures or KMs to improve ODI’s powers?
  • Do you still use an old version of ODI and miss a way to know the values of the variables in a scenario execution?
  • Did you know that ODI has four “substitution tags”? And do you know how useful they are?
  • Do you use “dynamic variables” and know how powerful they can be?
  • Do you know how to automatically control your ODI priority jobs (stop, start and restart scenarios)?

If you want to know the answers to all these questions, please join us in this session to learn the special secrets of ODI that will take your development skills to the next level.


Kscope is the largest EPM conference in the world, and it will be held in Orlando in June 2018. It will feature more than 300 technical sessions, five symposiums, deep dive sessions, and hands-on labs over the course of five days.

Interested? If you register by March 29th you’ll take advantage of the Kscope early bird rates. Don’t waste any more time, and let’s be part of the greatest EPM event in the world. If you are still unsure about it, read our post about how Kscope/ODTUG changed our lives! Kscope is indeed a life-changing event!

 

DEVEPM on Oracle Magazine

Posted in ACE, Career, DEVEPM, ETL, ODI on November 16, 2017 by radk00

Hi all!

Very quick post today! Rodrigo was interviewed by Oracle Magazine for the November/December issue. Check it out!

DEVEPM on Oracle Magazine

Thanks!

ODI 12c Standalone Agent Install for an ODI 11g guy

Posted in InfraStructure, Install, ODI, ODI 11g, ODI 12c, ODI Architecture on July 17, 2017 by radk00

Hi everybody! Today’s post is about installing an ODI 12c standalone agent. This is not a “new” topic and the steps to perform it can also be found at the Oracle site; however, it caught me a little bit “off guard” when I was requested to install one, because the process changed considerably compared to ODI 11g (and yeah, we still work A LOT with ODI 11g, so installing an ODI 12c agent was “new” for us).

Prior to ODI 12, the ODI agent was configured by simply editing a file called odiparams.bat (odiparams.sh in Linux), which contained all the necessary agent configuration parameters. It was a simple step, where you would enter the ODI master/work repository configuration, DB/ODI connection users and so on. After that, you would simply run the agent program and that was it; very short and easy to do. However, in ODI 12 this changed considerably: now we need to go through two setup wizards, one to create the pre-requisite DB schema for the “Common Infrastructure Services” and another to configure the ODI standalone agent itself.

This change added some extra complexity to an architecture that was (talking exclusively about the ODI standalone agent here) very simple to set up in the old days. Although Oracle provides wizards to minimize this effort, nothing was easier than simply configuring a parameter file and running a Java program. But enough grumbling; let’s see how we may accomplish this task in ODI 12.

The first wizard that we need to run is the Repository Creation Utility (RCU), located at ORACLE_HOME/oracle_common/bin/rcu.bat. Before we run it, we must understand what RCU is and what it can do for us. As its name suggests, it is a utility that may be used to create any repository component required for Oracle Fusion Middleware products, including the ODI master/work repository.

In our project, we did not create the ODI master/work repository with RCU; instead we got two empty Oracle DB schemas and installed ODI directly there. The reason why we did not use RCU in this situation is that RCU forces you to create one single Oracle DB schema that stores both the ODI master and work repositories, and this is not a good approach when dealing with large environments. We think that Oracle’s rationale on this subject was to simplify certain ODI installs by unifying everything in a single place, but again, this removes some of ODI’s architectural flexibility and complicates the use of complex architectures in the future, like attaching multiple work repositories to one master.

So, if we already have the ODI master/work repositories created, why do we still need RCU? Because, from ODI 12 on, we need a third Oracle DB schema that will be used to store the “Common Infrastructure Services” tables, which are required by the ODI standalone agent, and the only way to create these tables is using the RCU utility.

Now that we have set our expectations around RCU, let’s run it. The first screen is just a welcome screen explaining what RCU is about, so just click Next.


Now let’s select “Create Repository” and “System Load and Product Load”. Just notice that you will be asked for a DBA user in the next steps, since this DBA user will be used to create the necessary database objects (including the DB schema itself) for the new “Common Infrastructure Services” schema. Click Next.


Add the database and DBA information and click Next.


The installer will check your information and, if everything is ok, all tasks will turn green. Select OK to proceed.


The next screen is where we may select which components we want RCU to install. We may notice that RCU is able to create several schemas for different components, from ODI to WebLogic. Since we already have our master and work repositories created, we just need to select “AS Common Schemas”/”Common Infrastructure Services”. Note that, for this schema, RCU will name it using whatever is entered in the “Create new prefix” option plus a “_STB” suffix. Click Next.


The installer will check the pre-requisites and, if everything is ok, a green check will appear. Click OK.


In the next screen you will define the password that will be used on the newly created DB schema. Add a password and click Next.


Define the default and temporary tablespaces that will be used by the new schema and click Next.


If the tablespaces do not exist, they will be created for you. Click OK.


The installer will check once more that everything is okay and will also create the necessary tablespaces. Click OK.


On the next page, we get a summary of what the installer will do. If everything looks correct, click Create to create the necessary DB objects.


Check the completion summary, click Close and that’s it! You have successfully created the “Common Infrastructure Services” schema, which is a pre-requisite for the ODI agent install.


The next step is to run the setup wizard that will configure the ODI standalone agent for us. Run the config program at ORACLE_HOME/oracle_common/common/bin/config.cmd. In the first screen, let’s create a new domain. The domain folder is where the ODI agent batch programs (such as the start/stop agent scripts) will reside. Select a meaningful folder and click Next.


In the next screen, select “Oracle Data Integrator – Standalone Agent – 12.2.1.2.6 [odi]” and click Next. This step will also install some basic standalone components required by the ODI agent.


Select a valid JDK location and click Next.


Since we did not create our master and work repositories using RCU, we won’t be able to use the “RCU Data” option for auto configuration here. It is not a big deal, since we may select “Manual Configuration” and click Next.


Here we need to input all the information related to two schemas: the ODI master and the “Common Infrastructure Services”. The way this screen works is tricky and confusing, since some options may be typed for all schemas at once. The best way to do it without any mistake is to select one of them, add all its information, then uncheck it, check the other one and add all the information again. Click Next.


The installer will check the information that was added and, if it is okay, two green marks will be shown in the Status column. Click Next.


The next screen is used to define our ODI agent name. Choose a meaningful name here, since this is what the ODI users will select when choosing which ODI agent will run their ETL processes. Click Next.


Add the server address, the port and an ODI user/password that has “Supervisor” access. In the preferred data source option, leave it as odiMasterRepository and click Next.


Although we are not going to use our ODI standalone agent with a Node Manager object, which would be controlled by WebLogic, we still need to select a type for it and create a new credential. Add any name and password (don’t worry, you will not use them for the ODI standalone agent) and click Next.


Review the install summary and, if everything is ok, just click Create.


Watch the steps turn into green checks and, once completed, click Next.


That’s the end of the configuration! You have successfully completed the ODI standalone agent setup and it is ready to run.


In order to run the ODI agent, open a CMD window, navigate to your base domain folder and run the ODI agent start program with the agent name as an input argument: agent.cmd -NAME=DEV_AGENT. Wait a little bit for it to load and, when its status gets to “started”, it is good to go.
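For example, assuming a hypothetical domain location (use whatever folder you selected during the domain creation step):

cd D:\Oracle\Middleware\user_projects\domains\base_domain\bin
agent.cmd -NAME=DEV_AGENT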


Now that the ODI agent is up and running, we may go to ODI Topology/Agent and double click the ODI agent that we have created. Now we may click on the Test button and see what happens. If everything is correct, you will see an information window saying that the ODI agent test was successful!


Congratulations, you now have an ODI 12c standalone agent configured. As you can see, there are a few more steps to do compared to ODI 11g. I hope this post helps you get prepared for this new kind of install.

Thanks, see ya!

 

Kscope 17 is approaching fast!!! And we’ll be there!

Posted in ACE, Data Warehouse, Essbase, Hyperion Essbase, Java, Kscope 17, ODI, ODI Architecture, Oracle, Performance, Tips and Tricks, Uncategorized on June 8, 2017 by RZGiampaoli

Hi guys, how are you? We are sorry for being away for so long, but this year we have a lot of exciting things going on, so let’s start with what we’ll be doing at Kscope 17!

This year we’ll present 2 sessions:

Essbase Statistics DW: How to Automatically Administrate Essbase Using ODI (Jun 28, 2017, Wednesday Session 12, 9:45 am – 10:45 am)

In order to have a well-performing Essbase cube, we must keep vigilant, following up on its growth and data movements so we can distribute caches and adjust the database parameters accordingly. But this is a very difficult task to achieve, since Essbase statistics are not temporal and only tell you what the cube statistics are at that specific point in time.

This session will present how ODI can be used to create a historical, statistical DW containing the Essbase cubes’ information, and how to identify trends and patterns, giving us the ability to programmatically tune our Essbase databases automatically.

And…

Data Warehouse 2.0: Master Techniques for EPM Guys (Powered by ODI) (Jun 26, 2017, Monday Session 2, 11:45 am – 12:45 pm)

EPM environments are generally supported by a data warehouse; however, we often see that those DWs are not optimized for the EPM tools. Over the years, we have witnessed that modeling a DW with the EPM tools in mind may greatly increase the overall architecture performance.

The most common situation found in several projects is that the people who develop the data warehouse do not have great knowledge about the EPM tools and vice-versa. This may create a big gap between those two concepts, which may severely impact performance.

This session will show a lot of techniques for modeling the right data warehouse for EPM tools. We will discuss how to improve performance using partitioned tables, how to create hierarchical queries with “CONNECT BY PRIOR”, the correct way to use multi-period tables for block data loads using Pivot/Unpivot, and more. And if you want to go even further, we will show you how to leverage all those techniques using ODI, which creates the perfect mix to perform any process between your DW and EPM environments.

In these presentations you can expect a lot of technical content, some very good tips and some very good ideas to improve your EPM environment!

Also, I’ll be graduating from this year’s leadership program, and we’ll be all over the place with the K-Team, a special team created to make the newcomers feel more welcome and help them get the most out of Kscope.

Rodrigo will also be at the Tuesday Lunch and Learn for the EPM Data Integration track in Cibolo 2/3/4.

And of course we will be around, having fun and gathering new ideas for next year!!!

And last but not least, this year we’ll have a friend of ours making his first appearance at Kscope with the presentation OBIEE Going Global! Getting Ready for More Than +140K Users (Jun 26, 2017, Monday Session 4, 3:15 pm – 4:15 pm).

A standard Oracle Business Intelligence (OBIEE) reporting application can hold more or less 1,200 users. This may be a reasonable number of users for the majority of companies out there, but what happens when an IT leader like Dell decides to acquire another IT giant like EMC, and all of their combined 140,000-plus users need access to an HR OBIEE instance? What does that setup look like? What kind of architecture do we need to support those users in a fast and reliable way?
This session shows the complexity of Dell’s OBIEE environment, describing all the processes and steps performed to create such an environment, meeting the most varied needs from business demands and L2 support, always aiming to improve environment stability. This architecture relies on a range of different technologies to support that huge number of end users, such as LDAP & SSL, Kerberos, SSO, BigIP, shared folders using NAS, and WebLogic running in a cluster across four application servers.
If the challenge was not hard enough already, all of this setup also needed to consider Dell’s legacy OBIEE upgrade from v11.1.1.6.9 to v11.1.1.7.160119, so we will explain what the pain points were, and the considerations and orchestration needed to do all of this in parallel.

Thank you guys and see you there!


ODI KMs for HFM 11.1.2.4

Posted in 11.1.1.9.0, ACE, Configuration, DEVEPM, ETL, Hacking, HFM, Knowledge Models, ODI, ODI 11g, ODI Architecture, Uncategorized on March 3, 2017 by RZGiampaoli

Hi guys, how are you? Today we are proud to announce that we are making the ODI KMs for HFM 11.1.2.4 available.

—- EDITED on June/17 —-

We developed these KMs around six months ago, but we were waiting to release them together with an article that we wrote for Oracle.

Since OTN had some “priority changes”, our article was postponed to later this year. As we had some people asking for these KMs, we decided to release them now; when the article is published, we will let you guys know as well.

The article is live here! And if you guys are having errors with our KMs, please check our troubleshooting post here.

—- EDITED on June/17 —-

Prior to version 11.1.2.4, ODI could easily be used for HFM integration processes. ODI used its KMs with specific HFM drivers (HFMDriver.dll) provided by Oracle to access and manipulate HFM applications. However, in HFM’s latest version, Oracle decided to remove its support for ODI, meaning that all HFM integrations would have to move from ODI to either manual interaction with HFM, usage of another integration tool (like FDMEE), or custom code using the new Java HFM API.

Since we didn’t want to re-write our entire ODI environment, and none of the above options were robust enough, we decided to recreate the ODI KMs using the Java HFM API. For these KMs to work, we need to do two things: import them from ODI Java Net and do some setup in the ODI agent.

In the article we explain all the options and how we came up with this solution, but we will not talk about that here, since we want you guys to read the article as well and we can’t use its content here, as we have already signed an exclusivity agreement with Oracle.

The first part is easy: you just need to download the files from the link below.

ODI KMs for HFM 11.1.2.4

The second one is more difficult. We need to make the new HFM jars available to the ODI agent, and in order to do so we have two options: install the agent on the HFM machine, or copy the necessary jar files to the agent drivers folder (oracledi\agent\drivers).

If your architecture allows having both HFM and the ODI agent on the same server, then you may use the first approach, which is very simple. The only thing to do is to change the odiparams file (the oracledi\agent\bin\odiparams.bat file in a standalone agent) and add the location of three HFM jar files. Open the odiparams.bat file and search for “ODI_ADDITIONAL_CLASSPATH”. On that setting, just add the location of the HFM jar files, as below (this is just an example; adjust the paths according to your environment, and note that the setting is shown here on multiple lines for readability but should be kept on a single line in the file):

set ODI_ADDITIONAL_CLASSPATH=%ODI_ADDITIONAL_CLASSPATH%;
"D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_j2se.jar";
"D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_thrift.jar";
"D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_hfm_server.jar"

Save the file, restart the ODI agent, and it is done.

If you decide to go with the second option, we’ll provide a list of all the necessary jars (be prepared… it’s huge). In the article we explain how to identify all the necessary jar files in a systematic way, but here that is not an option, as explained before.

Search for all the jars in the list below and copy them to the oracledi\agent\drivers folder.

adm.jar
admaps.jar
admodbo.jar
ap.jar
ArtifactListing.jar
audit-client.jar
axiom-api-1.2.10.jar
axiom-impl-1.2.10.jar
axis-ant.jar
axis-jaxrpc-1.2.1.jar
axis.jar
axis2-adb-1.5.4.jar
axis2-kernel-1.5.4.jar
axis2-transport-http-1.5.4.jar
axis2-transport-local-1.5.4.jar
backport-util-concurrent.jar
broker-provider.jar
bsf.jar
castor-1.3.1-core.jar
castor-1.3.1.jar
com.bea.core.apache.commons.collections_3.2.0.jar
com.bea.core.apache.commons.net_1.0.0.0_1-4-1.jar
com.bea.core.apache.commons.pool_1.3.0.jar
com.bea.core.apache.log4j_1.2.13.jar
com.bea.core.apache.regexp_1.0.0.0_1-4.jar
com.bea.core.apache.xalan_2.7.0.jar
com.bea.core.apache.xml.serializer_2.7.0.jar
com.oracle.ws.orawsdl_1.4.0.0.jar
commons-cli-1.1.jar
commons-codec-1.4.jar
commons-compress-1.5.jar
commons-configuration-1.5.jar
commons-dbcp-1.4.0.jar
commons-discovery-0.4.jar
commons-el.jar
commons-fileupload-1.2.jar
commons-httpclient-3.1.jar
commons-io-1.4.jar
commons-lang-2.3.jar
commons-validator-1.3.1.jar
cpld.jar
css.jar
cssimportexport.jar
ctg.jar
ctg_custom.jar
dms.jar
epml.jar
epm_axis.jar
epm_hfm_web.jar
epm_j2se.jar
epm_jrf.jar
epm_lcm.jar
epm_misc.jar
epm_stellant.jar
epm_thrift.jar
essbaseplugin.jar
essbasestudioplugin.jar
ess_es_server.jar
ess_japi.jar
fm-actions.jar
fm-adm-driver.jar
fm-web-objectmodel.jar
fmcommon.jar
fmw_audit.jar
glassfish.jstl_1.2.0.1.jar
hssutil.jar
httpcore-4.0.jar
identitystore.jar
identityutils.jar
interop-sdk.jar
jacc-spi.jar
jakarta-commons.jar
javax.activation_1.1.jar
javax.mail_1.4.jar
javax.security.jacc_1.0.0.0_1-1.jar
jdom.jar
jmxspi.jar
jps-api.jar
jps-common.jar
jps-ee.jar
jps-internal.jar
jps-mbeans.jar
jps-unsupported-api.jar
jps-wls.jar
js.jar
json.jar
jsr173_1.0_api.jar
lcm-clu.jar
lcmclient.jar
LCMXMLBeans.jar
ldapbp.jar
ldapjclnt11.jar
libthrift-0.9.0.jar
log4j-1.2.14.jar
lucene-analyzers-1.9.1.jar
lucene-core-1.9.1.jar
lucene-spellchecker-1.9.1.jar
neethi-2.0.4.jar
ojdbc6dms.jar
ojdl.jar
opencsv-1.8.jar
oraclepki.jar
org.apache.commons.beanutils_1.8.3.jar
org.apache.commons.digester_1.8.jar
org.apache.commons.logging_1.1.1.jar
osdt_cert.jar
osdt_core.jar
osdt_xmlsec.jar
quartz.jar
registration_xmlBeans.jar
registry-api.jar
resolver.jar
saaj.jar
scheduler_ces.jar
servlet-api.jar
slf4j-api-1.5.8.jar
slf4j-log4j12-1.5.8.jar
sourceInfo.jar
stax-api-1.0.1.jar
wf_ces_utils.jar
wf_eng_agent.jar
wf_eng_api.jar
wf_eng_server.jar
wldb2.jar
wlpool.jar
wlsqlserver.jar
wsplugin.jar
xbean.jar
xmlparserv2.jar
xmlpublic.jar
xmlrpc-2.0.1.jar
XmlSchema-1.3.1.jar

Restart the ODI agent and it should be ready to execute any HFM Java code inside ODI.

I know that this is a lot of jars and it will take some time to find all of them, but at least you’ll be able to upgrade your HFM and still use the same interfaces you have today in ODI to manage HFM (just remember to use the new data store objects reversed from the new RKM).

The KM usage is very similar to the old ones, and we added instructions to all their options, so we’ll not explain them here (just in the article). The only important difference is how to set up the “Cluster (Data Server)” information on the Data Server (Physical Architecture). For the new HFM API, we need to inform two new settings: the Oracle Home and Oracle Instance paths. Those paths are related to the server where your HFM application is installed, and they are used internally by the HFM API to figure out all the HFM information related to that specific HFM instance.

Due to these two new settings, and in order to continue to accommodate all connection information within a single place (ODI Topology), “Cluster (Data Server)” was overloaded to receive three settings instead of just one, separated by colons. So now “Cluster (Data Server)” receives “dataServerName:oracleHomePath:oracleInstancePath” instead of just dataServerName.
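For example, a hypothetical HFM data server could have its “Cluster (Data Server)” field filled in like this (the cluster name and paths below are just illustrative; use your own HFM server values):

HFMCLUSTER:D:\Oracle\Middleware\EPMSystem11R1:D:\Oracle\Middleware\user_projects\epmsystem1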


With those considerations in mind, it is just a matter of creating a new Data Server, setting the overloaded “Cluster (Data Server)” information and the user/password that ODI will use to access the HFM application. After that, we just need to create a Physical Schema with the name of the HFM application, create a new Logical Schema and associate them in a context.

And that is it: you guys are ready to upgrade your HFM environment and still use your old ODI interfaces to maintain HFM. If you guys have any doubts/suggestions about the KMs, please feel free to contact us.

If you guys are having errors with our KMs, please check our troubleshooting post here.

I hope you guys enjoy these KMs. See you soon!