Archive for the 11.1.1.9.0 Category

Comparing ODI Scenario versions using SQL

Posted in 11.1.1.9.0, ODI 11g, Tips and Tricks on October 26, 2017 by Rodrigo Radtke de Souza

Hi all, it has been a while since we last posted! Busy days, you know... Anyway, let's see what we have today.

The situation that I'm about to describe often happens in large/old ODI projects. Imagine the following: you receive a task to change an ODI component that was created one year ago by someone who is not even in the company anymore. The code is running fine in PROD and the business wants a small fix to it. You open the ODI package and it contains a lot of interfaces, procedures, variables, etc. You need to change the code in one single interface, which seems very simple. You change it, save it, generate a new scenario and move it to PROD. When it gets there, the job fails due to an error in another interface that you did not touch! You start to troubleshoot and figure out that someone else changed something in DEV, saved it, but did not move the code to PROD. Unfortunately, this "unwanted" code change was included when you generated your scenario, and now the mess is already created. If you have ever been through this situation, then this post may help you.

Code versioning and code migration processes are things that everybody knows are necessary, but they are sometimes overlooked by companies because people think they are too complicated or do not work very well. ODI is a big example of this, since its native versioning system is not very intuitive and most of the time does not work the way we want it to. There are companies out there that even build their own code versioning system (outside of ODI) to manage ODI code versions. I dare to say that most companies do not have any kind of code versioning or formal code migration process for ODI at all, which causes big headaches in situations like the one I just described.

The technique that I'll explain here is not about code versioning itself. I'll describe something that we may use when we have no other way to guarantee that the scenario we are generating was not changed by someone else during a period of time. Just to let you know, all the following SQL was written for ODI version 11.1.1.9.

Let's begin with the basics. Everything that you create in ODI is stored in SNP tables in its WORK and MASTER repositories. For this post we will focus on two main tables from the WORK repository:

  • SNP_SCEN: contains the basic information about the scenarios that exist in that WORK repository (like name, version, creation date and so on);
  • SNP_SCEN_TASK: the "main" scenario table, which contains all the steps/tasks that are performed by a scenario. You may query this table and see exactly which tasks (like SQL commands, variables, flows) that scenario will perform when you run it in Operator. A quick example query follows this list.
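
For a first look, you may list every scenario and version in the WORK repository with a quick query (it uses only SNP_SCEN columns that also appear in the larger query later in this post):

SELECT SCEN_NO,
       SCEN_NAME,
       SCEN_VERSION
  FROM SNP_SCEN
 ORDER BY SCEN_NAME, SCEN_VERSION;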

So now let's get back to our problem. There is a scenario that has been running fine in Production for one year now (let's say it is called ODI_SCENARIO, version 1_00_00) and this scenario is also in Development. I'll make a change in only one interface of this scenario in Development (I'll add a simple SUBSTR to a column named PACK_SLIP) and create a new version of it (ODI_SCENARIO version 2_00_00). How do I guarantee that my code change was the only thing that changed in this new scenario and that it does not contain any other code from other developers? The answer lies in the SNP_SCEN_TASK table.

In the ODI WORK repository in Production, you may run the following query to get all the steps that the scenario currently executes in its 1_00_00 version:

SELECT NNO,
       SCEN_TASK_NO,
       TASK_TYPE,
       TASK_NAME1,
       TASK_NAME2,
       TASK_NAME3,
       EXE_CHANNEL,
       DEF_CONTEXT_CODE,
       DEF_LSCHEMA_NAME,
       DEF_CONNECT_ID,
       DEF_IND_COMMIT,
       DEF_ISOL_LEVEL,
       DEF_PLAN_COMP,
       COL_CONTEXT_CODE,
       COL_LSCHEMA_NAME,
       COL_CONNECT_ID,
       COL_ISOL_LEVEL,
       COL_IND_COMMIT,
       COL_PLAN_COMP,
       ORD_TRT,
       IND_ERR,
       LOG_LEV_DET,
       IND_LOG_NB,
       DEF_TECH_INT_NAME,
       COL_TECH_INT_NAME,
       IND_LOG_METHOD,
       COL_TXT,
       COL_IND_ENC,
       COL_ENC_KEY,
       DEF_TXT,
       DEF_IND_ENC,
       DEF_ENC_KEY,
       IND_LOG_FINAL_CMD
  FROM SNP_SCEN_TASK
 WHERE SCEN_NO IN
       (SELECT SCEN_NO
          FROM SNP_SCEN
         WHERE SCEN_NAME = 'ODI_SCENARIO'
           AND SCEN_VERSION = '1_00_00')
 ORDER BY SCEN_TASK_NO, NNO;


There are a lot of important columns in this table that can give you valuable information. However, COL_TXT and DEF_TXT are generally the most important ones, since they contain the code that is generated in the "Source and Target tabs" inside procedures and interfaces. After you run this SQL in the Production environment, you may export the result to whatever format you like. In this example I'll export it as "Text" using Oracle SQL Developer: right-click on any row and select "Export".


Save it somewhere on your computer.


The result will be a plain text file containing all the selected columns.


Now let's run the SQL in the Development ODI WORK repository. The only thing that we will change is our filter, which will go from SCEN_VERSION = '1_00_00' to SCEN_VERSION = '2_00_00', the new scenario version that we just generated. Do the same export steps as for the Production SQL and you will end up with a second text file.
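
For reference, the only lines that change in the query are in the subquery filter:

WHERE SCEN_NAME = 'ODI_SCENARIO'
  AND SCEN_VERSION = '2_00_00'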


Now you need to compare both files. I like to keep it simple and use Notepad++ with the "Compare" plugin. You may use any other tool for comparing text files (Beyond Compare is awesome as well). In Notepad++ you just need to open both files, click on the Production file and set it as the first file to compare, then click on the Development file and select "Compare".



When you compare them, Notepad++ will flag every line that differs between the two files.


The "Compare NavBar" shows a lot of differences, way more than the single change that I made. However, we need to analyze them calmly to verify what they really mean. You may navigate through the changes using the "Next" button in the toolbar.


There will be some blocks of code that contain "similar differences" due to the nature of ODI. For example, when you change one single thing in one column of one interface, it will be reflected in several steps within the Knowledge Module (in the C$/I$/E$ creation steps, for example). Here is one example of it.


This difference says that we changed the order of the PACK_SLIP column (the column to which we added the SUBSTR command). Actually, we did not change its order, we changed its content. However, when ODI creates its temporary tables (like C$, I$ and E$), we cannot control the order in which the columns are created, as the code is generated automatically by ODI. So we do not need to worry about this change, as it was somewhat "expected". When we click "Next", we get similar differences where a column just changed its order. Continuing further down, we get to the place where our change occurred.


Cool, this is the place where we changed our code, and it looks good. Let's keep going to see what else has changed. Now we get something weird: a lot of "1" changes appear from here until the end of the file (explaining why we had so many changes in the compare navigation bar).


These "1" values come from the IND_LOG_FINAL_CMD column, which identifies whether the step should "Log Final Command" or not. This does not affect the code itself, but for the sake of the analysis I went to the KM to see if someone had changed this option.


My suspicion was right: someone had changed this option in one of the KMs, which was reflected in a lot of places in my ODI scenario. There were no more changes in my comparison, so I could conclude that:

  • PACK_SLIP changed its order in some temporary table creation steps, which is OK;
  • my PACK_SLIP mapping change (SUBSTR) is present in the Development code;
  • there was a change in a specific KM step to "Log Final Command", which is also OK and does not affect the code itself.

No more differences were found between the scenarios, so I could safely deploy the new version to Production. If someone else had changed something more critical, the compare method would have caught it and we could revert the change before moving it to Production.

There are other ways to get and compare the code. If both scenarios are in the same database, you could just run two SQLs and compare them directly (see the sketch below), or you could export both scenario XML files and compare those. But this post gives you a generic approach that works in most cases and is fairly easy to use.
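
For instance, a minimal sketch of that two-SQL comparison, assuming both versions live in the same WORK repository and that DEF_TXT/COL_TXT are large text (LOB) columns, so only their first 4,000 characters are compared:

SELECT SCEN_TASK_NO,
       NNO,
       TASK_NAME3,
       DBMS_LOB.SUBSTR(DEF_TXT, 4000, 1) AS DEF_TXT_PART,
       DBMS_LOB.SUBSTR(COL_TXT, 4000, 1) AS COL_TXT_PART
  FROM SNP_SCEN_TASK
 WHERE SCEN_NO IN (SELECT SCEN_NO FROM SNP_SCEN
                    WHERE SCEN_NAME = 'ODI_SCENARIO'
                      AND SCEN_VERSION = '2_00_00')
MINUS
SELECT SCEN_TASK_NO,
       NNO,
       TASK_NAME3,
       DBMS_LOB.SUBSTR(DEF_TXT, 4000, 1),
       DBMS_LOB.SUBSTR(COL_TXT, 4000, 1)
  FROM SNP_SCEN_TASK
 WHERE SCEN_NO IN (SELECT SCEN_NO FROM SNP_SCEN
                    WHERE SCEN_NAME = 'ODI_SCENARIO'
                      AND SCEN_VERSION = '1_00_00');

Any rows returned exist (or differ) in version 2_00_00 but not in 1_00_00; swap the two versions and run it again to see the other direction.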

That's it, guys. I hope you enjoyed it!


ODI KMs for HFM 11.1.2.4

Posted in 11.1.1.9.0, ACE, Configuration, DEVEPM, ETL, Hacking, HFM, Knowledge Models, ODI, ODI 11g, ODI Architecture, Uncategorized on March 3, 2017 by RZGiampaoli

Hi guys, how are you? Today we are proud to announce that we are making the ODI KMs for HFM 11.1.2.4 available.

—- EDITED on June/17 —-

We developed these KMs around 6 months ago, but we were waiting to release them together with an article that we wrote for Oracle.

Since OTN had some "priority changes", our article was postponed to later this year. As we had some people asking for these KMs, we decided to release them now; when the article is published, we will let you guys know as well.

The article is live here! And if you guys are having errors with our KMs, please check our troubleshooting post here.

—- EDITED on June/17 —-

Prior to version 11.1.2.4, ODI could easily be used for HFM integration processes. ODI used its KMs with specific HFM drivers (HFMDriver.dll), provided by Oracle, to access and manipulate HFM applications. However, in HFM's latest version Oracle decided to remove its support for ODI, meaning that all HFM integrations would have to move from ODI to either manual interaction with HFM, another integration tool (like FDMEE) or custom code using the new Java HFM API.

Since we didn't want to re-write our entire ODI environment, and none of the options above was robust enough, we decided to recreate the ODI KMs using the Java HFM API. For these KMs to work we need to do two things: import them into ODI and do some setup in the ODI agent.

In the article we explain all the options and how we came up with this solution, but we will not talk about that here, since we want you guys to read the article as well and we cannot use its content here: we already signed an exclusivity agreement with Oracle.

The first part is easy: you just need to download the files from the link below.

ODI KMS for HFM 11.1.2.4

The second one is more difficult. We need to make the new HFM jars available to the ODI agent, and in order to do so we have two options:

Install the agent on the HFM machine, OR copy the necessary jar files to the agent drivers folder (oracledi\agent\drivers).

If your architecture allows having both HFM and the ODI agent on the same server, you may use the first approach, which is very simple. The only thing to do is to change the odiparams file (the oracledi\agent\bin\odiparams.bat file in a standalone agent) and add the location of three HFM jar files. Open odiparams.bat and search for "ODI_ADDITIONAL_CLASSPATH". In that setting, just add the location of the HFM jar files, as below (this is just an example; please adjust the paths according to your environment, and note that the set command is one logical line, continued here with the ^ character):

set ODI_ADDITIONAL_CLASSPATH=%ODI_ADDITIONAL_CLASSPATH%;^
"D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_j2se.jar";^
"D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_thrift.jar";^
"D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_hfm_server.jar"

Save the file, restart the ODI agent and it is done.

If you decide to go with the second option, we provide the list of all the necessary jars below (be prepared… it's huge). In the article we explain how to identify all the necessary jar files in a systematic way, but as explained before, we cannot do that here.

Search for all the jars in the list below and copy all of them to the oracledi\agent\drivers folder.

adm.jar
admaps.jar
admodbo.jar
ap.jar
ArtifactListing.jar
audit-client.jar
axiom-api-1.2.10.jar
axiom-impl-1.2.10.jar
axis-ant.jar
axis-jaxrpc-1.2.1.jar
axis.jar
axis2-adb-1.5.4.jar
axis2-kernel-1.5.4.jar
axis2-transport-http-1.5.4.jar
axis2-transport-local-1.5.4.jar
backport-util-concurrent.jar
broker-provider.jar
bsf.jar
castor-1.3.1-core.jar
castor-1.3.1.jar
com.bea.core.apache.commons.collections_3.2.0.jar
com.bea.core.apache.commons.net_1.0.0.0_1-4-1.jar
com.bea.core.apache.commons.pool_1.3.0.jar
com.bea.core.apache.log4j_1.2.13.jar
com.bea.core.apache.regexp_1.0.0.0_1-4.jar
com.bea.core.apache.xalan_2.7.0.jar
com.bea.core.apache.xml.serializer_2.7.0.jar
com.oracle.ws.orawsdl_1.4.0.0.jar
commons-cli-1.1.jar
commons-codec-1.4.jar
commons-compress-1.5.jar
commons-configuration-1.5.jar
commons-dbcp-1.4.0.jar
commons-discovery-0.4.jar
commons-el.jar
commons-fileupload-1.2.jar
commons-httpclient-3.1.jar
commons-io-1.4.jar
commons-lang-2.3.jar
commons-validator-1.3.1.jar
cpld.jar
css.jar
cssimportexport.jar
ctg.jar
ctg_custom.jar
dms.jar
epml.jar
epm_axis.jar
epm_hfm_web.jar
epm_j2se.jar
epm_jrf.jar
epm_lcm.jar
epm_misc.jar
epm_stellant.jar
epm_thrift.jar
essbaseplugin.jar
essbasestudioplugin.jar
ess_es_server.jar
ess_japi.jar
fm-actions.jar
fm-adm-driver.jar
fm-web-objectmodel.jar
fmcommon.jar
fmw_audit.jar
glassfish.jstl_1.2.0.1.jar
hssutil.jar
httpcore-4.0.jar
identitystore.jar
identityutils.jar
interop-sdk.jar
jacc-spi.jar
jakarta-commons.jar
javax.activation_1.1.jar
javax.mail_1.4.jar
javax.security.jacc_1.0.0.0_1-1.jar
jdom.jar
jmxspi.jar
jps-api.jar
jps-common.jar
jps-ee.jar
jps-internal.jar
jps-mbeans.jar
jps-unsupported-api.jar
jps-wls.jar
js.jar
json.jar
jsr173_1.0_api.jar
lcm-clu.jar
lcmclient.jar
LCMXMLBeans.jar
ldapbp.jar
ldapjclnt11.jar
libthrift-0.9.0.jar
log4j-1.2.14.jar
lucene-analyzers-1.9.1.jar
lucene-core-1.9.1.jar
lucene-spellchecker-1.9.1.jar
neethi-2.0.4.jar
ojdbc6dms.jar
ojdl.jar
opencsv-1.8.jar
oraclepki.jar
org.apache.commons.beanutils_1.8.3.jar
org.apache.commons.digester_1.8.jar
org.apache.commons.logging_1.1.1.jar
osdt_cert.jar
osdt_core.jar
osdt_xmlsec.jar
quartz.jar
registration_xmlBeans.jar
registry-api.jar
resolver.jar
saaj.jar
scheduler_ces.jar
servlet-api.jar
slf4j-api-1.5.8.jar
slf4j-log4j12-1.5.8.jar
sourceInfo.jar
stax-api-1.0.1.jar
wf_ces_utils.jar
wf_eng_agent.jar
wf_eng_api.jar
wf_eng_server.jar
wldb2.jar
wlpool.jar
wlsqlserver.jar
wsplugin.jar
xbean.jar
xmlparserv2.jar
xmlpublic.jar
xmlrpc-2.0.1.jar
XmlSchema-1.3.1.jar

Restart the ODI agent and it should be ready to execute any HFM Java code inside ODI.

I know that this is a lot of jars and it will take some time to find all of them, but at least you'll be able to upgrade your HFM and still use the same interfaces you have today in ODI to manage HFM (just remember to use the new data store objects reversed with the new RKM).

The KM usage is very similar to the old ones, and we added instructions for all their options, so we will not explain them here (just in the article). The only important difference is how to set up the "Cluster (Data Server)" information on the Data Server (Physical Architecture). For the new HFM API, we need to inform two new settings: the Oracle Home and Oracle Instance paths. Those paths are from the server where your HFM application is installed, and they are used internally by the HFM API to figure out all the HFM information related to that specific HFM instance.

Due to these two new settings, and in order to keep all connection information in a single place (ODI Topology), "Cluster (Data Server)" was overloaded to receive three settings instead of just one, separated by colons. So now "Cluster (Data Server)" receives "dataServerName:oracleHomePath:oracleInstancePath" instead of just dataServerName.
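
As a sketch, an overloaded value could look like the line below (the cluster name and both paths are hypothetical; use the ones from your own HFM server):

HFMCluster:D:\Oracle\Middleware\EPMSystem11R1:D:\Oracle\Middleware\user_projects\epmsystem1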


With those considerations in mind, it is just a matter of creating a new Data Server, setting the overloaded "Cluster (Data Server)" information and the user/password that ODI will use to access the HFM application. After that, we just need to create a Physical Schema with the name of the HFM application and a new Logical Schema, and associate them in a context.

And that is it: you are ready to upgrade your HFM environment and still use your old ODI interfaces to maintain HFM. If you have any doubts or suggestions about the KMs, please feel free to contact us.

If you guys are having errors with our KMs, please check our troubleshooting post here.

I hope you guys enjoy these KMs. See you soon!

PBCS, BICS, DBCS and ODI!!! Is that possible???

Posted in 11.1.1.9.0, 11.1.2.4, ACE, BICS, DBCS, EPM, EPM Automate, ODI, ODI 10g, ODI 11g, ODI 12c, ODI Architecture, Oracle, OS Command, PBCS, Performance, Uncategorized on August 15, 2016 by RZGiampaoli

Hey guys, today I’ll talk a little bit about architecture, cloud architecture.

I just finished a very exciting project in Brazil and I would like to share how we put everything together for a 100% cloud solution that includes PBCS, BICS, DBCS and ODI. Yes, ODI, and still 100% cloud.

Now you may be thinking: how can it be 100% cloud if ODI isn't in the cloud yet? Well, it can be!

This client doesn't have a big IT infrastructure; in fact, almost all of the client's databases are supported and hosted by providers. Still, the client deserves a good forecasting and BI tool with a strong ETL process behind it, right?

Thanks to cloud solutions, we don't need to worry about infrastructure anymore (or almost), and the only problem is… ODI.

We still don't have KMs for cloud services, or a cloud version of ODI, so basically we can't use ODI to integrate cloud tools…

Or can we? Yes we can 🙂

The design is simple:

  1. PBCS: basically, we work with it the same way we would if it were standalone.
  2. BICS: same thing here, but instead of using the database that comes with BICS, we need to contract a DBCS as well and point the DW schema to it.
  3. DBCS: here's the trick. Oracle's DBCS is nothing more than a Linux machine hosted on a server. That means we can install other things on that machine, things like ODI and VPNs.
  4. ODI: we just need to install it the same way we would in an on-premise environment, including the agent.
  5. VPNs: the final touch. We just need to create VPNs between the DBCS and the client databases; this way, ODI will have access to everything it needs.

Yes, you read it right: we can install ODI on the DBCS, and that makes ODI a "cloud" solution.


The solution looks like this:

BICS: it reads directly from its DW schema in the DBCS.

PBCS: there is no direct integration between PBCS and the DBCS (where the ODI agent is installed), but I found it a lot better and easier to integrate them using EPM Automate.

EPM Automate: with EPM Automate we can do anything we want: extract data and metadata, load data and metadata, execute business rules and more. For now, the easiest way to go is to create a script and call it from ODI, passing anything you need to it, as in the sketch below.
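
A minimal sketch of such a script, assuming a hypothetical PBCS URL, identity domain, user and job/rule names (the commands themselves come from the EPM Automate documentation); ODI can call it with the OdiOSCommand tool:

epmautomate login service.admin MyPassword https://planning-mydomain.pbcs.us2.oraclecloud.com mydomain
epmautomate uploadfile actuals.zip
epmautomate importdata LoadActuals actuals.zip
epmautomate runbusinessrule AggActuals
epmautomate logout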

VPNs: for each server we need to integrate, we need one VPN. With the VPNs between the DBCS and the hosts working, using ODI is extremely straightforward: we just need to create the topology as always, reverse-engineer anything we need and work on the interfaces.

And that's it. With this design you can have everything in the cloud and still have your ODI behind the scenes! By the way, you can do exactly the same thing with ODI on premise and, as a bonus, get rid of all the VPNs.

In another post I'll give more detail about the integration between ODI and PBCS using EPM Automate, but I can say that it works extremely well and, as far as I know, it is a lot easier than FDMEE (at least for me).

Thanks guys and see you soon.

 

Remotely zipping files with ODI

Posted in 11.1.1.9.0, ACE, Configuration, EPM, Essbase, ETL, Hacking, Hyperion Essbase, InfraStructure, ODI, ODI 10g, ODI 11g, ODI 12c, ODI Architecture, OS Command, Performance, Remotely, Tips and Tricks, Zip Files on April 5, 2016 by RZGiampaoli

Hi guys, how are you? It has been a long time since I last wrote something, but it was for a good reason: we were working on our two Kscope sessions! Yes, this year we will have two sessions, and I think they will be great!

Anyway, let us get to the point!

Today I want to talk about something that should be very simple to do but in the end is a nightmare… zipping a file on a remote server.

A little bit of context: I was working on a backup interface for a client and, because their cubes are very big, I was trying to improve the performance as much as I could.

Part of the backup was to copy the .ind and .pag files, as well as the data extract files. For one app, we are talking about 30 GB of .pag files and 40 GB of data extract files.

Their ODI infrastructure is like this:

Infrastructure

Basically, I need to extract/copy data from the Essbase server to the disaster recovery server (DR server). Nothing special here. The problem is that, because of the size of the files, I wanted to zip them first and only then send them to the DR server.

If you use the ODI tools to zip the files, what ODI does is bring all the files to the ODI agent server, zip everything and then send them back. I really did not want all this network traffic and all the time lost in this process (also, the agent server is a LOT less powerful than the Essbase server).

Regular ODI tools zip process

Then I started to research how I could do that (and thank you, my colleague and friend Luis Fernando Cairo, who helped me a lot by running many tests on this).

First of all we have three main options here:

  1. Create a .bat file and run it remotely: I did not like it because I do not want a lot of .bat files all over the place.
  2. Use the Windows invoke command: I would need a program on the server, like 7-Zip, which I do not have permission to install freely, and I do not want to install zip programs all over the place either.
  3. Use PsExec to execute a program on the server: same issue as the previous option.

OK, I figured out that in the end I would need to create/install something on the server… and I hated it. Well, let's at least minimize the problem, right?

Then I was thinking: what do all Hyperion servers have in common? The answer is JAVA.

Then I thought: I can use the jar command to zip a file:

jar cfM file.zip *.pag *.ind

Where:

c: Creates a new archive file named jarfile (if f is specified) or to standard output (if f and jarfile are omitted). Add to it the files and directories specified by inputfiles.

f: Specifies the file jarfile to be created (c), updated (u), extracted (x), indexed (i), or viewed (t). The -f option and filename jarfile are a pair — if present, they must both appear. Omitting f and jarfile accepts a “jar file” from standard input (for x and t) or sends the “jar file” to standard output (for c and u).

M: Do not create a manifest file entry (for c and u), or delete a manifest file entry if one exists (for u).

Hmm, things started to look better. Now I had to decide whether I would use the invoke command or PsExec.

I started trying the invoke command, but after some time I figured out that I could not execute the jar command using invoke.

Then my last alternative was PsExec.

The good thing about it is that it is just a zip file that you need to unzip on the agent server, add to the environment variables (PATH), and you are good to go.

It works amazingly well. You can run anything remotely with it, and it is a centralized, non-invasive solution as well (which is what I liked).

You just need to run:

psexec \\server -accepteula -w "work dir" javapath\jar cfM file.zip *.pag *.ind

Where:

-w: Sets the working directory of the process (relative to the remote computer).

-accepteula: This flag suppresses the display of the license dialog.

There is one catch: for some unknown reason, the ODI agent does not pick up the PATH correctly, so you need to use the complete path of where PsExec was "installed". The ODI command looks like this:

OdiOSCommand "-OUT_FILE=Log_Path/Zip_App_Files-RUM-PNL.log" "-ERR_FILE=Log_Path/Zip_App_Files-RUM-PNL.err"
D:\Oracle\PSTools\psexec \\server -accepteula -w \\arborpath\APP\RUM\PNL\ JAVA_PATH\jdk160_35\bin\jar cfM App_Files-RUM-PNL.zip *.pag *.ind

With this, we will have a process like this:

Remote zip process

This should not be something complicated, but it is. Believe me, though: I created a very fast process and the client is very happy.

I hope you guys enjoy it and see you soon.

Dynamically exporting objects from ODI

Posted in 11.1.1.9.0, ACE, ODI, ODI Architecture, Tips and Tricks on March 1, 2016 by Rodrigo Radtke de Souza

Hi all!

Today we will talk about how we can export any object from ODI in a dynamic way. But first, why would we want to do that? One good example is to figure out which ODI objects changed during a period and export their XML files to be stored in a code versioning repository. Another one could be to export all ODI scenarios with a certain marker, or from specific projects/folders, in an automated way. Another good one is exporting Load Plans: few people realize it, but there is no easy way in ODI to export several Load Plans at once (you may move the desired Load Plans to a folder and then export the entire folder with "Child components export" selected, but that would be considered cheating 🙂 ). Or maybe you just want to do it for the sake of doing something in a dynamic way (if you have already read some of our posts, you know that we like dynamic coding!).

First, let's take a look at the OdiExportObject tool from the Toolbox.


From Oracle Documentation:
Use this command to export an object from the current repository. This command reproduces the behavior of the export feature available in the user interface.

Great, that's exactly what we want: export any object (even Load Plans, which are not listed in the Oracle documentation) from the current repository. You may read about all its parameters here:

https://docs.oracle.com/middleware/11119/odi/develop/appendix_a.htm#ODIDG805

An example of a Load Plan export would look like this:

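A sketch of what that command could look like (the object ID, file name and export directory are hypothetical; the parameter names come from the OdiExportObject documentation linked above):

OdiExportObject "-CLASS_NAME=SnpLoadPlan" "-I_OBJECT=7001" "-FILE_NAME=LP_DAILY_LOAD.xml" "-EXPORT_DIR=C:\EXPORT_DIR" "-FORCE_OVERWRITE=YES"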

There are two important parameters here. First we have the Object ID, which indicates the object you are about to export. This ID can be found by double-clicking the ODI object and checking its Version tab.


The other parameter is the Classname. You may check this one in the Oracle documentation but, as I said before, some class names are missing there, like SnpLoadPlan. So the easiest way to check the correct Classname for any ODI object is to export one object of that type using the user interface.



Go to the folder and open the XML file in a text editor. The Classname will be the Object Class right at the beginning of the XML file.


OK, but it does not seem very dynamic yet, since we need to pass the object ID/Classname in order to export the correct object. So here we will use two of our favorite techniques to make it dynamic: Command on Source/Target and ODI metadata repository SQL. This is how it works: we will create an ODI procedure that contains a SQL query against the ODI metadata repository in the "Command on Source" tab (returning all the objects that we want to export) and an OdiExportObject command in the "Command on Target" tab to actually export the objects.

Let's begin with the "Command on Source" tab. First create a connection to your ODI work repository and define a Logical Schema for it. In the Command, add the SQL that meets your requirement (in this example, retrieve all Load Plans that were created or modified since last week).


Our query needs to return three columns: OBJECT_ID, OBJECT_CLASS and FILE_NAME. This information will be passed to the "Command on Target" to identify which objects need to be exported. A sketch of such a query follows.
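
This sketch assumes the SNP_LOAD_PLAN column names I_LOAD_PLAN, LOAD_PLAN_NAME, FIRST_DATE and LAST_DATE from the ODI 11g work repository; double-check them against your own repository before using it:

SELECT I_LOAD_PLAN AS OBJECT_ID,
       'SnpLoadPlan' AS OBJECT_CLASS,
       LOAD_PLAN_NAME AS FILE_NAME
  FROM SNP_LOAD_PLAN
 WHERE FIRST_DATE >= SYSDATE - 7
    OR LAST_DATE >= SYSDATE - 7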

Now we need to add OdiExportObject to the "Command on Target" tab, and this is pretty simple to do. Every ODI tool found in the Toolbox can be added to an ODI procedure and called with the "ODI Tools" technology. If you are not sure how to do it, a good tip is to add the tool to an ODI package, set its parameters as you normally would and check its "Command" tab, like below.



Now just copy the command text and add it to your procedure in the "Command on Target" tab, selecting "ODI Tools" as its technology.

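The resulting command would look something like the sketch below (the export directory is a placeholder for your own folder):

OdiExportObject "-CLASS_NAME=#OBJECT_CLASS" "-I_OBJECT=#OBJECT_ID" "-FILE_NAME=#FILE_NAME.xml" "-EXPORT_DIR=C:\EXPORT_DIR" "-FORCE_OVERWRITE=YES"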

As you can see, we added three # variables here that receive the information from the "Command on Source" tab. When you run this procedure, if 10 Load Plans were created/modified since last week, all 10 will be exported to the EXPORT_DIR folder.

In this example we queried the SNP_LOAD_PLAN table in order to get all the Load Plan information. Luckily, the ODI table names are very similar to their class names, so they should not be hard to find: for example, SnpScen maps to SNP_SCEN, SnpPackage to SNP_PACKAGE and SnpVar to SNP_VAR.


That’s it guys. I hope you liked it! See ya!

Oracle Data Integrator and OBIEE (11.1.1.9.0) Just Released!

Posted in 11.1.1.9.0, OBIEE, ODI on May 14, 2015 by Rodrigo Radtke de Souza

Hi all!

A real quick post today: Oracle just released Oracle Data Integrator and OBIEE (11.1.1.9.0)!

ODI is available for download here:

http://www.oracle.com/technetwork/middleware/data-integrator/downloads/index.html

Just a couple of new features (security related) that you may check here:

https://docs.oracle.com/middleware/11119/odi/develop/whatsnew.htm#ODIDG1560

You may check OBIEE here:

http://www.oracle.com/technetwork/middleware/bi-enterprise-edition/downloads/bi-downloads-2537285.html

See you later!