Archive for ODI 11g

Comparing ODI Scenario versions using SQL

Posted in 11.1.1.9.0, ODI 11g, Tips and Tricks on October 26, 2017 by Rodrigo Radtke de Souza

Hi all, it has been a while since our last post! Busy days, you know… Anyway, let's see what we got today.

The situation that I’m about to describe often happens in large/old ODI projects. Imagine the following: you received a task to change an ODI component that was created one year ago by someone who is not even in the company anymore. The code is running fine in PROD and the business wants a small fix to it. You open the ODI package and it contains a lot of interfaces, procedures, variables, etc. You need to change the code in one single interface, which seems very simple. You change it, save it, generate a new scenario and move it to PROD. When it gets there, the job fails due to an error in another interface that you did not touch! You start to troubleshoot and figure out that someone else changed something in DEV, saved it, but did not move the code to PROD. Unfortunately, this “unwanted” code change was included when you generated your scenario, and now the mess is already created. If you have already been through this situation, then this post may help you.

Code versioning and code migration processes in general are things that everybody knows are necessary, but sometimes they are overlooked by companies because people think they are too complicated or do not work very well. ODI is a big example of this, since its native versioning system is not very intuitive and most of the time does not work the way we want it to. There are companies out there that even build their own code versioning system (outside of ODI) to manage ODI code versions. I dare to say that most companies do not have any kind of code versioning or formal code migration process for ODI at all, which causes some big headaches in situations like the one that I just described.

The technique that I’ll explain here is not about code versioning itself. I’ll describe something that we may use when we have no other way to guarantee that the scenario we are generating was not changed by someone else during a time period. Just so you know, all the following SQL was written for ODI version 11.1.1.9.

Let’s begin with the basics. Everything that you create in ODI is stored in SNP tables in its WORK and MASTER repositories. For this post we will focus on two main tables from the WORK repository:

  • SNP_SCEN: contains the basic information about the scenarios that exist in that WORK repository (like name, version, creation date and so on; see the example query right after this list);
  • SNP_SCEN_TASK: the “main” scenario table, containing all the steps/tasks that are performed by a scenario. You may query this table and see exactly which tasks (like SQL commands, variables and flows) that scenario will perform when you run it in Operator.
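
For instance, a quick sketch of a query to list all the versions of a given scenario stored in a WORK repository (the scenario name is just illustrative; FIRST_DATE is assumed here to be the creation date audit column):

SELECT SCEN_NO,
       SCEN_NAME,
       SCEN_VERSION,
       FIRST_DATE
FROM SNP_SCEN
WHERE SCEN_NAME = 'ODI_SCENARIO'
ORDER BY SCEN_VERSION;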

So now let’s get back to our problem. There is a scenario that has been running fine in Production for one year now (let’s say it is called ODI_SCENARIO, Version 1_00_00) and this scenario is also in Development. I’ll make a change in only one interface of this scenario in Development (I’ll add a simple SUBSTR to a column named PACK_SLIP) and create a new version of it (ODI_SCENARIO Version 2_00_00). How do I guarantee that my code change was the only thing that changed in this new scenario and that it does not contain any other code from other developers? The answer lies in the SNP_SCEN_TASK table.

If you go to the ODI WORK repository in Production, you may run the following query to get all the steps that the scenario currently executes in its 1_00_00 version:

SELECT NNO,
       SCEN_TASK_NO,
       TASK_TYPE,
       TASK_NAME1,
       TASK_NAME2,
       TASK_NAME3,
       EXE_CHANNEL,
       DEF_CONTEXT_CODE,
       DEF_LSCHEMA_NAME,
       DEF_CONNECT_ID,
       DEF_IND_COMMIT,
       DEF_ISOL_LEVEL,
       DEF_PLAN_COMP,
       COL_CONTEXT_CODE,
       COL_LSCHEMA_NAME,
       COL_CONNECT_ID,
       COL_ISOL_LEVEL,
       COL_IND_COMMIT,
       COL_PLAN_COMP,
       ORD_TRT,
       IND_ERR,
       LOG_LEV_DET,
       IND_LOG_NB,
       DEF_TECH_INT_NAME,
       COL_TECH_INT_NAME,
       IND_LOG_METHOD,
       COL_TXT,
       COL_IND_ENC,
       COL_ENC_KEY,
       DEF_TXT,
       DEF_IND_ENC,
       DEF_ENC_KEY,
       IND_LOG_FINAL_CMD
FROM SNP_SCEN_TASK
WHERE SCEN_NO IN
      (SELECT SCEN_NO
       FROM SNP_SCEN
       WHERE SCEN_NAME = 'ODI_SCENARIO'
       AND SCEN_VERSION = '1_00_00')
ORDER BY SCEN_TASK_NO, NNO;

[Screenshot: query result for the 1_00_00 scenario version]

There are a lot of important columns in this table that can give you valuable information. However, COL_TXT and DEF_TXT are generally the most important ones, since they contain the code that is generated in the “Source and Target” tabs inside procedures and interfaces. After you run this SQL in the Production environment, you may export the result to whatever format you like. In this example, I’ll export it as “Text” using Oracle SQL Developer, as follows (right click on any row and select “Export”):

[Screenshot: SQL Developer “Export” context menu]

Save it somewhere on your computer:

[Screenshot: export wizard, choosing where to save the file]

The result will be something like this:

[Screenshot: the exported text file]

Now let’s run the SQL in the Development ODI WORK repository. The only thing that we will change is the filter, from SCEN_VERSION = '1_00_00' to SCEN_VERSION = '2_00_00', which is the new scenario version that we just generated. Follow the same steps as for the Production SQL and you should end up with something like this:

[Screenshot: the Development query result exported as text]

Now you need to compare both files. I like to keep it simple and use Notepad++ with the “Compare” plugin. You may use any other tool for comparing txt files (Beyond Compare is awesome as well). In Notepad++ you just need to open both files, click on the Production file and select “Set as First to Compare”, then click on the Development file and select “Compare”.

[Screenshot: Notepad++ “Set as First to Compare”]

[Screenshot: Notepad++ “Compare”]

You will have something similar to this when you compare:

[Screenshot: comparison result in Notepad++]

The “Compare NavBar” shows a lot of differences, way more than the single change that I just made. However, we need to analyze them calmly to verify what they really mean. You may navigate through the changes using the “Next” button in the toolbar.

[Screenshot: “Next” button in the Compare toolbar]

There will be some blocks of code that contain “similar differences” due to the nature of ODI. For example, when you change one single thing in one column of one interface, it will be reflected in several steps within the Knowledge Module (in the C$/I$/E$ creation steps, for example). This is one example of it:

[Screenshot: a column order difference]

This difference says that the PACK_SLIP column (the column to which we added the SUBSTR command) changed its order. Actually, we did not change its order, we changed its content. However, when ODI creates its temporary tables (like C$, I$ and E$), we cannot control the order in which the columns are created, since the code is generated automatically by ODI. So we do not need to worry about this change, as it was somehow “expected”. When we click “Next”, we get similar ones where a column just changed its order. Continuing further down, we get to the place where our change occurred:

[Screenshot: the PACK_SLIP SUBSTR change]

Cool, this is the place where we changed our code, and it looks good. Let’s keep going to see what else has changed. Now we get something weird: a lot of “1” changes appear until the end of the file (explaining why we had so many changes in the comparison Navigation Bar):

[Screenshot: the “1” changes in IND_LOG_FINAL_CMD]

This “1” comes from the IND_LOG_FINAL_CMD column, which identifies whether the step should “Log Final Command” or not. This does not affect the code itself, but for the sake of my analysis I went to the KM to see if someone had changed this option:

[Screenshot: the “Log Final Command” option in the KM step]

My suspicion was right: someone had changed this option in one of the KMs, which got reflected in a lot of places in my ODI scenario. There were no more changes in my comparison, so I could conclude that:

  • PACK_SLIP changed its order in some temporary table creation steps, which is ok;
  • My PACK_SLIP mapping change (SUBSTR) is present in the Development code;
  • There was a change in the KM to “Log Final Command” in a specific KM step, which is also ok and does not affect the code itself.

No more differences were found between the scenarios, so I could safely deploy the new version to Production. If someone else had changed something more critical, the compare method would have caught it and we could have reverted it before moving to Production.

There are other ways to get and compare the code. If both scenarios are in the same database, you could just run the two SQLs and compare the result sets directly, or you could export both scenario XML files and compare those. But the approach described here is generic, works in most cases, and is fairly easy to use.
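
As an illustration of the “same database” alternative, here is a rough sketch using MINUS. It assumes both versions live in the same WORK repository; only a few of the columns from the query above are compared to keep it short, and DBMS_LOB.SUBSTR is used in case COL_TXT/DEF_TXT are CLOBs in your repository version (MINUS cannot compare CLOB columns directly):

SELECT SCEN_TASK_NO, NNO, TASK_NAME1,
       DBMS_LOB.SUBSTR(COL_TXT, 4000, 1) COL_TXT,
       DBMS_LOB.SUBSTR(DEF_TXT, 4000, 1) DEF_TXT,
       IND_LOG_FINAL_CMD
FROM SNP_SCEN_TASK
WHERE SCEN_NO IN (SELECT SCEN_NO FROM SNP_SCEN
                  WHERE SCEN_NAME = 'ODI_SCENARIO'
                  AND SCEN_VERSION = '1_00_00')
MINUS
SELECT SCEN_TASK_NO, NNO, TASK_NAME1,
       DBMS_LOB.SUBSTR(COL_TXT, 4000, 1),
       DBMS_LOB.SUBSTR(DEF_TXT, 4000, 1),
       IND_LOG_FINAL_CMD
FROM SNP_SCEN_TASK
WHERE SCEN_NO IN (SELECT SCEN_NO FROM SNP_SCEN
                  WHERE SCEN_NAME = 'ODI_SCENARIO'
                  AND SCEN_VERSION = '2_00_00');

The rows returned exist in version 1_00_00 but not in 2_00_00; swap the two halves to see what was added in the new version.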

That’s it guys, I hope you have enjoyed it!


ODI 12c Standalone Agent Install for an ODI 11g guy

Posted in InfraStructure, Install, ODI, ODI 11g, ODI 12c, ODI Architecture on July 17, 2017 by Rodrigo Radtke de Souza

Hi everybody! Today’s post is about installing an ODI 12c standalone agent. This is not a “new” topic and the steps to perform it can also be found on the Oracle site; however, it caught me a little bit off guard when I was asked to install one, because the process changed considerably compared to ODI 11g (and yeah, we still work A LOT with ODI 11g, so installing an ODI 12c agent was “new” for us).

Prior to version 12, the ODI agent was configured by simply editing a file called odiparams.bat (odiparams.sh on Linux), which contained all the necessary agent configuration parameters. It was a simple step, where you would enter the ODI master/work repository configuration, DB/ODI connection users and so on. After that, you would simply run the agent program and that was it: very short and easy to do. However, in ODI 12 this changed considerably. Now we need to go through two setup wizards, one to create the pre-requisite DB schema for the “Common Infrastructure Services” and the other to configure the ODI Standalone agent itself.

This change added some extra complexity to an architecture that was (talking exclusively about the ODI Standalone Agent here) very simple to set up in the old days. Although Oracle provides wizards to minimize this effort, nothing was easier than simply configuring a parameter file and running a Java program. But enough grumbling, let’s see how we may accomplish this task in ODI 12.

The first wizard that we need to run is the Repository Creation Utility (RCU), located at ORACLE_HOME/oracle_common/bin/rcu.bat. Before we run it, we must understand what RCU is and what it can do for us. As its name suggests, it is a utility that may be used to create any repository component required by Oracle Fusion Middleware products, including the ODI Master/Work repository.

In our project we did not create the ODI Master/Work repository with RCU; instead, we got two empty Oracle DB schemas and installed ODI directly there. The reason why we did not use RCU in this situation is that RCU forces you to create one single Oracle DB schema that stores both the ODI Master and Work repositories, which is not a good approach when dealing with large environments. We think that Oracle’s rationale on this subject was to simplify certain ODI installs by unifying everything in a single place but, again, this removes some of ODI’s architectural flexibility and complicates the use of more complex architectures in the future, like multiple Work repositories attached to one Master.

So, if we already have the ODI Master/Work repositories created, why do we still need RCU? Because, from ODI 12 onwards, we need a third Oracle DB schema that stores the “Common Infrastructure Services” tables required by the ODI Standalone agent, and the only way to create these tables is with the RCU utility.

Now that we have set our expectations around RCU, let’s run it. The first screen is just a welcome screen explaining what RCU is about, so just click Next.


Now let’s select “Create Repository” and “System Load and Product Load”. Just notice that you will be asked for a DBA user in the next steps, since this DBA user will be used to create the necessary database objects (including the DB schema itself) for the new “Common Infrastructure Services” schema. Click Next.


Add the database and DBA information and click next.


The installer will check your information and, if everything is ok, all tasks will turn green. Click Ok to proceed.


The next screen is where we select which components we want RCU to install. We may notice that RCU is able to create several schemas for different components, from ODI to WebLogic. Since we already have our Master and Work repositories created, we just need to select “AS Common Schemas”/”Common Infrastructure Services”. Note that, for this schema, RCU will create it using what is typed in the “Create new prefix” option plus a “_STB” suffix. Click Next.


The installer will check the pre-requisites and, if everything is ok, a green check will appear. Click OK.


The next screen is where you define the password that will be used for the newly created DB schema. Add a password and click Next.


Define the default and temporary tablespaces that will be used by the new schema and click Next.


If the tablespaces do not exist, they will be created for you. Click Ok.


The installer will check once more that everything is okay and will also create the necessary tablespaces. Click Ok.


On the next page we get a summary of what the installer will do. If everything looks correct, click Create to create the necessary DB objects.


Check the Completion Summary, click Close and that’s it! You have successfully created the “Common Infrastructure Services” schema, which is a pre-requisite for the ODI Agent install.


The next step is to run the setup wizard that will configure the ODI Standalone agent for us. Run the config program at ORACLE_HOME/oracle_common/common/bin/config.cmd. In the first screen, let’s create a new domain. The domain folder is where the ODI Agent batch programs, such as the start/stop agent scripts, will reside. Select a meaningful folder and click Next.


In the next screen, select “Oracle Data Integrator – Standalone Agent – 12.2.1.2.6 [odi]” and click Next. This step will also install some basic standalone components required by the ODI Agent.


Select a valid JDK location and click next.


Since we did not create our Master and Work repositories using RCU, we cannot use the “RCU Data” option for auto configuration here. It is not a big deal, since we may select “Manual Configuration” and click Next.


Here we need to input all the information related to two schemas: the ODI Master and the “Common Infrastructure Services“. The way this screen works is tricky and confusing, since some options may be typed for all schemas at once. The safest way to fill it in without any mistake is to select one schema, add all its information, then uncheck it, check the other one and add all the information again. Click Next.


The installer will check the information that was added here and, if it is okay, two green marks will be shown in the Status column. Click Next.


The next screen is used to define our ODI Agent name. Create a meaningful name here, since this is what the ODI users will select when choosing on which ODI agent their ETL processes will run. Click Next.


Add the server address, the port and an ODI user/password with “Supervisor” access. For the preferred datasource option, leave it as odiMasterRepository and click Next.


Although we are not going to use our ODI Standalone Agent under a Node Manager (which would be controlled by WebLogic), we still need to select a type for it and create a new credential. Add any name and password (don’t worry, you will not use them for the ODI Standalone Agent) and click Next.


Review the install summary and if everything is ok, just click Create.


Watch all the steps turn into green checks and, once completed, click Next.


That’s the end of the configuration! You have successfully completed the ODI Standalone agent setup and it is ready to run.


In order to run the ODI agent, open a CMD prompt, navigate to your base domain folder and run the ODI Agent start program, passing the agent name as an input argument: agent.cmd -NAME=DEV_AGENT. Wait a little bit for it to load and, when its status gets to “started”, it is good to go.


Now that the ODI agent is up and running, we may go to ODI Topology/Agents and double click the ODI agent that we created. Click the Test button and see what happens. If everything is correct, you will see an information window saying that the ODI agent test was successful!


Congratulations, you now have an ODI 12c Standalone Agent configured. As you can see, there are a few more steps to do compared to ODI 11g. I hope this post helps you get prepared for this new kind of install.

Thanks, see ya!

 

ODI KMs for HFM 11.1.2.4

Posted in 11.1.1.9.0, ACE, Configuration, DEVEPM, ETL, Hacking, HFM, Knowledge Models, ODI, ODI 11g, ODI Architecture, Uncategorized on March 3, 2017 by RZGiampaoli

Hi guys, how are you? Today we are proud to announce that we are making the ODI KMs for HFM 11.1.2.4 available.

—- EDITED on June/17 —-

We developed these KMs around 6 months ago, but we were waiting to release them together with an article that we wrote for Oracle.

Since OTN had some “priority changes”, our article was postponed to later this year. As we had some people asking for these KMs, we decided to release them now; when the article is published, we will let you guys know as well.

The article is live here! And if you guys are having errors with our KMs, please check our troubleshooting post here.

—- EDITED on June/17 —-

Prior to version 11.1.2.4, ODI could easily be used for HFM integration processes. ODI used its KMs with specific HFM drivers (HFMDriver.dll) provided by Oracle to access and manipulate HFM applications. However, in HFM’s latest version, Oracle decided to remove its support for ODI, meaning that all HFM integrations would have to move from ODI to either manual interaction with HFM, another integration tool (like FDMEE), or custom code using the new Java HFM API.

Since we did not want to re-write our whole ODI environment, and none of the above options was robust enough, we decided to recreate the ODI KMs using the Java HFM API. For these KMs to work, we need to do two things: import them from ODI Java Net and do some setup in the ODI agent.

In the article we explain all the options and how we came up with this solution, but we will not talk about that here, since we want you guys to read the article as well and we cannot use its content here after signing an exclusivity agreement with Oracle.

The first part is easy: you just need to download the files from the link below.

ODI KMS for HFM 11.1.2.4

The second one is more difficult. We need to make the new HFM jars available to the ODI Agent and, in order to do so, we have two options:

Install the agent on the HFM machine, OR copy the necessary jar files to the agent drivers folder (oracledi\agent\drivers).

If your architecture allows having both HFM and the ODI agent on the same server, then you may use the first approach, which is very simple. The only thing to do is to change the odiparams file (the oracledi\agent\bin\odiparams.bat file in a standalone agent) and add the location of three HFM jar files. Open the odiparams.bat file and search for “ODI_ADDITIONAL_CLASSPATH”. In that setting, just add the location of the HFM jar files, as below (this is just an example; please adjust the paths according to your environment):

set ODI_ADDITIONAL_CLASSPATH=%ODI_ADDITIONAL_CLASSPATH%;^
"D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_j2se.jar";^
"D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_thrift.jar";^
"D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_hfm_server.jar"

Save the file, restart the ODI agent and it is done.

If you decide to go with the second option, we provide below a list of all the necessary jars (be prepared… it’s huge). In the article we explain how to identify all the necessary jar files in a systematic way, but here that is not an option, as explained before.

Search for all the jars in the list below and copy all of them to the oracledi\agent\drivers folder.

adm.jar
admaps.jar
admodbo.jar
ap.jar
ArtifactListing.jar
audit-client.jar
axiom-api-1.2.10.jar
axiom-impl-1.2.10.jar
axis-ant.jar
axis-jaxrpc-1.2.1.jar
axis.jar
axis2-adb-1.5.4.jar
axis2-kernel-1.5.4.jar
axis2-transport-http-1.5.4.jar
axis2-transport-local-1.5.4.jar
backport-util-concurrent.jar
broker-provider.jar
bsf.jar
castor-1.3.1-core.jar
castor-1.3.1.jar
com.bea.core.apache.commons.collections_3.2.0.jar
com.bea.core.apache.commons.net_1.0.0.0_1-4-1.jar
com.bea.core.apache.commons.pool_1.3.0.jar
com.bea.core.apache.log4j_1.2.13.jar
com.bea.core.apache.regexp_1.0.0.0_1-4.jar
com.bea.core.apache.xalan_2.7.0.jar
com.bea.core.apache.xml.serializer_2.7.0.jar
com.oracle.ws.orawsdl_1.4.0.0.jar
commons-cli-1.1.jar
commons-codec-1.4.jar
commons-compress-1.5.jar
commons-configuration-1.5.jar
commons-dbcp-1.4.0.jar
commons-discovery-0.4.jar
commons-el.jar
commons-fileupload-1.2.jar
commons-httpclient-3.1.jar
commons-io-1.4.jar
commons-lang-2.3.jar
commons-validator-1.3.1.jar
cpld.jar
css.jar
cssimportexport.jar
ctg.jar
ctg_custom.jar
dms.jar
epml.jar
epm_axis.jar
epm_hfm_web.jar
epm_j2se.jar
epm_jrf.jar
epm_lcm.jar
epm_misc.jar
epm_stellant.jar
epm_thrift.jar
essbaseplugin.jar
essbasestudioplugin.jar
ess_es_server.jar
ess_japi.jar
fm-actions.jar
fm-adm-driver.jar
fm-web-objectmodel.jar
fmcommon.jar
fmw_audit.jar
glassfish.jstl_1.2.0.1.jar
hssutil.jar
httpcore-4.0.jar
identitystore.jar
identityutils.jar
interop-sdk.jar
jacc-spi.jar
jakarta-commons.jar
javax.activation_1.1.jar
javax.mail_1.4.jar
javax.security.jacc_1.0.0.0_1-1.jar
jdom.jar
jmxspi.jar
jps-api.jar
jps-common.jar
jps-ee.jar
jps-internal.jar
jps-mbeans.jar
jps-unsupported-api.jar
jps-wls.jar
js.jar
json.jar
jsr173_1.0_api.jar
lcm-clu.jar
lcmclient.jar
LCMXMLBeans.jar
ldapbp.jar
ldapjclnt11.jar
libthrift-0.9.0.jar
log4j-1.2.14.jar
lucene-analyzers-1.9.1.jar
lucene-core-1.9.1.jar
lucene-spellchecker-1.9.1.jar
neethi-2.0.4.jar
ojdbc6dms.jar
ojdl.jar
opencsv-1.8.jar
oraclepki.jar
org.apache.commons.beanutils_1.8.3.jar
org.apache.commons.digester_1.8.jar
org.apache.commons.logging_1.1.1.jar
osdt_cert.jar
osdt_core.jar
osdt_xmlsec.jar
quartz.jar
registration_xmlBeans.jar
registry-api.jar
resolver.jar
saaj.jar
scheduler_ces.jar
servlet-api.jar
slf4j-api-1.5.8.jar
slf4j-log4j12-1.5.8.jar
sourceInfo.jar
stax-api-1.0.1.jar
wf_ces_utils.jar
wf_eng_agent.jar
wf_eng_api.jar
wf_eng_server.jar
wldb2.jar
wlpool.jar
wlsqlserver.jar
wsplugin.jar
xbean.jar
xmlparserv2.jar
xmlpublic.jar
xmlrpc-2.0.1.jar
XmlSchema-1.3.1.jar

Restart the ODI agent and it should be ready to execute any HFM Java code inside of ODI.

I know that this is a lot of jars and it will take some time to find all of them, but at least you’ll be able to upgrade your HFM and still use the same interfaces you have today in ODI to manage HFM (just remember to use the new data store objects reverse-engineered with the new RKM).

The KM usage is very similar to the old ones, and we added instructions to all of their options, so we will not explain them here (just in the article). The only important difference is how to set up the “Cluster (Data Server)” information on the Data Server (Physical Architecture). For the new HFM API, we need to inform two new settings: the Oracle Home and Oracle Instance paths. Those paths are related to the server where your HFM application is installed, and they are used internally by the HFM API to figure out all the HFM information related to that specific HFM instance.

Due to these two new settings, and in order to keep all connection information in a single place (ODI Topology), “Cluster (Data Server)” was overloaded to receive three settings instead of just one, separated by colons. So now “Cluster (Data Server)” receives “dataServerName:oracleHomePath:oracleInstancePath” instead of just “dataServerName”. For example (the cluster name and paths below are purely illustrative): HFM_CLUSTER:D:\Oracle\Middleware\EPMSystem11R1:D:\Oracle\Middleware\user_projects\epmsystem1.

[Screenshot: the overloaded “Cluster (Data Server)” field]

With those considerations in mind, it is just a matter of creating a new Data Server, setting the overloaded “Cluster (Data Server)” information and the user/password that ODI will use to access the HFM application. After that, we just need to create a Physical Schema with the name of the HFM application, a new Logical Schema, and associate them in a context.

And that is it: you are ready to upgrade your HFM environment and still use your old ODI interfaces to maintain HFM. If you have any doubts/suggestions about the KMs, please feel free to contact us.

If you guys are having errors with our KMs, please check our troubleshooting post here.

I hope you guys enjoy these KMs. See you soon!

Tips and Tricks: Working with ODI Variables and Global Parameters

Posted in ODI Architecture on September 7, 2015 by RZGiampaoli

Hi guys, today we’ll talk about a very simple but powerful technique that we always use in our integrations. It joins two concepts together and makes our lives a lot easier and our integrations a lot more dynamic. We are talking about variables and the concept of “Global” parameters.

In our integrations we never, ever have anything hard coded. Every time you hard code something, it will come back to bite you in the future, that is for sure.

So, the first thing we do in a project is to create a table that we call ODI_PARAMETER. This table will contain all the configurations and parameters that would otherwise need to be hard coded.

I like to create this table in our work schema (to make it easier to use) and it looks like this:

ODI_PARAMETER Table
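
A minimal sketch of what this table can look like, assuming the four columns referenced throughout this post (the datatypes and sizes are arbitrary):

CREATE TABLE ODI_PARAMETER (
    SESSION_NM      VARCHAR2(400),
    PARAMETER_TYPE  VARCHAR2(100),
    PARAMETER_NAME  VARCHAR2(100),
    PARAMETER_VALUE VARCHAR2(4000)
);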

The “SESSION_NM” column is what makes a variable reusable in all the scenarios that we want, meaning we’ll have only one variable for all packages in the project, or even for all projects (if we make this variable global in ODI).

How does it work? First of all, we need to get the “Session Name” of our “Scenario/Package”. Why did I say “Scenario/Package”? Because the result could change depending on whether you are running a Scenario or a Package. Let me explain.

To get the “Session Name” in ODI, we use an ODI substitution method called odiRef.getSession. This method accepts other parameters that can return the session ID and other information, but what matters for us is the “SESS_NAME” parameter, which returns the name of the session, the same name that appears in Operator when we run any object in ODI.

Why did I say object? Because if you run a variable, the session name will be the variable name. If you run an interface, the session name will be the interface name, and the same goes for procedures, packages and scenarios. That is why I distinguish between “Scenario/Package”: if we do not pay attention, the name of the package could differ from the name of the scenario, causing a problem when we run one of them.

Let me show you how it works. First of all, we’ll create a Global ODI variable called SESSION_NM (it could be called whatever you want, I just like to call it like this) and we’ll put this code inside of it:

SESSION_NM Variable
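
Based on the description above, the refreshing query inside the variable is essentially this one-liner (a sketch; the UPPER is there on purpose, as explained below):

SELECT UPPER('<%=odiRef.getSession("SESS_NAME")%>') FROM DUAL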

After that, we will run this variable to see the results:

SESSION_NM Results

As we can see, the value of the variable is the name of the variable itself. Now, let us create a package, put this variable inside it, and see what happens:

Package test 1

Here is what the package looks like, and below are its results:

Package test 1 results

As we can see, the result of the variable is the same as the session name, but in UPPER case, since I created the variable like this. But why did I do that? Let me create a scenario of this package to show you why:

Scenario Creation

And this is why the variable puts the result in UPPER case, and why I said we need to worry about some peculiarities regarding Scenarios and Packages: when you create a scenario, it will have the name of the package in UPPER case and also NO SPACES. Now, if we run the just-created scenario we will have:

Scenario results

Meaning: if we use the result of this variable as a way to return data from a table, we’ll have a problem, because the lookup will not find the same row when you run the package versus the scenario of that package.

The easiest way to resolve that is to name the main scenario (the scenario that will contain all the other scenarios) with no spaces and no special characters (ODI also transforms special characters, like %, into _).

Doing that, we are good to continue, as we can see below:

Package results

Now we have the same results if we run the package or the scenario.

Ok, next let us create another variable to return LOG_PATH, the path where we will store all the logs from our integrations. The code that we will use for this variable is:

Query ODI_PARAMETER
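
A sketch of that refreshing query (the PARAMETER_TYPE and PARAMETER_NAME values are just the ones used in this example):

SELECT PARAMETER_VALUE
FROM ODI_PARAMETER
WHERE SESSION_NM = '#SESSION_NM'
AND PARAMETER_TYPE = 'LOG'
AND PARAMETER_NAME = 'LOG_PATH'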

As we can see, we are using the result of the “SESSION_NM” variable inside this “LOG_PATH” variable. This is what makes this variable reusable in all “Packages/Scenarios/Procedures”. Let us insert a value into our ODI_PARAMETER table and run the package to see the results:

Insert Test 1
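
The inserted row would look something like this (the package name and the path are illustrative):

INSERT INTO ODI_PARAMETER (SESSION_NM, PARAMETER_TYPE, PARAMETER_NAME, PARAMETER_VALUE)
VALUES ('PACKAGE_TEST_1', 'LOG', 'LOG_PATH', 'C:\ODI\LOGS\PACKAGE_TEST_1');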

Package 1 Results

Now let us create a new package with a different name, use the same variable as above, and insert a new line in our ODI_PARAMETER table for the new package:

Package 2 results

See: same code, two different results. That means 90% of the interfaces just need to be duplicated and the parameters for the new interface inserted in ODI_PARAMETER, and it is done. Also, we don’t need a ton of variables to get different results. And there is more.

The code of the variables also does not change much. For a new variable, we just need to duplicate the LOG_PATH variable and change the PARAMETER_TYPE, PARAMETER_NAME and PARAMETER_VALUE filters to get any other information from ODI_PARAMETER. That means it is easier to maintain.

However, let us not stop here. In this example, we are getting the LOG_PATH for the logs of our integrations. Normally this path does not change from integration to integration; what changes is the name of the integration that we are logging, right? So, with our SESSION_NM variable, we could store just the root of our LOG folder in the LOG_PATH variable and then use it like this:

#LOG_PATH\#SESSION_NM

This would make LOG_PATH equal for all integrations. Nevertheless, the way we created our variable, we would still need to insert one line for each integration in our ODI_PARAMETER table.

Well, we just need to change our variable code a little bit to create the concept of GLOBAL parameters. Here is how it works.

First, we delete the two lines we just created, and then we insert just one line in the ODI_PARAMETER table:

Insert Global
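
Something along these lines (assuming 'GLOBAL' as the reserved session name and an illustrative path):

INSERT INTO ODI_PARAMETER (SESSION_NM, PARAMETER_TYPE, PARAMETER_NAME, PARAMETER_VALUE)
VALUES ('GLOBAL', 'LOG', 'LOG_PATH', 'C:\ODI\LOGS');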

Now we just need to change the code of our LOG_PATH variable to this:

Query ODI_PARAMETER global
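
A sketch of the new query: it prefers a row matching the current session name and falls back to the 'GLOBAL' row when none exists (again, the type and name values are the ones from this example):

SELECT PARAMETER_VALUE
FROM (SELECT PARAMETER_VALUE
      FROM ODI_PARAMETER
      WHERE SESSION_NM IN ('#SESSION_NM', 'GLOBAL')
      AND PARAMETER_TYPE = 'LOG'
      AND PARAMETER_NAME = 'LOG_PATH'
      ORDER BY CASE SESSION_NM WHEN '#SESSION_NM' THEN 1 ELSE 2 END)
WHERE ROWNUM = 1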

And here we go:

Global results

We have one global parameter that can be used by all integrations. And the cool thing is that the code above tests whether we have a parameter for the current SESSION_NM and, if not, it gets the value from the GLOBAL parameter. Meaning: if any integration needs a special LOG_PATH or anything else, you just need to insert a new line in ODI_PARAMETER to get a specific value just for that integration:

Global results exceptions

This guarantees that you never, ever need to touch your code again to test or change anything that the business asks you for.

As I said, it is a simple but very powerful technique.

Hope you guys enjoy it, and see you soon.

Master Note For Oracle Data Integrator (ODI) And Legacy Sunopsis, Releases and Patches

Posted in Documentation, Download, ODI 10g, ODI 11g, ODI 12c, Patches, Versions on July 9, 2015 by RZGiampaoli

Hi guys, how are you doing?

Today, a quick but useful post.

How many times have you wanted a specific version of ODI and just could not find it anywhere? Well, here is the solution.

This is the link to the ODI Master Note, which covers ALL versions of ODI as well as all patches and bugs. Everything is there!

This document lives inside Oracle Support, so you need a user with access to it but, even so, this is a very useful link.

You can find the Lifetime Support Policy there:

Lifetime Support

The Grace period:

Grace Period

And of course, downloads of all versions, including version 10, and patches:

Download ODI

I hope you guys enjoy it, and see you later.