Archive for the Hacking Category

Fragmented and Aggregated tables in OBIEE using ODI Part 3/5: Populating the Fragmented tables

Posted in ACE, Data Warehouse, Hacking, OBIEE, ODI, ODI Architecture, Oracle, Oracle Database, Query, SQL on February 11, 2020 by RZGiampaoli

Hey guys, how are you?

Continuing the series Fragmented and Aggregated tables in OBIEE using ODI, today we are talking about how to populate the fragmented tables using ODI.

Just to make it easier for you to navigate this series, here are its parts:

Creating the tables: I'll explain the advantages, how you can create the fragmented tables, and how to further improve performance using partitioning and subpartitioning.

Managing the partitions of the Fragmented/Aggregated tables: here you'll see how to manage all partitions using ODI.

Populating the Fragmented tables using ODI: here you'll see how to create generic components to load all the fragmented tables we need.

Populating the Aggregated tables using ODI: here you'll see how to create generic components to load all the aggregated tables we need.

Setting the OBIEE Repository: finally, we'll set up the OBIEE repository to make use of all the tables.

In my previous posts we designed our 18 tables and built the partition management process. Now we'll talk about how we are going to load data into our fragmented tables. As you can imagine, for the 6 fragmented tables the code will be the same; the only difference is what goes into each table. So what we need ODI to do for us is to dynamically filter the data that comes from a single query and load it into different tables based on a rule.

If you have been following this series from the beginning, you probably already guessed what we'll need to do here: the same thing we did for the partition management process. Well, almost the same thing, because this time we'll do something a little more optimized for the data load, since this is the most time-consuming step.

The main difference from the approach I explained in the previous post is that that approach loops one statement for each row returned by the Command on Source, in SERIAL. That means we need to wait for one loop to finish before the next one starts. For the partition management that's OK, because everything there is very fast, but for this load we can have something better.

We don't need, and don't want, a serial execution, because we have one table for each loop we'll perform. (And even if we had only one table, as long as it has partitions and/or subpartitions we could loop the code at the partition/subpartition level and load everything in parallel. The only case where we can't load in parallel is a single table without partitions.)

In our case we'll loop all 6 table loads in parallel. To do that, we are going to use the same approach as before, but instead of having the ALTER TABLE in the Command on Target, we'll have an OdiStartScen that calls a scenario multiple times in PARALLEL. Let's take a look at the Command on Target:
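For illustration, here's a sketch of what that Command on Target could look like (the scenario name, project code and variable names below are hypothetical; yours will differ):

OdiStartScen "-SCEN_NAME=LOAD_FRAGMENTED_TABLE" "-SCEN_VERSION=-1" "-SYNC_MODE=2" "-SESSION_NAME=LOAD_#TABLE_NM" "-MYPROJ.TABLE_NM=#TABLE_NM" "-MYPROJ.PERIOD=#PERIOD" "-MYPROJ.HIER_NM=#HIER_NM"

Each #column reference is filled from the corresponding column returned by the Command on Source, so every returned row fires one parallel execution of the scenario.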

As we can see, the Command on Target contains an ODI tool called OdiStartScen, which is used to call a scenario. We have a few parameters there that set how we'll call the scenario, and in this case the important one is SYNC_MODE=2, which means everything will execute in parallel (1 is serial). Everything else is variables that we are using to pass information from the Command on Source to the called scenario.

A quick tip before we continue: if you are wondering how in heavens I keep all these settings in mind, for your information I don't, but there's a very easy way to find out. Just drop an OdiStartScen from the ODI Toolbox into any part of your package and fill it with the information you need:

In the General tab we set the scenario we want to call, the version of the scenario (-1 always picks the latest version you have), the synchronous/asynchronous mode (serial/parallel), and the session name, in case you want a different name showing up in your executions. (That last one is always a good idea to set, since you are looping the same code with different parameters; in my case I pass as variables the name of the table, the period, and the hierarchy that the scenario will be loading.)

In the Additional Variables tab you can set all the variables you need to use inside the scenario (as you can see, in my case I have a lot of variables because of the complexity of my sources; your case will be different).

And after you set everything, you just need to click on the Command tab to get the code that ODI generates based on the settings you made:

This is the code that you need to paste into the Command on Target to call the scenario. After that you can delete the OdiStartScen object from your package and continue the development. Another advantage is that it works both ways: if you have the code and want to create an OdiStartScen step, you can paste it into the Command tab and the General/Additional Variables tabs will be filled in based on the code.

With the target code ready, we just need a query that will pass all the information needed to call the scenario. In my case, I pass a lot of information to the target, as you can see by the number of variables I have. Also, the variable information doesn't need to come only from the Command on Source: if you have variables in your package that stay the same for all the loops, you can use them directly to pass their values to the scenario the Command on Target is calling. The Command on Source only needs to provide the information that changes dynamically on each loop.

In my case I'm passing all kinds of information, like the SESSION_NAME (the name of the session, which we use to create session-specific parameters while reusing the same variables everywhere, more information HERE), but the important one for us is TABLE_NM.

In my case, the other variables change values depending on the table we need to load on each loop. To handle that, I insert all the information I need to pass to the called scenario into a parameter table (which I like to call ODI_PARAMETERS). This way I can easily query this table to get the specific information I need for each loop I'm going to perform.
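As an illustration, the Command on Source could be something like this (ODI_PARAMETERS is my own naming, and the columns here are hypothetical; adapt them to your design):

SELECT SESSION_NAME, TABLE_NM, PERIOD, HIER_NM
  FROM ODI_PARAMETERS
 WHERE PROCESS_NM = 'FRAGMENTED_LOAD'

One row comes back per fragmented table, and each row triggers one parallel call of the child scenario.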

With all this set, we just need to create a child scenario and add a "Declare Variable" step for each variable we want to pass to the internal scenario, like this:

As you can see, this is a very simple scenario that contains just one procedure that’ll be used to load the data to our tables.

This approach can be used with interfaces as well, in fact with anything at all. We can have a main scenario that loops a very complex scenario, which loops another scenario, and so on. This is a very nice way to loop something because we have a lot of control over this kind of loop; we can change the behavior of each iteration based on the info we get from the Command on Source.

Also, a very good feature is that if the source query doesn't return any rows, the Command on Target is not executed at all, so you'll always have a clean execution. For example, say you have a folder that can contain 0 to X files and you want to create a scenario to load them. The best approach would be to read the folder's file names (an OS command redirected to a file that we then load, or Java, or Python, or...), store that info in our ODI parameter table, create a procedure that calls a scenario containing the interface that loads each file, and finally just do a SELECT in the Command on Source to get the names of the files in the folder. With that, if we have 0 files nothing is executed, but if we have 1000 files we'll loop the scenario 1000 times, each time passing the name of a different file to the internal scenario. A very neat way to do it.

Inside the procedure we just need to create the query needed to load the data, using the variables to filter the right data and load the right table. This is not my case, but if you have a different source for each table you want to load, just insert this information into the parameter table and pass it to the internal scenario. Use these variables to replace the table names in the SQL and you are done.
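For example, the procedure's loading step could be as simple as this sketch (table and variable names are hypothetical):

INSERT INTO #TABLE_NM
SELECT *
  FROM STG_FACT_SOURCE
 WHERE HIER_NM = '#HIER_NM'
   AND PERIOD = '#PERIOD'

Since #TABLE_NM resolves to a different table in each parallel execution, the same procedure loads all 6 fragmented tables.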

And that's it. This is all we need to do to load all our 6 fragmented tables. In my next post I'll talk about how we can aggregate our data and load our aggregated tables.

I hope you guys enjoy and see you soon.

Automating Essbase Copy Outline Operation using Java API

Posted in ACE, BSO, Cubes, Essbase, Hacking, Hyperion Essbase, Java, Migration, Oracle on August 9, 2017 by RZGiampaoli

Hi guys, how are you? Have you ever tried to automate the process of copying a cube outline from one application to another?

Well, there's an easy way to do that. Basically, you copy the .otl file over the other cube's through the server file system. The problem is that if the cube is not empty, the database becomes corrupted, since we just replaced one .otl file with a foreign .otl file (no restructure happened).

So if you want to copy the outline to an existing cube (one that has data), that is not a solution.

The thing is, the only two ways to do what we want are the EAS "Save as" operation and the Migration Wizard. Both work because they copy the .otl file as an .otn and then run a restructure on the database. The restructure synchronizes the cube with the new outline, making the process safe for a cube that has data in it.

The problem is that neither of these can be automated, and there's no way to do this operation using MaxL or ESSCMD.

In fact, even using the Java API it's hard to figure out how to do it, because all the copy methods seem to copy every kind of object except the outline.

The good news is that we figured out a way to replicate the "Save as" operation using the Java API, after hours of frustration and tears...

Here we go:

[Image: Save As Java code]
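Below is a minimal sketch of that code, assuming hypothetical server, application and cube names, and that the source .otl has already been copied to a local path. The method signatures are approximate, so check them against the JAPI javadoc of your version:

import com.essbase.api.base.EssException;
import com.essbase.api.datasource.IEssCube;
import com.essbase.api.datasource.IEssOlapFileObject;
import com.essbase.api.datasource.IEssOlapServer;
import com.essbase.api.metadata.IEssCubeOutline;
import com.essbase.api.session.IEssbase;

public class OutlineCopy {
    public static void main(String[] args) throws EssException {
        // Hypothetical connection info; adjust to your environment.
        IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
        IEssOlapServer srv = ess.signOn("admin", "password", false, null,
                "http://essbaseserver:13080/aps/JAPI", "essbaseserver");
        try {
            // 1) Lock the target outline and copy the source .otl over it.
            //    This lands on the target cube as an .otn file.
            srv.lockOlapFileObject("TGTAPP", "TGTDB",
                    IEssOlapFileObject.TYPE_OUTLINE, "TGTDB");
            srv.copyOlapFileObjectToServer("TGTAPP", "TGTDB",
                    IEssOlapFileObject.TYPE_OUTLINE, "TGTDB", "C:\\temp\\SRCDB.otl");

            // 2) Restructure to merge the .otn into the .otl, keeping all data.
            IEssCube cube = srv.getApplication("TGTAPP").getCube("TGTDB");
            IEssCubeOutline otl = cube.openOutline();
            otl.restructureCube(IEssCube.EEssRestructureOption.KEEP_ALL_DATA);
            otl.close();
        } finally {
            srv.disconnect();
            ess.signOff();
        }
    }
}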

The code is really simple. We need to connect to the Essbase server, lock the target outline (the one we'll overwrite) and then copy the outline from one application to the other. To do that we are going to use the functions "lockOlapFileObject" and "copyOlapFileObjectToServer".

The process we just described creates an .otn file in the target cube. Now comes the great catch of this code (which is not documented anywhere):

If we open the target outline in EAS, we will still see the old metadata. To commit the changes, we need to perform a restructure that merges the new outline (.otn) with the old one (.otl), updating the metadata.

To do that we are going to use the functions of the "IEssCubeOutline" class: "open", "restructureCube" and "close" on the target outline.

That is it. This process does exactly what "Save As" in EAS does, which means you can copy outlines from one application to another even when the target database contains data.

I hope you guys enjoy and see you soon.

ODI KMs for HFM 11.1.2.4

Posted in 11.1.1.9.0, ACE, Configuration, DEVEPM, ETL, Hacking, HFM, Knowledge Models, ODI, ODI 11g, ODI Architecture, Uncategorized on March 3, 2017 by RZGiampaoli

Hi guys, how are you? Today we are proud to announce that we are making the ODI KMs for HFM 11.1.2.4 available.

—- EDITED on June/17 —-

We developed these KMs around 6 months ago, but we were waiting to release them together with an article that we wrote for Oracle.

Since OTN had some "priority changes", our article was postponed until later this year. As we had some people asking for these KMs, we decided to release them now; when the article is published, we will let you guys know as well.

The article is live here! And if you guys are having errors with our KMs, please check our troubleshooting post here.

—- EDITED on June/17 —-

Prior to version 11.1.2.4, ODI could easily be used for HFM integration processes. ODI used its KMs with specific HFM drivers (HFMDriver.dll) provided by Oracle to access and manipulate HFM applications. However, in HFM's latest version, Oracle decided to remove its support for ODI, meaning that all HFM integrations would have to move from ODI to either manual interaction with HFM, usage of another integration tool (like FDMEE), or custom code using the new Java HFM API.

Since we didn't want to rewrite our entire ODI environment, and none of the above options were robust enough, we decided to recreate the ODI KMs using the Java HFM API. For these KMs to work we need to do two things: import them into ODI and do some setup in the ODI agent.

In the article we explain all the options and how we came up with this solution, but we won't cover that here, since we want you guys to read the article as well and we can't use its content here, as we already signed an exclusivity agreement with Oracle.

The first part is easy: you just need to download the files from the link below.

ODI KMs for HFM 11.1.2.4

The second one is more difficult. We need to make the new HFM jars available to the ODI agent, and to do so we have two options:

Install the agent in the HFM machine OR copy the necessary jar files to the agent drivers folder (oracledi\agent\drivers).

If your architecture allows both HFM and the ODI agent on the same server, you may use the first approach, which is very simple. The only thing to do is to change the odiparams file (oracledi\agent\bin\odiparams.bat in a standalone agent) and add the location of those three HFM jar files. Open odiparams.bat, search for "ODI_ADDITIONAL_CLASSPATH" and, on that setting, add the location of the HFM jar files, as below (this is just an example; please adjust the paths according to your environment):

set ODI_ADDITIONAL_CLASSPATH=%ODI_ADDITIONAL_CLASSPATH%;"D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_j2se.jar";"D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_thrift.jar";"D:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_hfm_server.jar"

Save the file, restart the ODI agent, and it is done.

If you decide to go with the second option, we'll provide a list of all the necessary jars (be prepared... it's huge). In the article we explain how to identify all the necessary jar files in a systematic way, but here that is not an option, as explained before.

Search for all the jars in the list below and copy them to the oracledi\agent\drivers folder.

adm.jar
admaps.jar
admodbo.jar
ap.jar
ArtifactListing.jar
audit-client.jar
axiom-api-1.2.10.jar
axiom-impl-1.2.10.jar
axis-ant.jar
axis-jaxrpc-1.2.1.jar
axis.jar
axis2-adb-1.5.4.jar
axis2-kernel-1.5.4.jar
axis2-transport-http-1.5.4.jar
axis2-transport-local-1.5.4.jar
backport-util-concurrent.jar
broker-provider.jar
bsf.jar
castor-1.3.1-core.jar
castor-1.3.1.jar
com.bea.core.apache.commons.collections_3.2.0.jar
com.bea.core.apache.commons.net_1.0.0.0_1-4-1.jar
com.bea.core.apache.commons.pool_1.3.0.jar
com.bea.core.apache.log4j_1.2.13.jar
com.bea.core.apache.regexp_1.0.0.0_1-4.jar
com.bea.core.apache.xalan_2.7.0.jar
com.bea.core.apache.xml.serializer_2.7.0.jar
com.oracle.ws.orawsdl_1.4.0.0.jar
commons-cli-1.1.jar
commons-codec-1.4.jar
commons-compress-1.5.jar
commons-configuration-1.5.jar
commons-dbcp-1.4.0.jar
commons-discovery-0.4.jar
commons-el.jar
commons-fileupload-1.2.jar
commons-httpclient-3.1.jar
commons-io-1.4.jar
commons-lang-2.3.jar
commons-validator-1.3.1.jar
cpld.jar
css.jar
cssimportexport.jar
ctg.jar
ctg_custom.jar
dms.jar
epml.jar
epm_axis.jar
epm_hfm_web.jar
epm_j2se.jar
epm_jrf.jar
epm_lcm.jar
epm_misc.jar
epm_stellant.jar
epm_thrift.jar
essbaseplugin.jar
essbasestudioplugin.jar
ess_es_server.jar
ess_japi.jar
fm-actions.jar
fm-adm-driver.jar
fm-web-objectmodel.jar
fmcommon.jar
fmw_audit.jar
glassfish.jstl_1.2.0.1.jar
hssutil.jar
httpcore-4.0.jar
identitystore.jar
identityutils.jar
interop-sdk.jar
jacc-spi.jar
jakarta-commons.jar
javax.activation_1.1.jar
javax.mail_1.4.jar
javax.security.jacc_1.0.0.0_1-1.jar
jdom.jar
jmxspi.jar
jps-api.jar
jps-common.jar
jps-ee.jar
jps-internal.jar
jps-mbeans.jar
jps-unsupported-api.jar
jps-wls.jar
js.jar
json.jar
jsr173_1.0_api.jar
lcm-clu.jar
lcmclient.jar
LCMXMLBeans.jar
ldapbp.jar
ldapjclnt11.jar
libthrift-0.9.0.jar
log4j-1.2.14.jar
lucene-analyzers-1.9.1.jar
lucene-core-1.9.1.jar
lucene-spellchecker-1.9.1.jar
neethi-2.0.4.jar
ojdbc6dms.jar
ojdl.jar
opencsv-1.8.jar
oraclepki.jar
org.apache.commons.beanutils_1.8.3.jar
org.apache.commons.digester_1.8.jar
org.apache.commons.logging_1.1.1.jar
osdt_cert.jar
osdt_core.jar
osdt_xmlsec.jar
quartz.jar
registration_xmlBeans.jar
registry-api.jar
resolver.jar
saaj.jar
scheduler_ces.jar
servlet-api.jar
slf4j-api-1.5.8.jar
slf4j-log4j12-1.5.8.jar
sourceInfo.jar
stax-api-1.0.1.jar
wf_ces_utils.jar
wf_eng_agent.jar
wf_eng_api.jar
wf_eng_server.jar
wldb2.jar
wlpool.jar
wlsqlserver.jar
wsplugin.jar
xbean.jar
xmlparserv2.jar
xmlpublic.jar
xmlrpc-2.0.1.jar
XmlSchema-1.3.1.jar

Restart the ODI agent and it should be ready to execute any HFM Java code inside ODI.

I know this is a lot of jars and it will take some time to find them all, but at least you'll be able to upgrade your HFM and still use the same interfaces you have today in ODI to manage HFM (just remember to use the new data store objects reversed with the new RKM).

The KM usage is very similar to the old ones, and we added instructions to all their options, so we won't explain them here (just in the article). The only important difference is how to set up the "Cluster (Data Server)" information on the Data Server (Physical Architecture). For the new HFM API, we need to inform two new settings: the Oracle Home and Oracle Instance paths. Those paths belong to the server where your HFM application is installed, and they are used internally by the HFM API to figure out all the HFM information related to that specific HFM instance.

Due to these two new settings, and in order to keep all connection information in a single place (ODI Topology), "Cluster (Data Server)" was overloaded to receive three settings instead of just one, separated by colons. So now "Cluster (Data Server)" receives "dataServerName:oracleHomePath:oracleInstancePath" instead of just dataServerName.

[Image: data-server]
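For example, assuming hypothetical paths, the overloaded field would be filled like this:

HFMCluster:D:\Oracle\Middleware\EPMSystem11R1:D:\Oracle\Middleware\user_projects\epmsystem1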

With those considerations in mind, it is just a matter of creating a new Data Server, setting the overloaded "Cluster (Data Server)" information and the user/password that ODI will use to access the HFM application. After that, we just need to create a Physical Schema with the name of the HFM application, a new Logical Schema, and associate them in a context.

And that is it: you are ready to upgrade your HFM environment and still use your old ODI interfaces to maintain HFM. If you guys have any doubts/suggestions about the KMs, please feel free to contact us.

If you guys are having errors with our KMs, please check our troubleshooting post here.

I hope you guys enjoy these KMs. See you soon!

Remotely Zipping files with ODI

Posted in 11.1.1.9.0, ACE, Configuration, EPM, Essbase, ETL, Hacking, Hyperion Essbase, InfraStructure, ODI, ODI 10g, ODI 11g, ODI 12c, ODI Architecture, OS Command, Performance, Remotely, Tips and Tricks, Zip Files on April 5, 2016 by RZGiampaoli

Hi guys, how are you? It has been a long time since I last wrote something, but it was for a good reason: we were working on our two Kscope sessions! Yes, this year we will have 2 sessions, and I think they will be great!

Anyway, let us get to the point!

Today I want to talk about something that should be very simple to do but in the end is a nightmare... zipping a file on a remote server.

A little bit of context: I was working on a backup interface for a client and, because their cubes are very big, I was trying to improve the performance as much as I could.

Part of the backup was to copy the .ind and .pag files as well as the data extract files. For one app we are talking about 30 GB of .pag files and 40 GB of data extract files.

Their ODI infrastructure is like this:

[Image: Infrastructure]

Basically, I need to extract/copy data from the Essbase server to the disaster recovery server (DR server). Nothing special here. The problem is that, because of the size of the files, I wanted to zip them first and only then send them to the DR server.

If you use the ODI tools to zip the files, what they do is bring all the files to the ODI agent server, zip everything, and then send it back. I really did not want all that network traffic and all the time lost in the process (also, the agent server is a LOT less powerful than the Essbase server).

[Image: Regular ODI tools zip process]

Then I started to research how I could do that (and thank you, my colleague and friend Luis Fernando Cairo, who helped me a lot running tests on this).

First of all, we have three main options here:

  1. Create a .bat file and run it remotely: I didn't like it because I don't want a lot of .bat files all over the place.
  2. Use the Windows invoke command: I'd need a program on the server, like 7-Zip, which I don't have access to install freely, and I don't want to install zip programs all over the place either.
  3. Use PsExec to execute a program on the server: same issue as the previous one.

OK, I figured out that in the end I'd need to create/install something on the server... and I hated it. Well, let's at least optimize the problem, right?

Then I started thinking: what do all Hyperion servers have in common? The answer is JAVA.

So I thought: I can use the jar command to zip the files:

jar cfM file.zip *.pag *.ind

Where:

c: Creates a new archive file named jarfile (if f is specified) or writes to standard output (if f and jarfile are omitted), adding to it the files and directories specified by inputfiles.

f: Specifies the file jarfile to be created (c), updated (u), extracted (x), indexed (i), or viewed (t). The -f option and filename jarfile are a pair — if present, they must both appear. Omitting f and jarfile accepts a “jar file” from standard input (for x and t) or sends the “jar file” to standard output (for c and u).

M: Do not create a manifest file entry (for c and u), or delete a manifest file entry if one exists (for u).

Hmm, things started to look better. Now I had to decide whether I would use the invoke command or PsExec.

I started trying the invoke command, but after some time I figured out that I couldn't execute the jar command using invoke.

So my last alternative was PsExec.

The good thing about it is that it's just a zip file: you only need to unzip it on the agent server, add it to the environment variables (PATH), and you are good to go.

It works amazingly well.

You can run anything remotely with this, and it's a centralized and non-invasive solution as well (which I liked).

You just need to:

psexec \\Server -accepteula -w "work dir" javapath\jar cfM file.zip *.pag *.ind

Where:

-w: Sets the working directory of the process (relative to the remote computer).

-accepteula: Suppresses the display of the license dialog.

There's one catch: for some unknown reason the ODI agent does not pick up the PATH correctly, so you need to use the complete path to where PsExec was "installed". The ODI command looks like this:

OdiOSCommand "-OUT_FILE=Log_Path/Zip_App_Files-RUM-PNL.Log" "-ERR_FILE=Log_Path/Zip_App_Files-RUM-PNL.err"

D:\Oracle\PSTools\psexec \\server -accepteula -w \\arborpath\APP\RUM\PNL\ JAVA_PATH\jdk160_35\bin\jar cfM App_Files-RUM-PNL.zip *.pag *.ind

With this, we will have a process like this:

[Image: Remote zip process]

This shouldn't be something complicated, but it is. Believe me, I ended up with a very fast process and the client is very happy.

I hope you guys enjoy it and see you soon.

Using templates to create dynamic rules in Calcmanager 11.1.2.4

Posted in 11.1.2.4, ACE, BSO, Business Rules, CalcScript, Calculation Manager, Calculation Script, EPM, Essbase, Hacking, Hyperion Essbase, Hyperion Planning, New Features, Oracle, Performance, Templates, Tips and Tricks on January 1, 2016 by RZGiampaoli

Hi guys and happy new year!!!

And what better way to start the new year than with a post?

Today I want to talk about the new version of Calculation Manager (11.1.2.4). I know it has been out for a while now, but I still think it has some cool features that are not much explored.

In every Planning project, sooner or later, we reach a point where we need to create a currency conversion rule (at least I like to create a custom rule, for performance reasons). Also, some companies use a lot of currencies.

Before continuing, I need to say that in our case I found out that less code means less performance. What I mean by that is that for the forecast horizon period range, for example, instead of using "IF" to test my 15/18-month horizon, I triplicate the code using "FIX" together with "SET EMPTYMEMBERSETS ON;".

This SET command makes a "FIX" be ignored if it returns an empty set. The approach increases performance a lot, sometimes by more than 8 times (in this currency example, running at channel level with "IF" took 8 hours; with "FIX" it takes 1 hour).

OK, that means I rarely use "IF" in my rules.

Well, you can already imagine how big, boring and error-prone the rules become if I use only "FIX", right? However, with the "Template" feature in Calc Manager and the ability to call any template or rule using a script, this nightmare turns into a dream!

Let us see how it works!

A Currency conversion for forecast applications normally has two parts:

The first part is the period range.

The second part is the currency conversion itself.

With Calc Manager, we can create two templates: one for the period range and the other to call the currency conversion part.

Then, for the currency conversion calculation, I created a simple core template with just a formula and a script in it:

[Image: UDA Loop Template]

"dtp_Quote_UDA" is a DTP (design time prompt) variable with a function that inserts double quotes around every value coming from the "dtp_UDA" DTP variable (which is used to get values from the outer template). This way we can use just one variable to play two roles: currency name and UDA value. The code is:

@QUOTE([dtp_UDA])

Then, inside the currency calculation script, we will have:

[Image: Currency Script]
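Just as a rough, hypothetical sketch (your formula will depend on your application; the member names here are made up), the core script could look like this:

FIX(@UDA(Entity, [dtp_Quote_UDA]))
    "USD" = "Local" / "HSP_Rate_[dtp_UDA]";
ENDFIX

Note how [dtp_Quote_UDA] is used where a quoted currency name is needed, while [dtp_UDA] is used where the bare value is embedded in the rate member name.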

As we can see inside the script, I used "dtp_Quote_UDA" as well as "dtp_UDA". This reduces the number of parameters I need to pass, and the maintenance as well. Think about it: we need the same information twice, once with double quotes for the UDA values and once without quotes for the rate member name.

With this technique we pass the value just once, let's say BRL, and before execution Calc Manager replaces it everywhere in the code, so we get @UDA(Entity,"BRL") as well as HSP_Rate_BRL.

This is awesome because now I have just 8 lines of code that can be multiplied as many times as I want. The best thing is: either everything is right or everything is wrong :)

With Calc Manager we now have a layer between the code written and the code generated, and this is pretty cool because it opens a huge window for creativity: you can even generate the entire code dynamically.

OK, the next step is to loop this template once for each currency we have. For this, I created another template. This one is used for the forecast horizon period range as well as for looping the currencies.

[Image: Period loop template]

Again, the code is pretty simple: just two fixes and one script.

For the "Period FIX" we use two DTP variables to get the values of year and period from the outer rule ([dtp_Period] and [dtp_Year]).

The product fix is just something related to our architecture, and we don't need to bother with it.

Now, the "Loop Currency" is a script that calls our first template N times. How can we do that with a script?

Basically, every time you drag and drop a template inside a rule or into another template, behind the graphical designer Calc Manager generates a command line. This code exists thanks to its API, and you can use it to manipulate and generate almost any kind of code inside Calc Manager.

[Image: Currency loop template]

As we can see, inside the script we have a "FIX" for the USD currency (which is the only conversion that is different) and one row for each other currency.

Each row calls a template, '%Template(name:="Currency Conversion – 2 – UDA Loop"', from an application, 'application:="WWOPS"', and a plan type, 'plantype:="Pnl"', and passes two DTP values, one for the UDA and the other for the entity: 'dtps:=("dtp_UDA":=[[AED]],"dtp_Entity":=[[dtp_Entity]])'.
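Putting those fragments together, the "Loop Currency" script would look roughly like this (reconstructed from the pieces above; the exact quoting is whatever Calc Manager generates when you drag the template in):

FIX("USD")
    /* USD conversion, the only one that is different */
ENDFIX
%Template(name:="Currency Conversion – 2 – UDA Loop",application:="WWOPS",plantype:="Pnl",dtps:=("dtp_UDA":=[[AED]],"dtp_Entity":=[[dtp_Entity]]))
%Template(name:="Currency Conversion – 2 – UDA Loop",application:="WWOPS",plantype:="Pnl",dtps:=("dtp_UDA":=[[BRL]],"dtp_Entity":=[[dtp_Entity]]))
...one line per currency...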

As you can see, you can pass a DTP variable using the variable itself (dtp_Entity:=[[dtp_Entity]]).

If you want to create this API code and don't know how to write the right syntax, you can just drag your template into a rule/template, set everything up, and change your view to "Edit Script" or "View Script".

[Image: Edit script]

Now we just need to create the rule that calls this template for the three period ranges we have:

[Image: Currency rule]

Again, a simple design with a small number of components. Here we have our SET commands, a main fix and the three templates, each one calling the previous template for a different period range.

[Image: Period Range]

The final result is a rule with 1213 rows generated from an 8-row template. This is the magic of Calc Manager and templates. You can simplify everything: you can create dynamic aggregations that change depending on the application and cube, you can create code that changes depending on the member coming from the forms, everything with a small set of code that is reusable anytime we want!

[Images: Rule code 1, Currency code 2]


A dynamic way to build a currency rule in Calc Manager: a lot faster to build and a lot easier to maintain, since if a new currency starts to be used you just need to copy and paste one line in the "Loop Currency" script, change the currency, and it's done.

Building rules using templates looks like more work, and sometimes it gets a little complicated, but I remember well how much time I used to spend changing BRs, and I can guarantee that this way is much faster and easier to develop and, mainly, to maintain.

In the end we just created one rule and two templates containing a single core calculation (in my case, a script calling this core 47 times), some fixes, and that's all. It was fewer than 60 rows of written code to generate 1213 rows. Pretty good for me 🙂

[Image: Rules and templates]

Hope you guys enjoy it, and I wish a happy new year to you all and your dear ones.

Happy new year!!!! A new year full of surprises!

Stopping ODI sessions in an automated way

Posted in Hacking, ODI, ODI 11g, ODI Architecture on September 4, 2015 by radk00

Hi all! In today's post we will talk about how you can create automated processes to stop ODI sessions. But first, let's think about why/when we should automate this kind of task.

Everybody knows that the basic way to stop an ODI session is to go to ODI Operator, right-click on a running session and select "Stop Normal"/"Stop Immediate". Obviously it works just fine, but it requires someone to log in, select the jobs and stop them. There will be cases where you will want to stop those scenarios without any human intervention.

So let's imagine that you have a critical ODI job that must make sure that some other secondary ODI jobs are not running before it actually starts. Maybe you could add an OdiSleep object to the critical ODI job: wait a little bit, check if the secondary jobs are still running, sleep again, and so on. It is a safe approach, but sometimes this critical ODI job is also top priority and may have permission to stop all the other secondary ODI jobs before it actually starts.

Or maybe you have an execution window that must be respected, and any ODI job that crosses a specific range of time should be stopped, no matter what. I could write some other examples, but you already got the idea. So, how do we accomplish that in ODI?

If we take a look at the ODI Toolbox panel, we find things like OdiStartScen and OdiStartLoadPlan, but nothing related to stopping, cancelling or killing a session.

[Image: Toolbox]

I'm not sure why Oracle didn't put this kind of object in the Toolbox, but if we go to the ODI agent bin folder (oracledi\agent\bin) we find some interesting .bat (.sh on Linux) scripts there:

  • Restartloadplan.bat
  • Restartsession.bat
  • Startloadplan.bat
  • Startscen.bat
  • Stoploadplan.bat
  • Stopsession.bat

We may right-click/edit each of those to get more information about them. Today we are interested in the last one, "Stopsession.bat". Its syntax is pretty simple:

stopsession <session_number> “-AGENT_URL=<agent_url>” [“-STOP_LEVEL=<normal(default)|immediate>”]

Pretty cool and easy to use. We may just add this call to an ODI procedure and create some logic to stop all the sessions that we want. Let's build an example and see how it would look. Imagine that you have ODI_JOB_A, which needs to stop ODI_JOB_B and all its children (if any of them are running) before it continues its tasks. To accomplish that, we create an ODI procedure and use the command on source/target technique to select the ODI_JOB_B sessions and children that are currently running. It would look like this:

[Image: Command on Source]

In "Command on Source" we write a SQL query against the ODI metadata repository, checking for all ODI_JOB_B sessions (and their children) that are currently running. Of course, this SQL is just an example; you may tweak it to fit your own requirements. Here we are just querying the SESS_NO that belongs to a running session of ODI_JOB_B, UNIONed with all running children of a running ODI_JOB_B session.
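A sketch of such a query, assuming an ODI 11g work repository (table and column names may vary between repository versions):

SELECT S.SESS_NO
  FROM SNP_SESSION S
 WHERE S.SESS_NAME = 'ODI_JOB_B'
   AND S.SESS_STATUS = 'R'
UNION ALL
SELECT C.SESS_NO
  FROM SNP_SESSION C
 INNER JOIN SNP_SESSION P
    ON C.PARENT_SESS_NO = P.SESS_NO
 WHERE P.SESS_NAME = 'ODI_JOB_B'
   AND P.SESS_STATUS = 'R'
   AND C.SESS_STATUS = 'R'

('R' is the status of a running session in the repository.)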

In "Command on Target" it should be just a matter of adding the stopsession command under the "Operating System" technology, but it is not that easy. Let's analyze the stopsession command again:

stopsession <session_number> “-AGENT_URL=<agent_url>” [“-STOP_LEVEL=<normal(default)|immediate>”]

session_number is a value that returns from our "Command on Source" tab, so we are good.

stop_level may be set as normal or immediate, so we are also good here.

The problem that we have is the AGENT_URL. A valid AGENT_URL would look like this:
-AGENT_URL=http://ODISERVER:9001/oraclediagent
This URL is composed of information that is set in our Topology, like the one below:

[Image: Agent]

The problem here is that we don't have any ODI substitution API method that returns this kind of information. The closest we have is <%=odiRef.getSession("AGENT_NAME")%>, which returns just the agent name, nothing more. To get around this, we need to query the ODI metadata repository again and compose the URL using a SQL query against the SNP_AGENT table. Let's create an ODI variable for that, like the one below:

[Image: Variable]
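The variable's refresh query could be a sketch like this, assuming an Oracle-hosted ODI 11g repository (double-check the SNP_AGENT column names against your version):

SELECT 'http://' || HOST_NAME || ':' || HOST_PORT || '/' || WEB_APP_CONTEXT
  FROM SNP_AGENT
 WHERE AGENT_NAME = '<%=odiRef.getSession("AGENT_NAME")%>'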

Here we are using the AGENT_NAME substitution method to get the right information for the running agent. Now we are able to finish our procedure with the "Command on Target" command:

[Image: Command on Target]
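The command itself would be something along these lines (the install path is hypothetical; #SESS_NO comes from the Command on Source and #AGENT_URL is our refresh variable):

D:\oracle\odi\oracledi\agent\bin\stopsession.bat #SESS_NO "-AGENT_URL=#AGENT_URL" "-STOP_LEVEL=normal"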

And that's it! Just add the AGENT_URL refresh variable and this procedure at the very beginning of the ODI_JOB_A package, and it will stop the ODI_JOB_B session and its children before it moves on.

Hope you liked it! See you soon!