Archive for EPM

Oracle SQL for EPM tips and tricks S01EP07

Posted in ACE, DEVEPM, EPM, Oracle, RANK, SQL on August 15, 2019 by RZGiampaoli

Continuing our Oracle SQL for EPM series, today we’ll start to talk about analytic functions and how we can use them for more than “just” analytics.

To start with, let’s talk about RANK(). As the name suggests, RANK() is used to rank our data based on something. It’s very useful to find out which data is more relevant than the rest. Let’s see an example:

Here we have a small table with 2 currencies and a few products. Let’s first start with the basic use of RANK() and see which product generated more income:

The basic syntax is RANK() OVER (ORDER BY COLUMN). Basically, what you are telling Oracle is: rank my data based on a column (or multiple columns). Since I just ordered by the data, the RANK() values get duplicated every time Oracle finds the same value. This is because we have 2 currencies and they are both USD.
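Since the original query screenshot is not shown here, a minimal sketch of the idea (the table T_INCOME and the columns PRODUCT, CURRENCY and INCOME are hypothetical names, just to illustrate the syntax):

SELECT PRODUCT,
       CURRENCY,
       INCOME,
       RANK() OVER (ORDER BY INCOME) AS INCOME_RANK -- ties get the same rank
  FROM T_INCOME;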

To fix that we can do one of two things: either we include the currency inside the ORDER BY, or we use a more advanced feature of RANK(), which is OVER (PARTITION BY).

Let’s see how it works:

If I just add another column to the ORDER BY, it’ll basically create the rank based on the order of these 2 columns. It’s the same as doing an ORDER BY and then following the order of the data that returns. In this case, you can see that the product PR235 got rank 1 for Functional data and rank 11 for USD, even though both have the same value. By the way, you can also see that the ranks are ordered in the opposite order from what we would like to have. This was intentional, to show you how the rank is produced. To fix it we just need to put a DESC in the ORDER BY clause, like we would do in a normal ORDER BY.
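A hedged sketch of that version, again on the hypothetical T_INCOME table, now with a second ORDER BY column to break the ties and a DESC so the highest income gets rank 1:

SELECT PRODUCT,
       CURRENCY,
       INCOME,
       RANK() OVER (ORDER BY INCOME DESC, CURRENCY) AS INCOME_RANK -- CURRENCY breaks the ties
  FROM T_INCOME;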

Ok then let’s see the more advanced way to write this query:

Instead of inserting new columns in the ORDER BY, we can use PARTITION BY. The results here are the same, but this clause can be used in other ways as well, and I would say this is the best way to use it, since it is clearer about what you want to do.

The PARTITION BY does exactly what the name says: it partitions the data by the content of one or more columns. In fact, the PARTITION BY clause can be used with most of the analytic functions, like MAX, SUM, MIN, AVG…, so it’s very powerful, and the best thing is that, if you use it, you don’t need a GROUP BY (we’ll see that in the future).
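A sketch of the PARTITION BY version, still on the hypothetical T_INCOME table; each currency now gets its own independent ranking, and the extra SUM column illustrates the same clause on another analytic function:

SELECT PRODUCT,
       CURRENCY,
       INCOME,
       RANK() OVER (PARTITION BY CURRENCY ORDER BY INCOME DESC) AS INCOME_RANK,
       SUM(INCOME) OVER (PARTITION BY CURRENCY) AS CURRENCY_TOTAL -- analytic SUM, no GROUP BY needed
  FROM T_INCOME;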

Now, as I said before, we can have other uses for RANK() than just ranking data. Let’s say that you have this table without the CURRENCIES column:

Without the CURRENCIES column we end up with duplicated data in the table, right? In this case we could just do a DISTINCT and use the data as is, but let’s say you want to create the CURRENCIES column based on the data that we have, and the rule would be: the first row you find is USD and the second (if it exists) would be Functional. We can use RANK() for that too:

Since the data here is the same for the same product, the only thing that could differentiate the rows was the ROWNUM (ROWID would be better, to make sure which one really was the first, but it’s harder to see in the example). I used it to create a rank that shows which row has the lowest ROWNUM: that row gets rank 1, the second one gets rank 2, and with this information I just did a DECODE to make 1 USD and 2 Functional (plus an NA in case we have more than 2 duplicated rows).
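A sketch of that trick, assuming a hypothetical T_INCOME_NODUP table that holds the duplicated rows but has no CURRENCIES column:

SELECT PRODUCT,
       INCOME,
       DECODE(RANK() OVER (PARTITION BY PRODUCT ORDER BY ROWNUM),
              1, 'USD',        -- first row found for the product
              2, 'Functional', -- second row, if it exists
              'NA') AS CURRENCIES
  FROM T_INCOME_NODUP;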

This can be used in exactly the same way if you have a metadata table without the data storage information and you want to create it. Then the first member you find (through our friend CONNECT BY PRIOR) will be the prototype (Store, Never Share, or Dynamic Calc and Store) and the others would be Shared members.

Of course there are many more ways to use this function, and we’ll see more of them along with the other analytic functions that we are going to cover here.

See you soon guys.

Let’s Join DEVEPM @ KSCOPE 16

Posted in ACE, EPM, Essbase, ETL, Hyperion Essbase, Hyperion Planning, InfraStructure, Kscope 16, ODI, ODI 10g, ODI 11g, ODI 12c, ODI Architecture, ODTUG, Oracle Database, OS Command, Performance, Tips and Tricks on April 5, 2016 by RZGiampaoli

Hi guys, how are you?

Just a quick post about this year’s KSCOPE. This year we’ll have 2 excellent sessions:

Take a Peek at Dell’s Smart EPM Global Environment:

Ricardo Giampaoli, TeraCorp

Co-presenter(s): Rodrigo Radtke de Souza, Dell

When: Jun 27, 2016, Session 2, 10:15 am – 11:15 am

Topic: EPM Applications – Subtopic: Planning

In a fast-moving business environment, finance leaders are successfully leveraging technology advancements to transform their finance organizations and generate value for the business.
Oracle’s Enterprise Performance Management (EPM) applications are an integrated, modular suite that supports a broad range of strategic and financial performance management tools that help businesses unlock their potential.

Dell’s global financial environment contains over 10,000 users around the world and relies on a range of EPM tools such as Hyperion Planning, Essbase, Smart View, DRM, and ODI to meet its needs.

This session shows the complexity of this environment, describing all the relationships between those tools, the techniques used to keep such a large environment in sync, and how it meets the most varied needs from the different businesses and laws around the world to create a complete and powerful business decision engine that takes Dell to the next level.

Incredible ODI Tips to Work with Hyperion Tools

Ricardo Giampaoli, TeraCorp

Co-presenter(s): Rodrigo Radtke de Souza, Dell

When: Jun 27, 2016, Session 6, 4:30 pm – 5:30 pm

Topic: EPM Platform – Subtopic: EPM Data Integration

ODI is an incredible and flexible development tool that goes beyond simple data integration. But most of its development power comes from outside-the-box ideas.

  • Did you ever want to dynamically run any number of “OS” commands using a single ODI component?
  • Did you ever want to have only one data store and loop different sources without the need of different ODI contexts?
  • Did you ever want to have only one interface and loop any number of ODI objects with a lot of control?
  • Did you ever need to have a “third command tab” in your procedures or KMs to improve ODI powers?
  • Do you still use an old version of ODI and miss a way to know the values of the variables in a scenario execution?
  • Did you know ODI has four “substitution tags”? And do you know how useful they are?
  • Do you use “dynamic variables” and know how powerful they can be?
  • Do you know how to have control over your ODI priority jobs automatically (stop, start, and restart scenarios)?

If you want to know the answers to all these questions, please join us in this session to learn the special secrets of ODI that will take your development skills to the next level.

Join us at KSCOPE 16 and book our 2 sessions in your schedule. They will be very good sessions and I’m sure that you’ll learn some new stuff that will help you in your EPM environment!


Remotely Zipping files with ODI

Posted in 11.1.1.9.0, ACE, Configuration, EPM, Essbase, ETL, Hacking, Hyperion Essbase, InfraStructure, ODI, ODI 10g, ODI 11g, ODI 12c, ODI Architecture, OS Command, Performance, Remotely, Tips and Tricks, Zip Files on April 5, 2016 by RZGiampaoli

Hi guys, how are you? It has been a long time since I last wrote something, but it was for a good reason: we were working on our two Kscope sessions! Yes, this year we will have 2 sessions and I think they will be great!

Anyway, let us get to the point!

Today I want to talk about something that should be very simple to do but in the end is a nightmare: zipping a file on a remote server.

A little bit of context: I was working on a backup interface for a client and, because their cubes are very big, I was trying to improve the performance as much as I could.

Part of the backup was to copy the .ind and .pag files as well as the data extract files. For one app, we are talking about 30 GB of .pag files and 40 GB of data extract files.

Their ODI infrastructure is like this:

Infrastructure

Basically, I need to extract/copy data from the Essbase server to the disaster recovery server (DR server). Nothing special here. The problem is, because of the size of the files, I wanted to zip the files first and then send them to the DR server.

If you use the ODI tools to zip the file, what it does is bring all the files to the ODI agent server, zip everything and then send it back. I really did not want all that traffic on the network and all the time lost in this process (also, the agent server is a LOT less powerful than the Essbase server).

Regular ODI tools zip process

Then I started to research how I could do that (and thank you, my colleague and friend Luis Fernando Cairo, who helped me a lot by doing a lot of tests on this).

First of all we have three main options here:

  1. Create a .bat file and run it remotely: I did not like it because I do not want a lot of .bat files all over the place.
  2. Use the Windows invoke command: I would need a program on the server, like 7-Zip or similar, and I do not have access to install software freely, nor do I want to install zip programs all over the place either.
  3. Use Psexec to execute a program on the server: same problem as the previous one.

Ok, I figured out that in the end I’d need to create/install something on the server… and I hated it. Well, let’s at least optimize the problem, right?

Then I was thinking: what do all Hyperion servers have in common? The answer is JAVA.

Then I thought, I can use the JAR command to zip a file:

jar cfM file.zip *.pag *.ind

Where:

c: Creates a new archive file named jarfile (if f is specified) or to standard output (if f and jarfile are omitted). Add to it the files and directories specified by inputfiles.

f: Specifies the file jarfile to be created (c), updated (u), extracted (x), indexed (i), or viewed (t). The -f option and filename jarfile are a pair — if present, they must both appear. Omitting f and jarfile accepts a “jar file” from standard input (for x and t) or sends the “jar file” to standard output (for c and u).

M: Do not create a manifest file entry (for c and u), or delete a manifest file entry if one exists (for u).

Hmm, things started to look better. Now I had to decide whether I would use the invoke command or Psexec.

I started trying the invoke command, but after some time I figured out that I couldn’t execute the jar command using invoke.

Then my last alternative was Psexec.

The good thing about Psexec is that it is just a zip file that you need to unzip on the agent server, set it in the environment variables (PATH), and you are good to go.

It works amazingly.

You can run anything remotely with this, and it’s a centralized and non-invasive solution as well (which is what I liked).

You just need to:

psexec \\Server -accepteula -w "work dir" javapath\jar cfM file.zip *.pag *.ind

Where:

-w: Sets the working directory of the process (relative to the remote computer).

-accepteula: This flag suppresses the display of the license dialog.

There’s one catch: for some unknown reason, the ODI agent does not get the PATH correctly, so you need to use the complete path where it was “installed”. The ODI command looks like this:

OdiOSCommand "-OUT_FILE=Log_Path/Zip_App_Files-RUM-PNL.Log" "-ERR_FILE=Log_Path/Zip_App_Files-RUM-PNL.err"

D:\Oracle\PSTools\psexec \\server -accepteula -w \\arborpath\APP\RUM\PNL\ JAVA_PATH\jdk160_35\bin\jar cfM App_Files-RUM-PNL.zip *.pag *.ind

With this, we will have a process like this:

Remotely Zip Process

This should not be something complicated, but it is. Believe me, I created a very fast process and the client is very happy.

I hope you guys enjoyed it, and see you soon.

Oracle ACE Program Second Level up!

Posted in ACE, EPM, Oracle, OTN on February 24, 2016 by Rodrigo Radtke de Souza

Hi all!

Quick post today just to announce that the Oracle ACE Program has just promoted me to Oracle ACE status!!!

Now DEVEPM has the only two Oracle ACEs in the BI track in all of Brazil 🙂

As we said when we first got awarded as ACE Associates, we are extremely happy and honored to be part of this team. Thanks to all of our readers for your support! This award just gives us more motivation to keep working and bring more cool stuff to you all!

See ya!

Reviewing “Developing Essbase Applications – Hybrid Techniques and Practices” from an ETL Architect perspective

Posted in Book, EPM, Review on January 8, 2016 by Rodrigo Radtke de Souza

Hi guys! Today we are very happy to be reviewing “Developing Essbase Applications – Hybrid Techniques and Practices” for you. We will review the book not once, but twice, and the reason behind that is that Ricardo and I (Rodrigo) have different backgrounds, which creates very different opinions around the book topics.

My professional career comes much more strongly from an ETL architect position, so although I know the basics and principles around Essbase, for me it was always just one more target/source system to pull/retrieve data from. Ricardo, on the other hand, has spent most of his years working in the EPM space and he knows/uses Essbase to its fullest. So based on that, let’s see how our reviews differ from one another!


ETL Architect perspective review

“Developing Essbase Applications – Hybrid Techniques and Practices” may be considered the “second” book of the “Developing Essbase Applications” series (“Developing Essbase Applications: Advanced Techniques for Finance and IT Professionals” is the first one and it is a must-have!), but it is completely different from the first book, since it covers different topics with some different authors. Speaking of topics, this is what you will find in “Hybrid Techniques and Practices”:

  • Exalytics
  • Hybrid Essbase
  • Young Person’s Guide to Essbase Cube Design
  • Essbase Performance and Load Testing
  • Utilizing SQL to Enhance Essbase
  • Integrating OBIEE and Essbase
  • Managing Spreadsheets with Dodeca
  • Smart View

The book is edited by Cameron Lackpour along with John Booth, Tim German, William Hodges, Mike Nader, Martin Neuliep, and Glenn Schwartzberg. Those names are more than a reference in the EPM community and it is great to see great minds putting their effort into creating this book. We know that anything in life (it can be a book/work/presentation/project) will be great when the people involved in the process are passionate about what they do, and this is well represented here. All these people just love EPM/Essbase, and this passion, alongside their great knowledge of the topics, created a fantastic book.

The chapters in the book are well separated, so you may read the topics in any order you prefer. The first topic talks about Exalytics and its “Secret Sauce”. Hardware aficionados will love this chapter, but “not-hardware guys” like me may feel kind of lost. I got that Exalytics is great and does great things for Essbase, but all the metrics and hardware comparison tables I’ll just leave for the hardware/environment guys 🙂

The second chapter for me is the shiny gem of the book, and it alone is already worth the cost of the book (it couldn’t be different, since this chapter is also the book’s title). Hybrid is a very new concept in the Essbase world, so you will read what it is, its architecture, what you can/can’t do with Hybrid right now, and so on. For someone to understand what Hybrid means, they must also understand the concepts of ASO/BSO, and these are well covered in this chapter as well. In summary, this is a fantastic overview of what Hybrid currently is and how to get along with it.

I read the third chapter before I got the book because it is available for free on OTN! You may download it and read it entirely! This chapter I would recommend to everyone that will have any kind of interaction with Essbase, especially those people that ask for requirements to be implemented :). Although the information may be considered “basic” Essbase design, it is the core foundation of a good Essbase implementation. Fantastic chapter and, as I said, it is free, so go there and read it (after you finish reading the rest of the post, of course)!

I was extremely glad to read chapter 4 because it talks about Essbase performance and load testing. This kind of subject is very rarely covered, and this chapter contains a very good overview of some techniques that could be applied to run a performance test on Essbase. This is a complicated topic, since testing Essbase is not a trivial thing. There are too many factors to consider, but this chapter does a great job explaining those details and how you could accomplish a good test scenario.

At the very beginning of chapter five, Glenn states that his chapter contains very basic information about SQL, and if you know SQL already you may just skip the first part of it. Since SQL has been in my blood for some years now, I skipped almost the entire chapter, just reading the end of it where it talks about how to use SQL within Essbase. This chapter is for people that use Essbase but are not aware of what SQL could do for them. My opinion is that everyone that “works with numbers” should know at least the basic concepts of SQL, and this heavily applies to Essbase users.

Chapter 6 will give you a very good perspective of what you can do when reporting on Essbase using OBIEE, its limitations, its good practices, and the workarounds that may be used to implement some of the most common requirements that are not natively supported by the tool. A great chapter that I’ll just keep coming back to whenever I have some doubt about the do’s and don’ts around Essbase and OBIEE integration.

I’ll talk about chapters 7 and 8 together, since they both cover tools that can be used to interact with Essbase data. I must say that I just use Smart View to check if something got loaded or to do some basic tie-out checks, so my interaction with those tools is pretty basic. Chapter 7 is all about Dodeca, and I got some pretty interesting information that I was not aware of, like its architecture, how it differs from other tools, and so on. Chapter 8 goes deep into Smart View and tells us how it can be customized to allow the users to get more value from the tool. From my personal perspective, I don’t know how much I’ll be able to use it in my daily work, but at least now I know where to find some answers if a more advanced Essbase user comes to talk to me about those tools.

Verdict

In summary, this is a great book to have, and it will for sure bring you some great information about Essbase and the things that go around it. My “personal award” goes to chapter two, since I was really interested to learn what Hybrid was all about, and that got accomplished by reading it. One thing that I (of course) would love to see would be a chapter around data integration, but I get why there was none. The first book already talked about ODI integration with Essbase, and since then there hasn’t been anything new around this topic. At the last Oracle Open World, Oracle gave some insight into new ODI KMs for Essbase and Planning, as well as some new ODI components like Dimensions and Cubes, so maybe we may have something new to write about in a third volume of the book??? DEVEPM would be very willing to contribute to that writing if that were the case 🙂

And that’s it folks, I hope you have enjoyed this review. In the next days we will be posting a more “Essbase experienced” kind of review, so let’s see how it goes!

See ya!

DEVEPM 2015 in review

Posted in 2015, ACE, EPM, KScope 15 on December 31, 2015 by Rodrigo Radtke de Souza

Hi all readers! Another year has gone by and it was a great (if not the greatest) year for DEVEPM! We have done a lot of cool stuff this year, like getting nominated for the Oracle ACE Program, being at Kscope15, doing 2MTTs for Oracle OTN, creating articles for ODTUG’s Technical Journal and Oracle OTN, getting selected (twice) for Kscope16, and Ricardo even got married this year! And of course a lot of other EPM related topics 🙂

I would like to thank each one of you for accessing/reading/sharing our blog posts! I hope that we could positively impact your lives somehow with our posts. That’s the reason why we blog: we want to somehow share our knowledge back with the community that always helped us in difficult times!

And 2016 will be no different! We will work even harder to bring new content about the EPM world to you! Thanks again and see you soon!!!!

Follow us on Twitter: @RodrigoRadtke @RZGiampaoli @DEVEPM

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 25,000 times in 2015. If it were a concert at Sydney Opera House, it would take about 9 sold-out performances for that many people to see it.


DEVEPM will be at KScope 16!!!

Posted in ACE, EPM, Kscope, Kscope 16, ODI, ODI Architecture, ODTUG, Tips and Tricks on December 14, 2015 by Rodrigo Radtke de Souza

Hi all, how are you doing? We are very happy to announce that, not one, but TWO presentations were approved for KScope 16! Here they are:

1) Incredible ODI tips to work with Hyperion tools that you always wanted to know
“ODI is an incredible and flexible development tool that goes beyond simple data integration. But most of its development power comes from outside the box ideas.
Did you ever want to dynamically run any number of “OS” commands using a single ODI component?
Did you ever want to have only one datastore and loop different sources without the need of different ODI contexts?
Did you ever want to have only one interface and loop any number of ODI objects with a lot of control?
Did you ever need to have a “Third Command Tab” in your procedures or KMs to improve ODI powers?
Do you still use an old version of ODI and miss a way to know the values of the variables in a scenario execution?
Did you know that ODI has 4 “Substitution Tags”? And do you know how useful they are?
Do you use “Dynamic Variables” and know how powerful they can be?
Do you know how to have control over your ODI priority jobs automatically? (Stop, Start and Restart scenarios)
If you want to know the answers to all these questions, please join us in this session to learn the special secrets of ODI that will take your development skills to the next level.”

The idea behind this presentation is to show the main secrets we use on a daily basis to improve code quality and reuse, as well as to explain why and what more we could do with each of the tips we will present. It’ll be an extremely helpful presentation with a lot of cool stuff and real-life examples.

2) Take a peek at a smart EPM global environment
“In a fast-moving business environment, finance leaders are successfully leveraging technology advancements to transform their finance organizations and generate value for the business.
Oracle’s enterprise performance management (EPM) applications are an integrated, modular suite that supports a broad range of strategic and financial performance management tools that help businesses unlock their potential.
A global financial environment contains over 10,000 users around the world and relies on a range of EPM tools like Hyperion Planning, Essbase, SmartView, DRM and ODI to meet its needs.
This session shows all the complexity of this environment, describing all the relationships between those tools, the techniques used to maintain such a large environment in sync, and how it meets the most varied needs from the different businesses and laws around the world to create a complete and powerful business decision engine that takes a global company to the next level.”

The idea in this presentation is to show the design we use at one big client and why we use it, the gains, and how it works. In fact, for the ones that follow our blog and our presentations, this will be the point that ties together everything we talk about. It’ll be an excellent presentation for people looking for ideas for integrated environments.

We are very excited about it, since we’ll be talking about how to improve the EPM tools’ potential using ODI and also how the EPM tools connect with each other in a global financial environment. We’ll be very pleased if you guys show up at our presentations. It’ll be great to meet everyone there and talk about EPM and other cool stuff!

Kscope is the largest EPM conference in the world, and it will be held in Chicago, Illinois, in June 2016. It will feature more than 300 technical sessions, five symposiums, deep-dive sessions, and hands-on labs over the course of five days.

Got interested? If you sign up by March 25th you’ll take advantage of the Kscope early bird rates, so don’t waste any more time and come be part of the greatest EPM event in the world. If you are still unsure about it, read our post about how Kscope/ODTUG changed our lives! Kscope is indeed a life-changing event!


Thank you very much everybody and we’ll be waiting for you at Kscope 16!

Tips and Tricks: Working with ODI Variables and Global Parameters

Posted in ODI Architecture on September 7, 2015 by RZGiampaoli

Hi guys, today we’ll talk about a very simple but powerful technique that we always use in our integrations. It joins two concepts together and makes our lives a lot easier and our integrations a lot more dynamic. We are talking about variables and the concept of “global” parameters.

In our integrations we never, ever have anything hard coded. Every time you hard code something, it will come back to bite you in the future, that is for sure.

Then the first thing we do in a project is to create a table that we call ODI_PARAMETER. This table will contain all the configurations and parameters that would otherwise need to be hard coded, validated, and so on.

I like to create this table in our work schema (to make it easier to use) and it looks like this:

ODI_PARAMETER Table
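In case the image does not display, a minimal sketch of what such a table could look like (the data types and sizes are assumptions; the columns are the ones referenced later in this post):

CREATE TABLE ODI_PARAMETER
( SESSION_NM      VARCHAR2(400),  -- session name the parameter belongs to
  PARAMETER_TYPE  VARCHAR2(100),
  PARAMETER_NAME  VARCHAR2(100),
  PARAMETER_VALUE VARCHAR2(4000)
);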

The “SESSION_NM” column is what makes the variable reusable in all the scenarios we want, meaning we’ll have only one variable for all packages in the project, or even for all projects (if we make the variable global in ODI).

How does it work? First of all, we need to get the “Session Name” for our “Scenario/Package”. Why did I say “Scenario/Package”? Because the result can change depending on whether you are running a scenario or a package. Let me explain.

To get the “Session Name” in ODI we use an ODI substitution method called “odiRef.getSession”. This method has other parameters that can return the session ID and other things, but what matters for us is the “SESS_NAME” parameter, which returns the name of the session, the same thing that appears in the Operator when we run any object in ODI.

Why did I say object? Because if you run a variable, the session name will be the variable name. If you run an interface, the session name will be the interface name, and the same goes for procedures, packages and scenarios. That is why I separated “Scenario/Package”: if we do not pay attention, the name of the package will be different from the name of the scenario, causing a problem when we run one of them.

Let me show how it works. First of all, we’ll create a global ODI variable called SESSION_NM (it could be whatever name you want, I just like to call it like this) and we’ll put this code inside it:

SESSION_NM Variable
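In case the image does not display, the refresh query would be something along these lines (a sketch; the post only states that odiRef.getSession is used and that the result is upper-cased):

SELECT UPPER('<%=odiRef.getSession("SESS_NAME")%>') FROM DUAL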

After that, we will run this variable to see the results:

SESSION_NM Results

As we can see, the value of the variable was the name of the variable itself. Now, let us create a package, put this variable inside it, and see what happens:

Package test 1

Here is what the package looks like (above) and, below, its results:

Package test 1 results

As we can see, the result of the variable is the same as the session name, but in UPPER case, since I created the variable like that. But why did I do that? Let me create a scenario of this package to show you why:

Scenario Creation

And this is why I made the variable put the result in UPPER case, and why I said we need to worry about some peculiarities regarding scenarios and packages. When you create a scenario, it will have the name of the package in UPPER case and also NO SPACES. Now, if we run the just-created scenario, we will have:

Scenario results

Meaning, if we use the result of this variable as a key to return data from a table, we’ll have a problem, because it will not find the same result when you run the package versus the scenario of that package.

The easiest way to resolve that is to have the name of the main scenario (the scenario that will contain all the other scenarios) with no spaces and no special characters (ODI also transforms special characters like % into _).

Doing that, we are good to continue, as we can see below:

Package results

Now we have the same results if we run the package or the scenario.

Ok, next let us create another variable to return the LOG_PATH, the path where we will store all the logs from our integrations. The code that we will use for this variable is:

Query ODI_PARAMETER
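A sketch of what that refresh query could look like (the PARAMETER_TYPE and PARAMETER_NAME values are illustrative assumptions):

SELECT PARAMETER_VALUE
  FROM ODI_PARAMETER
 WHERE SESSION_NM     = '#SESSION_NM'   -- resolved by ODI before execution
   AND PARAMETER_TYPE = 'PATH'
   AND PARAMETER_NAME = 'LOG_PATH'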

As we can see, we are using the result of the “SESSION_NM” variable inside this “LOG_PATH” variable. This is what makes the variable reusable in all “Packages/Scenarios/Procedures”. Let us insert a value into our ODI_PARAMETER table and run the package to see the results:

Insert Test 1

Package 1 Results

Now let us create a new package with a different name, use the same variable as above, and insert a new line in our ODI_PARAMETER table for the new interface:

Package 2 results

See, same code, two different results. That means 90% of the interfaces can simply be duplicated, the parameters for the new interface inserted into ODI_PARAMETER, and it is done. Also, we don’t need a ton of variables to get different results. And there is more.

The code of the variable also does not change that much. For a new variable, we just need to duplicate the LOG_PATH variable and change the PARAMETER_TYPE, PARAMETER_NAME and PARAMETER_VALUE to get any other information from ODI_PARAMETER. That means it is easier to maintain.

However, let us not stop here. In this example, we are getting the LOG_PATH for the logs of our integrations. Normally this path does not change from integration to integration; what changes is the name of the integration that we are logging, right? So, with our SESSION_NM variable, we could just put the root of our LOG folder in our LOG_PATH variable and then use it like this:

#LOG_PATH\#SESSION_NM

This would make the LOG_PATH equal for all integrations, right? Nevertheless, the way we created our variable, we would still need to insert one line for each integration in our ODI_PARAMETER table.

Well, we just need to change the code in our variable a little bit to create the concept of GLOBAL parameters. Here is how it works:

First, we will delete the two lines we just created and then we will insert just one line in ODI_PARAMETER table:

Insert Global

Now we just need to change the code from our LOG_PATH variable to this:

Query ODI_PARAMETER global
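A sketch of the session-or-global lookup (the 'GLOBAL' key and the filter values are assumptions): the scalar subquery checks whether a row exists for the current session name and, when it returns no rows, NVL falls back to the GLOBAL row:

SELECT PARAMETER_VALUE
  FROM ODI_PARAMETER
 WHERE PARAMETER_TYPE = 'PATH'
   AND PARAMETER_NAME = 'LOG_PATH'
   AND SESSION_NM = NVL((SELECT SESSION_NM
                           FROM ODI_PARAMETER
                          WHERE SESSION_NM     = '#SESSION_NM'
                            AND PARAMETER_TYPE = 'PATH'
                            AND PARAMETER_NAME = 'LOG_PATH'), 'GLOBAL')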

And here we go:

Global results

We have one global parameter that can be used for all integrations. And the cool thing is that the code above tests whether we have a parameter for the current SESSION_NM and, if not, it gets the value from the GLOBAL parameter. Meaning, if any integration needs a special LOG_PATH or anything else, you just need to insert a new line in ODI_PARAMETER to get a specific value just for that integration:

Global results exceptions

This will guarantee that you never, ever need to touch your code again to test or change anything that the business asks you for.

As I said, it is a simple but very powerful technique to use.

Hope you guys enjoy and see you soon.

Stopping ODI sessions in an automated way

Posted in Hacking, ODI, ODI 11g, ODI Architecture on September 4, 2015 by Rodrigo Radtke de Souza

Hi all! In today’s post we will talk about how you may create automated processes to stop ODI sessions. But first, let’s think about why/when we should automate this kind of task.

Everybody knows that the basic way to stop an ODI session is to go to the ODI Operator, right-click a running session and select “Stop Normal”/”Stop Immediate”. Obviously it works just fine, but it requires someone to log in, select the jobs and stop them. There will be cases where you will want to stop those scenarios without any human intervention.

So let’s imagine that you have a critical ODI job that must make sure that some other secondary ODI jobs are not running before it actually starts. Maybe you could add an OdiSleep object to the critical ODI job, wait a little bit, check if the secondary jobs are still running, sleep again and so on. It is a safe approach, but sometimes this critical ODI job is also top priority and could have permission to stop all other secondary ODI jobs before it actually starts.

Or maybe you have an execution window that must be respected, and all ODI jobs that cross a specific range of time should be stopped no matter what. I could write some other examples, but you already got the idea. So, how do we accomplish that in ODI?

If we take a look at the ODI Toolbox panel, we are going to find things like OdiStartScen and OdiStartLoadPlan, but nothing related to stopping, cancelling or killing a session.

Toolbox

I’m not sure why Oracle didn’t put this kind of object in the Toolbox, but if we go to the ODI agent bin folder (oracledi\agent\bin) we are going to see some interesting .bat (.sh on Linux) scripts there:

  • restartloadplan.bat
  • restartsession.bat
  • startloadplan.bat
  • startscen.bat
  • stoploadplan.bat
  • stopsession.bat

We may right-click/edit each of those to get more information about them. Today we are interested in the last one, “stopsession.bat”. Its syntax is pretty simple:

stopsession <session_number> "-AGENT_URL=<agent_url>" ["-STOP_LEVEL=<normal(default)|immediate>"]

Pretty cool and easy to use. We may just add this call to an ODI procedure and create some logic to stop all the sessions that we want. Let’s build an example and see how it would look. Imagine that you have an ODI_JOB_A that needs to stop ODI_JOB_B and all its children (if any of those are running) before it continues its tasks. To accomplish that, we would need to create an ODI procedure and use the command on source/target technique to select the ODI_JOB_B sessions and children that are currently running. It would look like this:

Command on Source

On “Command on Source” we would write a SQL query against the ODI metadata repository checking for all ODI_JOB_B sessions (and their children) that are currently running. Of course this SQL is just an example; you may tweak it to fit your own requirements. Here we are just querying the SESS_NO that belongs to a running session of ODI_JOB_B, UNIONed with all running children of a running ODI_JOB_B session.
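Since the screenshot may not display here, a sketch of such a query against an 11g work repository (treat the table and column names as assumptions to validate against your repository version; 'R' is the running status in SNP_SESSION):

SELECT SESS_NO
  FROM SNP_SESSION
 WHERE SESS_NAME = 'ODI_JOB_B'
   AND SESS_STATUS = 'R'
UNION
SELECT CHILD.SESS_NO
  FROM SNP_SESSION CHILD
 WHERE CHILD.SESS_STATUS = 'R'
   AND CHILD.PARENT_SESS_NO IN (SELECT SESS_NO
                                  FROM SNP_SESSION
                                 WHERE SESS_NAME = 'ODI_JOB_B'
                                   AND SESS_STATUS = 'R')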

On “Command on Target” it should be just a matter of adding the stopsession cmd on the “Operating System” technology, but it is not that easy. Let’s analyze the stopsession cmd again:

stopsession <session_number> "-AGENT_URL=<agent_url>" ["-STOP_LEVEL=<normal(default)|immediate>"]

session_number is a value that will return from our “Command on Source” tab, so we are good.

stop_level may be set as normal or immediate, so we are also good here.

The problem that we have is the AGENT_URL. A valid AGENT_URL would look like this:
-AGENT_URL=http://ODISERVER:9001/oraclediagent
This URL is composed of information that is set in our Topology, like the one below:

Agent

The problem here is that we don’t have any ODI substitution API that returns this kind of information. The closest that we have is <%=odiRef.getSession(“AGENT_NAME”)%>, which just returns the agent’s name, nothing more. To get around this situation, we will need to query the ODI metadata repository again and compose this URL using a SQL query against the SNP_AGENT table. Let’s create one ODI variable for that, like the one below:

Variable
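In case the image does not display, a sketch of such a variable refresh query (HOST_NAME and HOST_PORT are the SNP_AGENT columns I would expect, and '/oraclediagent' is the default web context; verify both against your repository version):

SELECT 'http://' || HOST_NAME || ':' || HOST_PORT || '/oraclediagent'
  FROM SNP_AGENT
 WHERE AGENT_NAME = '<%=odiRef.getSession("AGENT_NAME")%>'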

Here we are using the AGENT_NAME substitution to get the right information for the running agent. Now we are able to finish our procedure with the “Command on Target” command:

Command on Target
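In case the image does not display, the “Command on Target” (on the Operating System technology) would be a single line along these lines: a sketch assuming SESS_NO comes from the “Command on Source” result set, AGENT_URL is the refreshed variable, and the stopsession script is on the agent’s PATH (otherwise, fully qualify it):

stopsession #SESS_NO "-AGENT_URL=#AGENT_URL" "-STOP_LEVEL=normal"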

And that’s it! Just add the AGENT_URL refresh variable and this procedure at the very beginning of the ODI_JOB_A package and you will have it stop session ODI_JOB_B and its children before it moves on.

Hope you liked it! See you soon!