Archive for HFM

Loading and Extracting HFM Data with ODI Knowledge Modules (ODTUG Article)

Posted in HFM, Knowledge Models, ODI 11g, Technical Journal on August 8, 2017 by Rodrigo Radtke de Souza

Hi all!

ODTUG just released our article “Loading and Extracting HFM Data with ODI Knowledge Modules.” This second article shows all the details behind the construction of the ODI IKM for data loads, as well as a procedure to extract data from HFM, and it explains all of their options and functionalities.

Please feel free to download and use our KMs. They do not have official Oracle support, but we try our best to answer and fix any issues that you may find (don’t forget to take a look at our “debug” post here).

Thanks everyone!


Troubleshooting connectivity issues between ODI and HFM

Posted in HFM, Knowledge Models on April 13, 2017 by Rodrigo Radtke de Souza

Hi all! We are very happy with the feedback that we have been receiving about the HFM KMs for ODI. People are downloading them and giving them a try, which is awesome! However, we know that this ODI and HFM integration process is not as simple and straightforward as we would like it to be: we generally run into environment issues when setting up the jar files and the ODI agent, connecting to HFM using the new Java API, and so on. To make it easier for people to troubleshoot, we created this post to hold all the known issues that people are having with our KMs, so we can help the best we can. We will keep updating it, so please keep checking back.

If you have any issues with our KMs, please send us an email so we may try to help you and all the others that may be facing the same issue. Thanks all!

Error WSSERVLET11: failed to parse runtime descriptor: java.lang.NullPointerException

As we wrote in our blog post, there are two options to set up the necessary HFM jar files:

1) Install the ODI agent on the HFM server;
2) Copy the necessary jar files to the agent folder;

Some people are getting the WSSERVLET11 error above when following the first option, which is to install the ODI agent on the HFM server and just point to the jar file locations in ODI_ADDITIONAL_CLASSPATH. They change the ODI parameters to point to the right location, but when they start the agent it fails with this error.

We are not sure exactly why this happens, but we suspect that the “absolute file path” gets too big for ODI to handle and then gets “truncated” at some point, causing ODI to throw this error. Our suggestion is to go with our second approach and add all the jar files to the oracledi\agent\drivers folder. If you are not sure how to locate the correct jar files, send us an email and we will provide them for you.

Also, if you are not sure whether the problem resides in your ODI agent or in the jar files, you may run a test using ODI Local (No Agent) by copying the HFM jar files to C:\Users\\AppData\Roaming\odi\oracledi\userlib (don’t forget to restart the ODI client after that). Then try to reverse an HFM model using Local (No Agent). If it works, the jar files are good to go and your problem resides in the ODI agent itself.
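As a rough sketch, staging the jars for that Local (No Agent) test might look like the command below. The source folder is only an assumption; the HFM jar locations vary by EPM installation, so adjust both paths to your environment.

```shell
rem Illustrative only - stage the HFM jars where ODI Studio's Local (No Agent)
rem execution picks them up, then restart the ODI client.
copy "E:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\*.jar" "%APPDATA%\odi\oracledi\userlib"
```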

HFMException: EPMHFM-65536

This error happens when people try to reverse the ODI objects using the new ODI RKM, more specifically when the ODI code tries to connect and get the connection token from your HFM application. EPMHFM-65536 is a very generic error message and can be caused by a number of different factors, ranging from an improper install to application processes crashing.

One of our users (thanks, Kevin!) solved his issue with the following:

I needed to copy the file “” from Oracle\Middleware\user_projects\config\foundation\ to Oracle\Middleware\user_projects\epmsystem1\config\foundation\, and then the KM ran successfully.

This blog post gives some other suggestions for the issue.

Since this error is too generic, each HFM application may have its own kind of fix.

IKM Data Load does not load any data (and does not throw any error either)

One user found a bug where the data interface would run without any errors but would not load any data. This happened due to a bug in the RKM that may create the datastore with the wrong column order in some cases. As a workaround, please change your HFMData datastore to the following column order before loading data to HFM:


We will work to fix the RKM code to avoid this issue and to create the columns in the correct order.

I get a parse error when I have more than one filter in the interface

We noticed that if you add more than one filter to the ODI interface, the IKM throws a parse error. As a workaround, just group all the filter criteria into one single filter component. We will fix the IKM code to avoid this issue and let you know once the fix is released.

Slashes vs Backslashes (Token Parsing Error: Lexical error)

Some people stated that they were getting errors like this: “Token Parsing Error: Lexical error at line 34, column 34. Encountered: “O” (79), after : “\”C:\\”: “. This error is probably due to the backslashes. All the paths that you add in your KM options need to use forward slashes (/), e.g.: C:/Temp/<%=snpRef.getTable(“L”,”TARG_NAME”,”A”)%>_Load.log
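If you build those option values dynamically, a tiny helper like the one below can normalize Windows paths before they reach the KM options. This is a hypothetical utility of ours, not part of the KMs themselves:

```java
// Hypothetical helper: KM options (log/error file paths) must use forward
// slashes, so convert any Windows-style backslashes before passing them on.
public class KmPathUtil {
    public static String toKmPath(String path) {
        return path.replace('\\', '/');
    }

    public static void main(String[] args) {
        System.out.println(toKmPath("C:\\Temp\\HFM_Load.log")); // C:/Temp/HFM_Load.log
    }
}
```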

Isolate the components to see what may be wrong

We are seeing that the errors generally happen either in the ODI agent (which does not load the jar files correctly or does not have access to the HFM application) or in the HFM installation itself. To make it easier to identify where the issue is happening, it is good advice to install a Java IDE (like Eclipse) on the HFM server and try to create a small program that just connects to your HFM app. If it connects, then the problem is likely to be in the ODI agent. If it does not connect at all, then you have a problem in your HFM application, which is not accepting Java API calls.

Here you may find some examples of Java code that you may copy from to create a sample app:

Running a simple connect Java program against your HFM application first “eliminates” ODI from the possible points of failure. In other words, if you are not able to connect using standard Java code, ODI will not be able to connect either.
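For reference, a minimal connectivity sketch could look like the code below. It will not compile without the HFM/EPM jars on the classpath, and the class names, package names and the createSession signature are written from memory of the 11.1.2.4 HFM Java API: treat all of them as assumptions and verify them against your own javadocs before relying on this.

```java
import java.util.Locale;
// Package and class names below are assumptions - check your HFM javadocs.
import oracle.epm.fm.domainobject.application.SessionOM;
import oracle.epm.fm.common.datatype.transport.SessionInfo;

public class HfmConnectTest {
    public static void main(String[] args) throws Exception {
        String cluster = "HFMCluster";   // hypothetical cluster name
        String application = "HFMAPP";   // hypothetical application name

        // createSession authenticates and returns a session token for the app;
        // the exact overload/signature may differ in your HFM version.
        SessionOM sessionOM = new SessionOM();
        SessionInfo session = sessionOM.createSession("admin", "password",
                Locale.ENGLISH, cluster, application);
        System.out.println("Connected to " + application);
        sessionOM.closeSession(session);
    }
}
```

If this program connects when run on the HFM server, the application is accepting Java API calls and you can focus your troubleshooting on the ODI agent.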

Thanks all!


Posted in ACE, Configuration, DEVEPM, ETL, Hacking, HFM, Knowledge Models, ODI, ODI 11g, ODI Architecture, Uncategorized on March 3, 2017 by RZGiampaoli

Hi guys, how are you? Today we are proud to announce that we are making the ODI KMs for HFM available!

—- EDITED on June/17 —-

We developed these KMs around six months ago, but we were waiting to release them together with an article that we wrote for Oracle.

Since OTN had some “priority changes,” our article was postponed until later this year. As some people were asking for these KMs, we decided to release them now, and when the article is published we will let you guys know as well.

The article is live here! And if you guys are having errors with our KMs, please check our troubleshooting post here.

—- EDITED on June/17 —-

Prior to HFM’s latest version, ODI could easily be used for HFM integration processes. ODI used its KMs with specific HFM drivers (HFMDriver.dll) provided by Oracle to access and manipulate HFM applications. However, in HFM’s latest version, Oracle decided to remove its support for ODI, meaning that all HFM integrations would have to move from ODI to either manual interaction with HFM, another integration tool (like FDMEE), or custom code using the new Java HFM API.

Since we didn’t want to rewrite our entire ODI environment, and since none of the above options is robust enough, we decided to recreate the ODI KMs using the Java HFM API. For these KMs to work we need to do two things: import them from ODI Java Net and do some setup in the ODI agent.

In the article we explain all the options and how we came up with this solution, but we will not talk about that here, since we want you guys to read the article as well and we can’t use its content here: we signed an exclusivity agreement with Oracle.

The first part is easy: you just need to download the files from the link below.


The second one is more difficult. We need to make the new HFM jars available to the ODI agent, and in order to do so we have two options:

Install the agent on the HFM machine, OR copy the necessary jar files to the agent drivers folder (oracledi\agent\drivers).

If your architecture allows you to have both HFM and the ODI agent on the same server, then you may use the first approach, which is very simple. The only thing to do is to change the odiparams file (the oracledi\agent\bin\odiparams.bat file in a standalone agent) and add the location of those three HFM jar files. Open odiparams.bat and search for “ODI_ADDITIONAL_CLASSPATH”. In that setting, just set the location of the HFM jar files, as below (this is just an example; please adjust the path according to your environment):
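A sketch of what that entry might look like in odiparams.bat is below. The jar names and the E:\Oracle\Middleware path are assumptions for illustration; use the actual locations from your own EPM installation.

```shell
rem Illustrative odiparams.bat entry - jar names and paths are assumptions;
rem point each entry at the HFM API jars of your EPM install.
set ODI_ADDITIONAL_CLASSPATH=E:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_j2se.jar;E:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_thrift.jar;E:\Oracle\Middleware\EPMSystem11R1\common\jlib\11.1.2.0\epm_hfm_web.jar
```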





Save the file, restart the ODI agent, and it is done.

If you decide to go with the second option, we provide a list of all the necessary jars below (be prepared… it’s huge). In the article we explain how to identify all the necessary jar files in a systematic way, but here that is not an option, as explained before.

Search for all the jars in the list below and copy them into the oracledi\agent\drivers folder.


Restart the ODI agent and it should be ready to execute any HFM Java code inside ODI.

I know that this is a lot of jars and it will take some time to find all of them, but at least you will be able to upgrade your HFM and still use the same interfaces you have today in ODI to manage HFM (just remember to use the new data store objects reversed with the new RKM).

The KM usage is very similar to the old ones, and we included instructions in all of their options, so we will not explain them here (they are in the article). The only important difference is how to set up the “Cluster (Data Server)” information on the Data Server (Physical Architecture). For the new HFM API, we need to inform two new settings: the Oracle Home and Oracle Instance paths. These paths point to the server where your HFM application is installed, and they are used internally by the HFM API to figure out all the HFM information related to that specific HFM instance.

Due to these two new settings, and in order to keep all connection information in a single place (ODI Topology), “Cluster (Data Server)” was overloaded to receive three settings instead of just one, separated by colons. So now “Cluster (Data Server)” receives “dataServerName:oracleHomePath:oracleInstancePath” instead of just dataServerName.
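One way to parse such an overloaded string, sketched below, is to split on colons while skipping the colon in a Windows drive letter (a colon followed by a slash or backslash). This is a hypothetical illustration of the idea; the actual KM code may parse it differently.

```java
// Hypothetical sketch: split "dataServerName:oracleHomePath:oracleInstancePath"
// on ':' except where the colon belongs to a drive letter (followed by / or \).
public class ClusterInfo {
    public static String[] parse(String clusterField) {
        return clusterField.split(":(?![/\\\\])", 3);
    }

    public static void main(String[] args) {
        String[] parts = parse(
            "HFMCluster:C:/Oracle/Middleware/EPMSystem11R1:C:/Oracle/Middleware/user_projects/epmsystem1");
        System.out.println(parts[0]); // HFMCluster
        System.out.println(parts[1]); // C:/Oracle/Middleware/EPMSystem11R1
        System.out.println(parts[2]); // C:/Oracle/Middleware/user_projects/epmsystem1
    }
}
```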


With those considerations in mind, it is just a matter of creating a new Data Server, setting the overloaded “Cluster (Data Server)” information, and setting the user/password that ODI will use to access the HFM application. After that, we just need to create a Physical Schema with the name of the HFM application, create a new Logical Schema, and associate them with a context.

And that is it: you guys are ready to upgrade your HFM environment and still use your old ODI interfaces to maintain HFM. If you have any doubts/suggestions about the KMs, please feel free to contact us.

If you guys are having errors with our KMs, please check our troubleshooting post here.

I hope you guys enjoy these KMs. See you soon!

DEVEPM on Kscope15! – Day 3, 4, 5 and 6: OMG!

Posted in EPM, KScope 15 on June 30, 2015 by Rodrigo Radtke de Souza

Hi all! I told you that it would be extremely hard to do a live blog of Kscope15, and this is because we don’t get any free time when we are there. So many things to do, so many people to talk to, so many presentations to see… I really envy Cameron… He is able to attend the conference, participate in its organization, go to parties, deliver his sessions and yet blog about it all at the speed of light. I think that he doesn’t sleep at all 🙂

Of course, it would be an extremely long post if we decided to write down the details of four days of Kscope here (and we want to encourage you to be there next time to see it with your own eyes!), so we will just do a summary of it all and hope that you get the essence of the greatest EPM conference in the world. Saturday and Sunday were already covered in past posts, so let’s go straight from Monday to Thursday. I’ll not do it in chronological order; instead I’ll divide it by topics, so I can group things together.

DEVEPM’s session

Monday was one of the most intense days for us because it was the day of our presentation. I think it was our best Kscope presentation so far! It had a good number of attendees, they asked very good questions, and the feedback was pretty awesome. It was great to present on Monday morning because we could then relax and enjoy the conference. In past years it was on Wednesday and Tuesday, so that “pressure” of having a session to present kept lingering until we actually presented it. Thanks to all that attended our session! It was an honor to have you there!


Sessions that DEVEPM has attended

We will not describe them one by one here, but basically we were looking for presentations about the new version of EPM, mainly because this new EPM version and ODI have some problems working together. Essbase and ODI are still working fine. Planning, on the other hand, had its support cancelled by Oracle last year (the announcement was at Kscope14), but thanks to the community’s public petition, Oracle reconsidered and said that the ODI KM was going to be supported in the future. And guess what we found at Kscope15? Oracle is already supporting ODI and Hyperion Planning!!! See ODI patch 20957183 (“UNABLE TO USE ODI TO LOAD PLANNING”) on the Oracle Support site for more details. We will be applying this patch in our environment in order to test it, so you should expect a blog post about it in the next few days!!!

Another thing to figure out was HFM data and metadata integrations. Oracle has definitely cancelled ODI support for them, so we needed to see what the best way to load data and metadata into HFM is from now on. Honestly, none of the options that were shown are good compared to ODI (sorry, but that’s true). ODI was very straightforward: you had data or metadata sitting in some Oracle table (probably already transformed by some ETL logic), you would just need to create one ODI interface to load it to HFM, and that was it. A simple, clean, straightforward approach. Now, in this new version of HFM, things get way harder, especially if your environment does not contain FDMEE, EPMA or DRM.

When we talk about data loads to HFM, it seems that FDMEE is the way to go. In the majority of cases, where the data goes through an ETL process before it is sent to HFM, you will need to load this data into the FDMEE open interface table, use the FDMEE open interface adapter to load it into FDMEE, and then have FDMEE load the final data into HFM. Yeah, it’s a long chain of events that needs to happen now, a new tool to figure out, and so on. The “good” news that we heard in one of the FDMEE presentations was that in the next release of FDMEE, Oracle will enhance its open interface adapter so you won’t need to load the open interface table first and then call FDMEE: FDMEE will be able to go straight to the Oracle tables and load them, removing one loading step. Anyway, it still seems over-complex for something that used to be very easy to do. We will also post some blogs about FDMEE integration in the near future. We also participated in a hands-on lab provided by Kscope/Peloton (yep! Kscope is a full package and it also contains hands-on training!) about FDMEE. Of course, no one can master a tool in two and a half hours, but at least you get to know the basics to start playing with the tool.

HFM metadata integration gets even worse. If your metadata comes directly from EBS, then you are good and you may also use FDMEE for that. But this won’t be the case in the majority of real-world scenarios. If metadata comes from an Oracle table, for example, then you will need to use either EPMA, your own custom code built on the new HFM API, or DRM. Those that already use EPMA may be thinking that it’s not that bad, but Oracle also announced at Kscope15 that EPMA will not get any further enhancements from now on. Oracle will still support it, but with no new enhancements. Oracle’s recommendation is that, if you are implementing something new that would require a new implementation of EPMA, maybe this is the perfect time to reconsider. Nobody will want to start a new project based on a tool that will not evolve anymore (and, obviously, will eventually be decommissioned). Our options then reside in custom development using the HFM API, or using DRM to export the metadata in a format that HFM is able to understand and import. We will definitely be working on this topic and will post about it as soon as we get some results.

There were several other fantastic sessions that we attended and we could keep writing about here (and you may be sure that they will have some influence on our future posts), but then this blog post would become almost a book. So I’ll just talk about one more session, which was about Drill Bridge. Those that don’t know yet what this tool is, PLEASE take a look at its page. Jason Jones is the creator of the Drill Bridge tool, and he was able to explain the tool, install it, configure a drill definition, show it working to the audience, and reply to the audience’s questions in less than 60 minutes!!! That was really impressive. Today we go through a lot of complexity, generally using Essbase Studio, to be able to create drill reports for Essbase, but Drill Bridge makes things extremely (I really mean extremely) easy to install, set up and run. Congrats, Jason, for your great work; we will definitely try it out and see how it goes in our environments.

Networking + fun = a world of possibilities

The best place for having both networking and fun at the same time is definitely Kscope. Here are some of the things that we did during our stay there (sorry if I forgot something) apart from the sessions themselves:

  • General Session with Don McMillan (Technically Funny): extremely fun session, and we also found out that Kscope16 will be held in Chicago!!! It will be awesome as always!
  • EPM Olympics at Monday Community Night;
  • Cameron’s and Natalie’s meetup;
  • ODTUG interview: Yeah, they were crazy enough to interview us 🙂 We will share the link once it is available;
  • Oracle ACE and Speakers Reception;
  • Saturday Real Deep Dive and Thursday Deep Dive;
  • Bunch of Happy Hours with fantastic networking;
  • Exhibitor’s square;
  • Special Event at Nikki Beach: This was awesome, just awesome party;

And I think that’s it, guys. Kscope15 has just finished and we are all missing it already. It is just too good to be able to participate in such a great conference. Thanks, ODTUG, for organizing it! It’s great to see the perfect work and dedication that you guys put in to make it all happen. DEVEPM will definitely work hard to get selected again for Kscope16 🙂 !

See ya!