Archive for the ODI Architecture Category

Is my ODI version certified for that technology?

Posted in ODI, ODI Architecture, Versions on January 14, 2021 by Rodrigo Radtke de Souza

Hi all, today's post is a quick one, but I hear this question very often: is ODI version XXXXX certified for technology version XXXXX? It is very easy to check, since Oracle keeps all of this information in one place, the "Oracle Fusion Middleware Supported System Configurations" page, but sometimes people get a little bit confused by it.

Once you reach the website, select your ODI version (remember, ODI is part of Oracle Fusion Middleware). My ODI is 12.2.1.4, so let's check it out:

There will be multiple tabs there. If you want to check on which systems your ODI Agent is certified to run, for example, you may click on the System tab:

It gives us a lot of information about which operating systems and Java versions are supported for your agent. If you want to check which DB versions are certified, just click on the Database tab:

One last important thing that is always checked is which technologies and versions are supported for source-target ETL. This can easily be checked on the "ODI Source-Target" tab:

That is it folks. A very simple, but useful post. See you soon!


KScope 18 Speaker Award

Posted in ACE, Career, DEVEPM, EPM, Kscope, Kscope 18, ODI, ODI Architecture, ODTUG, PBCS on September 17, 2018 by RZGiampaoli

Hey guys, how are you?

It has been a while since the last time I wrote anything here… and surprise, surprise, it's because I have been crazy busy working on a project that was sized small but turned out huge, while the sizing didn't change… 🙂 never happened before heheheh 😉

This is just a small post to say how grateful and happy we are for receiving the EPM Data Integration Speaker Award at Kscope 18 with the presentation: How to Use Your ODI On-Premise to Seamlessly Integrate PBCS.

We started this blog in 2012 and have been presenting at Kscope since 2013, and it has been very rewarding: not only because we became Oracle ACEs thanks to it, but because we learn a lot with every single post and presentation.

When you do a presentation, you need to stop thinking about a solution for one specific project and start thinking about a solution that can be used in all projects. This alone is a challenge, but the amount of things we learn is a great deal. We can easily say that our code has improved a lot since we began this blog in 2012, and that is in great part because of the blog and our presentations.

So, we thank all of you that read our blog (even if we don't post as much as we would like), everybody that goes to Kscope and decides to watch our presentations, and ODTUG for providing this two-way learning platform.

Thank you all for supporting us, and see you soon.

ODI 12c Standalone Agent Install for an ODI 11g guy

Posted in InfraStructure, Install, ODI, ODI 11g, ODI 12c, ODI Architecture on July 17, 2017 by Rodrigo Radtke de Souza

Hi everybody! Today's post is about installing an ODI 12c standalone agent. This is not a "new" topic and the steps to perform it can also be found at the Oracle site, however it got me a little bit "off guard" when I was requested to install one, and the reason is that it changed considerably compared to ODI 11g (and yeah, we still work A LOT with ODI 11g, so installing an ODI 12c agent was "new" for us).

Prior to ODI 12c, the ODI agent was configured by simply editing a file called odiparams.bat (odiparams.sh in Linux), which contained all the necessary agent configuration parameters. It was a simple step, where you would enter the ODI master/work repository configuration, DB/ODI connection users and so on. After that, you would simply run the agent program and that was it, very short and easy to do. However, in ODI 12c this changed considerably: now we need to go through two setup wizards, one to create the necessary prerequisite DB schema for "Common Infrastructure Services" and another to configure the ODI Standalone agent for us.
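Just to illustrate how simple the old way was, configuring odiparams.bat was mostly a matter of setting a handful of variables like the ones below (the values here are illustrative only; you would use your own repository details):

rem illustrative values only - replace with your own repository details
set ODI_MASTER_DRIVER=oracle.jdbc.OracleDriver
set ODI_MASTER_URL=jdbc:oracle:thin:@dbserver:1521:ORCL
set ODI_MASTER_USER=ODI_MASTER
set ODI_MASTER_ENCODED_PASS=<encoded password>
set ODI_SUPERVISOR=SUPERVISOR
set ODI_SUPERVISOR_ENCODED_PASS=<encoded password>
set ODI_SECU_WORK_REP=WORKREP1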

This change added some extra complexity to an architecture that was (talking exclusively about the ODI Standalone Agent here) very simple to set up in the old days. Although Oracle provides wizards to minimize this effort, nothing was easier than simply configuring a parameter file and running a Java program. But enough grumbling, let's see how we may accomplish this task in ODI 12c.

The first wizard that we need to run is the Repository Creation Utility (RCU), located at ORACLE_HOME/oracle_common/bin/rcu.bat. Before we run it, we must understand what RCU is and what it can do for us. As its name suggests, it is a utility that may be used to create any repository component required for Oracle Fusion Middleware products, including the ODI Master/Work repository.

In our project, we did not create the ODI Master/Work repository with RCU; instead we got two empty Oracle DB schemas and installed ODI directly there. The reason why we did not use RCU in this situation is that RCU forces you to create one single Oracle DB schema that stores both the ODI Master and Work repositories, and this is not a good approach when dealing with large environments. We think that Oracle's rationale on this subject was to simplify certain ODI installs by unifying everything in a single place but, again, this removes some of ODI's architectural flexibility and complicates the use of more complex architectures in the future, like attaching multiple Work repositories to one Master.

So, if we already have the ODI Master/Work repositories created, why do we still need RCU? Because, from ODI 12c on, we need a third Oracle DB schema to store the "Common Infrastructure Services" tables that are required by the ODI Standalone agent, and the only way to create these tables is with the RCU utility.

Now that we have set our expectations around RCU, let’s run it. The first screen is just a welcome screen explaining what RCU is about, so just click Next.

1

Now let's select "Create Repository" and "System Load and Product Load". Just notice that you will be asked for a DBA user in the next steps, since this DBA user will be used to create the necessary database objects (including the DB schema itself) for the new "Common Infrastructure Services" schema. Click Next.

2

Add the database and DBA information and click Next.

3

The installer will check your information and, if everything is OK, all tasks will turn green. Click OK to proceed.

4

The next screen is where we select which components we want RCU to install. We may notice that RCU is able to create several schemas for different components, from ODI to WebLogic. Since we already have our Master and Work repositories created, we just need to select "AS Common Schemas"/"Common Infrastructure Services". Note that RCU will create this schema using whatever is entered in the "Create new prefix" option plus an "_STB" suffix (e.g., a DEV prefix results in a DEV_STB schema). Click Next.

5

The installer will check the prerequisites and, if everything is OK, green checks will appear. Click OK.

6

The next screen is where you define the password that will be used on the newly created DB schema. Add a password and click Next.

7

Define the Default and Temp tablespaces that will be used by the new schema and click Next.

8

If the tablespaces do not exist, they will be created for you. Click OK.

9

The installer will check once more if everything is OK and will also create the necessary tablespaces. Click OK.

10

The next page shows a summary of what the installer will do. If everything looks correct, click Create to create the necessary DB objects.

11

Check the Completion Summary, click Close and that's it! You have successfully created the "Common Infrastructure Services" schema, which is a prerequisite for the ODI Agent install.

12

The next step is to run the setup wizard that will configure the ODI Standalone agent for us. Run the config program at ORACLE_HOME/oracle_common/common/bin/config.cmd. In the first screen, let's create a new domain. This domain folder is where the ODI Agent batch programs, such as the start/stop agent scripts, will reside. Select a meaningful folder and click Next.

13

In the next screen, select "Oracle Data Integrator – Standalone Agent – 12.2.1.2.6 [odi]" and click Next. This step will also install some basic standalone components required by the ODI Agent.

14

Select a valid JDK location and click Next.

15

Since we did not create our Master and Work repositories using RCU, we won't be able to use the "RCU Data" option for Auto Configuration here. It is not a big deal, since we may select "Manual Configuration" and click Next.

16

Here we will need to input all the information related to two schemas: the ODI Master and the "Common Infrastructure Services". The way this screen works is tricky and confusing, since some options may be typed for all schemas at once. The best way to do it without any mistake is to select one of the schemas, add all its information, then uncheck it, check the other one and add all the information again. Click Next.

17

The installer will check the information that was added and, if it is OK, two green marks will be shown in the Status column. Click Next.

18

The next screen is used to define our ODI Agent name. Create a meaningful name here, since ODI users will use it to select which ODI agent their ETL processes will run on. Click Next.

19

Add the server address, the port and an ODI user/password that has "Supervisor" access. In the preferred Datasource option, leave it as odiMasterRepository and click Next.

20

Although we are not going to use our ODI Standalone Agent in a Node Manager object, which would be controlled by WebLogic, we still need to select a type for it and create a new credential. Add any name and password for it (don't worry, you will not use it for the ODI Standalone Agent) and click Next.

21

Review the install summary and, if everything is OK, just click Create.

22

Watch the steps turn into green checks and, once completed, click Next.

23

That’s the end of the configurations! You have successfully completed the ODI Standalone agent configuration and it is ready to run.

24

In order to run the ODI agent, open a CMD window, navigate to your base domain folder and run the ODI Agent start program with the agent name as an input argument: agent.cmd -NAME=DEV_AGENT. Wait a little bit for it to load and, when its status gets to "started", it is good to go.
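For example, assuming the domain was created at D:\Oracle\Middleware\user_projects\domains\base_domain (an illustrative path; use the domain folder you selected in the wizard), it would be:

rem illustrative domain path - use the folder you selected in the wizard
cd D:\Oracle\Middleware\user_projects\domains\base_domain\bin
agent.cmd -NAME=DEV_AGENT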

25

Now that the ODI agent is up and running, we may go to ODI Topology/Agent and double click the ODI agent that we have created. Now we may click on the Test button and see what happens. If everything is correct, you will see an information window saying that the ODI agent test was successful!

26

Congratulations, now you have an ODI 12c Standalone Agent configured. As you can see, we now have some extra steps to do compared to ODI 11g. I hope this post helps you to get prepared for this new kind of install.

Thanks, see ya!

 

ODICS is here!!!

Posted in New Features, ODI, ODI Architecture, ODICS on February 13, 2017 by Rodrigo Radtke de Souza

Hi all, a quick but interesting post today! Oracle has just announced its Oracle Data Integrator Cloud Service (ODICS)! You may read about it here and here. We do not have much information about it yet (if it is a complete ODI solution, how it works, if it is similar to what we did in our ODI Cloud article), but we are hoping to get those answers in the "Oracle Data Integration PM Webcast – Introducing Oracle Data Integrator Cloud Service (ODICS)" that will happen on Thursday, February 16, 2017 at 11:00 am Eastern Standard Time (GMT-05:00). We encourage all of you to join the webcast and have a first look at what ODICS looks like.

——- EDIT on Feb 23 ——-

Hi all, we've watched the ODICS webinar and, in the end, we were right: ODICS is very similar to what we did in our ODI Cloud article. The only difference is that ODICS is installed directly on JCS instead of on a DBCS machine as we did in the article.

In summary, ODICS is nothing more, nothing less than the current ODI 12c installed on a JCS machine that is maintained by Oracle. There is only one restriction (which may not be a big issue for some projects): the ODI agent needs to be running in the cloud, so you may not deploy it on premises. You may watch Oracle's ODICS webinar here. You may also learn more about ODICS in Christophe's blog post.

 

odics

That’s it folks, exciting times are ahead of us!

ODI 12c new features: Dimension and Cubes! Part 2 (Loading using Natural Keys)

Posted in Cubes, Dimensions, ETL, New Features, ODI 12c, ODI Architecture on September 14, 2016 by Rodrigo Radtke de Souza

Hi all, let's continue with our posts regarding "ODI 12c new features: Dimension and Cubes". As stated in the previous post, there are two ways to build our new objects: with natural keys or with surrogate keys. Today's post will focus on loading the dimensions and fact tables that were created using natural keys (please see our previous post for all the settings required for those objects).

Let's begin by loading our TIME dimension (which was mapped to our TIME Oracle table). This dimension will have information from three different source tables: SRC_YEAR, SRC_QUARTER and SRC_MONTH. Each of them has information regarding one TIME hierarchy level, so all of them need to be loaded in order to have a complete hierarchy in our final table.

The load process is very easy and intuitive: first create a new mapping and drag and drop the TIME dimension into it. Then just add the three source tables, map each one to its corresponding level in the TIME dimension and that's it. A very cool thing here is that ODI understands each level as a "separate" table/process, so you don't need to join your source tables before actually loading them into the target dimension. In other words, ODI allows you to have any kind of complex ETL for each dimension level, and each level will be treated as a "separate" data load that is glued together by the hierarchy settings that you mapped in the TIME dimension object. Here is what it looks like:

blog1

blog2

blog3

blog4

When we execute the mapping, we are going to see that the first "MAP_BEGIN" section will try to create and truncate the stage tables that were set in our dimension object. Here is an odd thing (as we also mentioned in the last post): we could not understand yet why ODI "forces" you to have the stage tables created prior to execution (so you can select them in the Dimension object), as it could very well create them for you (like it does for C$ and I$ tables). I know that Oracle may have had a reason for it but, as of now, the entire "stage tables" thing seems like an unnecessary setup. Anyway, the important thing here is that ODI will truncate the stage tables before any new execution.

blog5

The "MAP_MAIN" section is where it gets interesting. We can see here how ODI treats this new dimension object: each level has its own ETL, as we can see it loading YEAR, QUARTER and MONTH separately. First, the YEAR step will load its source into its stage table STG_YEAR; then the QUARTER step will join the information from its source table plus STG_YEAR into its STG_QUARTER table. Finally, the MONTH step, which is our leaf/grain level, will join its source table plus the STG_QUARTER table (which is already joined with the YEAR source) and merge it all together into our final TIME table. The result will look like below:

blog6

Since we are not using surrogate keys here, our dimension table will contain only the grain/leaf members, with the Natural Keys and attributes of every level that exists in the dimension. So one row will contain all the information regarding all the levels that it belongs to. When we create the mappings for the other two dimensions (they're very similar, so I'm not adding them here) and execute them, we will get the following results:

blog7

blog8
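To make that "one row carries all levels" idea concrete, a single TIME row would look something like the sketch below (the column names are assumed here just for illustration):

MONTH_NK  MONTH_DESC  QUARTER_NK  QUARTER_DESC  YEAR_NK
201601    Jan-2016    2016-Q1     Quarter 1     2016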

Let's go to our Fact table load. This one is very simple, since our source table already contains all the Natural Keys, which are the same ones that will exist in our FACT table (remember, we are not dealing with Surrogate Keys in this example). Here we just need to map each NK to its respective dimension column, map our Measure data and execute the mapping.

blog9

blog10

When we take a look in Operator, we are going to see a single merge command on our Fact table, where ODI uses all dimensions to check if that row already exists in the FACT table. If it exists, the measure column is updated; otherwise the row is inserted.

blog11
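Roughly speaking, the generated statement is equivalent to the sketch below (the table and column names are assumed for illustration; the actual code you will see in Operator will differ):

-- sketch only: real generated code in Operator will differ
MERGE INTO FACT F
USING (SELECT TIME_NK, PRODUCT_NK, CUSTOMER_NK, MEASURE FROM SRC_FACT) S
ON (F.TIME_NK = S.TIME_NK AND F.PRODUCT_NK = S.PRODUCT_NK AND F.CUSTOMER_NK = S.CUSTOMER_NK)
WHEN MATCHED THEN UPDATE SET F.MEASURE = S.MEASURE
WHEN NOT MATCHED THEN INSERT (TIME_NK, PRODUCT_NK, CUSTOMER_NK, MEASURE)
VALUES (S.TIME_NK, S.PRODUCT_NK, S.CUSTOMER_NK, S.MEASURE)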

The final result is below: as expected, all Natural Keys from our dimensions were inserted in the Fact table, together with our measure.

blog12

Now you may be wondering: why should I use these new features if it seems like a lot of work (settings) for little gain? Well, using them for Natural Keys only is really not worth it, since the only benefit seems to be ODI loading all the dimension levels at once, with different sources/ETL, in a single mapping object. That is a very cool feature, since it enables us to better organize our DW objects and have a clear view of our ETL logic, but it is still too little for the amount of work that we need to do to get there. But don't worry, it gets way better when we start to work with Surrogate Keys, since ODI will be able to abstract all the Surrogate Key management, and you will start to feel that all the necessary settings are finally worth the work.

That's it for today folks! We will be releasing the Surrogate Key settings and load posts very soon, so stay tuned to our blog! See ya!

PBCS, BICS, DBCS and ODI!!! Is that possible???

Posted in 11.1.1.9.0, 11.1.2.4, ACE, BICS, DBCS, EPM, EPM Automate, ODI, ODI 10g, ODI 11g, ODI 12c, ODI Architecture, Oracle, OS Command, PBCS, Performance, Uncategorized on August 15, 2016 by RZGiampaoli

Hey guys, today I’ll talk a little bit about architecture, cloud architecture.

I just finished a very exciting project in Brazil and I would like to share how we put everything together for a 100% cloud solution that includes PBCS, BICS, DBCS and ODI. Yes, ODI, and still 100% cloud.

Now you may be thinking: how can it be 100% cloud if ODI isn't in the cloud yet? Well, it can be!

This client doesn't have a big IT infrastructure; in fact, almost all of the client's databases are supported and hosted by providers. But still, the client has the right to have good forecasting and BI tools with a strong ETL process behind them, right?

Thanks to the cloud solutions, we don't need to worry about infrastructure anymore (or almost), and the only problem is… ODI.

We still don't have KMs for cloud services, or a cloud version of ODI, so basically we can't use ODI to integrate cloud tools…

Or can we? Yes, we can 🙂

The design is simple:

  1. PBCS: Basically, we'll work in the same way we would if it were a standalone implementation.
  2. BICS: Same thing here but, instead of using the database that comes with BICS, we need to contract a DBCS as well and point the DW schema to it.
  3. DBCS: Here's the trick. Oracle's DBCS is nothing more than a Linux machine hosted on a server. That means we can install other things on that machine, things like ODI and VPNs.
  4. ODI: We just need to install it in the same way we would in an on-premises environment, including the agent.
  5. VPNs: The final touch. We just need to create VPNs between the DBCS and the client DBs; this way ODI will have access to everything it needs.

Yes, you read it right: we can install ODI on the DBCS, and that makes ODI a "cloud" solution.

cloud solution

The solution looks like this:

BICS: It'll read directly from its DW schema in the DBCS.

PBCS: There's no direct integration between PBCS and the DBCS (where the ODI Agent is installed), but I found it a lot better and easier to integrate them using EPM Automate.

EPM Automate: With EPM Automate we can do anything we want: extract data and metadata, load data and metadata, execute business rules and more. For now, the easiest way to go is to create a script and call it from ODI, passing anything you need to it, as in the sketch below.
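A minimal sketch of such a script (the user, password, URL and job names below are all made up for illustration; you would typically call it from an OdiOSCommand step):

# illustrative only: user, password, URL and job names are made up
epmautomate login serviceadmin MyPassword123 https://planning-mydomain.pbcs.us2.oraclecloud.com mydomain
epmautomate uploadfile FCT_DATA.csv
epmautomate importdata FCT_DATA_IMPORT FCT_DATA.csv
epmautomate runbusinessrule AGG_ALL
epmautomate logout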

VPNs: For each server we need to integrate, we'll need one VPN created. With the VPNs between the DBCS and the hosts working, using ODI is extremely straightforward: we just need to create the topology as always, reverse-engineer anything we need and work on the interfaces.

And that's it. With this design you can have everything in the cloud and still have your ODI behind the scenes! By the way, you can do exactly the same thing with ODI on premises and, as a bonus, you can get rid of all the VPNs.

In another post I'll give more details about the integration between ODI and PBCS using EPM Automate, but I can say that it works extremely well and, as far as I know, it is a lot easier than FDMEE (at least for me).

Thanks guys and see you soon.

 

Remotely Zipping files with ODI

Posted in 11.1.1.9.0, ACE, Configuration, EPM, Essbase, ETL, Hacking, Hyperion Essbase, InfraStructure, ODI, ODI 10g, ODI 11g, ODI 12c, ODI Architecture, OS Command, Performance, Remotely, Tips and Tricks, Zip Files on April 5, 2016 by RZGiampaoli

Hi guys, how are you? It has been a long time since I last wrote something, but it was for a good reason! We were working on our two Kscope sessions! Yes, this year we will have 2 sessions and I think they will be great!

Anyway, let us get to the point!

Today I want to talk about something that should be very simple to do but, in the end, is a nightmare: zipping a file on a remote server…

A little bit of context! I was working on a backup interface for one client and, because their cubes are very big, I was trying to improve the performance as much as I could.

Part of the backup was to copy the .ind and .pag files and the data extract files as well. For one app, we are talking about 30 GB of .pag files and 40 GB of data extract files.

Their ODI infrastructure is like this:

Infrastructure

Basically, I need to extract/copy data from the Essbase server to the disaster recovery server (DR server). Nothing special here. The problem is that, because of the size of the files, I wanted to zip them first and then send them to the DR server.

If you use the ODI tools to zip the files, what they do is bring all the files to the ODI Agent server, zip everything and then send it back. I really did not want all this traffic on the network and all the time lost in this process (also, the agent server is a LOT less powerful than the Essbase server).

Regular ODI tools zip process

Then I started to research how I could do that (and thank you to my colleague and friend Luis Fernando Cairo, who helped me a lot by running a lot of tests on this).

First of all, we have three main options here:

  1. Create a .bat file and run it remotely: I did not like it because I do not want a lot of .bat files all over the place.
  2. Use Windows Invoke-Command: I would need a program like 7-Zip on the server, I do not have the rights to install things freely, and I do not want to install zip programs all over the place either.
  3. Use PsExec to execute a program on the server: same as the previous one.

Ok, I figured out that, in the end, I would need to create/install something on the server… and I hated it. Well, let's at least optimize the problem, right?

Then I was thinking: what do all Hyperion servers have in common? The answer is JAVA.

Then I thought, I can use the JAR command to zip a file:

jar cfM file.zip *.pag *.ind

Where:

c: Creates a new archive file named jarfile (if f is specified) or sends it to standard output (if f and jarfile are omitted), and adds to it the files and directories specified by inputfiles.

f: Specifies the file jarfile to be created (c), updated (u), extracted (x), indexed (i), or viewed (t). The -f option and filename jarfile are a pair — if present, they must both appear. Omitting f and jarfile accepts a “jar file” from standard input (for x and t) or sends the “jar file” to standard output (for c and u).

M: Do not create a manifest file entry (for c and u), or delete a manifest file entry if one exists (for u).

Hmm, things are starting to look better. Now I had to decide if I would use Invoke-Command or PsExec.

I started trying Invoke-Command but, after some time, I figured out that I can't execute the jar command using it.

Then my last alternative was PsExec.

The good thing about it is that it is just a zip file that you need to unzip on the agent server, add to the Environment Variables (PATH), and you are good to go.

It works amazingly well.

You can run anything remotely with this, and it's a centralized and non-invasive solution as well (which I liked).

You just need to:

psexec \\Server -accepteula -w "work dir" javapath\jar cfM file.zip *.pag *.ind

Where:

-w: Sets the working directory of the process (relative to the remote computer).

-accepteula: This flag suppresses the display of the license dialog.

There's one catch: for some unknown reason, the ODI agent does not get the PATH correctly, so you need to use the complete path to where PsExec was "installed". The ODI command looks like this:

OdiOSCommand "-OUT_FILE=Log_Path/Zip_App_Files-RUM-PNL.Log" "-ERR_FILE=Log_Path/Zip_App_Files-RUM-PNL.err"

D:\Oracle\PSTools\psexec \\server -accepteula -w \\arborpath\APP\RUM\PNL\ JAVA_PATH\jdk160_35\bin\jar cfM App_Files-RUM-PNL.zip *.pag *.ind

With this, we will have a process like this:

Remote Zip Process

This should not be something complicated, but it is. And believe me, I created a very fast process and the client is very happy.

I hope you guys enjoy it and see you soon.

Dynamically exporting objects from ODI

Posted in 11.1.1.9.0, ACE, ODI, ODI Architecture, Tips and Tricks on March 1, 2016 by Rodrigo Radtke de Souza

Hi all!

Today we will be talking about how we can export any object from ODI in a dynamic way. But first, why would we want to do that? One good example is to figure out which ODI objects changed during a time range and export their XML files to be stored in a code versioning repository. Another one could be to export all ODI scenarios with a certain marker, or from specific projects/folders, in an automated way. Or exporting Load Plans: few people realize it, but there is no easy way in ODI to export several Load Plans at once (you may move the desired Load Plans to a folder and then export the entire folder with "Child components export" selected, but that would be considered cheating 🙂 ). Or maybe you just want to do it for the sake of doing something in a dynamic way (if you have already read some of our posts, you know that we like dynamic coding!).

First, let's take a look at the OdiExportObject tool from the Toolbox.

1

From Oracle Documentation:
Use this command to export an object from the current repository. This command reproduces the behavior of the export feature available in the user interface.

Great, that's what we want: export any object (even Load Plans, which are not listed in the Oracle documentation) from the current repository. You may read about all its parameters here:

https://docs.oracle.com/middleware/11119/odi/develop/appendix_a.htm#ODIDG805

An example of a Load Plan export would look like this:

2

Two important parameters here: first we have the Object ID, which indicates which object you are about to export. This ID can be found by double clicking the ODI object and checking its Version tab:

3

The other parameter is the Classname. You may check it in the Oracle documentation but, as I said before, some class names may be missing there, like SnpLoadPlan. So, the easiest way to find the correct Classname for any ODI object is to export it using the user interface, like below:

4

5

Go to the folder and open the XML file in a text editor. The Classname will be the Object Class right at the beginning of the XML file:

6

Ok, but it does not seem very dynamic yet, since we need to pass the object ID/Classname in order to export the correct object. So here we will use two of our favorite techniques to make it dynamic: Command on Source/Target and ODI metadata repository SQL. This is how it works: we will create an ODI procedure that contains a SQL query against the ODI metadata repository in the "Command on Source" tab (returning all the objects that we want to export) and the OdiExportObject command in the "Command on Target" tab to actually export the objects.

Let's begin with the "Command on Source" tab. First create a connection to your ODI work repository and define a Logical Schema for it. In the Command, add the SQL that meets your requirement (in this example, retrieve all Load Plans that were created/modified since last week):

7

Our query needs to return three columns: OBJECT_ID, OBJECT_CLASS and FILE_NAME. This information will be passed to the "Command on Target" to identify which objects need to be exported.
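As a minimal sketch, a query along these lines would do it (the column names are assumed from the ODI 11g work repository model; double-check them in your repository version):

-- column names assumed from the ODI 11g work repository model; verify in your version
SELECT LP.I_LOAD_PLAN AS OBJECT_ID,
       'SnpLoadPlan' AS OBJECT_CLASS,
       LP.LOAD_PLAN_NAME AS FILE_NAME
FROM SNP_LOAD_PLAN LP
WHERE LP.LAST_DATE >= SYSDATE - 7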

Now we need to add the OdiExportObject call to the "Command on Target" tab, and this is pretty simple to do. Every ODI tool found in the Toolbox can be added to an ODI procedure and called using the "ODI Tools" Technology. If you are not sure how to do it, a good tip is to add the tool that you want to use to an ODI package, set its parameters as you would normally do and click on the "Command" tab, like below:

8

9

Now just copy the command text and add it to your procedure in the "Command on Target" tab, selecting "ODI Tools" as its Technology:

10

As you can see, we have added three # variables here that will receive the information from the "Command on Source" tab. When you run this procedure, if 10 Load Plans were created/modified since last week, those will be exported to the EXPORT_DIR folder.
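For reference, the final command in the "Command on Target" tab ends up looking something like this (the E:\EXPORT_DIR folder below is just a hypothetical example):

OdiExportObject "-CLASS_NAME=#OBJECT_CLASS" "-I_OBJECT=#OBJECT_ID" "-EXPORT_DIR=E:\EXPORT_DIR" "-FILE_NAME=#FILE_NAME.xml" "-FORCE_OVERWRITE=YES"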

In this example we queried the SNP_LOAD_PLAN table in order to get all the Load Plan information. Luckily, the ODI table names are very similar to their Classnames, so they should not be hard to find. Here is a list of the most common objects that you will likely export from ODI:

Capture

That’s it guys. I hope you liked it! See ya!

DEVEPM in “I want my ODI” OTN Podcast!!!

Posted in ACE, ArchBeat, EPM, ODI, ODI 12c, ODI Architecture, OTN, PodCast on January 15, 2016 by RZGiampaoli

Hi guys, how are you doing? It's a pleasure to announce that DEVEPM was invited by our friend Oracle ACE Michael Rainey to be part of the "I want my ODI" OTN podcast.

You can expect forty minutes of an open conversation between integration experts talking about the new features and the future of ODI.

Please stay a while and listen 🙂

“I Want my ODI”

Starring:

  • Oracle ACE Director Stewart Bryson
  • Jerome Francoisse
  • Oracle ACE Associate Rodrigo Radtke de Souza
  • Holger Friedrich
  • Oracle ACE Ricardo Giampaoli
  • Oracle ACE Michael Rainey

What happens when you gather a group of business intelligence experts who are passionate about Oracle Data Integrator? You’re about to find out.

This OTN ArchBeat podcast series was suggested by Oracle ACE Michael Rainey, who took on the guest producer and guest host roles for this program, selecting the topic and the panel.

As you’ll hear, the result is a wide-ranging, free-wheeling discussion of all things ODI. Take a listen!

Thanks guys and see you soon.

 

 

DEVEPM will be at KScope 16!!!

Posted in ACE, EPM, Kscope, Kscope 16, ODI, ODI Architecture, ODTUG, Tips and Tricks on December 14, 2015 by Rodrigo Radtke de Souza

Hi all, how are you doing? We are very happy to announce that not one, but TWO presentations were approved for Kscope 16! Here they are:

1) Incredible ODI tips to work with Hyperion tools that you ever wanted to know
"ODI is an incredible and flexible development tool that goes beyond simple data integration. But most of its development power comes from outside-the-box ideas.
Have you ever wanted to dynamically run any number of "OS" commands using a single ODI component?
Have you ever wanted to have only one datastore and loop through different sources without the need for different ODI contexts?
Have you ever wanted to have only one interface and loop through any number of ODI objects with a lot of control?
Have you ever needed a "Third Command Tab" in your procedures or KMs to improve ODI's powers?
Do you still use an old version of ODI and miss a way to know the values of the variables in a scenario execution?
Did you know that ODI has 4 "Substitution Tags"? And do you know how useful they are?
Do you use "Dynamic Variables" and know how powerful they can be?
Do you know how to control your ODI priority jobs automatically? (Stop, Start and Restart scenarios)
If you want to know the answers to all these questions, please join us in this session to learn the special secrets of ODI that will take your development skills to the next level."

The idea behind this presentation is to show the main secrets we use on a daily basis to improve code quality and re-use, as well as to explain why, and what more we could do with each of the tips we will present. It'll be an extremely helpful presentation with a lot of cool stuff and real life examples.

2) Take a peek at a smart EPM global environment
“In a fast-moving business environment, finance leaders are successfully leveraging technology advancements to transform their finance organizations and generate value for the business.
Oracle's enterprise performance management (EPM) applications are an integrated, modular suite that supports a broad range of strategic and financial performance management tools, helping businesses unlock their potential.
A global financial environment contains over 10,000 users around the world and relies on a range of EPM tools like Hyperion Planning, Essbase, SmartView, DRM and ODI to meet its needs.
This session shows all the complexity of this environment, describing the relationships between those tools, the techniques used to keep such a large environment in sync, and how the most varied needs of different businesses and laws around the world are met to create a complete and powerful business decision engine that takes a global company to the next level."

The idea in this presentation is to show the design we use at one big client and why we use it, the gains, and how it works. In fact, for the ones that follow our blog and our presentations, this will be the tie point of everything we talk about. It'll be an excellent presentation for people looking for ideas for integrated environments.

We are very excited about it, since we'll be talking about how to improve the EPM tools' potential using ODI and also how the EPM tools connect with each other in a global financial environment. We'll be very pleased if you guys show up at our presentation. It'll be great to meet everyone there and talk about EPM and other cool stuff!

Kscope is the largest EPM conference in the world and it will be held in Chicago, Illinois in June 2016. It will feature more than 300 technical sessions, five symposiums, deep dive sessions, and hands-on labs over the course of five days.

Got interested? If you sign up by March 25th you'll take advantage of the Kscope early bird rates, so don't waste any more time and come be part of the greatest EPM event in the world. If you are still unsure about it, read our post about how Kscope/ODTUG changed our lives! Kscope is indeed a life-changing event!

kscope16

Thank you very much everybody and we’ll be waiting for you at Kscope 16!