Dynamic topology configuration in ODI without using contexts


— EDITED on Aug 12, 2015 —

Great news! Oracle has just released “Patch 21156142: Parametric connections in ODI 12c don’t work anymore”, which enables this technique to be used in ODI 12.1.3! Thanks, Marius, for this great tip!

— End EDIT —

— EDITED on May 6, 2015 —

Unfortunately, this technique only works in ODI 11g. ODI 12c does not support dynamic password settings. We will let you know if we figure out a workaround for it.

— End EDIT —

Hi all, how are you doing? I came across this interesting situation during a project some time ago. Imagine the following scenario: you have a transactional system that was implemented in several places around the world. The system and its tables are exactly the same, but they are distributed across different databases, with different user names, database schemas, passwords, and so on. Your task is to load data from all of these regional databases into your data warehouse, unifying the data in one single place. Since all the tables have the same structure, how can you accomplish that in ODI without creating a lot of duplicate interfaces?

Anyone who has already worked with ODI will easily answer that this can be accomplished using different contexts, and they are totally right, as we can see below:

image1

In summary, you create one interface object loading a table from the source system into your data warehouse, and each context that you have created points to a different database. It is just a matter of running that interface once per context, and that’s it. However, there are situations where we cannot create that many contexts in a real production environment. In my case, I couldn’t create those contexts because the production execution repository is used by several other projects, where those new contexts would not make any sense at all. In production, the users have just one context, called “Production”, and they run any ODI job using that context, independently of the project that the ODI scenario belongs to. We could teach the users to work with multiple contexts, but my experience as a developer taught me to keep things as simple as possible for the end users, because one day (for sure) someone will execute a job using the wrong context, and a mess may be created.

So, how can we accomplish this task without duplicating one interface per database that we need to load? Here is the technique:

Create one physical data server for each database that you need to load plus a dynamic data server:

image2

ADB_AMERICAS, ADB_APJ, ADB_EMEA and ADB_ZZ are the different databases that you need to connect to and load data from. Those data servers are created as you would normally create any data server in ODI, and they contain their own connection information, data and work schemas, and so on. In ADB_DYNAMIC, you will configure the following:

Data Server:
User: #ODI_PROJECT.CONN_USER
Password: #ODI_PROJECT.CONN_PASS
JDBC URL: #ODI_PROJECT.CONN_URL

Physical Schema:
Schema: #ODI_PROJECT.CONN_DATA_USER
Work Schema: #ODI_PROJECT.CONN_WORK_USER

image3

image4 image5

As you can see, we have added five ODI variables to the topology information. These variables will receive all the dynamic information for the different databases that we need to load. Now create one logical schema for each physical schema that you have created and point them accordingly in your context. Notice that we are also creating a logical schema called ADB_DYNAMIC that points to our dynamic physical schema.

image13

This is an important step. Your models in ODI need to point to the ADB_DYNAMIC logical schema for this to work. However, if you have not developed your interfaces yet and still need to reverse-engineer your models, you will not be able to do it against this dynamic topology, since it contains only ODI variables and no actual values. So you will need to temporarily point your ADB_DYNAMIC logical schema to a “normal” physical schema, let’s say ADB_AMERICAS, while you develop your interfaces. This way, you will be able to reverse-engineer your models using the ADB_DYNAMIC logical schema, and you will be able to test all your interfaces against one “valid” database without any additional work. After you complete the development, just point the ADB_DYNAMIC logical schema back to the ADB_DYNAMIC physical schema and proceed to the following step.

Now comes the tricky part. In order to make this technique work, you will need a “control/parent” scenario that calls the “interface/child” scenarios, passing as parameters all the connection information for each different database that you need to load. Let’s see an example:

image6

This child scenario contains all the interfaces that will load your DW, and it receives as parameters all the connection information that will be used in the ADB_DYNAMIC data server/physical schema. Remember that, at this point, all interfaces point to a physical schema that has only ODI variables in it, not actual values, so if you try to execute this package alone, it will not work. Now it is just a matter of calling this scenario multiple times, once for each database that you need to load. First create a parent scenario with a procedure that calls those child scenarios, as below:

image7

This procedure has two steps for each database that you need to load. The first step gets one specific database’s connection information, and the second step calls the child scenario, passing that connection information along. It will look like this:

image8

If we open the first step, we will have the following:

image9

This step creates Java variables that temporarily hold the connection information. The API commands get the connection information from the Command on Source tab, so in that tab we have the following:

image10

We don’t need any code here. We just need the Schema combo box pointing to the desired database, in this case ADB_AMERICAS. This allows all the Java variables to pick up the ADB_AMERICAS information, which is then used in the next step:

image11

This step calls the child scenario, passing as parameters all the connection information that needs to be used in the ADB_DYNAMIC connection. When this child scenario runs, all the interfaces inside it will use the ADB_AMERICAS connection information and will load data from it dynamically. Cool, isn’t it? You do the same configuration for the other steps in this procedure; the only thing that changes is the logical schema that you point to in the “Command on Source” tab. So “Get ADB_APJ Connections” gets the ADB_APJ connection information, and so on.
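Putting the two steps together, they can be sketched roughly as below. This is only a sketch based on the screenshots: the Java variable names and the scenario name CHILD_LOAD are hypothetical, while the SRC_* codes are the standard substitution API codes for the step’s source connection. The first snippet runs as Java BeanShell in the Command on Target tab (with the Command on Source tab pointing to ADB_AMERICAS and containing no code); BeanShell variables persist across steps of the same session, so the second step, written in the ODI Tools technology, can reuse them:

```
<@
// Step 1 - "Get ADB_AMERICAS Connections" (Command on Target, Java BeanShell):
// capture the connection details of the schema selected on the Command on Source tab
String vUser = (String) odiRef.getInfo("SRC_USER_NAME");
String vPass = (String) odiRef.getInfo("SRC_PASS");
String vUrl  = (String) odiRef.getInfo("SRC_JAVA_URL");
String vData = (String) odiRef.getInfo("SRC_SCHEMA");
String vWork = (String) odiRef.getInfo("SRC_WORK_SCHEMA");
@>
```

```
OdiStartScen "-SCEN_NAME=CHILD_LOAD" "-SCEN_VERSION=-1" "-SYNC_MODE=2"
"-ODI_PROJECT.CONN_USER=<@=vUser@>" "-ODI_PROJECT.CONN_PASS=<@=vPass@>"
"-ODI_PROJECT.CONN_URL=<@=vUrl@>" "-ODI_PROJECT.CONN_DATA_USER=<@=vData@>"
"-ODI_PROJECT.CONN_WORK_USER=<@=vWork@>"
```

Note that the variables passed to OdiStartScen are prefixed with the project code, exactly as they appear in the ADB_DYNAMIC topology settings.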

As you may have noticed, this example calls the child scenarios using SYNC_MODE=2, which means that they execute the data loads against the different databases in parallel. So, if your architecture allows such parallelism, you may get a great performance boost from this technique as well. We also added a “Wait for child scenarios” step as the last step, so we can tell when all of the child scenarios have completed:

image12
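The “Wait for child scenarios” step can be built with the OdiWaitForChildSession tool. A minimal sketch, with an arbitrary 10-second polling interval:

```
OdiWaitForChildSession "-PARENT_SESS_NO=<%=odiRef.getSession("SESS_NO")%>" "-POLL_INT=10"
```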

And that is the end of this post. I hope you all enjoyed it!


24 Responses to “Dynamic topology configuration in ODI without using contexts”

  1. Great work, thanks.

  2. Great work indeed. As we are working on a similar solution, here are some comments that may help other readers:

    Another way to deploy the solution above is to call Load Plans instead of calling scenarios, for example:
    OdiStartLoadPlan "-LOAD_PLAN_NAME=LoadPlanDynamicServers" "-PROJECT1.SOURCESERVER=#SRC_SERVE"
    "-PROJECT1.DESTSERVER=#DEST_SERVE" …etc.
    This avoids adding a WaitForChildren step, as the Load Plan orchestrates the timing/parallelism and serialization of the packages and procedures within it.

    Since packages don’t allow processing interfaces in parallel, you may have to create a few packages, group a few interfaces per package, and then use the packages’ generated scenarios with the Load Plans.

    just a thought
    Thanks
    Emine

  3. I have a similar requirement and was looking for a pattern that could be used with ODI. It is really appreciated that you took the time to share this scenario in such detail. It just saved me hours of work and probably a bad design.
    Thanks
    Manjit

  4. Hi,
    In the section where we define the source as an Oracle database: “We don’t need any code here. We just need the Schema combo box pointing to the desired database, in this case ADB_AMERICAS. This will allow all Java variables to get ADB_AMERICAS information that will be used in the next step”
    Will this approach also work on a file data server?
    Let me elaborate on a solution I was developing, taking a cue from your blog. I need to dynamically map a network drive using the command:
    NET USE t: #VarPath /USER:
    under target, with OS command as the technology. In the source, the technology is File and the schema is FOLDER:
    Logical schema FOLDER -> Physical schema FOLDER -> Data server FOLDER
    The data server FOLDER is mapped under the File technology and uses a network location defined by the variable #VarPath as the Host (Data Server). The user is a domain user ID with read-only access to #VarPath, and correspondingly the password is its password.
    The physical data server FOLDER connection test succeeds. However, the NET USE step fails, and the generated code is also strange:
    NET USE t: #Project_SS00122343.VarPath /USER:APAC\SYSTEMODIID
    #VarPath is “//cdere0992-ddgt/APAC” and is set right before the NET USE step by executing a scenario of the variable, defined with “keep all history”:
    OdiStartScen "-SCEN_NAME=VARPATH" "-SCEN_VERSION=-1" "-SESSION_NAME=GettingAPACInbound"
    Could you help?

    • Somehow the following was omitted (the template tags were eaten by the comment form):
      NET USE t: #VarPath /USER:<%=odiRef.getInfo("SRC_USER_NAME")%> <%=odiRef.getInfo("SRC_PASS")%>
      where the command was explained, and at execution time it got translated to:
      NET USE t: #Project_SS00122343.VarPath /USER:APAC\SYSTEMODIID <@=snpRef.getInfo("SRC_PASS")@>

      • Hi Suprio,

        Yes, this approach also works on a file data server. I just did a small test here and it works as expected. Let me show what I did; then you may compare and see where it may have gone wrong in your code.

        First I created a logical schema called DIN_FOLDER in the File technology. Then I created a physical schema on File, created a data server, and added the variable name in the USER property as follows:

        #DEV_SANDBOX.DIN_FOLDER_VAR

        REMEMBER: You must prefix the project code here, or else it will not work.

        So far, it is the same as you did, right? Then I created a package and added the variable as the first step of the package. To simplify, I just changed its type to SET and assigned it the value D:\. Then I created a procedure with the Source tab pointing to DIN_FOLDER, File technology, and no code there. In the Target tab I added the following, using the ODI Tools technology:

        OdiOSCommand "-ERR_FILE=D:\err.txt"
        dir <%=odiRef.getInfo("SRC_USER_NAME")%> >> d:\test.txt

        Note that I used % in this case because it is evaluated BEFORE @. If you use @, it will not work. Then I executed it and got the following in Operator:

        OdiOSCommand "-ERR_FILE=D:\err.txt"
        dir D:\ >> d:\test.txt

        Can you test this example and see if it works for you? Please let me know the results, and if you get any errors, please send them so I can try to help you out.

        Thanks.

  5. […] can further change your process to accept dynamic topology information as seen in our previous post here, but in this example we will keep things simpler to demonstrate the technique. After you add this […]

  6. Jojibabu Says:

    Excellent!
    But can we do it the other way around? I mean a single source and multiple targets, where the targets are different DBs. Any clue?

    • Hi! Yes, for sure. The logic is the same, but instead of adding the variables to the source topology connection, you add them to the target topology connection. This way you may have one single source loading multiple target databases with a single interface component. To increase performance, you may even create a scenario with this interface and call it multiple times in parallel, once for each target DB. You may also have a situation where your target database connection is the same, but with multiple target schemas; in that case it is even simpler, because you may just change the knowledge module to get the schema from an ODI variable instead of the default topology information.
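      A hedged sketch of that target-side variant (the variable names here are hypothetical): in the “get connections” procedure step, point the Command on Target schema box at the desired target database and read the DEST_* codes of the substitution API instead of the SRC_* ones:

      ```
      <@
      // Mirror of the source-side example, reading the DEST_* codes instead
      String vTgtUser = (String) odiRef.getInfo("DEST_USER_NAME");
      String vTgtPass = (String) odiRef.getInfo("DEST_PASS");
      String vTgtUrl  = (String) odiRef.getInfo("DEST_JAVA_URL");
      @>
      ```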

  7. Hi,
    I am simply attempting to use variables in the schema name and password fields of a data server (Oracle DB). The user name gets resolved correctly, but the password variable does not seem to get replaced correctly.
    I am getting: invalid username/password; logon denied
    Any ideas about what could be going wrong?
    Thank you!

    • Hi Gabriel,

      A very common “mistake” when using this technique is forgetting to prefix the variable with its project code in the topology setting where the variable is used. For example, in Topology, the schema name and password need to contain #PROJECT_CODE.VARIABLE. If you add just #VARIABLE, it will not work. You can double-click your project in the Designer tab to get the correct project code for your variable.

      You can also test whether the values are being passed correctly to the variables using an ODI procedure. Just create a new procedure and add one step to it. Then select Jython as the technology and type the following:

      raise '#PROJECT_CODE.VARIABLE1 , #PROJECT_CODE.VARIABLE2'

      Add this procedure to your ODI package before the interface that you are going to run and check the results.

      Hope it helps 🙂

  8. Hi, can you please let me know how you created the variables that you used in the JDBC URL? I appreciate all the info you provided, which is of great help, but I am unable to implement the variables in the JDBC URL. Thanks again!

    • Hi Kaushik. It is a “normal” project variable. Just create a variable in your project and use it in the JDBC URL option. Just note that you have to prefix the variable name with your project code in the JDBC URL, or else it will not work properly. Thanks!

  9. Prosenjit Says:

    I tried to do it your way but I am having a problem: at the physical schema level, ODI is not able to resolve the schema dynamically through the variable. It just picks up the variable name as a constant, so it is not able to complete the database schema connection.
    Any help?

    Thanks & regards,

    Prosenjit

    • Hi Prosenjit,

      Please see my response to Gabriel and try to “raise” the variables to see whether they have the correct values before the connection:

      “A very common ‘mistake’ when using this technique is forgetting to prefix the variable with its project code in the topology setting where the variable is used. For example, in Topology, the schema name and password need to contain #PROJECT_CODE.VARIABLE. If you add just #VARIABLE, it will not work. You can double-click your project in the Designer tab to get the correct project code for your variable.

      You can also test whether the values are being passed correctly to the variables using an ODI procedure. Just create a new procedure and add one step to it. Then select Jython as the technology and type the following:

      raise '#PROJECT_CODE.VARIABLE1 , #PROJECT_CODE.VARIABLE2'

      Add this procedure to your ODI package before the interface that you are going to run and check the results.

      Hope it helps.”

  10. Prosenjit Says:

    Hi,
    I found out that the password variable is not working in the data server password setting. It gives a password mismatch error. If I set the password manually, it works fine. How can I move forward with the password variable?

    Thanks & Regards,

    Prosenjit

  11. Prosenjit Says:

    Hi,
    I am using ODI 12c. In ODI 11g, setting the data server password through a variable is possible, and all the other data server settings can be set through variables as well; I tested that. But setting the password through a variable raises an error in ODI 12c.

    Kindly advise me on how I can do it in ODI 12c.

    Thanks & Regards,

    Prosenjit

    • Hi Prosenjit. I did some research and unfortunately this “password” technique does not work in ODI 12c. All the other fields work fine in ODI 12c, but not the password. As of now, I don’t know a workaround for it, but we will let you know if we discover something about it.

      Thanks!

      • Patch 21156142 has just been released, solving parametric connections in ODI 12.1.3.
        Now it works on 12.1.3 too.

        Regards,
        Marius

  12. Arun Batta Says:

    Hi,

    I applied Patch 21156142 but the password issue is still not resolved. Any help on this?

    Thanks,
    Arun

  13. Hi, has the parametric connections problem in ODI 12c been solved?

    • It seems so, but we haven’t tested it in our environments yet. Oracle released a patch specifically to fix this issue, so it should be working by now.

      • I applied Patch 21156142 and Patch 21550807, but the password issue is still not resolved 😦

      • Unfortunately we have not tested the patch, because we are still working heavily in ODI 11g. If the patch is not working, the best option is to go to Oracle Support and open another SR with them. Maybe they will create a patch to fix the patch? 🙂
