Archive for the Tips and Tricks Category

ORACLE SQL for EPM tips and tricks S01EP04!

Posted in ACE, Connect By, Oracle, Oracle Database, Performance, REGEXP, SQL, Tips and Tricks on April 9, 2019 by RZGiampaoli

Continuing the Oracle SQL for EPM series, today we'll extend the "Connect By" post a little bit and look at some neat things we can do when we mix regular expressions with Connect By.

First of all, let's start by explaining the basics of this approach. We can use Connect By and Level to generate rows out of thin air in Oracle.
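
A minimal example, using Oracle's single-row DUAL table, that returns the numbers 1 through 10 as ten rows:

SELECT LEVEL AS N
FROM   DUAL
CONNECT BY LEVEL <= 10;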

This is very useful when you need to generate 2 rows for each row you have in a query: you do a Connect By Level <= 2 and cross join it against the source query (a join with no join condition), duplicating everything in the source. Then you can use a Decode or a Case to say: Case when level = 1 then x when level = 2 then y, and so on.
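
A minimal sketch of that pattern, assuming a hypothetical source table MY_SOURCE with ACCOUNT and AMOUNT columns:

SELECT s.ACCOUNT,
       CASE WHEN d.LVL = 1 THEN s.AMOUNT       -- first copy of each row
            WHEN d.LVL = 2 THEN s.AMOUNT * -1  -- second copy, e.g. an offsetting entry
       END AS AMOUNT
FROM   MY_SOURCE s
CROSS JOIN (SELECT LEVEL AS LVL
            FROM   DUAL
            CONNECT BY LEVEL <= 2) d;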

But for us this is just the basic concept that we need to understand for our trick. Let's get back to our metadata table, the one with the parent-child relationship and with all metadata from all apps and all dims that we have. Let's say we need to create a hierarchical table based on a parent-child table. Some people do one union for each level in the source table (too much work), others add filters all over the place. I don't: I just use Connect By with regular expressions. Let's take a look.

As we can see, we have 17 levels or generations for the account dimension. Quite big. First of all, let's do a Connect By path to flatten the entire hierarchy into one row and remove the parent-child relationship.
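
A sketch of that flattening, assuming illustrative PARENT and CHILD column names in the metadata table:

SELECT SYS_CONNECT_BY_PATH(CHILD, '|') AS MEMBER_PATH
FROM   METADATA_TABLE
START WITH PARENT IS NULL
CONNECT BY PRIOR CHILD = PARENT;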

Now, as we can see, if we just do a Connect By Prior with SYS_CONNECT_BY_PATH, the query will generate one row for each level of data. We don't want that, unless you want the ability to input data at the upper levels of the hierarchy, which is usually not the case. What we want is the entire path for each leaf member, so we need to filter just the leaves in our queries. In my case my table already has a column that says whether a member has children or not, but I'll proceed as if I didn't have it.

If we don’t have the leaf information we can generate it by using Connect_By_IsLeaf that basically return 0 if is a parent or 1 if is a leaf. Again, we are filtering this outside the connect by prior and now we have just the leafs and the entire hierarchy above him. Next step is to split this back into rows.

REGEXP_SUBSTR is the perfect tool to split the string we have into new columns. We can use the following expression: REGEXP_SUBSTR(Column, '[^|]+', 1, 1). Basically, the pattern [^|]+ matches a sequence of characters that are not a pipe, and the last two parameters (1, 1) mean "start at position 1 and return the first occurrence", which gives us the first member of the path. Then I have another REGEXP_SUBSTR where I change the 1, 1 to 1, 2 to get the second member, and I keep doing this until I get to the 17th generation (as we saw in the beginning).

Basically, if you have 17 generations you need 17 columns in the table, and therefore 17 REGEXP_SUBSTR calls, one per column; you just change the occurrence parameter from 1, 1 to 1, 2, 1, 3… all the way up to 1, 17.
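
A sketch of the split, assuming the flattened paths live in a hypothetical FLAT_HIERARCHY table; only the occurrence parameter changes from column to column:

SELECT REGEXP_SUBSTR(MEMBER_PATH, '[^|]+', 1, 1) AS GEN1,
       REGEXP_SUBSTR(MEMBER_PATH, '[^|]+', 1, 2) AS GEN2,
       REGEXP_SUBSTR(MEMBER_PATH, '[^|]+', 1, 3) AS GEN3
       -- ...and so on, up to occurrence 17 for GEN17
FROM   FLAT_HIERARCHY;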

Here’s when ODI come in handy because if you use ODI you just need to do a select in the source to figure out how many generations you will have and then generate dynamically the 17 REGEXP and pass this to the target dynamically. It’s very easy since the code itself is always the same and the only thing that changes is the second parameter, and it’s a sequential parameter.

I hope this can be as useful for you as it is for me. See you soon!


ODI “Command on Source” buffered behavior

Posted in ODI, Tips and Tricks on March 28, 2019 by radk00

Hi all! This post was created based on a friend’s question to me a couple of days ago. He asked me the following:

  • I know that we may create a procedure with a SQL statement in "Command on Source" that returns N rows and triggers an OS command in the "Command on Target" tab for each of those rows, passing the results as parameters. My process takes a while for each row, so I was wondering: could I insert more rows into the table that is being read by "Command on Source" while it is still executing, so that the same execution would pick up the new rows as well?

In other words, he wanted to trigger the ODI procedure once and keep "feeding" the "Command on Source" table many times, so all his OS commands would get executed in a single procedure run. Instinctively I said no, because ODI needs to somehow "buffer" the "Command on Source" results (which may come from a SQL statement over several tables) and only then start to run the "Command on Target" commands. He agreed with me and moved on. However, the question stayed in my head: what would happen if the source table somehow changed while the ODI procedure was running? What if more rows were inserted, or if the table was truncated or deleted? I ran some tests to make sure I had given him the right answer.

I created a procedure that contains only an ODI sleep command in the Target, which will be executed once for each row that comes from the Source.

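Something like this sketch in the "Command on Target" tab, using ODI's OdiSleep tool (the 1000 ms value matches the one-second-per-row timings below):

OdiSleep "-DELAY=1000"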

Then in the Source I added a plain select over a test table.

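A sketch of the "Command on Source" query (the table and column names are made up here):

SELECT ID AS ROW_ID
FROM   ODI_SLEEP_TEST  -- one row here = one execution of the Command on Target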

I populated this table with 20 records and executed the proc. As expected, it took 20 seconds to complete (one second per row).


Then I did the following test: I executed the proc and right away inserted 20 more rows into the source table. As I suspected, the procedure finished again in 20 seconds, not 40, which means that ODI really buffers the results before executing the Command on Target.


But 20 is a small number; maybe ODI buffered it all right away, and maybe that's why it worked. I looked in Topology, and the Array Fetch Size for this connection was set to 250.


So I did another test with 1,000 rows, decreasing the wait time from 1000 to 100 milliseconds to see how it would go. Both executions (with 1,000 rows, and then adding 1,000 more rows while it ran) ended in 104 seconds (the four extra seconds are probably delays due to network, ODI overhead and so on).


So I changed my approach and tried something different: what if I add many rows (100,000), change the delay to '1', and truncate the table while it runs? The result kind of surprised me.


Truncate is a DDL command, so the error says that the object no longer exists, although it is still there in the DB (it does not exist in that session, as it was modified by a DDL command in another session). This test was not exactly what I wanted to test, and yet it surprised me, because I was expecting ODI to have already buffered all the results (the error happened after only 51 seconds) and not be sensitive to DDL commands. However, this may indicate that ODI does not buffer everything at once, and when it tried to read the table again, it was already truncated. So, let's do another test.

In the next test I doubled the row amount to 200,000, kept the delay at '1' and ran the procedure. While it was running, I deleted the rows instead of truncating the table. The delete ran fine; it took 18 seconds (less than the 51 seconds of the truncate test) and no errors happened this time. So it seems that ODI buffered all the results (which had doubled) within those 18 seconds.


After deleting and committing, the proc continued to run and finished in around 200 seconds, as expected.


I did one more test to see if the time it took to fail in the truncate test would increase with the number of rows. I inserted 200K rows again (double the amount of the previous truncate test), ran the process and truncated the table. It took the same 51 seconds to fail. So I believe that, although ODI can buffer all the results in under 51 seconds, Oracle somehow tells the process that the table was changed by a DDL command and sends a "stop" signal to the connection after a certain amount of time. I don't see any other explanation for this behavior.

I could run some other tests, especially to see how large the ODI buffer actually is, but for now I'm OK with the results. In summary, we figured out that:

  • ODI does buffer the results of the "Command on Source/Target" and we cannot modify them once the procedure starts;
  • Although the results are buffered, the ODI procedure may fail if DDL commands are issued against the source tables;
  • DML commands do not seem to affect the buffered results, as expected;

If any of you have done tests like this, please share them with us! These kinds of things are never documented, so we need to keep testing to see how they behave behind the scenes.

Thanks! See you soon!

ORACLE SQL for EPM tips and tricks S01EP03!

Posted in Query, SQL, Tips and Tricks on March 26, 2019 by radk00

Hi all! Continuing the Oracle SQL for EPM series, today's post is quite simple, but the issue may consume an extreme amount of time when we are asked to troubleshoot "why don't these numbers match" type of scenarios. It's related to UNION and UNION ALL operations. Let me describe what happened to me in one of those situations.

The client had a table with several columns that would calculate some metrics related to their business. It was a "cumulative" type of table, where metrics were being aggregated on top of each previous period's numbers. In a very summarized way, let's use a simplified example.


So, for Feb-19, the cumulative SUM would be 150 for Account 1 and 60 for Account 2. The next month, March, brought a new row of 30 for Account 1, but no rows at all for Account 2.


His logic summed the March period for Account 1 correctly (30) and added it to the previous 150. However, since Account 2 had no March rows, his SQL was not reporting Account 2 in March at all. To make the calculations easier, he decided to add a "dummy" metric of 0 for every existing Account, so his logic would calculate correctly even if a record did not exist for that period.


The process would still give the correct value of 30 for Account 1 in March and 0 for Account 2, which would then be summed against the previous periods. It all worked fine, until one day someone complained that the numbers could not be right and that some numbers were missing. When I checked the code, I quickly spotted the mistake: he had combined his periodic metric and his "dummy" metric using a UNION in Oracle. But why was it producing wrong numbers? Oracle explains:

  • UNION combines the results of two queries, which eliminates duplicate selected rows. The UNION operator returns only distinct rows that appear in either result.

Let’s picture the problem. His logic worked fine for Feb and Mar, but in Apr, something like this happened:


If you summed the Apr period for Account 1, the number should have been 80, but he was getting only 60.


This is due to UNION’s behavior: It will run an implicit distinct in the combined dataset, which in this case is eliminating good data. I went ahead and changed the UNION to UNION ALL, which Oracle states:

  • The UNION operator returns only distinct rows that appear in either result, while the UNION ALL operator returns all rows. The UNION ALL operator does not eliminate duplicate selected rows.

The result after the change was correct again: 80 for Account 1 and 0 for Account 2.
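
A self-contained sketch of the behavior; the values are made up so that the totals mirror the story (40 + 20 + 20 should give 80, but UNION collapses the two identical 20 rows and returns 60):

WITH apr_metrics AS
 (SELECT 'Account 1' AS ACCOUNT, 40 AS AMOUNT FROM DUAL UNION ALL
  SELECT 'Account 1', 20 FROM DUAL UNION ALL
  SELECT 'Account 1', 20 FROM DUAL),
dummy_metrics AS
 (SELECT 'Account 1' AS ACCOUNT, 0 AS AMOUNT FROM DUAL UNION ALL
  SELECT 'Account 2', 0 FROM DUAL)
SELECT ACCOUNT, SUM(AMOUNT) AS TOTAL
FROM  (SELECT * FROM apr_metrics
       UNION             -- change this to UNION ALL to get the correct 80
       SELECT * FROM dummy_metrics)
GROUP BY ACCOUNT;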

That’s it folks! Simple things that may give us enormous headaches and wrong numbers, so please always check out when you see an UNION in the queries! It may be implicitly omitting some good data there.

See ya!

ORACLE SQL for EPM tips and tricks S01EP02!

Posted in ACE, Connect By, EPM, Oracle Database, Performance, Query, SQL, Tips and Tricks, WITH Clause on March 21, 2019 by RZGiampaoli

Hey guys, how are you? Let's continue the SQL for EPM series. Today I'll keep talking about WITH, with a small bonus on Connect By :). Let's start.

A lot of people use Connect By on a daily basis, but as far as I have seen, most of them don't know how to use it properly. I've already lost count of the number of people complaining about performance issues with Connect By.

The thing is, Connect By works a little differently from everything else in Oracle. We can say that Connect By has 2 stages, and we'll see why with this example. Let's get back to our metadata table and do a Connect By to extract the Balance Sheet hierarchy from the Juno application.

As we can see, inside this table we have more than one application and more than one hierarchy for each application. That's OK, we just need to filter them in our SQL, right?

If we filter on APP_NAME and HIER_NAME we'll get all accounts for that application, which generates 12,622 rows. By the way, this table holds the metadata from all our applications, and we always filter by APP_NAME and HIER_NAME to select what we want (the table is also partitioned and sub-partitioned by these 2 columns). It's important to know that, without filtering anything, this table has 2,260,372 rows.

OK, now, if we want to get just the BS hierarchy we just need to do the Connect By, right?
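
A sketch of that first, naive attempt (table and column names are illustrative), with the filters in the WHERE clause:

SELECT PARENT, CHILD
FROM   METADATA_TABLE
WHERE  APP_NAME  = 'JUNO'
AND    HIER_NAME = 'BS'
START WITH PARENT = 'BS'
CONNECT BY PRIOR CHILD = PARENT;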

That works… perfect… or not? Well, in fact, this is the wrong way to use Connect By, because of what I said before: the 2 stages.

As you can see, this query took 25 seconds just to return the first 50 rows. In an integration this will take way more time; in fact, if you join this table to a data table to do a SUM at the BS level, it will take ages to return.

The reason is that, for the Connect By, Oracle first does everything that is after the words Connect By and Start With, and then, and only then, applies what is in the WHERE condition. That means it first did the Connect By over those 2,260,372 rows (and they are all repeated across apps and hierarchies), and only after all that processing did it filter what we wanted, the APP_NAME and the HIER_NAME. The right way is to move those filters into the Start With and Connect By clauses themselves.
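
A sketch of the corrected version, with the same filters repeated inside both hierarchical clauses:

SELECT PARENT, CHILD
FROM   METADATA_TABLE
START WITH PARENT = 'BS'
       AND APP_NAME  = 'JUNO'
       AND HIER_NAME = 'BS'
CONNECT BY PRIOR CHILD = PARENT
       AND APP_NAME  = 'JUNO'
       AND HIER_NAME = 'BS';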

Now it looks way better: 0.375 seconds to do exactly the same thing as before, and the only thing I did was move our filters to the right place. Now Oracle filters and does the Connect By at the same time.

Now, if you do a SYS_CONNECT_BY_PATH and want to get just the leaves (to have the complete path of the hierarchy), you can filter the leaves in the WHERE clause (and the filter needs to be there; otherwise you won't have the entire hierarchy during the Connect By).
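
A sketch of the full query:

SELECT CONNECT_BY_ROOT CHILD           AS ROOT_MEMBER,
       SYS_CONNECT_BY_PATH(CHILD, '|') AS MEMBER_PATH
FROM   METADATA_TABLE
WHERE  CONNECT_BY_ISLEAF = 1           -- applied only after the hierarchy is built
START WITH PARENT = 'BS'
       AND APP_NAME = 'JUNO' AND HIER_NAME = 'BS'
CONNECT BY PRIOR CHILD = PARENT
       AND APP_NAME = 'JUNO' AND HIER_NAME = 'BS';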

Now you see that the query filtered what needed to be filtered during the Connect By execution, and afterwards it filtered just the leaves (using CONNECT_BY_ISLEAF, which tells whether a member is a leaf or not).

Also, I used CONNECT_BY_ROOT to generate the root member used in this query (BS) and SYS_CONNECT_BY_PATH to generate the entire path of the metadata (very useful to transform parent/child tables into generation tables using this technique and a regexp [we'll see this in another post]).

OK, now that the "bonus" is written, let's talk about the WITH clause, which was the main subject here. Even with this Connect By written the right way, with the filters in the right place, we can still improve the performance using WITH.

That’s right, the idea is to prepare our subset of data using WITH before we ask Oracle to do the Connect by and leave it as simple as possible. Let’s take a look:

This is by far the best way to use a Connect By clause. Instead of WITH you could use a sub-query, but I think this way is easier and better organized as well. Also, I know the time difference doesn't look too big between the previous example and this one, but when you join this with data and start to SUM everything, you'll see a huge difference between the two methods.

Also, sometimes Oracle gets lost with the previous method, making everything slower, but with the WITH method that never happens, so I advise you to start using it.

I hope you guys enjoyed this little tip. See you next time!

ORACLE SQL for EPM tips and tricks S01EP01!

Posted in DEVEPM, ETL, Oracle, Oracle Database, Performance, SQL, Tips and Tricks, Uncategorized, WITH Clause on January 21, 2019 by RZGiampaoli

Hey guys, how are you? I decided to start a new series called ORACLE SQL for EPM tips and tricks. The idea here is to show the most useful SQL commands for EPM, how to improve performance, tips, tricks and everything else that can be useful from a SQL point of view!

And to start well, I'll show something very old but very useful that I don't see too many people using these days: the "WITH" clause.

I love to use "WITH" in my code. It helps organize the code, helps to optimize it and, more importantly, makes it more efficient.

When you use "WITH", Oracle treats the query inside it as an inline view, or resolves it as a temporary table, making it easier and faster for Oracle to access that data if you need it again.

Simply put, every time you need to write a query that uses the same table over and over, it'll probably be way more efficient if you use "WITH".

The "WITH" clause works a little bit differently from regular SQL. We can say that we split the query in two: one part is the "WITH" declaration (which will behave like a table) and the other is the SQL that queries the "WITH".

WITH name_of_temp_table_here AS
(
    YOUR QUERY HERE
),
name_of_temp_table_here2 AS
(
    SELECT *
    FROM name_of_temp_table_here, another_table...
)
SELECT *
FROM name_of_temp_table_here, name_of_temp_table_here2

In the "WITH" you can have any kind of query you want: you can do joins and group bys, you can have more than one "WITH", you can use the result of one "WITH" inside the next "WITH", you can do a lot of things.

But for now, let's take a look at a more real example. Let's say that you have a table like I do, that contains all the metadata from all your applications.

Let’s say you want to get the Parent of a attribute that is associated with your Entity dimension. You probably will be doing something like this:

In the "FROM" we call the table twice, and we join and filter everything we need. Since we don't have attribute associations at all levels, we do a "Left Join" to make sure all Entities come back in the query. If we run an Explain Plan now, we'll get something like this:

As you can see, Oracle is querying the METADATA_EXTRACT table twice, and each time it's doing a FULL scan of one partition (the 'ENTITY' and 'PHYSICAL_GEOGRAPHY' partitions).

Now, if we change the query to a "WITH" clause (and we can do it in different ways; this is just one of them), we'll have something like the sketch below.
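
A sketch of the rewritten query, with the same hypothetical columns as above; the WITH isolates the two partitions once, and the bottom query reuses it twice:

WITH md AS
 (SELECT DIM_NAME, PARENT, CHILD, ATTRIBUTE_MEMBER
  FROM   METADATA_EXTRACT
  WHERE  DIM_NAME IN ('ENTITY', 'PHYSICAL_GEOGRAPHY'))
SELECT ent.CHILD   AS ENTITY_MEMBER,
       attr.PARENT AS ATTRIBUTE_PARENT
FROM   md ent
LEFT JOIN md attr
       ON  attr.CHILD    = ent.ATTRIBUTE_MEMBER
       AND attr.DIM_NAME = 'PHYSICAL_GEOGRAPHY'
WHERE  ent.DIM_NAME = 'ENTITY';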

As you can see, we achieved the same results with slightly different code. Now I have all my filters in the "WITH" query, and at the bottom I just call the "WITH" query twice and do what needs to be done.

If we run an Explain Plan now we will have:

As you can see, Oracle now queries the METADATA_EXTRACT table just once and then queries the SYS_TEMP temporary table twice. The only problem with this query, the way I did it, is that since we are creating a temporary table filtering 2 partitions and then later filtering again, it's basically doing 2 FULL scans on the same TEMP table; even so, it's a few seconds faster than the original query.

But this is just an example of how we can reduce the number of times Oracle needs to query a table. WITH is not a miracle clause or anything like that; it's just another tool that we can use to optimize our code, and its performance needs to be evaluated on a case-by-case basis.

And even if the performance doesn't change, I believe the "WITH" clause makes any query easier to read, to test, to update and to write, since you can divide your huge query into small bits and then join everything in the bottom query.

"WITH" is a huge subject and we'll talk more about it in the next post, and that time we'll be improving performance as well, using "WITH" together with "CONNECT BY".

Guest Post – Automatic data type conversion for different technologies in ODI

Posted in Guest Post, ODI, Tips and Tricks on January 14, 2019 by radk00

/* This is a guest post written by Eduardo Zancanella, one of our friends at DEVEPM. Enjoy and thanks Eduardo for the content!*/

Hello folks,

Hope you are having a great day today.

We would like to share a quick trick when it comes to transporting data between different technologies.

Let’s suppose we were requested to create an ETL process to migrate data from PostgreSQL to Oracle.

Our source has some columns set as TEXT, a datatype that does not exist in Oracle. To be able to perform the ETL we need to translate it to CLOB or VARCHAR2, for instance. But how would ODI know this, so it can create the temporary tables accurately?

Easy peasy: go to your Physical Architecture, select PostgreSQL and check if the datatype is there. If not, create it following the steps below (if it already exists, go straight to step 3!).

1) Right click on Data Types, New Datatype


2) Fill in the information for the new TEXT datatype.


3) Click on "Converted To" and set which datatype you want it to match in your target; in this case we chose VARCHAR2.


With all this setup done, let's run through our example really quickly.

Firstly, let’s reverse our source and check if the TEXT fields are in there, keep in mind that we are trying to simplify, so don’t expect to see a full picture of the tool.


Secondly, let’s create our target table, be aware that here you must use the datatype you chose on the step 3 above for any fields that will be converted. After reversing it you will see as below:


At this point our mapping is created, and LKM SQL to Oracle and IKM Oracle Incremental Update have been chosen. A quick check on how the CUSTOM_2716 field looks:


The SOURCE hint is an extra tip: the transformation in this case has to happen before the data is inserted into the temporary tables, so always keep that in mind.

Time to run!

As a first step, ODI creates the C$ table, and here is where the magic happens.

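In the Operator log, the C$ creation would look something like this sketch (schema, table name and sizing are illustrative; the point is that the TEXT column became VARCHAR2):

create table ODI_STAGING.C$_0CUSTOMER
(
    CUSTOM_2716  VARCHAR2(4000)  -- TEXT on the PostgreSQL side, converted by the datatype mapping
)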

C$ was successfully created!

After that, ODI follows its flow and everything should be fine.

That is how we can automatically convert different datatypes among different technologies.

Thank you everyone.

Cheers!

Comparing ODI Scenario versions using SQL

Posted in 11.1.1.9.0, ODI 11g, Tips and Tricks on October 26, 2017 by radk00

Hi all, it has been a while since our last post! Busy days, you know… Anyway, let's see what we've got today.

The situation that I’m about to describe often happens in large/old ODI projects. Imagine the following: you received a task to change an ODI component that was created one year ago by someone else that is not even in the company anymore. The code is running fine in PROD and the business want a small fix to it. You open the ODI package and it contains a lot of interfaces, procedures, variables, etc. You need to change the code in one single interface, which seems very simple. You change it, save it, generate a new scenario and move it to PROD. When it gets there, the job fails due to an error in another interface that you did not touch! You start to troubleshoot and figure out that someone else changed something in DEV, saved it, but did not move the code to PROD. Unfortunately this ”unwanted” code change was included by you when you generated your scenario and now the mess is already created. If you already passed through this situation, than this post may help you.

Code versioning and code migration processes in general are things that everybody knows are necessary, but sometimes they are overlooked by companies because people think they are too complicated or do not work very well. ODI is a big example of this, since its native versioning system is not very intuitive and most of the time does not work the way we want it to. There are companies out there that even build their own code versioning systems (outside of ODI) to manage ODI code versions. I dare say that most companies don't have any kind of code versioning or formal code migration process for ODI at all, which causes some big headaches in situations like the one I just described.

The technique that I’ll explain here is not about code versioning itself. I’ll describe something that we may use when we do not have any other way to guarantee that the scenario that we are generating was not changed by someone else during a time period. Just to let you know, all the following SQL was done in ODI 11.1.1.9 version.

Let’s begin with the basics. Everything that you create in ODI is stored in SNP tables in its WORK and MASTER repositories. For this post we will focus in two main tables from the WORK repository:

  • SNP_SCEN: contains the basic information about the scenarios that exist in that WORK repository (like name, version, creation date and so on);
  • SNP_SCEN_TASK: the "main" scenario table, which contains all the steps/tasks performed by a scenario. You may query this table and see exactly which tasks (like SQL commands, variables and flows) the scenario will perform when you run it in Operator;

So now let’s get back to our problem. There is a scenario that is running fine in Production for one year now (let’s say it calls ODI_SCENARIO Version 1_00_00) and this scenario is also in Development. I’ll make a change in only one interface (I’ll add a simple SUBSTR in a column named PACK_SLIP) of this scenario in Development and create a new version of it (ODI_SCENARIO Version 2_00_00). How do I guarantee that my code change was the only thing that changed in this new scenario and that it does not contain any other code from other developers? The answer lies on the SNP_SCEN_TASK table.

If you go to the ODI WORK repository in Production, you may run the following query to get all the steps that the scenario currently executes in its 1_00_00 version:

SELECT NNO,
       SCEN_TASK_NO,
       TASK_TYPE,
       TASK_NAME1,
       TASK_NAME2,
       TASK_NAME3,
       EXE_CHANNEL,
       DEF_CONTEXT_CODE,
       DEF_LSCHEMA_NAME,
       DEF_CONNECT_ID,
       DEF_IND_COMMIT,
       DEF_ISOL_LEVEL,
       DEF_PLAN_COMP,
       COL_CONTEXT_CODE,
       COL_LSCHEMA_NAME,
       COL_CONNECT_ID,
       COL_ISOL_LEVEL,
       COL_IND_COMMIT,
       COL_PLAN_COMP,
       ORD_TRT,
       IND_ERR,
       LOG_LEV_DET,
       IND_LOG_NB,
       DEF_TECH_INT_NAME,
       COL_TECH_INT_NAME,
       IND_LOG_METHOD,
       COL_TXT,
       COL_IND_ENC,
       COL_ENC_KEY,
       DEF_TXT,
       DEF_IND_ENC,
       DEF_ENC_KEY,
       IND_LOG_FINAL_CMD
FROM   SNP_SCEN_TASK
WHERE  SCEN_NO IN (SELECT SCEN_NO
                   FROM   SNP_SCEN
                   WHERE  SCEN_NAME    = 'ODI_SCENARIO'
                   AND    SCEN_VERSION = '1_00_00')
ORDER BY SCEN_TASK_NO, NNO;


There are a lot of important columns in this table that can give you valuable information. However, COL_TXT and DEF_TXT are generally the most important ones, since they contain the code that is generated in the "Source and Target tabs" inside procedures and interfaces. After you run this SQL in the Production environment, you may export the result however you like. In this example I'll export it as "Text" using Oracle SQL Developer (right click on any row and select "Export").


Save it somewhere on your computer.


The result will be a plain text file with one line per scenario task.


Now let’s run the SQL in the Development ODI WORK repository. The only thing that we will change now is our filter that will go from SCEN_VERSION = ‘1_00_00’ to SCEN_VERSION = ‘2_00_00’, which is the new scenario version that we just generated. Do the same steps as the Production SQL and you should end up with something like this:


Now you need to compare both files. I like to keep it simple and use Notepad++ with the "Compare" plugin. You may use any other tool for comparing text files (Beyond Compare is awesome as well). In Notepad++ you just need to open both files, click on the Production file and "Set as First to Compare", then click on the Development file and "Compare".


When you compare them, the plugin will highlight all the differences it found.


The "Compare NavBar" showed a lot of differences, way more than the single change I had just made. However, we need to analyze them calmly to verify what they really mean. You may navigate through the changes using the "Next" button in the toolbar.


There will be some blocks of code that contain "similar differences" due to the nature of ODI. For example, when you change one single thing in one column of one interface, it will be reflected in several steps within the Knowledge Module (in the C$/I$/E$ creation, for example).


The first difference was saying that we changed the order of the PACK_SLIP column (the column where we added the SUBSTR command). Actually we didn't change the order, we changed its content; however, when ODI creates its temporary tables (like C$, I$ and E$) we cannot control the order in which the columns are created, as the code is generated automatically by ODI. So we don't need to worry about this change, as it was somehow "expected". When we click "Next", we get similar ones where a column just changed its order. Continuing further down, we get to the place where our change occurred.


Cool, this was the place where we changed our code, and it looked good. Let's keep going to see what else changed. Then we got something weird: a lot of "1" changes appeared, all the way down to the end of the file (explaining why we had so many changes in the comparison NavBar).


This "1" comes from the IND_LOG_FINAL_CMD column, which identifies whether the step should "Log Final Command" or not. This does not affect the code itself, but for the sake of my analysis I went to the KM to see if someone had changed this option.


My suspicion was right: someone had changed this option in one of the KMs, which got reflected in many places in my ODI scenario. There were no more changes in my comparison, so I could conclude that:

  • PACK_SLIP changed its column order in some temporary table creation steps, which is OK;
  • my PACK_SLIP mapping change (the SUBSTR) showed up in the Development code, as expected;
  • there was a change in the KM to "Log Final Command" in a specific KM step, which is also OK and does not affect the code itself;

No other differences were found between the scenarios, so I could safely deploy the new version to Production. If someone else had changed something more critical, the compare method would have caught it, and we could have reverted it before moving to Production.

There are other ways to get and compare the code: if both scenarios are in the same DB (or reachable over a DB link) you could just run two SQLs and compare the result sets directly, as in the sketch below, or you could export both scenario XML files and compare those. But the method in this post is generic, works in most cases and is fairly easy to use.
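
For the same-database case, a sketch of the compare using MINUS; PROD_WORK is a hypothetical schema (or DB link) holding the Production work repository, and DBMS_LOB.SUBSTR is there because the two text columns may be CLOBs, which MINUS cannot compare directly:

SELECT SCEN_TASK_NO, NNO,
       DBMS_LOB.SUBSTR(DEF_TXT, 4000, 1) AS DEF_TXT,
       DBMS_LOB.SUBSTR(COL_TXT, 4000, 1) AS COL_TXT
FROM   SNP_SCEN_TASK
WHERE  SCEN_NO = (SELECT SCEN_NO FROM SNP_SCEN
                  WHERE SCEN_NAME = 'ODI_SCENARIO' AND SCEN_VERSION = '2_00_00')
MINUS
SELECT SCEN_TASK_NO, NNO,
       DBMS_LOB.SUBSTR(DEF_TXT, 4000, 1),
       DBMS_LOB.SUBSTR(COL_TXT, 4000, 1)
FROM   PROD_WORK.SNP_SCEN_TASK
WHERE  SCEN_NO = (SELECT SCEN_NO FROM PROD_WORK.SNP_SCEN
                  WHERE SCEN_NAME = 'ODI_SCENARIO' AND SCEN_VERSION = '1_00_00');

Swap the two halves to see what exists only in the Production version.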

That’s it guys, I hope you have enjoyed!