Archive for the InfraStructure Category

Oracle Ramps Up Free Online Learning and Certifications for Oracle Cloud Infrastructure and Oracle Autonomous Database

Posted in Certification, InfraStructure, Oracle, Oracle Database on April 14, 2020 by RZGiampaoli

Hey guys, how are you?

Just a quick one today: Oracle is offering free access to online learning content and certifications for Oracle Cloud Infrastructure and Oracle Autonomous Database to a broad array of users, and it will be available until May 15, 2020.

This is a great opportunity, and if you want to learn more, you can find it here.

Thank you guys and see you soon.


ODI 12c Standalone Agent Install for an ODI 11g guy

Posted in InfraStructure, Install, ODI, ODI 11g, ODI 12c, ODI Architecture on July 17, 2017 by Rodrigo Radtke de Souza

Hi everybody! Today's post is about installing an ODI 12c standalone agent. This is not a "new" topic and the steps to perform it can also be found on the Oracle site; however, it caught me a little "off guard" when I was asked to install one, because the process changed considerably compared to ODI 11g (and yeah, we still work A LOT with ODI 11g, so installing an ODI 12c agent was "new" for us).

Prior to ODI 12, the ODI agent was configured by simply editing a file called odiparams.bat (odiparams.sh on Linux), which contained all the necessary agent configuration parameters. It was a simple step where you would enter the ODI Master/Work repository configuration, the DB/ODI connection users and so on. After that, you would simply run the agent program and that was it, very short and easy to do. However, in ODI 12 this changed considerably: now we need to go through two setup wizards, one to create the necessary pre-requisite DB schema for "Common Infrastructure Services" and the other to configure the ODI Standalone agent for us.
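
Just to illustrate how simple the old way was, here is a rough sketch of the kind of entries you would edit in odiparams.sh on ODI 11g. The parameter names are from memory and may vary slightly between releases, and the host/repository values are placeholders, so treat it as illustrative only:

# odiparams.sh (ODI 11g) - illustrative excerpt, not a complete file
ODI_MASTER_DRIVER=oracle.jdbc.OracleDriver
ODI_MASTER_URL=jdbc:oracle:thin:@dbserver:1521:ORCL
ODI_MASTER_USER=ODI_MASTER
ODI_MASTER_ENCODED_PASS=<encoded password>
ODI_SECU_WORK_REP=WORKREP
ODI_SUPERVISOR=SUPERVISOR
ODI_SUPERVISOR_ENCODED_PASS=<encoded password>

After saving the file, you would just start the agent program and be done.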

This change added some extra complexity to an architecture that was (talking exclusively about the ODI Standalone Agent here) very simple to set up in the old days. Although Oracle provides wizards to minimize this effort, nothing was easier than simply configuring a parameter file and running a Java program. But enough grumbling, let's see how we may accomplish this task in ODI 12.

The first wizard that we need to run is the Repository Creation Utility (RCU), located at ORACLE_HOME/oracle_common/bin/rcu.bat. Before we run it, we must understand what RCU is and what it can do for us. As its name suggests, it is a utility that may be used to create any repository component required for Oracle Fusion Middleware products, including the ODI Master/Work repository.

In our project, we did not create the ODI Master/Work repositories with RCU; instead we got two empty Oracle DB schemas and installed ODI directly there. The reason why we did not use RCU in this situation is that RCU forces you to create one single Oracle DB schema that stores both the ODI Master and Work repositories, and this is not a good approach when dealing with large environments. We think that Oracle's rationale on this subject was to simplify certain ODI installs by unifying everything in a single place, but again, this removes some of ODI's architecture flexibility and complicates the use of more complex architectures in the future, like attaching multiple Work repositories to one Master.

So, if we already have the ODI Master/Work repositories created, why do we still need RCU? Because, from ODI 12 onwards, we need a third Oracle DB schema to store the "Common Infrastructure Services" tables that are required by the ODI Standalone agent, and the only way to create these tables is with the RCU utility.
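
As a side note, RCU can also be scripted if you prefer a command line over the wizard. A rough sketch of a silent run creating only the Common Infrastructure Services (STB) schema would look something like the line below; the connect string and prefix are just placeholders and the exact flags can vary between Fusion Middleware versions, so check rcu.bat -help before relying on it:

ORACLE_HOME\oracle_common\bin\rcu.bat -silent -createRepository -databaseType ORACLE -connectString dbserver:1521:ORCL -dbUser sys -dbRole sysdba -schemaPrefix DEV -component STB

In this post, though, we will follow the wizard screens.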

Now that we have set our expectations around RCU, let’s run it. The first screen is just a welcome screen explaining what RCU is about, so just click Next.


Now let's select "Create Repository" and "System Load and Product Load". Just notice that you will be asked for a DBA user in the next steps, since this DBA user will be used to create the necessary database objects (including the DB schema itself) for the new "Common Infrastructure Services" schema. Click Next.


Add the database and DBA information and click next.


The installer will check your information and, if everything is OK, all tasks will show green. Click OK to proceed.


The next screen is where we select which components we want RCU to install. Notice that RCU is able to create several schemas for different components, from ODI to WebLogic. Since we already have our Master and Work repositories created, we just need to select "AS Common Schemas"/"Common Infrastructure Services". Note that RCU will create this schema using whatever is entered in the "Create new prefix" option plus an "_STB" suffix (for example, a DEV prefix results in a schema named DEV_STB). Click Next.


The installer will check the installation prerequisites and, if everything is OK, a green check will appear. Click OK.


The next screen is where you define the password that will be used for the newly created DB schema. Add a password and click Next.


Define the Default and Temp tablespaces that will be used by the new schema and click Next.


If the tablespaces do not exist, they will be created for you. Click OK.


The installer will check once more that everything is OK and also create the necessary tablespaces. Click OK.


The next page shows a summary of what the installer will do. If everything looks correct, click Create to create the necessary DB objects.


Check the Completion Summary, click Close and that's it! You have successfully created the "Common Infrastructure Services" schema, which is a pre-requisite for the ODI Agent install.


The next step is to run the setup wizard that will configure the ODI Standalone agent for us. Run the config program at ORACLE_HOME/oracle_common/common/bin/config.cmd. In the first screen, let's create a new domain. This domain folder is where the ODI Agent batch programs, such as the agent start/stop scripts, will reside. Select a meaningful folder and click Next.
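
If your ODI home is on Linux, the equivalent script should be config.sh in the same folder (a small assumption on my part, based on the standard Fusion Middleware layout):

On Windows:

ORACLE_HOME\oracle_common\common\bin\config.cmd

On Linux:

$ORACLE_HOME/oracle_common/common/bin/config.sh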


In the next screen you will select “Oracle Data Integrator – Standalone Agent – 12.2.1.2.6 [odi]” and click next. This step will also install some basic Standalone components required for the ODI Agent.


Select a valid JDK location and click next.


Since we did not create our Master and Work repositories using RCU, we won’t be able to use the “RCU Data” option for Auto Configuration here. It is not a big deal, since we may select “Manual Configuration” and click next.


Here we will need to input all the information related to two schemas: the ODI Master and the "Common Infrastructure Services". The way this screen works is tricky and confusing, since some options can be typed for all schemas at once. The safest way to do it without any mistake is to select one of them, add all of its information, then uncheck it, check the other one and add all of its information as well. Click Next.


The installer will check the information that was added here and, if it is okay, two green marks will be shown in the Status column. Click Next.


The next screen is used to define our ODI Agent name. Create a meaningful name here, since ODI users will use it to select which ODI agent will run their ETL processes. Click Next.


Add the server address, the port and an ODI user/password that has "Supervisor" access. For the preferred Datasource option, leave it as odiMasterRepository and click Next.


Although we are not going to run our ODI Standalone Agent under a Node Manager (which would be controlled by WebLogic), we still need to select a type for it and create a new credential. Add any name and password for it (don't worry, you will not use it for the ODI Standalone Agent) and click Next.


Review the install summary and if everything is ok, just click Create.


Watch all the steps as they turn into green checks and, once completed, click Next.


That's the end of the configuration! You have successfully completed the ODI Standalone agent setup and it is ready to run.


In order to run the ODI agent, open a CMD window, navigate to your base domain folder and run the ODI Agent start program with its name as an input argument: agent.cmd -NAME=DEV_AGENT. Wait a little bit for it to load and, when its status gets to "started", it is good to go.
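
As a quick reference, this is roughly how the start command looks on both platforms, assuming the agent scripts live under the domain's bin folder and the agent was named DEV_AGENT as in this example (adjust paths and names to your environment; on Linux there should also be an agentstop.sh in the same folder to shut it down):

On Windows:

<DOMAIN_HOME>\bin\agent.cmd -NAME=DEV_AGENT

On Linux:

<DOMAIN_HOME>/bin/agent.sh -NAME=DEV_AGENT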


Now that the ODI agent is up and running, we may go to ODI Topology/Agents and double-click the ODI agent that we have created. Then we may click the Test button and see what happens. If everything is correct, you will see an information window saying that the ODI agent test was successful!


Congratulations, now you have an ODI 12c Standalone Agent configured. As you can see, there are some extra steps to do compared to ODI 11g. I hope this post helps you get prepared for this new kind of install.

Thanks, see ya!

 

OTN Article: Building a 100% Cloud Solution with Oracle Data Integrator

Posted in ACE, ArchBeat, BICS, DBCS, DEVEPM, EPM, EPM Automate, InfraStructure, ODI, ODI 11g, ODI Architecture, Oracle, Oracle Database, OS Command, OTN, PBCS, Tips and Tricks on January 23, 2017 by RZGiampaoli

Hi guys, how are you? Today I want to share our new OTN article, Building a 100% Cloud Solution with Oracle Data Integrator.
The article covers how to integrate BICS, PBCS, DBCS and ODI, and explains step by step how to create a 100% cloud solution using ODI (everything in the cloud, including ODI :)).

This is a perfect article for companies that are thinking about going to the cloud and have some doubts, or are wondering how they can integrate their current infrastructure with cloud services.

I hope you guys enjoy it, and see you soon.

Let’s Join DEVEPM @ KSCOPE 16

Posted in ACE, EPM, Essbase, ETL, Hyperion Essbase, Hyperion Planning, InfraStructure, Kscope 16, ODI, ODI 10g, ODI 11g, ODI 12c, ODI Architecture, ODTUG, Oracle Database, OS Command, Performance, Tips and Tricks on April 5, 2016 by RZGiampaoli

Hi guys, how are you?

Just a quick post about this year's KSCOPE. This year we'll have two excellent sessions:

Take a Peek at Dell’s Smart EPM Global Environment:

Ricardo Giampaoli, TeraCorp

Co-presenter(s): Rodrigo Radtke de Souza, Dell

When: Jun 27, 2016, Session 2, 10:15 am – 11:15 am

Topic: EPM Applications – Subtopic: Planning

In a fast-moving business environment, finance leaders are successfully leveraging technology advancements to transform their finance organizations and generate value for the business.
Oracle's Enterprise Performance Management (EPM) applications are an integrated, modular suite that supports a broad range of strategic and financial performance management tools that help businesses unlock their potential.

Dell’s global financial environment contains over 10,000 users around the world and relies on a range of EPM tools such as Hyperion Planning, Essbase, Smart View, DRM, and ODI to meet its needs.

This session shows the complexity of this environment, describing the relationships between those tools, the techniques used to keep such a large environment in sync, and how the most varied needs of different businesses and regulations around the world are met to create a complete and powerful business decision engine that takes Dell to the next level.

Incredible ODI Tips to Work with Hyperion Tools

Ricardo Giampaoli, TeraCorp

Co-presenter(s): Rodrigo Radtke de Souza, Dell

When: Jun 27, 2016, Session 6, 4:30 pm – 5:30 pm

Topic: EPM Platform – Subtopic: EPM Data Integration

ODI is an incredible and flexible development tool that goes beyond simple data integration. But most of its development power comes from outside-the-box ideas.

  • Did you ever want to dynamically run any number of “OS” commands using a single ODI component?
  • Did you ever want to have only one data store and loop different sources without the need of different ODI contexts?
  • Did you ever want to have only one interface and loop any number of ODI objects with a lot of control?
  • Did you ever need to have a “third command tab” in your procedures or KMs to improve ODI powers?
  • Do you still use an old version of ODI and miss a way to know the values of the variables in a scenario execution?
  • Did you know ODI has four “substitution tags”? And do you know how useful they are?
  • Do you use “dynamic variables” and know how powerful they can be?
  • Do you know how to have control over your ODI priority jobs automatically (stop, start, and restart scenarios)?

If you want to know the answers to all these questions, please join us in this session to learn the special secrets of ODI that will take your development skills to the next level.

Join us at KSCOPE 16 and book our two sessions in your schedule. They will be very good sessions and I'm sure that you'll learn some new stuff that will help you in your EPM environment!


Remotely Zipping files with ODI

Posted in 11.1.1.9.0, ACE, Configuration, EPM, Essbase, ETL, Hacking, Hyperion Essbase, InfraStructure, ODI, ODI 10g, ODI 11g, ODI 12c, ODI Architecture, OS Command, Performance, Remotely, Tips and Tricks, Zip Files on April 5, 2016 by RZGiampaoli

Hi guys, how are you? It has been a long time since I last wrote something, but it was for a good reason: we were working on our two Kscope sessions! Yes, this year we will have two sessions and I think they will be great!

Anyway, let us get to the point!

Today I want to talk about something that should be very simple to do but, in the end, is a nightmare: zipping a file on a remote server.

A little bit of context: I was working on a backup interface for one client and, because their cubes are very big, I was trying to improve the performance as much as I could.

Part of the backup was to copy the .ind and .pag files as well as the data extract files. For one app we are talking about 30 GB of .pag files and 40 GB of data extract files.

Their ODI infrastructure is like this:

Infrastructure

Basically I need to extract/copy data from the Essbase server to the disaster recovery server (DR server). Nothing special here. The problem is that, because of the size of the files, I wanted to zip them first and then send them to the DR server.

If you use the ODI tools to zip the files, what happens is that all the files are brought to the ODI agent server, zipped there and then sent back. I really did not want all this network traffic and all the time lost in this process (also, the agent server is a LOT less powerful than the Essbase server).

Regular ODI tools zip process

Then I started to research how I could do that (and thank you, my colleague and friend Luis Fernando Cairo, who helped me a lot by running a lot of tests on this).

First of all, we have three main options here:

  1. Create a .bat file and run it remotely: I did not like it because I do not want a lot of .bat files all over the place.
  2. Use the Windows Invoke-Command: I would need a program on the server, like 7-Zip, which I do not have access to install freely, and I do not want to install zip programs all over the place either.
  3. Use PsExec to execute a program on the server: same problem as the previous one.

OK, I figured out that in the end I'll need to create/install something on the server... and I hate it. Well, let's at least optimize the problem, right?

Then I was thinking: what do I have in common on all Hyperion servers? The answer is JAVA.

Then I thought: I can use the JAR command to zip a file:

jar cfM file.zip *.pag *.ind

Where:

c: Creates a new archive file named jarfile (if f is specified) or sends it to standard output (if f and jarfile are omitted), adding to it the files and directories specified by inputfiles.

f: Specifies the file jarfile to be created (c), updated (u), extracted (x), indexed (i), or viewed (t). The -f option and filename jarfile are a pair — if present, they must both appear. Omitting f and jarfile accepts a “jar file” from standard input (for x and t) or sends the “jar file” to standard output (for c and u).

M: Do not create a manifest file entry (for c and u), or delete a manifest file entry if one exists (for u).
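
By the way, a quick way to sanity check the result without extracting anything is the t (list) option of the same jar command, using the file name from the example above:

jar tf file.zip

This just lists the archive entries to standard output, so you can confirm that all the .pag and .ind files made it in.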

Hmm, things started to look better. Now I had to decide whether to use Invoke-Command or PsExec.

I started trying Invoke-Command but, after some time, I figured out that I could not execute the jar command using it.

Then my last alternative was PsExec.

The good thing about PsExec is that it comes as a zip file that you just need to unzip on the agent server, add to the PATH environment variable, and you are good to go.

It works amazingly.

You can run anything remotely with this, and it is a centralized and non-invasive solution as well (which is what I liked).

You just need to:

psexec \\Server -accepteula -w "work dir" javapath\jar cfM file.zip *.pag *.ind

Where:

-w: Sets the working directory of the process (relative to the remote computer).

-accepteula: This flag suppresses the display of the license dialog.

There's one catch: for some unknown reason, the ODI agent does not pick up the PATH correctly, so you need to use the complete path where PsExec was "installed". The ODI command looks like this:

OdiOSCommand "-OUT_FILE=Log_Path/Zip_App_Files-RUM-PNL.Log" "-ERR_FILE=Log_Path/Zip_App_Files-RUM-PNL.err"

D:\Oracle\PSTools\psexec \\server -accepteula -w \\arborpath\APP\RUM\PNL\ JAVA_PATH\jdk160_35\bin\jar cfM App_Files-RUM-PNL.zip *.pag *.ind

With this, we will have a process like this:

Remote Zip Process
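
And if you ever need to restore, the same trick works in reverse using jar's x (extract) option. A sketch with made-up target paths, assuming the zip has already been copied to the DR server:

D:\Oracle\PSTools\psexec \\drserver -accepteula -w D:\Backup\RUM\PNL JAVA_PATH\jdk160_35\bin\jar xf App_Files-RUM-PNL.zip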

This should not be something complicated, but it is. Believe me, I created a very fast process and the client is very happy.

I hope you guys enjoy it and see you soon.

DEVEPM on ODTUG’s Technical Journal – Unleashing Hyperion Planning Security Using ODI

Posted in ACE, EPM, Hyperion Planning, InfraStructure, Kscope 14, ODI, Technical Journal on April 22, 2015 by Rodrigo Radtke de Souza

Hi all,

ODTUG's Technical Journal just released our article about our Kscope 14 presentation "Unleashing Hyperion Planning Security Using ODI". In this article you will find all the details on how to dynamically set up Hyperion Planning security based on the application metadata repository information. Here is the abstract:

“In some Hyperion Planning projects, security becomes so complex that it takes more than just granting access to some security groups on the high-level members of the dimensions. Global companies often have the necessity to create multiple Planning applications to meet the diverse regions of the globe. However, what happens when the business requires a single application with a single plan type that contains cost centers from different regions around the entity hierarchy? Moreover, is that data restricted according to the region’s security group using only one attribute dimension? Furthermore, does each user need to see aggregated values correctly for your region only? This paper demonstrates how to generate and maintain leaf-level member security settings based on attribute dimension on a global Hyperion Planning application using only ODI and Hyperion Planning application metadata repository information.”

Link to the full article.

ODTUG's Technical Journal is an excellent place where you can find awesome technical content for free! You just need to become a member (create a free login at the ODTUG site). If you are not a member yet, I really recommend you become one ASAP!

See you later!

5 Steps to do an In-Place Migration of OWB/DB from 11.2.0.1 to 11.2.0.4

Posted in EPM, InfraStructure, Install, Migration, Oracle 11.2.0, Oracle 11.2.0.4, Oracle Database, OWB, Upgrade on September 1, 2014 by RZGiampaoli

 

Hi all, today I decided to get out of my comfort zone to talk about the in-place OWB migration and a little bit of infrastructure. I decided to do that for the following reasons:

First, I hate infrastructure, probably because I am not good at it, and at all the clients I have worked with, it is always a headache. Sometimes because it is poorly done, sometimes because everything grows but the infra. But the worst part is this:

Nobody knows Hyperion or the BI tools well enough and, in the end, when something bad happens, I am the one who needs to fix the problem. And normally there are two problems: the infra itself and the infra guys asking for evidence that the issue comes from the infra. Sorry guys, but this is a consultant's point of view :)

Second, because it took me two days to understand what the OWB migration was and how to do it.

Third, I searched the entire internet and did not find any complete step-by-step tutorial about it, probably because the migration happens to be a database migration, not an OWB migration.

Before I start, let me talk a little about my infra. As I said, I hate infra and I am not good at it, so please consider this post an attempt to help people, not an absolute truth coming from an expert.

Because I hate infra, I tried to make mine the best it could be (or at least, the best I could make it).

The Server:


This is a cheap machine (even in Brazil, where everything is expensive :)). I only upgraded it with 32 GB of memory and, for now, it has only one 1 TB HD (I have more HDs in my drawers, but I am not needing them right now).

The Virtualization Software:

On this server I installed VMware ESXi 5.1.0 on a 1 GB pen drive. This is a good tip: ESXi is by far (and I mean light years far) the best virtualization software on the market and, for a server with at most 32 GB of memory, it is free.

The VMs:


I have six VMs up and running all the time on this server. I have never had any performance issue; in fact, they are a lot faster than the servers at some of my clients. Of course, I have a clean installation of everything.

By the names of the machines you guys can figure out that I like old mythology :) and I'll explain the infra using it.

First we have Cerberus, the hell hound, which has Zentyal installed on it to protect the entrance to the hell realm. It is also used for external access, VPN, FTP and DNS.

Then we have Tartarus, the realm of the dead and also what supports the earth (or Gaia, which is the name of my extranet). This is an Oracle 11g database that holds all the repositories. It is the base for everything.

Also I have Niflheim, which is almost the same thing but for the Norse. This is an Oracle 12c database. There is GoldenGate running between both databases for replication and test purposes.

After that, I have Olympus, where Foundation, Planning, the EAS server and Reports are installed.

Then we have Pantheon, which sits on top of Olympus and is where OBIEE and Endeca are installed.

All these machines are running on top of Oracle Unbreakable Linux.

Finally, we have Hyperion, which is a Windows Server 2008 R2 machine where I have only Essbase and the ODI agent installed. Why did I do that? Because both tools heavily use the file system, and using Windows makes everything a lot easier to access, configure and work with.

Also, I found in an old version of Essbase (11.1.1.3) that some ESSCMD commands do not exist on Linux, like the "deploy studio" command. I had a client that had to migrate Essbase to Windows because of this and some performance issues.

However, the main reason was the file system, and I also installed some client tools on this machine; this way my friends need only my VPN and a Terminal Server session to access this machine and run some tests.

Enough about my infra, let us talk about the migration. First of all, there is no OWB migration as such. OWB is not like ODI, where you just need to get/patch a new version and update the repository.

For OWB you need to upgrade the entire database, and then you can connect using your new client (and if you use Windows, you will not have a client in the same version as the repository; however, we'll talk about this later).

OK, since I do not like to have many Oracle homes on my server, I decided to do an "in-place" upgrade. Upgrade is a strong word for this, because it is an installation from scratch, not a patch or anything like that.

If you have any type of virtualization, now is an excellent time to create a snapshot of your database server. Thank God I did that because, He knows, I needed it.

OK, after trying a lot of stuff and commands that never worked, I did it my own way and it worked. So, if someone sees something incredibly ugly (and maybe I won't even know it is ugly), please let me know.

Basically, for the "in-place" installation we need to trick the Oracle installer into believing that the server has no other Oracle installation. For this, you need to detach the Oracle home from your inventory.xml. Oracle gives us a command for this, but I could never make it work:

On Windows:

%ORACLE_HOME%\oui\bin\setup.exe -detachHome ORACLE_HOME=D:\oracle\product\11.2.0.2\dbhome_1

On Linux:  

$ORACLE_HOME/oui/bin/runInstaller -detachHome ORACLE_HOME=11.2_ORACLE_HOME_LOCATION

I found other "versions" of this command but I couldn't make any of them work either. Then I had to find my own way, which, I can guarantee, is a lot easier than this command: rename the "inventory.xml" to anything you want. In my case this file is in this folder:

/u01/app/oraInventory/ContentsXML/

This will make the Oracle installer believe that it is the first installation :)

After that you just need to rename the dbhome folder or the version folder; either one will do the trick.

/u01/app/oracle/product/

Renaming the folder removes any need for backups because everything you need will still be there.
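
As a rough sketch, the two renames on Linux could look like the lines below (the paths are from my environment, so adjust them to yours, and keep the renamed copies around because we will copy files back from them later):

# rename the inventory so the installer thinks it is the first install
mv /u01/app/oraInventory/ContentsXML/inventory.xml /u01/app/oraInventory/ContentsXML/inventory.xml.old

# rename the version folder so the new install does not see the old home
mv /u01/app/oracle/product/11.2.0 /u01/app/oracle/product/11.2.0_old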

Cool, after these two steps you can install the DB software. Make sure to install only the software; the database itself we will migrate later on.

I will not cover the installation here because I did not have any issues with it and there are many tutorials on the internet that cover it. Basically you only need to click Next and remember to select "software only". That is all.

OK, after the software install we need to copy some files from the backup we created when we renamed our previous directory.

We need to copy everything inside the old "dbs" folder to the new "dbs" folder. For me the path looks like this:

/u01/app/oracle/product/11.2.0/db_1/dbs/

By doing that, you have the initialization files from the old DB in the new one.

Also copy the files in the following folders to the respective ones in the new installation (see the sketch after this list):

  • ORACLE_HOME/network/admin
  • ORACLE_HOME/hostname_dbname
  • ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole_hostname_dbname
  • ORACLE_HOME/owb/network/admin
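
A minimal sketch of those copies on Linux, assuming the old home was renamed to 11.2.0_old as in the earlier sketch and that db_1 is the home folder name (hostname_dbname stays a placeholder for your actual host and database name):

# old (renamed) home and freshly installed home - adjust to your environment
OLD_HOME=/u01/app/oracle/product/11.2.0_old/db_1
NEW_HOME=/u01/app/oracle/product/11.2.0/db_1

cp -r $OLD_HOME/dbs/* $NEW_HOME/dbs/
cp -r $OLD_HOME/network/admin/* $NEW_HOME/network/admin/
cp -r $OLD_HOME/hostname_dbname $NEW_HOME/
cp -r $OLD_HOME/oc4j/j2ee/OC4J_DBConsole_hostname_dbname $NEW_HOME/oc4j/j2ee/
cp -r $OLD_HOME/owb/network/admin/* $NEW_HOME/owb/network/admin/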

With this, we have everything we need to upgrade the old DB to the new version. Now we need to connect to SQL*Plus as sysdba:

Go to the folder $ORACLE_HOME/rdbms/admin/ (this is the location of all the SQL scripts we need to run to upgrade the database) and then:

$ sqlplus / as sysdba

We now need to start the DB in upgrade mode. For this:

SQL> startup UPGRADE;

This will start the database in upgrade mode and give access only to sysdba users. Now we just need to run the following scripts:

Pre-Upgrade Script

SQL> spool upgrade_info.log

SQL> @utlu112i.sql

SQL> spool off

(This is the pre-upgrade script. Oracle says to copy it from the rdbms directory, but I did not understand why; if you have any problem, please follow Oracle's instructions.)

Upgrade Script

SQL> spool upgrade_info.log

SQL> @catupgrd.sql

SQL> spool off

(This is the upgrade script. It takes a while to run, so be patient.)

Post-Upgrade Script

SQL> spool pos_upgrade_info.log

SQL> @utlu112s.sql

SQL> spool off

(This script provides a summary of the upgrade at the end of the spool log. It displays the status of the database components in the upgraded database and the time required to complete each component upgrade. Any errors that occur during the upgrade are listed with each component and must be addressed)

Recompile Invalid Objects

SQL> @utlrp.sql

(This will recompile any remaining stored PL/SQL and Java code in another session)

After this, we need just to:

SQL> shutdown immediate;

SQL> startup;

That's it; that is everything I did to upgrade the Oracle database in place. As I said, I am not a DBA and, if a DBA reads this post, he will probably laugh a lot, but it worked and it was the only way I found to do it.

The Client

OK, now, if you have your client on a Windows machine, you will realize that Oracle did not release a Windows version of the OWB 11.2.0.4 client. The latest client for Windows is version 11.2.0.3. However, Oracle says that this version is fully supported to work with the 11.2.0.4 version. You only need to:

Download the 11.2.0.3 client for Windows;

Install patch 16568042;

Change the file:

$OWB_HOME/owb/bin/admin/Preference.properties

Set:

OverrideRepositoryVersionCheck=true

OverrideRuntimeVersionCheck=true

That is it. Your database is upgraded, your client is upgraded, and you only need to upgrade the OWB repository. I will not cover this here because it can easily be found on the internet.

Also, if you want to change the language of your OWB client, you only need to:

Open $ORACLE_HOME/ide/bin/ide.conf and add the following line to the file:

AddVMOption -Duser.language=en

I know that this is very different from what we have been posting here, but it took me two days to make this work, and I do not want anybody else losing another two days to do something that, in the end, was very simple. If you think about it now, for the "in-place" migration we only need to:

  1. Rename the inventory.xml file;
  2. Rename the db_home or the 11.2.0 folder;
  3. Install the new version of the software;
  4. Copy some files from the old install to the new one;
  5. Run the upgrade scripts.

That is all. I hope you guys found this useful, and until next time. Thank you.