Archive for Cloud

ODI in the hybrid database world – Azure SQL Database – BCP Utility

Posted in Cloud, ODI on May 8, 2023 by Rodrigo Radtke de Souza

Written on May 8, 2023 by Rodrigo Radtke de Souza

Hi all! Continuing the ODI in the hybrid database world series, let’s talk today about Azure SQL Database. This one is the simplest so far in our series, but it has some limitations.

The process is similar to what we have been doing for some time now. We will duplicate "IKM SQL to File Append" and add some extra steps at the end of it to push the file to the cloud and load the target table. In the Azure SQL Database case, we only need the BCP utility installed on the ODI agent server. You can read about the BCP utility here.

The limitation of using this utility (at least, I did not find a way around it) is that it does not work with compressed files, so it ends up sending the entire uncompressed file over the internet in batches (the batch size is configurable). I found a solution where you could first load a compressed file to Azure Blob Storage and then use Azure Data Factory to load that compressed file into Azure SQL Database, but that kind of breaks the simplicity of the examples that we are presenting in this series, so I won't go over it. This is what the IKM will look like:

The first five steps are from the original SQL to File Append IKM. The only new one is the step that calls BCP. This step is very straightforward:

You need to pass the name of the table in Azure SQL that you want to load, the path to the text file that you created in ODI, the SQL Server name (which will look like this: <MY_SQL_SERVER>.database.windows.net), the database name and, finally, the Azure SQL user and password that you want to use to connect to the cloud database. There are some additional parameters that indicate information about the file, like -t "|", which indicates that this is a pipe-delimited file, and -F 2, which indicates that the data rows begin on the second line of the file, since the first one is the header.
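
To make this more concrete, here is a sketch of what that OS command step could look like (server, database, table, user and file names are placeholders; -b is optional and overrides the default batch size):

bcp <TARGET_TABLE> in "<PATH_TO_FILE>.txt" -S <MY_SQL_SERVER>.database.windows.net -d <DATABASE_NAME> -U <AZURE_SQL_USER> -P <AZURE_SQL_PASSWORD> -c -t "|" -F 2 -b 1000

Here -c tells BCP to treat the file as character data, -t "|" sets the pipe delimiter, -F 2 skips the header line and -b 1000 sends the rows in batches of 1,000.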

When you run the mapping, you see the following:

If you open the logs, you are going to see that it sent the data in batches of 1,000 rows:

The batch size and some other interesting parameters can be found in the BCP documentation. You may play around with them and see which settings work best for your specific data load.

See you next time!


ODI in the hybrid database world – Oracle ADW – OCI/CLI + DBMS_CLOUD

Posted in Autonomous Database, Cloud, ODI on April 14, 2023 by Rodrigo Radtke de Souza

Written on April 14, 2023 by Rodrigo Radtke de Souza

Hi! In all the previous posts about ODI in the hybrid database world, I showed two versions of data loads for each technology, one using JDBC and another using text files. In the case of Oracle ADW, I posted just the JDBC example. So, let me correct that today. Let's see how you can also use text files to load data into Oracle ADW tables.

The process is very similar to what we have been doing for some time now. We will duplicate "IKM SQL to File Append" and add some extra steps at the end of it to push the file to the cloud and load the target table. In the ADW case, we will need a couple of things to make it work:

  • An Oracle Bucket, which is a container for storing objects in a compartment within an Object Storage namespace in the Oracle cloud. More information about it here.
  • The OCI CLI installed on the ODI agent server, which ODI will call to push the file to the Oracle bucket. The steps on how to install it can be found here.

You will need to create and install those two before proceeding. There is a third component needed, but this one already exists in Oracle ADW: the DBMS_CLOUD package. You may read about it here.

This package will be used to push the data from the Oracle bucket to the Oracle ADW table. There is a configuration that you may need to do regarding DBMS_CLOUD credentials, but I'll talk about that later.

This is what the final IKM will look like:

The first five steps are from the original KM. The last three are brand new. Let's first take a look at the GZIP step:

Here we are just zipping the text file before sending it to the cloud. Just notice that Oracle accepts other kinds of compression, so you may change it if you want to. I used GZIP because it seems to be a standard that works with all the public cloud vendors out there.
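
For reference, the step itself can be as simple as an OS command like this (a sketch, assuming gzip is available on the ODI agent server and <PATH_TO_FILE> is the text file generated by the previous steps):

gzip -f "<PATH_TO_FILE>.txt"

The -f flag overwrites any .gz file left over from a previous run, and the command produces <PATH_TO_FILE>.txt.gz.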

Next one is OCI PUT:

A very straightforward step that takes the gzipped file and uses the OCI CLI to PUT it in an Oracle bucket that belongs to an Oracle namespace. You may refer to the OCI PUT documentation here.
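
A sketch of the command behind this step could look like this (namespace, bucket and file names are placeholders):

oci os object put --namespace <ORACLE_NAMESPACE> --bucket-name <BUCKET_NAME> --file "<PATH_TO_FILE>.txt.gz" --force

The --force flag overwrites the object if it already exists in the bucket, which is handy for recurring loads.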

The last one is the DBMS_CLOUD.COPY_DATA step:

COPY_DATA may have other options. A full list can be found here.

All options are self-explanatory. In this case we are loading a text file with columns separated by pipes ("|"). One option that you need to pay attention to is file_uri_list: it indicates the URL of the text file that resides in your Oracle bucket. It may look like this:

https://swiftobjectstorage.region.oraclecloud.com/v1/namespace-string/bucket/filename

or like this:

https://objectstorage.region.oraclecloud.com/n/namespace-string/b/bucket/o/filename
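
Putting it all together, the COPY_DATA call in this step could look something like the sketch below (table name, credential and file URI are hypothetical; credential_name can be omitted for a public bucket, as discussed next):

begin
  DBMS_CLOUD.copy_data(
    table_name      => 'MY_TARGET_TABLE',
    credential_name => 'OBJ_STORE_CRED',
    file_uri_list   => 'https://objectstorage.region.oraclecloud.com/n/namespace-string/b/bucket/o/my_file.txt.gz',
    format          => json_object('delimiter' value '|', 'compression' value 'gzip')
  );
end;

The format parameter is where you describe the file: the pipe delimiter and the gzip compression in this case; add 'skipheaders' value '1' if your file includes a header row.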

To make things simpler in this example, I set the bucket as public, so I could access it without credentials. However, in a production environment, that won’t be the case and you will need to create a credential using DBMS_CLOUD.create_credential. You will run this command only once in your Oracle ADW instance:

begin
  DBMS_CLOUD.create_credential(
    credential_name => 'OBJ_STORE_CRED',
    username        => 'XXXXX',
    password        => 'YYYYY'
  );
end;

You can read more about how to create the credential here. After you create the credential, you will use it in the COPY_DATA command as a parameter (credential_name).

When we run the mapping, we get something like this:

Very similar to the other posts: we are generating a text file, gzipping it, calling the OCI CLI to send the file to the cloud and then calling the DBMS_CLOUD.COPY_DATA procedure to load the data from the Oracle bucket into our target table.

I hope you liked it! See you next time!

ODI in the hybrid database world – Google BigQuery – Simba JDBC

Posted in BigQuery, Cloud, ODI on November 23, 2022 by Rodrigo Radtke de Souza

Hi all! Continuing our series, today we will talk about how to load data into BigQuery using ODI. There will be two posts, one talking about the Simba JDBC driver and another one talking about the Google Cloud SDK Shell. You will notice that the first approach (using Simba JDBC) has a lot of challenges, but I think it's worth looking at, so you have options (or at least know why you should avoid it).

First, you need to download the Simba JDBC driver and add it to your ODI client/agent (similar to what was done with the Snowflake JDBC driver here). Also, you need to "duplicate" one ODI technology (I used Oracle in this example) and name it BigQuery:

In JDBC Driver, you will add the Simba driver class, and in URL you will add your connection to Google BigQuery. I won't go over the details here, since there are a lot of ways for you to create this URL, but you may check all of them in this link.
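
As an illustration only (your authentication method may differ), a service-account-based URL typically follows the pattern below, paired with the Simba driver class com.simba.googlebigquery.jdbc42.Driver; project, e-mail and key path are placeholders:

jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=<MY_PROJECT>;OAuthType=0;OAuthServiceAcctEmail=<SERVICE_ACCOUNT_EMAIL>;OAuthPvtKeyPath=<PATH_TO_KEY_JSON>;

OAuthType=0 means service-account authentication; other OAuthType values cover user accounts and pre-generated tokens.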

Press “Test Connection” to see if it is all working as expected:

Create a new Physical schema and give it the name of your BigQuery Dataset:

Create a new Logical Schema and a Model. If all is correct, you will be able to Reverse Engineer as you normally do with other technologies:

Let’s go ahead and just create a new mapping. This is a simple one: getting data from an Oracle database and loading it to BigQuery:

As LKM, I’m using LKM SQL to SQL:

As IKM, I just used a Global one:

When I ran it, it failed. If we check Operator, we will see that it couldn't insert new rows because it was not able to create the C$ temporary table. If we look further, we see that BigQuery considers "$" an illegal character in table names:

If we get the create table statement from ODI Operator and try to execute it in BigQuery, we will see a lot of issues with it:

First, as I said before, we cannot have $ signs, so let's try to remove them:

It also does not accept the NULL keyword. Removing it gave another error:

VARCHAR2 is not a thing in BigQuery, so if we change it to STRING, it will work fine:
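
For reference, after all those manual fixes, the statement that finally runs looks roughly like this (a sketch with hypothetical dataset, table and column names):

CREATE TABLE MY_DATASET.C_0TARGET_TABLE
(
  CUSTOMER_CODE STRING,
  CUSTOMER_NAME STRING
);

No $ sign in the table name, no NULL keywords and STRING instead of VARCHAR2.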

So, just to create a temporary C$ table, we had to make several changes to it. It means that the ODI/BigQuery integration is not as straightforward as it was for Oracle Cloud or Snowflake. We will have to make some changes and additions to the BigQuery technology in ODI. Let's start with the $ signs. Double-click BigQuery and go to the Advanced tab. There, let's remove the $ sign from the temporary table prefixes (I removed it only from C$ for this example):

Also, you need to remove the DDL Null Keyword:

You may need to make many more changes depending on what you need to load, but for this example we will keep it as simple as possible.

The next step is to create a String datatype. You may go to the BigQuery technology and duplicate an existing datatype. For this example, I used Varchar2, and then I changed the Name/Code/Reverse Code/Create table syntax/Writable Datatype syntax.
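
The duplicated datatype would end up with values along these lines (a sketch; adjust it to your reverse-engineering needs):

Name: STRING
Code: STRING
Reverse Code: STRING
Create table syntax: STRING
Writable Datatype syntax: STRING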

Another step is to change our source technology, in this case Oracle. We need to do that because ODI needs to know what to do when converting a datatype from one technology to another. We do this by going to the Oracle technology and, for each source datatype that we use, defining what it maps to in the new target technology (BigQuery). For this example, we will only be converting Varchar2 columns to String columns, so we will make just one change. In a real-world scenario, you would need to change all the datatypes that you use. Double-click Oracle's VARCHAR2 datatype, click on the "Converted To" tab and select String in the BigQuery technology.

One last change is to go to our BigQuery datastore and change the column types to String:

This should be enough for us to test the interface again. When we run it, no error happens this time:

However, the data load takes forever to complete (I cancelled it after some hours of trying to load a 1M-row table). The ODI job was able to create the work table and I could see data flowing into BigQuery, but the speed was just too slow:

I tried changing the Array Fetch/Batch Update sizes and the Parallelism settings in the Google BigQuery Data Server ODI object, but none of the values that I tried performed as fast as I wished:

I'm not sure if the slowness was due to something in my architecture, but at this point I just thought that, even if it worked reasonably fast, it wouldn't be as fast as creating a file and sending it directly to BigQuery (similar to what we did for Snowflake). Also, the amount of customization needed to load a simple table is so high that I really don't want to think about how bad it would get for complex mappings.

Not surprisingly, my tests with text files were extremely fast and the customizations needed were minimal compared to the JDBC method, so that's what we will cover in the next post.

Thanks! See you soon!

How to use your existing ODI on premise to seamlessly integrate PBCS (Part 3: Inbound Jobs)

Posted in Cloud, EPM Automate, ODI, PBCS, Uncategorized on March 18, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the series on how to integrate PBCS seamlessly using your existing ODI environment (Part 2 here), today we'll talk about Inbound Jobs.

We already know that we'll need 4 types of jobs for our integration, so let's see what we need to do to create an Inbound Job:

The first thing we need to do is set the location to Inbox (or Outbox in the case of an extract). This is what enables the "Save as Job" button. If you choose Local, you can only run it once.

Second, the source type. We have 2 choices there: Essbase and Planning. Essbase has the same format that we are used to from Essbase exports; the problem is that if we select Essbase, we'll need one job per plan type.

Planning, on the other hand, has a special format that is very handy for creating dynamic objects in ODI, as we can see in the table below:

As we can see, the Planning format has a column for the account, then the periods, a POV column and the data load cube name (plan type).
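
To illustrate, a hypothetical two-row sample of that layout (pipe-delimited, matching the delimiter we'll choose below) could look like this:

Account|Jan|Feb|Mar|Point-of-View|Data Load Cube Name
Net_Sales|1000|1100|1050|"Entity01,Product01,FY21,Forecast,Working"|Plan1
Units|10|11|10|"Entity01,Product01,FY21,Forecast,Working"|Plan1

A real file would carry all twelve period columns; the POV column holds all the remaining dimensions as a comma-separated list.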

Having the periods as columns is already a nice thing, since the load will be way faster and more optimized: we'll have one row for the entire year.

Of course, this will only be true when we are talking about forecast, because for actuals we normally load just one month at a time… even so, I prefer this format over having one column for the period and another for the data.

The POV is also nice because, no matter how many dimensions we have in a plan type, everything will be in the same column. Integration then becomes simple, since we just need to concatenate all columns but account into one column. I recommend using regular expressions and LISTAGG to do so (best function ever for dynamic components).
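
As a sketch of that idea (table and column names are hypothetical), a query like this can generate the POV concatenation expression for you instead of hand-coding it:

SELECT LISTAGG(column_name, ' || '','' || ')
         WITHIN GROUP (ORDER BY column_id) AS pov_expression
  FROM all_tab_columns
 WHERE table_name = 'MY_SOURCE_TABLE'
   AND column_name NOT IN ('ACCOUNT', 'DATA');

The result is a string such as ENTITY || ',' || PRODUCT || ',' || SCENARIO that you can inject into a dynamic ODI mapping expression.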

And, to finish, we need to inform the plan type into which each row of data will be loaded. Very nice: you can load all your plan types at once. One job, one file, and everything can be done by one generic component in ODI.

After that, we need to choose the delimiter used in the file. In this case, I chose pipe ("|") because it's very unlikely for this character to appear in the middle of any metadata.

Finally, we just need to inform the name of the file. That's all we need to create a job that we can call using EPM Automate anytime we need it.
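
Once the job is saved, a sketch of the EPM Automate calls that would trigger it (user, URL, job and file names are placeholders) looks like this:

epmautomate login <USER> <PASSWORD> <PBCS_URL>
epmautomate uploadfile data.txt
epmautomate importdata MY_INBOUND_JOB data.txt
epmautomate logout

uploadfile pushes the file to the inbox, and importdata runs the Inbound Job we just created against it.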

Oracle Always Free cloud offering (Part 3)

Posted in ACE, Autonomous Database, Cloud, Data Warehouse, Network, Oracle, Oracle Database on May 21, 2020 by RZGiampaoli

Hey guys, how are you doing? Today I'll continue with the Oracle Always Free cloud offering and we'll finally start provisioning a VM in our environment. If you want to know more about how it works (Part 1) or get an overview of the dashboard (Part 2), please check my previous posts.

The first thing we need to do is check for the best practices and see if everything in our environment is adequate.

  • IP Addresses Reserved for Use by Oracle:
    • Certain IP addresses are reserved for Oracle Cloud Infrastructure use and may not be used in your address numbering scheme (169.254.0.0/16).
    • These addresses are used for iSCSI connections to the boot and block volumes, instance metadata, and other services.
    • Three IP Addresses in Each Subnet
      • The first IP address in the CIDR (the network address)
      • The last IP address in the CIDR (the broadcast address)
      • The first host address in the CIDR (the subnet default gateway address)
    • For example, in a subnet with CIDR 192.168.0.0/24, these addresses are reserved:
      • 192.168.0.0 (the network address)
      • 192.168.0.255 (the broadcast address)
      • 192.168.0.1 (the subnet default gateway address)
    • The remaining addresses in the CIDR (192.168.0.2 to 192.168.0.254) are available for use.
  • Essential Firewall Rules
    • All Oracle-provided images include rules that allow only “root” on Linux instances or “Administrators” on Windows Server instances to make outgoing connections to the iSCSI network endpoints (169.254.0.2:3260, 169.254.2.0/24:3260) that serve the instance’s boot and block volumes.
      • Oracle recommends that you do not reconfigure the firewall on your instance to remove these rules. Removing these rules allows non-root users or non-administrators to access the instance’s boot disk volume.
      • Oracle recommends that you do not create custom images without these rules unless you understand the security risks.
      • Running Uncomplicated Firewall (UFW) on Ubuntu images might cause issues with these rules. Because of this, Oracle recommends that you do not enable UFW on your instances.
  • System Resilience
    • Oracle Cloud Infrastructure runs on Oracle’s high-quality Sun servers. However, any hardware can experience a failure:
      • Design your system with redundant compute nodes in different availability domains to support failover capability.
      • Create a custom image of your system drive each time you change the image.
      • Back up your data drives, or sync to spare drives, regularly.
      • If you experience a hardware failure and have followed these practices, you can terminate the failed instance, launch your custom image to create a new instance, and then apply the backup data.
  • Uninterrupted Access to the Instance
    • Make sure to keep the DHCP client running so you can always access the instance (see the quick check after this list). If you stop the DHCP client manually or disable NetworkManager (which stops the DHCP client on Linux instances), the instance can't renew its DHCP lease and will become inaccessible when the lease expires (typically within 24 hours). Do not disable NetworkManager unless you use another method to ensure renewal of the lease.
    • Stopping the DHCP client might remove the host route table when the lease expires. Also, loss of network connectivity to your iSCSI connections might result in loss of the boot drive.
  • User Access
    • If you created your instance using an Oracle-provided Linux image, you can use SSH to access your instance from a remote host as the opc user. After logging in, you can add users on your instance.
    • If you created your instance using an Oracle-provided Windows image, you can access your instance using a Remote Desktop client as the opc user. After logging in, you can add users on your instance.
  • NTP Service
    • Oracle Cloud Infrastructure offers a fully managed, secure, and highly available NTP service that you can use to set the date and time of your Compute and Database instances from within your virtual cloud network (VCN).
    • We recommend that you configure your instances to use the Oracle Cloud Infrastructure NTP service.
  • Fault Domains
    • A fault domain is a grouping of hardware and infrastructure that is distinct from other fault domains in the same availability domain. Each availability domain has three fault domains. By properly leveraging fault domains you can increase the availability of applications running on Oracle Cloud Infrastructure.
    • Your application’s architecture will determine whether you should separate or group instances using fault domains.
  • Customer-Managed Virtual Machine (VM) Maintenance
    • When an underlying infrastructure component needs to undergo maintenance, you are notified before the impact to your VM instances. You can control how and when your applications experience maintenance downtime by proactively rebooting (or stopping and starting) your instances at any time before the scheduled maintenance event.
    • A maintenance reboot is different from a normal reboot. When you reboot an instance for maintenance, the instance is stopped on the physical VM host that needs maintenance, and then restarted on a healthy VM host.
    • If you choose not to reboot before the scheduled time, then Oracle Cloud Infrastructure will reboot and migrate your instances before proceeding with the planned infrastructure maintenance.
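
Regarding the DHCP client note above, a quick sanity check on a systemd-based Linux instance could look like this (assuming NetworkManager manages the DHCP lease, as on Oracle-provided images):

sudo systemctl is-active NetworkManager
sudo systemctl enable --now NetworkManager

The first command tells you whether NetworkManager is running; the second enables and starts it if it was accidentally disabled.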

When you work with Oracle Cloud Infrastructure, one of the first steps is to set up a virtual cloud network (VCN) for your cloud resources. I was thinking of giving a more detailed explanation here, but this topic is very big, so I decided to do a simple step-by-step on how to set up your network so you can access your resources from your computer.

This is not the best way to create a complex network or anything like that; it's just a way to quickly start using your Always Free components and test your VM and DB.

To start, we will click on the "Setup a network with wizard" quick link:

After you click there, you have 2 options:

Select VCN with Internet Connectivity, and then click Start VCN Wizard. On the next page, just insert the name of your VCN and leave everything else as it is (unless you have a reason to change it). Click Next.

The next page will show everything that will be created by the wizard. Note that you can create each piece of it manually, but for simplicity the wizard should be enough. Click on Create.

The next screen will show the installation of what was requested:

And that's it for the network. Now we can start to create our databases and VMs, all inside our network, and they are all going to "see" each other.

Again, this is a very simple way to set up your network. Every single step above can be set up individually with greater complexity, but most of that would be impossible in the Always Free tier, since a lot of the complex features need to be paid for.

You can get a lot more information in the Jumpstart your Cloud Skills modules under Start Exploring. There are a lot of videos there explaining a lot of things. For simplicity, I'll list here all the modules available there, for people who want to see the videos before they subscribe to OCI.

Category | Module Name | Number of Submodules | Run Time (Minutes)
Core Infrastructure | Getting Started with Oracle Cloud Infrastructure | 1 | 13
Core Infrastructure | Virtual Cloud Network L100 | 10 | 116
Core Infrastructure | Virtual Cloud Network L200 | 4 | 71
Core Infrastructure | Compute L100 | 6 | 60
Core Infrastructure | Compute L200 | 6 | 70
Core Infrastructure | VPN Connect L100 | 2 | 28
Core Infrastructure | FastConnect L100 | 2 | 18
Core Infrastructure | VPN Connect L200 | 2 | 15
Core Infrastructure | FastConnect L200 | 2 | 24
Core Infrastructure | Block Volume L100 | 6 | 47
Core Infrastructure | File Storage L100 | 4 | 55
Core Infrastructure | Object Storage L100 | 3 | 40
Core Infrastructure | Storage L200 | 3 | 41
Core Infrastructure | Load Balancing L100 | 3 | 30
Core Infrastructure | Load Balancing L200 | 2 | 24
Core Infrastructure | HA and DR L300 | 2 | 31
Database | Database L100 | 4 | 45
Database | Database Capacity Planning L200 | 4 | 66
Database | Database HA L200 | 2 | 36
Database | Database Migration L200 | 3 | 33
Database | Database CLI L200 | 1 | 10
Database | Data Safe L100 | 1 | 15
Database | Autonomous Database L100 | 5 | 52
Database | Autonomous Database L200 | 5 | 79
Database | Exadata Cloud Service Overview L300 | 1 | 100
Database | Exadata API and CLI L300 | 1 | 95
Database | Exadata Patching L300 | 1 | 61
Database | Exadata Backup and Recovery L300 | 1 | 57
Solutions and Platform | Functions L100 | 3 | 48
Solutions and Platform | Events L100 | 3 | 48
Solutions and Platform | Container Engine for Kubernetes L100 | 3 | 27
Solutions and Platform | Registry L100 | 4 | 21
Solutions and Platform | DNS Traffic Management L100 | 3 | 26
Solutions and Platform | DNS Zone Manager L100 | 2 | 12
Solutions and Platform | Resource Manager L100 | 1 | 22
Solutions and Platform | Monitoring L100 | 1 | 35
Solutions and Platform | Streaming L100 | 1 | 11
Migration | Data Migration L100 | 3 | 38
Migration | OCI-Classic to OCI Migration | 1 | 36
Migration | OCI-Classic to OCI Migration Tools | 1 | 71
Governance and Administration | Identity and Access Management L100 | 5 | 65
Governance and Administration | Identity and Access Management L200 | 1 | 107
Governance and Administration | Billing and Cost L100 | 2 | 37
Governance and Administration | Service Requests and SLAs | 1 | 19
Governance and Administration | Security Overview L100 | 10 | 61
Governance and Administration | Web Application Firewall L100 | 2 | 30
Governance and Administration | Key Management L100 | 1 | 18

The next thing we can do is create a load balancer. To do that, we just have to click on Create Load Balancer in the Quick Actions and then fill in the new page like this:

The most important thing here is to make sure you select Micro in the Bandwidth selection. This one is free (you can also see the Always Free Eligible logo there). Click Next after this.

On the next page we need to choose the load balancing policy; which one you select depends on your application. We have 3 options:

  • Weighted Round Robin: This one distributes the requests sequentially across the servers (one each)
  • IP Hash: This one guarantees that requests from one specific client always go to the same server
  • Least Connections: This one always selects the server with the fewest connections

Next, you need to add backends. We don't have any created now, but we can add them later. And finally, we can change the Health Check policy, but for what we are doing we can just leave it as it is. Click Next. On this screen we have to create a listener:

Here we have 3 options of traffic listener: HTTPS, HTTP and TCP. I'm going to select TCP without SSL for simplicity, but if you select HTTPS you'll need to have SSL certificate files and private keys. It's safer, but if you just want to play around, it's better to select HTTP or TCP.

For TCP we just have these options:

If you select USE SSL you also need to provide the Digital Certificate and private keys.

After you make your selection, just finish the process. You'll be taken to the load balancer monitoring page, where you'll see something like this:

And that's it for the network. Next time we'll provision a VM and set up our machine to connect to it.

I hope you guys enjoy this and see you soon.

Oracle Always Free cloud offering (Part 2)

Posted in ACE, Autonomous Database, Cloud, Data Warehouse, Oracle, Oracle Database on May 18, 2020 by RZGiampaoli

Hey guys, how are you? Today I'll continue talking about the Oracle Always Free cloud offering and try to summarize what you can do after your account is set up. If you want to know how to set up your account, you can find it HERE.

After you receive an email saying everything is set, you can log in to your account and you'll see a screen like this:

This is the main dashboard. Here's where you'll create your databases and your VMs, convert your account to paid, manage your account, ask for help, etc. Let's start with the main dashboard:

  • (1) Quick Actions: Here you'll find the most important links as quick actions.
    • (2) Compute: This is where you can create a VM to be used with your databases. You can use it to install tools and develop whatever you want inside your environment.
    • (3) Networking: Here's where you set up your cloud network. This is the first step you must take to ensure your VM and databases will be in the same network and can reach each other.
    • (4) Autonomous Transaction Processing: This is where you create a transaction database.
    • (5) Autonomous Data Warehouse: This is where you can create your Data Warehouse database.
    • (6) Search: A quick way to view all your resources.
  • (7) Account Center: Here's a quick place to manage your account and see how many credits you have and your billing information.
  • (8) Main Menu: This is the main menu where you have access to everything that you can do inside your cloud.
  • (9) Top Bar: Where you can change regions (in case you have more than one), access the Cloud Shell (for OS commands), see the help, ask for help in the chat, change language and see your profile.
  • (10) Start Exploring: Here's a place where you can find articles to help you start setting up your environment.
  • (11) What's New: And finally, here's where you can see news about Oracle Cloud, like releases and features that will be added.

One important thing to add here: before you add or create anything, look for the "Always Free Eligible" logo or description to be sure you're not buying anything by mistake. Now, about the main menu:

  • Core Infrastructure: Here's where you can set up your VMs, networks and storage options.
  • Database: Here's where you can set your database options, backups and servers (VM or bare metal).
  • Data and AI: Here's where you can set up your Big Data and AI environment.
  • Solutions and Platform: Here's where you can set up your Analytics cloud services, integrations, monitoring and marketplace.
  • More Oracle Cloud Services: Here's where you have the other cloud services.
  • Governance and Administration: And here is where you can administer your environment: provisioning security, account management, identity and governance.

As you can see, there's a lot that can be done, but we'll concentrate on the "Always Free" content. The following list summarizes the Oracle Cloud Always Free-eligible resources that you can provision in your tenancy:

  • Compute (up to two instances)
  • Autonomous Database (up to two database instances)
  • Load Balancing (one load balancer)
  • Block Volume (up to 100 GB total storage)
  • Object Storage (up to 20 GiB)
  • Vault (up to 20 keys and up to 150 secrets)

In the next post we’ll setup our environment. See you soon guys.

Oracle Always Free cloud offering (Part 1)

Posted in ACE, Cloud, Oracle, Oracle Database, Tips and Tricks on May 6, 2020 by RZGiampaoli

Hey guys how are you?

I decided to do some posts about the Oracle Always Free offering: how it works, how you set things up, a few things we can do with it and maybe more. I think it's fair to start with what it is and what you need to do to get one.

Basically, Always Free is a service for anyone who wants to try the world's first self-driving database and Oracle Cloud Infrastructure for an unlimited time. The idea is to let people explore the full functionality of Oracle Autonomous Database and Oracle Cloud Infrastructure, including Compute VMs, Block and Object Storage, and Load Balancer: all of the essentials for developers to build complete applications on Oracle Cloud.

Oracle’s Free Tier program has two components:

  • Always Free services, which provide access to Oracle Cloud services for an unlimited time
  • Free Trial, which provides $300 in credits for 30 days to try additional services and larger shapes

The new Always Free program includes the essentials users need to build and test applications in the cloud: Oracle Autonomous Database, Compute VMs, Block Volumes, Object and Archive Storage, and Load Balancer. Specifications include:

  • 2 Autonomous Databases (Autonomous Data Warehouse or Autonomous Transaction Processing), each with 1 OCPU and 20 GB storage
  • 2 Compute VMs, each with 1/8 OCPU and 1 GB memory
  • 2 Block Volumes, 100 GB total, with up to 5 free backups
  • 10 GB Object Storage, 10 GB Archive Storage, and 50,000/month API requests
  • 1 Load Balancer, 10 Mbps bandwidth
  • 10 TB/month Outbound Data Transfer
  • 500 million ingestion Datapoints and 1 billion Datapoints for Monitoring Service
  • 1 million Notification delivery options per month and 1000 emails per month

Well, if you ask me, this is far better than installing Oracle XE on your machine and configuring everything there to learn or to create some small app. In fact, if you want to learn, it's far better to start learning in a cloud environment, since every day we have more and more companies migrating to the cloud.

Ok, but what do you need to do to get one? It's actually very easy: you just need to access this link and click on the Start for Free button. After that, you have to fill out a short form where you need to provide:

  • Your email and user information, like address and cellphone
  • You need to validate your cellphone via a text message (Oracle will send a code to your cell)
  • You need to choose the region where you'll have your OCI (Oracle Cloud Infrastructure)
    • This needs to be as close as possible to your real region to decrease latency and improve network performance
    • Some regions are not available for Always Free (it's written next to the region name whether it is available or not)
  • And you need to add a credit card to your account
    • You'll not be charged, but you may see 1 dollar/euro/… getting charged; it'll be returned
    • Also, Revolut cards don't work; you need a proper credit card.

And that's it, Oracle will create your account (it takes around 15 minutes until you receive an email with further instructions; bear in mind that because of COVID-19, it's taking several days to create a new account). After you receive your email, you can log in to your dashboard and start to create your network, disks, databases, VMs and more.

We’ll see how to configure a database in my next post. I hope you enjoy this and see you soon.

Oracle Ramps Up Free Online Learning and Certifications for Oracle Cloud Infrastructure and Oracle Autonomous Database

Posted in Certification, InfraStructure, Oracle, Oracle Database on April 14, 2020 by RZGiampaoli

Hey guys how are you?

Just a quick one today: Oracle is offering free access to online learning content and certifications for a broad array of users of Oracle Cloud Infrastructure and Oracle Autonomous Database, available until May 15, 2020.

This is a great opportunity and if you want to learn more, you can find it here.

Thank you guys and see you soon.

Pushing files to GCS (Google Cloud Storage) using ODI and GSUTIL

Posted in Cloud, GCS, Google, ODI on January 6, 2020 by Rodrigo Radtke de Souza

Hi all, today's post is a short one, but since I didn't find any other post related to this on the Internet, I thought it could be useful for someone else. Recently I came across a requirement to push CSV files to GCS using ODI. After some research, I saw that Google has a utility called "gsutil", a tool that enables you to access Cloud Storage from the command line. So, if something is accessible from a command line, ODI can surely do it too.

First we need to have gsutil installed on the server that will be used by ODI to run the OS commands (in my case it was Windows). After that, we need to configure how gsutil will connect to our GCS instance. You will probably create a service account in GCS to be used by ODI, with the correct read/write permissions, and from it you may download its private key file. This is a JSON file that contains a private key and all the login information that will be used to configure gsutil, so it can authenticate with GCS. Save the JSON file on the ODI agent server and run the following command there:

gcloud auth activate-service-account <ACCOUNT_NAME> --key-file="<JSON_FILE_LOCATION>" --project=<GCS_PROJECT_NAME>

Now we are all set to start using ODI to issue commands to GCS. To push a CSV file to the cloud, we only need to create an ODI procedure, select "Operating System" as the technology and write a simple cp command on it, like below:

gsutil cp “<CSV_FILE_TO_BE_UPLOADED>” gs://<GCS_BUCKET_NAME>


When you run the procedure, the files will be pushed to the cloud. A very simple but powerful example of what we can accomplish with ODI + gsutil. For a complete list of gsutil commands, please see its documentation page.


Hope you liked it. Thanks!