Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 6: Java codes)

Posted in Essbase, Java, ODI on February 9, 2021 by radk00

Hi all, let us demonstrate some examples of Java code that you may use to retrieve statistics out of Essbase. Since we want to store those stats in an Oracle table, let us begin with an example of how to connect to an Oracle DB, which is simple: we just need the DB URL, username, and password. Below is how we may create the connection. We also have a PreparedStatement that we will use to do the inserts into our Oracle table.
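Something along these lines would do it. This is a minimal sketch: the JDBC URL, the credentials, and the ESSBASE_STATS table with its columns are all hypothetical names, so adjust them to your own generic stats table.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    // Connect to the Oracle DB that will store the Essbase statistics
    Connection conn = DriverManager.getConnection(
            "jdbc:oracle:thin:@//myhost:1521/myservice",   // hypothetical DB URL
            "STATS_USER", "STATS_PWD");

    // One generic PreparedStatement that serves every kind of stat
    PreparedStatement stmt = conn.prepareStatement(
            "INSERT INTO ESSBASE_STATS (APP_NAME, CUBE_NAME, STAT_TYPE, STAT_NAME, STAT_VALUE, LOAD_DATE) "
          + "VALUES (?, ?, ?, ?, ?, SYSDATE)");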

Since we have a generic table, we can have just one PreparedStatement and use it to insert all kinds of stats there.

Connecting to Essbase is also easy; the only additional information is that you also need to pass the provider service, which may be embedded or a standalone Provider Services (APS) instance. Here we basically sign in to the provider service and then select the OLAP server that we want to use.
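A minimal sketch of that sign-on, following the standard Essbase Java API samples (the user, password, provider URL, and server name are placeholders):

    import com.essbase.api.session.IEssbase;
    import com.essbase.api.domain.IEssDomain;
    import com.essbase.api.datasource.IEssOlapServer;

    // Sign on to the provider service ("embedded" also works as the URL)
    IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
    IEssDomain dom = ess.signOn("admin", "password", false, null,
            "http://apsserver:13080/aps/JAPI");

    // Select the OLAP server that we want to use
    IEssOlapServer olapSvr = dom.getOlapServer("essbaseserver");
    olapSvr.connect();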

Now let us see the first code to get some cube stats. First, we get all the cubes from all the apps, and for each cube we get its properties (which come back as an array of key/value pairs). This information is added to the PreparedStatement and executed (inserted) into the DB.
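A sketch of that loop, reusing the PreparedStatement from the first sketch. getPropertyNames() is the call we use for the key side; the value accessor and the statement columns are assumptions here, so check the IEssCube Javadoc of your JAPI version for the exact property getters.

    import com.essbase.api.base.IEssIterator;
    import com.essbase.api.datasource.IEssOlapApplication;
    import com.essbase.api.datasource.IEssCube;

    // For each app and cube, insert every property as one row in the DB
    IEssIterator apps = olapSvr.getApplications();
    for (int i = 0; i < apps.getCount(); i++) {
        IEssOlapApplication app = (IEssOlapApplication) apps.getAt(i);
        IEssIterator cubes = app.getCubes();
        for (int j = 0; j < cubes.getCount(); j++) {
            IEssCube cube = (IEssCube) cubes.getAt(j);
            for (String prop : cube.getPropertyNames()) {       // the "key" side
                stmt.setString(1, app.getName());
                stmt.setString(2, cube.getName());
                stmt.setString(3, "CUBE_STATS");
                stmt.setString(4, prop);
                stmt.setString(5, cube.getPropertyValue(prop)); // assumption: value getter may differ
                stmt.executeUpdate();
            }
        }
    }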

The result will be something like this:

Let us jump to a second example. In Java we can issue any MaxL command using the “IEssMaxlSession” class. The result set contains columns and rows, similar to what we get in EAS. Then we need to loop through the rows and get the columns that we need. This information is also added to the PreparedStatement and executed (inserted) into the DB.
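A sketch of that MaxL execution, based on the JAPI MaxL sample (the row/column getters on the result set vary by version, so confirm them in the IEssMaxlResultSet Javadoc):

    import com.essbase.api.datasource.IEssMaxlSession;
    import com.essbase.api.datasource.IEssMaxlResultSet;

    // Any MaxL statement can be executed through an IEssMaxlSession
    String s_application = "Sample", s_cube = "Basic";
    IEssMaxlSession maxl = (IEssMaxlSession) dom.getOlapServer("essbaseserver");
    maxl.open();
    maxl.execute("query database " + s_application + "." + s_cube
            + " list all file information");

    // Columns and rows, similar to what EAS shows: loop the rows, pick the
    // columns you need and feed them into the same PreparedStatement as before
    IEssMaxlResultSet rs = maxl.getResultSet();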

The result will be something like below.

The last example is how we can get stats from the Essbase outline. We can get member information using IEssMemberSelection with a custom query, or find the member directly in the outline using find member. The result contains a set of members that we may loop through and analyze. In this case we decided to categorize the results by their storage type.
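A sketch of that query, reusing the executeQuery() call shown later in this series (the share-option enum is my assumption for reading the storage type, so verify it against the IEssMember Javadoc):

    import com.essbase.api.base.IEssIterator;
    import com.essbase.api.metadata.IEssCubeOutline;
    import com.essbase.api.metadata.IEssDimension;
    import com.essbase.api.metadata.IEssMember;
    import com.essbase.api.metadata.IEssMemberSelection;

    // Query all descendants of a dimension and bucket them by storage type
    IEssCubeOutline otl = cube.openOutline();
    IEssDimension dimension = (IEssDimension) otl.getDimensions().getAt(0);
    IEssMemberSelection sel = cube.openMemberSelection("outline stats");
    sel.executeQuery(dimension.getName(),
            IEssMemberSelection.QUERY_TYPE_DESCENDANTS, 0, null, null, null);

    int stored = 0, dynCalc = 0, neverShare = 0;
    IEssIterator members = sel.getMembers();
    for (int i = 0; i < members.getCount(); i++) {
        IEssMember mbr = (IEssMember) members.getAt(i);
        IEssMember.EEssShareOption opt = mbr.getShareOption(); // storage type
        if (opt == IEssMember.EEssShareOption.DYNAMIC_CALC) dynCalc++;
        else if (opt == IEssMember.EEssShareOption.NEVER_SHARE) neverShare++;
        else stored++;
    }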

Again, the result will be something like the one below.

That is it for today. In the next post I will show you how you can glue it all together with ODI. See you soon!

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 5: Automating using Java Essbase API)

Posted in Essbase, Java, ODI on February 8, 2021 by radk00

Hi all! Let us talk now about how we can automate this stat gathering using the Java Essbase API. Java is the key technology here since it can easily connect to and manipulate Essbase through its API. It can also connect to an Oracle database to store our results, run OS commands, and more, all in one single code base. Java is also great since it may be easily deployed to ODI using Procedures and scheduled using the ODI Operator. All in all, combining ODI and Java creates a powerful and seamless integration that goes beyond the database boundaries.

Let us begin with some Java Essbase API basics. The main goal is to develop one single program that connects to Essbase, retrieves the statistics information, and loads it into the Oracle database.

The Essbase API is very similar to what we see in EAS, in the sense that the structure of the classes follows the same architecture as an Essbase server (Server->App->Cube->Otl), which makes it easy to find what you are looking for in the API documentation.

Since we will store this information in an Oracle table, we will also need to know a little bit about the Oracle Java API (JDBC), but luckily this one is straightforward.

With those two sets of APIs, we are good to retrieve all the information that we need from Essbase. Each stat has its own number of columns and metrics, so if we created one table for each kind of structure it would be very tricky to maintain and harder to write any kind of generic code. The best way to store the information is to have just one table where we keep the properties in rows instead of columns; this way we have a single structure for every kind of information, no matter how many columns it returns, and we may create generic code around it. Our final table would look like below.

That is it for this post. In the next one I will share some examples of how to connect to Oracle DB and Essbase and how to retrieve some stats out of them. Stay tuned!

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 4: Dynamic Calculator Cache)

Posted in Uncategorized on February 4, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the Essbase Statistics DW: How to automatically administrate Essbase using ODI series (Part 3 Here), today we’ll talk about Dynamic Calculator Cache.

The dynamic calculator cache is basically a buffer in memory that stores all the expanded (uncompressed) blocks used to calculate the dynamic members of the dense dimensions. An expanded block includes Stored, Never Share, Dynamic Calc, and Dynamic Time Series members.

This is important because it shows how everything is related in Essbase. If you set CALCLOCKBLOCK to 1000, you need to be sure that the data cache can hold 2000 uncompressed blocks. If it cannot, no matter what value you set, Essbase will use half of what fits in the data cache.

We also need the size of the expanded block, which is the number of stored, Dynamic Calc, and Dynamic Time Series members in each dense dimension multiplied together, times 8 bytes.

We also need the largest expanded block across all databases on the machine and the maximum number of expected concurrent users. The latter can be analyzed by gathering the session information into our DW as well, but for this post it will be a constant number.

This information can be acquired by:

  • Data cache:
    • EAS: Right click a cube -> Edit properties -> Storage
    • MAXL: query database sample.basic list all file information;
    • ESSCMD: listfiles "" "sample" "basic";
    • JAVA: maxl.execute("query database " + s_application + "." + s_cube + " list all file information");
  • Amount of members:
    • EAS: Right click a cube -> Edit properties -> Dimensions
    • MAXL: None
    • ESSCMD: None
    • JAVA: selection.executeQuery(dimension.getName(), IEssMemberSelection.QUERY_TYPE_DESCENDANTS, 0, null, null, null);

To summarize, the dynamic calculator cache is:

  • A buffer in memory that Essbase uses to store all the blocks needed to calculate a Dynamic Calc member in a dense dimension
  • To find the optimal size of this cache we need:
    • CALCLOCKBLOCK size: it is half the number of expanded blocks that fit into the data cache
    • Expanded Blocks including: Store, Never Share, Dynamic Calc members and dynamic time series members
    • The largest expanded block across all databases on the machine.
    • The maximum number of expected concurrent users
      • Can be analyzed by gathering the session info into a table and analyzing the patterns, but for this series it is a constant number based on experience
  • To calculate the maximum dynamic calculator cache, we need to multiply (see the sketch after this list):
    • CALCLOCKBLOCK: (Data Cache in bytes (already calculated) / Size of the expanded Block in bytes) / 2
    • The largest expanded Block in bytes on the server
    • The maximum number of expected concurrent Users (Constant)
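Putting the list above into numbers, here is a minimal sketch of the math (all input values are examples, not recommendations):

    // Example inputs gathered from the statistics (illustrative values only)
    long dataCacheBytes = 300L * 1024 * 1024; // data cache, already calculated (300 MB)
    long expandedBlockBytes = 15000L * 8;     // dense members multiplied together * 8 bytes
    long concurrentUsers = 10;                // the constant based on experience

    // CALCLOCKBLOCK: half the number of expanded blocks that fit in the data cache
    long calcLockBlock = (dataCacheBytes / expandedBlockBytes) / 2;

    // Maximum dynamic calculator cache = C * S * U
    long maxDynCalcCacheBytes = calcLockBlock * expandedBlockBytes * concurrentUsers;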

And that is it: we have talked about the 5 most important caches in Essbase, how to calculate them, and how to get the information we need.

So far, we talked about these caches and how they are calculated:

  • Index: A buffer in memory that holds index pages (.ind files)
  • Data file: A buffer in memory that holds compressed data files (.pag files). Essbase allocates memory to the data file cache during data load, calculation, and retrieval operations, as needed (only when direct I/O is in effect)
  • Data: A buffer in memory that holds uncompressed data blocks. Essbase allocates memory to the data cache during data load, calculation, and retrieval operations, as needed
  • Calculator: A buffer in memory that Essbase uses to create and track data blocks during calculation operations
  • Dynamic calculator: A buffer in memory that Essbase uses to store all of the blocks needed for a calculation of a Dynamic Calc member in a dense dimension

And here is a summary of all the calculations needed to set exactly the right amount of cache for each application:

  • Index: number of existing blocks * 112 bytes = the size of the database index
  • Data file: combined size of all essn.pag files, if possible; otherwise, as large as possible
  • Data: 0.125 * the Data File Cache size
  • Calculator: bitmap size in bytes * number of bitmaps, where:
    • Bitmap size in bytes = Max((member combinations on the bitmap dimensions / 8), 4)
    • Number of bitmaps = maximum number of dependent parents in the anchoring dimension + 2 constant bitmaps
  • Dynamic calculator: C * S * U, where:
    • C is the value of the appropriate CALCLOCKBLOCK setting: (Data Cache in bytes / size of the expanded block in bytes) / 2
    • S is the size of the largest expanded block across all databases on the machine: multiply the number of members in each dense dimension together and multiply by 8 bytes
    • U is the maximum number of expected concurrent users

From now on we'll see how we can create a process to extract and handle this information, and how we can create a DW and use it to keep our Essbase apps with the right caches all the time.

I hope you guys enjoy it and see you soon.

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 3: Calculator Cache)

Posted in Uncategorized on February 3, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the Essbase Statistics DW: How to automatically administrate Essbase using ODI series (Part 2 Here), today we’ll talk about Calculator Cache.

The concept behind the calculator cache is a little tricky, but I think it is very good to understand since it gives us a very good tip about Essbase design. This cache is used to track what Essbase has already calculated and what it has not yet.

Essbase splits the sparse dimensions into 2 groups:

  • Bitmap dimensions: all the sparse dimensions that fit inside the bitmap.
    • You can think of this bitmap as a matrix that holds all members from one or more sparse dimensions, depending on the available memory. Each member occupies 1 bit of memory (it is like a flag that says whether the member was calculated or not).
  • Anchoring dimensions: the rest of the sparse dimensions that did not fit in the bitmap. The anchoring dimensions define the number of bitmaps that Essbase can create at the same time, if there is enough memory.

The calculator cache controls the size of the bitmap, which means that the number of sparse dimensions that can fit in the bitmap and the number of anchoring dimensions depend on the size of the calculator cache.

But how does it work? Essbase starts from the first sparse dimension in the outline and tries to fit as many dimensions as possible inside the bitmap, until it cannot fit the next sparse dimension completely inside the bitmap.

All the remaining sparse dimensions become anchoring dimensions.

The number of bitmaps that can be created at the same time is determined by the maximum number of dependent parents in the anchoring dimensions.

A dependent parent is a parent that sits above a leaf member. We need to find the member that has the biggest number of levels above it, meaning the maximum number of dependent parents is the same as the number of levels in a dimension.

If the leaf member with the most parents above it has 3 parents, the level of that dimension will also be 3, so for dependent parents we just need to ask for the level of the dimension.

The size of the bitmap and the number of anchoring dimensions depend on the size of the calculator cache.

What does that mean? Let's consider this outline as an example:

  • S1: 10 members, dependent parents NA
  • S2: 10 members, dependent parents NA
  • S3: 20 members, dependent parents NA
  • S4: 30 members, dependent parents NA
  • S5: 200 members, 3 dependent parents

In this outline we have 5 sparse dimensions, 4 flat ones (S1 to S4) and one with a 3-level hierarchy (S5).

We have 3 options of cache settings that we can choose from:

  • Single anchoring multiple bitmaps.
  • Single anchoring single bitmap.
  • And multiple anchoring single bitmap.

To use the first option, we take the first 4 sparse dimensions from the outline and multiply their member counts, then divide the result by 8 to convert it to bytes; this defines the size of the bitmap.

After that we need to define how many bitmaps we will have at the same time in memory. For that we take the level of the anchoring dimension and add 2 to it (2 is a constant). This defines the number of bitmaps.

Finally, we just multiply the size of the bitmap by the number of bitmaps, and this gives us the size of the cache we must set to use option 1.

The other options are smaller versions of the first one. In option 2 we do not have multiple bitmaps, so we just set the number of bitmaps to 1; in option 3 we fit fewer dimensions in the bitmap.

We can see the calculations for each option below:

  • Single anchoring, multiple bitmaps: (10*10*20*30)/8 = 7500; 3+2 = 5 bitmaps; 7500*5 = 37500 bytes
  • Single anchoring, single bitmap: (10*10*20*30)/8 = 7500; 1 bitmap; 7500*1 = 7500 bytes
  • Multiple anchoring, single bitmap: (10*10*20)/8 = 250; 1 bitmap; 250*1 = 250 bytes
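These calculations are easy to script as well; a minimal sketch using the example outline above:

    // Example outline: S1..S4 fit in the bitmap, S5 anchors with level 3
    long[] bitmapDimMembers = {10, 10, 20, 30};
    int anchoringLevel = 3;

    long bitmapBytes = 1;
    for (long members : bitmapDimMembers) {
        bitmapBytes *= members;  // member combinations on the bitmap dimensions
    }
    bitmapBytes /= 8;            // bits to bytes = 7500

    long option1 = bitmapBytes * (anchoringLevel + 2); // multiple bitmaps = 37500 bytes
    long option2 = bitmapBytes;                        // single bitmap = 7500 bytes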

The interesting thing here is to see why it is important to have our outline designed as an hourglass for this cache. If we put the most complex sparse dimension in any position other than the last one, we will never be able to have concurrent bitmaps, and the size of the bitmap will be huge.

Then, after this tricky explanation, the only information we need to calculate the calculator cache is:

  • The number of members in each sparse dimension.
  • The number of levels in the biggest sparse dimension.
    • The biggest sparse dimension should be placed last in the outline.

All this information can be acquired by:

  • Amount of members:
    • EAS: Right click a cube -> Edit properties -> Dimensions
    • MAXL: None
    • ESSCMD: None
    • JAVA: selection.executeQuery(dimension.getName(), IEssMemberSelection.QUERY_TYPE_DESCENDANTS, 0, null, null, null);
  • Levels:
    • EAS: Double click the outline -> count the parents
    • MAXL: None
    • ESSCMD: None
    • JAVA: IEssDimension dimension = (IEssDimension) otl.getDimensions().getAt(i); IEssIterator level = dimension.getLevels();
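If you prefer to script it, here is a rough Java sketch that collects both numbers for every dimension, based on the calls in the table above (treat it as a sketch and check the JAPI Javadoc for your version):

    import com.essbase.api.base.IEssIterator;
    import com.essbase.api.metadata.IEssCubeOutline;
    import com.essbase.api.metadata.IEssDimension;
    import com.essbase.api.metadata.IEssMemberSelection;

    // "cube" is an IEssCube obtained after signing on to the provider service
    IEssCubeOutline otl = cube.openOutline();
    IEssIterator dims = otl.getDimensions();
    for (int i = 0; i < dims.getCount(); i++) {
        IEssDimension dimension = (IEssDimension) dims.getAt(i);
        IEssIterator levels = dimension.getLevels(); // levels = dependent parents
        IEssMemberSelection sel = cube.openMemberSelection("member count");
        sel.executeQuery(dimension.getName(),
                IEssMemberSelection.QUERY_TYPE_DESCENDANTS, 0, null, null, null);
        int memberCount = sel.getMembers().getCount();
        System.out.println(dimension.getName() + ": " + memberCount
                + " members, " + levels.getCount() + " levels");
    }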

Then, summarizing everything we saw in this post:

  • Calculator cache is a buffer in memory that Essbase uses to create and track data blocks during calculation operations
  • Essbase splits the sparse dimensions into 2 groups:
    • Bitmap dimensions: any sparse dimension that can fit inside the bitmap. Each member combination placed in the bitmap occupies 1 bit of memory, and the entire dimension must fit inside the bitmap.
    • Anchoring dimensions: The remaining sparse dimensions that do not fit into the bitmap.
  • Because the calculator cache controls the size of the bitmap, the number of sparse dimensions that can fit in the bitmap and the number of anchoring dimensions depend on the size of the calculator cache
  • Essbase starts with the first sparse dimension in the outline and fits as many as possible into the bitmap.
  • Essbase stops the process when it cannot fit another complete sparse dimension into the bitmap.
  • All the remaining sparse dimensions become anchoring dimensions
  • The number of bitmaps used is determined by the maximum number of dependent parents for any members in the anchoring dimension
  • The max number of dependent parents is the same as the number of Levels in a dimension
  • Depending on the cache settings, we can have 3 options:
    • Single anchoring multiple bitmaps.
    • Single anchoring single bitmap.
    • And multiple anchoring single bitmap.
  • And to calculate everything we just need 2 things:
    • The number of members in each sparse dimension.
    • The number of levels in the biggest sparse dimension.
      • The biggest sparse dimension should be placed last in the outline.

I hope you guys enjoy it, be safe and see you next time.

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 2: Data File Cache and Data Cache)

Posted in Uncategorized on February 2, 2021 by RZGiampaoli

Hey guys, how are you? Continuing the Essbase Statistics DW: How to automatically administrate Essbase using ODI series (Part 1 Here), today we’ll talk about Data File Cache and Data Cache.

The data file cache is deprecated, and I don't know if many people use it, but it is needed to define the maximum size of the data cache. The data file cache holds compressed data files (essn.pag) in memory and is used only when the direct I/O option is in effect. To find its maximum size we need to sum all essn.pag files, if possible; otherwise, set it as large as possible.

We can find this information in EAS (edit properties, Storage), with a MaxL query database file information, with ESSCMD LISTFILES, or in Java using MaxL, as we can see here:

  • EAS: Right click a cube -> Edit properties -> Storage
  • MAXL: query database sample.basic list all file information;
  • ESSCMD: listfiles "" "sample" "basic";
  • JAVA: maxl.execute("query database " + s_application + "." + s_cube + " list all file information");

Now let's talk about the data cache. The name looks almost the same, but its usage is very different. The data cache holds uncompressed data blocks in memory. It has a direct relationship with the data file cache, and the recommended size is 12.5% of the sum of all essn.pag files (or 0.125 * the Data File Cache size). That is why we talked about the data file cache and calculated its size first.
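As a quick sketch of that math (the .pag total is an example value):

    // Combined size of all essn.pag files, taken from the file information
    // query shown above (8 GB is just an example)
    long pagFilesBytes = 8L * 1024 * 1024 * 1024;

    long dataFileCacheBytes = pagFilesBytes;                   // as large as possible
    long dataCacheBytes = (long) (dataFileCacheBytes * 0.125); // 12.5% = 1 GB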

The same thing that happens with the index cache happens here with the data cache: if we have too much historical data, we may decrease the optimum size to fit the actual data usage.

We gather this information in the same way as for the data file cache:

  • EAS: Right click a cube -> Edit properties -> Storage
  • MAXL: query database sample.basic list all file information;
  • ESSCMD: listfiles "" "sample" "basic";
  • JAVA: maxl.execute("query database " + s_application + "." + s_cube + " list all file information");

Then, to summarize what we talked about in this post:

  • The data file cache is used to hold compressed data files (essn.pag) in memory and is used only when direct I/O option is in effect.
  • It is calculated by summing all essn.pag files, if possible; otherwise, as large as possible.
  • The data cache is used to hold uncompressed data blocks in memory.
  • The recommended size is 12.5% of the sum of all essn.pag files (or 0.125 * the Data File Cache size).
  • Same case as the index cache: only the working blocks matter (no historical ones).

I’ll stop here since we have a big one coming and I want to address the Calculator Cache alone in the next post.

Be safe and see you soon.

Essbase Statistics DW: How to automatically administrate Essbase using ODI (Part 1: Index Cache)

Posted in Uncategorized on February 1, 2021 by RZGiampaoli

Hey guys, how are you? Today we're going to talk about how to automatically administrate Essbase using ODI.

As you guys know, in order to have good performance in an Essbase cube, we must keep vigilant and follow up on its growth and its data movements, so we can distribute caches and adjust the database parameters accordingly. But this is a very difficult task to achieve, since Essbase statistics are not temporal and only tell you what the cube statistics are at that specific point in time.

The performance of an Essbase app has a direct relationship with the configuration settings that we apply, and these must be based on the design and usage of each cube. To find the optimal values for these settings we need to look at the Essbase statistics.

The problem with this is that the Essbase statistics are a snapshot of the cube at that moment, and every time we shut down a cube the run-time statistics are cleared, meaning that these statistics are more or less accurate depending on the amount of time the cube has been online.

That also means that if we look at the statistics right after a restart of the cube, we can get the wrong information and make poor settings for caches and other things, making the cubes slow or even crashing them.

The best thing we can do to make this process more reliable is to create a historical DW with the Essbase statistics information. This way we can create a process to keep improving the Essbase settings in a more pragmatic way: extract, analyze, change settings, and so on.

These are the most important caches that we can set, but we can do all kinds of things with the information we will be gathering from Essbase.

  • Index: A buffer in memory that holds index pages (.ind files)
  • Data file: A buffer in memory that holds compressed data files (.pag files). Essbase allocates memory to the data file cache during data load, calculation, and retrieval operations, as needed (only when direct I/O is in effect)
  • Data: A buffer in memory that holds uncompressed data blocks. Essbase allocates memory to the data cache during data load, calculation, and retrieval operations, as needed
  • Calculator: A buffer in memory that Essbase uses to create and track data blocks during calculation operations
  • Dynamic calculator: A buffer in memory that Essbase uses to store all of the blocks needed for a calculation of a Dynamic Calc member in a dense dimension

Let's start with the index cache. The index cache holds in memory the indexes that Essbase uses to find requested data.

Every time we retrieve or calculate any data from Essbase, Essbase first tries to find the index in the index cache. If the index is not there, Essbase finds it in the index files, gets the data, and stores that index in the index cache to make it faster the next time we need the same information. Each index entry that Essbase stores there takes 112 bytes of memory.

Then, to calculate the maximum size of this cache, we just need to multiply the number of existing blocks by 112 bytes, and that's it.
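For example (the block count is illustrative):

    // 1,000,000 existing blocks, taken from the cube statistics
    long existingBlocks = 1000000L;
    long maxIndexCacheBytes = existingBlocks * 112L; // 112,000,000 bytes, ~107 MB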

Just to make sure everyone is on the same page: a block is the combination of all stored members from all dense dimensions. Essbase creates a new block when a new unique combination of data in the sparse dimensions is inserted/calculated.

This table shows the different ways to get the information we need from Essbase. As you can see, we don't need to know Java/C++ or any other language to extract this information; we can just use MaxL, ESSCMD, or even do it manually in EAS.

  • EAS: Right click a cube -> Edit properties -> Statistics
  • MAXL: query database sample.basic get dbstats data_block;
  • ESSCMD: select sample basic; getdbstats;
  • JAVA: (IEssCube) cube.getPropertyNames();

One interesting thing about this cache (and almost every cache in Essbase) is that, as I said, it holds in memory the indexes that Essbase used at least once to find requested data. That means we will only find in the cache indexes that were used at least once since the last time the cube started.

Then, if we have a cube with historical data, the chances of that old data never hitting the index cache are bigger than for the new data. We need to take this into consideration when we set this cache.

Historical Data Usage

In this case, 70% of the cube contains historical data that is rarely requested, meaning we need just 30% of the maximum size for the index cache.

A way to figure out if the cache is well set is to look at the hit ratio in EAS, but we still have the issue that this info resets every time the cube restarts.

Summarizing what we just said about the index cache, we have the following:

  • The index cache is used to hold in memory all the indexes used since the application was turned on.
  • The maximum index cache size is calculated by multiplying the number of existing blocks * 112 bytes.
    • A block is the combination of all stored members from all dense dimensions. Essbase creates a new block when a new unique combination of data in the sparse dimensions is inserted/calculated.
  • The optimal index cache will vary depending on the design and use of the cube.
    • If the cube has a lot of historical data, the chances of those indexes never hitting the cache are big.
    • For that we can define a historical percentage that we subtract from the maximum index cache to find a more realistic cache size.
    • This historical percentage can be obtained by analyzing the outline and seeing how many members are in use.

That’s it for today. Next post we’ll talk about the Data File Cache. Stay safe and see you soon.

Is my ODI version certified for that technology?

Posted in ODI, ODI Architecture, Versions on January 14, 2021 by radk00

Hi all, today's post is a quick one, but I hear this question very often: is ODI version XXXXX certified for XXXXX technology version? It is very easy to check, since Oracle keeps it all in one place, a page called “Oracle Fusion Middleware Supported System Configurations”, but sometimes people get a little bit confused about it.

Once you reach the website, select your ODI version (remember, ODI is inside Oracle Fusion Middleware). My ODI is 12.2.1.4, so let’s check it out:

There will be multiple tabs there. If you want to check in which systems your ODI Agent is certified to run, for example, you may click on the System tab:

It gives us a lot of information regarding which system, version, and Java version are supported for your agent. If you want to check which DB versions are certified, just click on the Database tab:

One last important thing that is always checked is which technologies and versions are supported for source-target ETL. This can be easily checked on the “ODI Source-Target” tab:

That is it folks. Very simple, but useful post. See you soon!

Generate JSON objects with ODI in a very easy way

Posted in ODI, Tips and Tricks on November 9, 2020 by radk00

Hi all. I came across a requirement to create Json objects from a set of Oracle tables. We have ODI, so it was natural that the solution would be created there. However, working with JSON objects in ODI is not that easy. Those who have already worked with the “Complex File” technology know what I am talking about: too much setup, and any misconfiguration will cause an error that is generally very hard to troubleshoot.

In the past, I did several mappings that would read from Complex Files (including Json), but this time it was an outbound process, so I would need to create the Json objects, not read them. I tried to search the internet, but nothing was clear. I was not sure if the Complex File technology would work to create outbound files, and I was not in the mood to play with XSD files this time, so I needed to find some other solution.

Talking to a friend/co-worker of mine, he asked me if we could not leverage Oracle's JSON_OBJECT function somehow in ODI. At first, I did not know what that function was about, so I researched it:

The SQL/JSON function JSON_OBJECT takes as its input one or more property key-value pairs. It returns a JSON object that contains an object member for each of those key-value pairs.

It looked very interesting, so I gave it a try in SQL Developer. It worked very well:
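For illustration, here is the same idea as a minimal JDBC sketch (the EMPLOYEES table and its columns are hypothetical, and JSON_OBJECT requires Oracle 12.2 or later):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Each row comes back as one ready-to-use JSON object string
    Connection conn = DriverManager.getConnection(
            "jdbc:oracle:thin:@//myhost:1521/myservice", "user", "pwd");
    Statement st = conn.createStatement();
    ResultSet rs = st.executeQuery(
            "SELECT JSON_OBJECT('id' VALUE employee_id, 'name' VALUE first_name) "
          + "FROM employees");
    while (rs.next()) {
        System.out.println(rs.getString(1)); // {"id":101,"name":"John"} ...
    }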

Basically, you may add any number of columns to the function and it will create a valid JSON object out of them. Pretty neat, right? Now it is just a matter of adding it to ODI, which thankfully is very easy. First, I created a model in the File technology:

It contains only one column, with 4000 characters. Then I created a mapping and mapped all the required columns to the JSON_OBJECT function, like this:

That’s it. Pretty simple. When we run the mapping, we have an outbound file like this:

Now you may send the outbound file to any application that needs to consume JSON objects. One thing that you may need to consider is that you will probably get into trouble if your Json objects end up being larger than 4000 characters each. I didn't test it, but either Oracle or ODI will probably complain about it.

I hope you liked it. This is not a “traditional” way to create Json objects, but for sure it is the easiest one! See you next time!

Oracle SQL for EPM Tips and Tricks S01EP14

Posted in ODI, Oracle, Oracle Database, Performance, SQL, Tips and Tricks on October 15, 2020 by RZGiampaoli

Hey guys, how are you? Continuing our SQL series (S01EP13), today I'll share a very handy little query that I use very often to check for data duplication. In fact, it is an upgraded version of ODI's PK check.

An upgraded version because in ODI, if you enable the PK check and it finds duplications, it eliminates both rows. With the code I'll show you, you can choose whether to keep the most recently created duplicate or the oldest one, and only one of them will be eliminated.

I have a test table with these values:

If I want to check for duplicated PKs, I can just run this query:

The idea here is that we have 2 queries. The first one checks if the ROWID of each row is bigger or smaller (your choice) than the MIN or MAX ROWID (depending on your previous choice) returned by the second subquery, joined on any columns you want to check.

In this case, we wanted to check only if the PK column had duplicated values, but we could check any other column by just replacing it in the join. In fact, we could have any number of columns in the join, and that would check if there are any duplications across all the columns you put there.

Then you can select the first row by using > and MIN, or the last one by using < and MAX, and you can also select which columns you want to check in the where clause.
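Here is a sketch of that pattern wrapped in a delete (T_TEST and PK_COL are hypothetical names; swap > and MIN for < and MAX to keep the newest row instead):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Deletes every duplicated row, keeping only the one with the MIN ROWID
    Connection conn = DriverManager.getConnection(
            "jdbc:oracle:thin:@//myhost:1521/myservice", "user", "pwd");
    Statement st = conn.createStatement();
    int deleted = st.executeUpdate(
            "DELETE FROM t_test a "
          + "WHERE a.ROWID > (SELECT MIN(b.ROWID) FROM t_test b "
          + "                 WHERE a.pk_col = b.pk_col)");
    System.out.println(deleted + " duplicated rows removed");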

One important thing to mention is that this query is meant to work as a delete, because it keeps whatever is not returned by the select. What I mean is, if you have more than one duplication, it will bring, in this case, all the rows that have a ROWID greater than the one selected in the first query:

Then, if I have multiple duplications, the query will return everything that needs to be deleted, and the only row remaining will be the first one inserted (3, Chuck, Giampaoli).

I hope you enjoy this little trick and see you soon.

Oracle Always Free cloud offering (Part 3)

Posted in ACE, Autonomous Database, Cloud, Data Warehouse, Network, Oracle, Oracle Database on May 21, 2020 by RZGiampaoli

Hey guys, how are you doing? Today I'll continue with the Oracle Always Free cloud offering, and we'll finally start provisioning a VM in our environment. If you want to know more about how it works (Part 1) or see the overview of the dashboard (Part 2), please check my previous posts.

The first thing we need to do is check the best practices and see if everything in our environment is adequate.

  • IP Addresses Reserved for Use by Oracle:
    • Certain IP addresses are reserved for Oracle Cloud Infrastructure use and may not be used in your address numbering scheme (169.254.0.0/16).
    • These addresses are used for iSCSI connections to the boot and block volumes, instance metadata, and other services.
    • Three IP Addresses in Each Subnet
      • The first IP address in the CIDR (the network address)
      • The last IP address in the CIDR (the broadcast address)
      • The first host address in the CIDR (the subnet default gateway address)
    • For example, in a subnet with CIDR 192.168.0.0/24, these addresses are reserved:
      • 192.168.0.0 (the network address)
      • 192.168.0.255 (the broadcast address)
      • 192.168.0.1 (the subnet default gateway address)
    • The remaining addresses in the CIDR (192.168.0.2 to 192.168.0.254) are available for use.
  • Essential Firewall Rules
    • All Oracle-provided images include rules that allow only “root” on Linux instances or “Administrators” on Windows Server instances to make outgoing connections to the iSCSI network endpoints (169.254.0.2:3260, 169.254.2.0/24:3260) that serve the instance’s boot and block volumes.
      • Oracle recommends that you do not reconfigure the firewall on your instance to remove these rules. Removing these rules allows non-root users or non-administrators to access the instance’s boot disk volume.
      • Oracle recommends that you do not create custom images without these rules unless you understand the security risks.
      • Running Uncomplicated Firewall (UFW) on Ubuntu images might cause issues with these rules. Because of this, Oracle recommends that you do not enable UFW on your instances.
  • System Resilience
    • Oracle Cloud Infrastructure runs on Oracle’s high-quality Sun servers. However, any hardware can experience a failure:
      • Design your system with redundant compute nodes in different availability domains to support failover capability.
      • Create a custom image of your system drive each time you change the image.
      • Back up your data drives, or sync to spare drives, regularly.
      • If you experience a hardware failure and have followed these practices, you can terminate the failed instance, launch your custom image to create a new instance, and then apply the backup data.
  • Uninterrupted Access to the Instance
    • Make sure to keep the DHCP client running so you can always access the instance. If you stop the DHCP client manually or disable NetworkManager (which stops the DHCP client on Linux instances), the instance can’t renew its DHCP lease and will become inaccessible when the lease expires (typically within 24 hours). Do not disable NetworkManager unless you use another method to ensure renewal of the lease.
    • Stopping the DHCP client might remove the host route table when the lease expires. Also, loss of network connectivity to your iSCSI connections might result in loss of the boot drive.
  • User Access
    • If you created your instance using an Oracle-provided Linux image, you can use SSH to access your instance from a remote host as the opc user. After logging in, you can add users on your instance.
    • If you created your instance using an Oracle-provided Windows image, you can access your instance using a Remote Desktop client as the opc user. After logging in, you can add users on your instance.
  • NTP Service
    • Oracle Cloud Infrastructure offers a fully managed, secure, and highly available NTP service that you can use to set the date and time of your Compute and Database instances from within your virtual cloud network (VCN).
    • We recommend that you configure your instances to use the Oracle Cloud Infrastructure NTP service.
  • Fault Domains
    • A fault domain is a grouping of hardware and infrastructure that is distinct from other fault domains in the same availability domain. Each availability domain has three fault domains. By properly leveraging fault domains you can increase the availability of applications running on Oracle Cloud Infrastructure.
    • Your application’s architecture will determine whether you should separate or group instances using fault domains.
  • Customer-Managed Virtual Machine (VM) Maintenance
    • When an underlying infrastructure component needs to undergo maintenance, you are notified before the impact to your VM instances. You can control how and when your applications experience maintenance downtime by proactively rebooting (or stopping and starting) your instances at any time before the scheduled maintenance event.
    • A maintenance reboot is different from a normal reboot. When you reboot an instance for maintenance, the instance is stopped on the physical VM host that needs maintenance, and then restarted on a healthy VM host.
    • If you choose not to reboot before the scheduled time, then Oracle Cloud Infrastructure will reboot and migrate your instances before proceeding with the planned infrastructure maintenance.

When you work with Oracle Cloud Infrastructure, one of the first steps is to set up a virtual cloud network (VCN) for your cloud resources. I was thinking of doing a more detailed explanation here, but this topic is very big, so I decided to do a simple step-by-step on how to set up your network so you can access your resources from your computer.

This is not the best way to create a complex network or anything like that; it is just a way to quickly start using your Always Free components and test your VM and DB.

To start, we will click on the “Setup a network with wizard” quick link:

After you click there you have 2 options:

Select VCN with Internet Connectivity, and then click Start VCN Wizard. On the next page, just insert the name of your VCN and leave everything else as it is (unless you have a reason to change it). Click Next.

The next page shows everything that will be created by the wizard. Note that you could create each piece of it manually, but for simplicity the wizard should be enough. Click Create.

The next screen will show the creation of everything that was requested:

And that's it for the VCN. Now we can start to create our databases and VMs, all inside our network, and they will all “see” each other.

Again, this is a very simple way to set up your network. Every single step above can be done individually with greater complexity, but I'm pretty sure that would be impossible in the Always Free tier, since a lot of the more complex stuff needs to be paid for.

You can get a lot more information in the Jumpstart your Cloud Skills modules on the Start Exploring page. There are a lot of videos there explaining a lot of things. For simplicity, I'll post here all the links available there, just for people who want to see the videos before they subscribe to OCI.

Category | Module Name | Submodules | Run Time (Minutes)
Core Infrastructure | Getting Started with Oracle Cloud Infrastructure | 1 | 13
Core Infrastructure | Virtual Cloud Network L100 | 10 | 116
Core Infrastructure | Virtual Cloud Network L200 | 4 | 71
Core Infrastructure | Compute L100 | 6 | 60
Core Infrastructure | Compute L200 | 6 | 70
Core Infrastructure | VPN Connect L100 | 2 | 28
Core Infrastructure | FastConnect L100 | 2 | 18
Core Infrastructure | VPN Connect L200 | 2 | 15
Core Infrastructure | FastConnect L200 | 2 | 24
Core Infrastructure | Block Volume L100 | 6 | 47
Core Infrastructure | File Storage L100 | 4 | 55
Core Infrastructure | Object Storage L100 | 3 | 40
Core Infrastructure | Storage L200 | 3 | 41
Core Infrastructure | Load Balancing L100 | 3 | 30
Core Infrastructure | Load Balancing L200 | 2 | 24
Core Infrastructure | HA and DR L300 | 2 | 31
Database | Database L100 | 4 | 45
Database | Database Capacity Planning L200 | 4 | 66
Database | Database HA L200 | 2 | 36
Database | Database Migration L200 | 3 | 33
Database | Database CLI L200 | 1 | 10
Database | Data Safe L100 | 1 | 15
Database | Autonomous Database L100 | 5 | 52
Database | Autonomous Database L200 | 5 | 79
Database | Exadata Cloud Service Overview L300 | 1 | 100
Database | Exadata API and CLI L300 | 1 | 95
Database | Exadata Patching L300 | 1 | 61
Database | Exadata Backup and Recovery L300 | 1 | 57
Solutions and Platform | Functions L100 | 3 | 48
Solutions and Platform | Events L100 | 3 | 48
Solutions and Platform | Container Engine for Kubernetes L100 | 3 | 27
Solutions and Platform | Registry L100 | 4 | 21
Solutions and Platform | DNS Traffic Management L100 | 3 | 26
Solutions and Platform | DNS Zone Manager L100 | 2 | 12
Solutions and Platform | Resource Manager L100 | 1 | 22
Solutions and Platform | Monitoring L100 | 1 | 35
Solutions and Platform | Streaming L100 | 1 | 11
Migration | Data Migration L100 | 3 | 38
Migration | OCI-Classic to OCI Migration | 1 | 36
Migration | OCI-Classic to OCI Migration Tools | 1 | 71
Governance and Administration | Identity and Access Management L100 | 5 | 65
Governance and Administration | Identity and Access Management L200 | 1 | 107
Governance and Administration | Billing and Cost L100 | 2 | 37
Governance and Administration | Service Requests and SLAs | 1 | 19
Governance and Administration | Security Overview L100 | 10 | 61
Governance and Administration | Web Application Firewall L100 | 2 | 30
Governance and Administration | Key Management L100 | 1 | 18

The next thing we can do is create a load balancer. To do that, we just have to click on Create Load Balancer in the Quick Actions and then fill in the new page like this:

The most important thing here is to make sure you select Micro in the Bandwidth selection. This one is free (you can also see the Always Free Eligible logo there). Click Next after this.

On the next page we need to choose the load balancing policy; depending on your application, you'll select a specific one. We have 3 options:

  • Weighted Round Robin: distributes the requests sequentially across the servers (one each).
  • IP Hash: guarantees that requests from one specific client always go to the same server.
  • Least Connections: always selects the server with the fewest connections.

Next you need to add backends. We don't have any created now, but we can add them later. Finally, we can change the health check policy, but for what we are doing we can just leave it as it is. Click Next. On this screen we have to create a listener:

Here we have 3 options of traffic listener: HTTPS, HTTP, and TCP. I'm going to select TCP without SSL for simplicity, but if you select HTTPS you'll need to have SSL certificate files and private keys. It's safer, but if you just want to play around it's better to select HTTP or TCP.

For TCP we just have these options:

If you select USE SSL, you also need to provide the digital certificate and private keys.

After you make your selections, just finish the process. You'll be taken to the load balancer monitoring page, where you'll see something like this:

And that's it for the load balancer. Next time we'll provision a VM and set up our machine to connect to it.

I hope you guys enjoy this and see you soon.