Very Large Databases (VLDBs) are now a common need across industries to support and expand existing services and businesses. Unlike in the past, data today arrives through a wide variety of channels and sources, and it grows rapidly and in very large volumes. This situation not only creates high storage demands but also poses tough challenges, putting management and IT under tremendous pressure. Although a huge amount of data sits in the database, the business rarely uses all of it actively. To handle these demands, organizations use a data life cycle management policy to classify the data into different categories.

 

Many third-party vendors offer data life cycle management solutions. These ILM (Information Lifecycle Management) or DLM solutions provide best practices for applying policies to manage data effectively throughout its useful life cycle. Fortunately, Oracle 12c provides automatic information life cycle management through its new Heat Map and Automatic Data Optimization (ADO) features. This paper explains data life cycle management and how to achieve in-database archiving with Oracle 12c.

 

The image below outlines the data lifecycle management process:

 

Automatic ILM with 12c

The automated ILM strategy in 12c uses two new features, Heat Map and Automatic Data Optimization (ADO), which are part of the Advanced Compression option. Heat Map automatically tracks row- and segment-level usage information, whereas ADO lets you create policies for data compression and data movement. The picture below outlines the basic ILM clause:
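As a textual complement, here is a simplified sketch of the general shape of a compression policy (illustrative only, not the complete grammar; the angle-bracket placeholders are not actual syntax):

               SQL> ALTER TABLE <table_name> ILM ADD POLICY
                    <compression_clause>              -- e.g. ROW STORE COMPRESS ADVANCED
                    { ROW | SEGMENT | GROUP }         -- scope of the policy
                    AFTER <n> { DAYS | MONTHS | YEARS }
                    OF { NO ACCESS | NO MODIFICATION | CREATION };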

 

Heat Map

The Heat Map feature provides information about how data is being used by tracking data modification and query timestamps. It gathers statistics on how frequently a segment's data is used. These statistics are then used by ADO to apply the different policies on the data.

 

 

 

By default, the Heat Map feature is turned off. To enable it, set the HEAT_MAP initialization parameter to ON. The parameter is modifiable at both the session and the system level.

 

               SQL> ALTER SYSTEM SET HEAT_MAP=ON SCOPE=BOTH;
               SQL> ALTER SESSION SET HEAT_MAP=ON;

 

               SQL> ALTER SYSTEM SET HEAT_MAP=OFF SCOPE=BOTH;
               SQL> ALTER SESSION SET HEAT_MAP=OFF;

 

When the feature is enabled, data access is tracked in memory, except for objects in the SYSTEM and SYSAUX tablespaces. The tracking information can be viewed through the following dictionary and dynamic performance views (a sample query follows the list):

 

·        V$HEAT_MAP_SEGMENT

·        ALL/DBA/USER_HEAT_MAP_SEGMENT

·        ALL/DBA/USER_HEAT_MAP_SEG_HISTOGRAM

·        DBA_HEATMAP_TOP_OBJECTS

·        DBA_HEATMAP_TOP_TABLESPACES
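For example, a minimal sketch of a query against V$HEAT_MAP_SEGMENT (the SALES object name is an assumption):

               SQL> SELECT object_name, subobject_name, track_time,
                           segment_write, full_scan, lookup_scan
                    FROM   v$heat_map_segment
                    WHERE  object_name = 'SALES';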

 

Automatic Data Optimization (ADO)

With ADO you can create policies for smart data compression and data movement. Heat Map and ADO go hand in hand: ADO refers to the information gathered by Heat Map when deciding which compression policies and levels to apply.

 

ADO policies can be set at the segment level, row level, and tablespace level. The defined policies are evaluated automatically during the maintenance window and executed in the background with no DBA intervention. However, a DBA can evaluate and execute these policies at any time, either manually or through a set of scripts, for example using the DBMS_ILM package as sketched below.
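A minimal sketch of triggering policy evaluation manually with DBMS_ILM.EXECUTE_ILM (the SCOTT schema and SALES table are assumptions):

               SQL> SET SERVEROUTPUT ON
               SQL> DECLARE
                      v_task_id NUMBER;
                    BEGIN
                      -- evaluate and execute the ADO policies defined on SCOTT.SALES
                      DBMS_ILM.EXECUTE_ILM(owner       => 'SCOTT',
                                           object_name => 'SALES',
                                           task_id     => v_task_id);
                      DBMS_OUTPUT.PUT_LINE('ILM task id: ' || v_task_id);
                    END;
                    /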

 

You can define policies based on your requirements, such as moving or compressing data when no modification has occurred for a certain period. ILM clauses are used in CREATE TABLE or ALTER TABLE statements to add, enable, or disable policies.

 

Here are a few working examples of defining policies for various requirements:

 

--- ILM clause with CREATE TABLE
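A minimal sketch of a segment-level compression policy defined at table creation (the column list and the 90-day period are illustrative assumptions):

               SQL> CREATE TABLE sales
                      ( prod_id  NUMBER,
                        cust_id  NUMBER,
                        amount   NUMBER )
                    ILM ADD POLICY ROW STORE COMPRESS ADVANCED
                    SEGMENT AFTER 90 DAYS OF NO MODIFICATION;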

 

 

--- Row-level compression policy on a partition after 30 days of no modification

 

               SQL> ALTER TABLE sales MODIFY PARTITION partition_name
                    ILM ADD POLICY ROW STORE COMPRESS ADVANCED ROW
                    AFTER 30 DAYS OF NO MODIFICATION;

 

--- Segment-level compression policy when the data has not changed for 3 months

               SQL> ALTER TABLE sales MODIFY PARTITION partition_name
                    ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT
                    AFTER 3 MONTHS OF NO MODIFICATION;

 

--- disabling and deleting policies

 

               SQL> ALTER TABLE sales ILM DISABLE_ALL;
               SQL> ALTER TABLE sales ILM DISABLE POLICY policy_name;
               SQL> ALTER TABLE sales ILM DELETE_ALL;
               SQL> ALTER TABLE sales MODIFY PARTITION partition_name ILM DISABLE POLICY policy_name;

              

You can verify the ILM policies using the following query (use USER_ILMPOLICIES for your own schema):

               SQL> SELECT policy_name, policy_type, enabled FROM dba_ilmpolicies;

 

In-Database Archiving

In the above examples we have seen how to compress and archive data when there has been no modification for a specified period. However, imagine that you want to keep cold data without actually deleting it, but mark it as invisible so that it does not appear to the application. How do you achieve this with Oracle? The new Oracle 12c in-database archiving feature provides exactly this option. With in-database archiving, you control row visibility: rather than deleting cold data from a table, you mark it as archived so that it remains physically stored but is invisible to the application. The invisible rows can additionally be optimized using compression.

 

In order to use this feature, you must define the table with the ROW ARCHIVAL clause and, if needed, control what the session sees with the ROW ARCHIVAL VISIBILITY setting.

 

SQL> ALTER TABLE sales ROW ARCHIVAL;

SQL> CREATE TABLE sales(…..) ROW ARCHIVAL;

 

When in-database archiving is enabled for a table, a hidden column, ORA_ARCHIVE_STATE, is added to it; a value of 0 marks a row as active, while a non-zero value (typically 1) marks it as archived. The ROW ARCHIVAL VISIBILITY session setting controls whether archived rows are visible to queries:

 

SQL> ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;
SQL> ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ACTIVE;
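With visibility set to ALL, a minimal sketch for inspecting the hidden column (the sales table name is an assumption):

SQL> SELECT ora_archive_state, COUNT(*)
     FROM   sales
     GROUP  BY ora_archive_state;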

 

In order to archive the rows, you will have to use the following command:

 

SQL> UPDATE table_name SET ora_archive_state = '1' WHERE where_condition;
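Alternatively, a sketch using the DBMS_ILM.ARCHIVESTATENAME function to supply the state value instead of a literal (table name and predicate remain placeholders as above):

SQL> UPDATE table_name SET ora_archive_state = DBMS_ILM.ARCHIVESTATENAME(1) WHERE where_condition;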

 

To disable the in-database archiving feature for a table, run the following:

               SQL> ALTER TABLE table_name NO ROW ARCHIVAL;

 

Heads-up: performance issues have been reported when in-database archiving is enabled. Test your application's performance before deciding to use this feature.

 

 

Conclusion

This paper explained data life cycle management concepts, the need for them, and when to implement them. You have also learned how to deploy ILM at various levels with different policies.