Implementing Information Lifecycle Management (ILM) with Oracle Database

Exponential increases in data volumes are putting enterprise IT infrastructures under severe pressure from a cost, performance, scalability, and manageability perspective. It has become imperative to employ more efficient ways of storing and managing data to meet the growing demands placed on IT systems, and enterprise applications are no exception to this growth.

This article discusses the components in Oracle Database that can be used to build a strategy for managing and maintaining data based on its age.

While most organizations have traditionally considered their data repositories among their most valuable corporate assets, the methods of managing and maintaining this data differ significantly from one company to another. Initially, data served to support operational goals: facilitating day-to-day business operations and helping identify the company's future directions and opportunities.

However, new government regulations and guidelines are a key driving force in how and why data is being retained. Regulations now require organizations to retain and control information for very long periods of time. Consequently, today there are additional objectives that information technology (IT) managers are trying to satisfy:

  • To store vast quantities of data for the lowest possible cost
  • To meet the new regulatory requirements for data retention and protection
  • To improve business opportunities by better analysis based on an increased amount of data

Managing Data in Oracle Database With ILM

With Information Lifecycle Management (ILM), you can manage data in Oracle Database using rules and regulations that apply to that data.

One challenge facing each organization is to understand how its data evolves and grows, monitor how its usage changes over time, and decide how long it should survive, while adhering to all the rules and regulations that now apply to that data. Information Lifecycle Management (ILM) is designed to address these issues, with a combination of processes, policies, software, and hardware so that the appropriate technology can be used for each stage in the lifecycle of the data.

An ILM solution can be implemented by following these five steps:

  1. Define the Data Classes: For the primary databases that drive your business, identify the types of data in each database and where it is stored, and then determine:

  • Which data is important, where it is, and what must be retained
  • How this data flows within the organization
  • What happens to this data over time and when is it no longer actively needed
  • The degree of data availability, and protection, that is needed
  • Data retention for legal and business requirements

  2. Create Logical Storage Tiers: For the data classes, define logical storage tiers that represent the different types of storage available in your environment.

  3. Define a Lifecycle: A Lifecycle definition describes how data migrates across logical storage tiers during its lifetime. A lifecycle definition comprises one or more lifecycle stages that select a logical storage tier, data attributes such as compression and read-only, and a duration for data residing on that lifecycle stage. To summarize, a lifecycle defines WHERE to store data, HOW to store data and HOW LONG data should be retained.

  4. Assign a Lifecycle to Database Tables or Partitions

  5. Define and Enforce Compliance Policies

You can read more about these steps in Implementing ILM Using Oracle Database.

Typical lifecycle of data

The data lifecycle begins when new data is inserted into a table through SQL statements. During this phase, the data is either retained uncompressed or compressed, as decided by the Database Administrator (DBA). After a month, activity noticeably decreases, although substantial OLTP (Online Transaction Processing) transactions continue to occur.

During this stage, the data in the previously most active partition remains in the OLTP table, while a new partition is automatically created to accommodate the ongoing data influx. This partitioning strategy helps optimize data management by segregating the active and less active data, facilitating efficient storage and retrieval processes.

Two months later, the data in the older partition experiences infrequent modifications and rare access. At this point, the partition becomes a candidate for migration to a low-cost storage tier and can undergo a higher compression level, such as hybrid columnar compression (HCC). This strategy optimizes storage costs and maintains data accessibility while reflecting its reduced activity.

After a year, the data is considered inactive, with no further access or updates. During this phase, the data can be transferred to a low-cost storage tier with the highest level of HCC compression. Additionally, it may be designated as read-only, signifying its archival status and minimizing storage expenses while preserving data integrity for potential future reference.
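The later stages of this lifecycle can be expressed directly as Automatic Data Optimization (ADO) policies, introduced below. As a minimal, hedged sketch (TEST.SALES is a hypothetical table used throughout this article, and HCC levels such as ARCHIVE HIGH require storage that supports Hybrid Columnar Compression, such as Exadata):

-- Illustrative only: archive year-old, inactive data with the highest HCC level
SQL> ALTER TABLE TEST.SALES ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT AFTER 12 MONTHS OF NO ACCESS;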

To implement your ILM strategy, you can use Automatic Data Optimization (ADO) to automate the compression and movement of data between different tiers of storage within the database.

You can specify policies for ADO at the row, segment, and tablespace levels of granularity when creating and altering tables with SQL statements. In addition, ADO policies can perform actions on indexes.

By specifying policies for ADO, you can automate data movement between different tiers of storage within the database. These policies also enable you to specify different compression levels for each tier, control when the data movement takes place, and optimize indexes.
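As a minimal sketch of attaching a policy at table creation time (the schema, table, and retention period are illustrative assumptions, not part of the scenario later in this article):

-- Illustrative: create a table with a segment-level compression policy attached
SQL> CREATE TABLE test.orders_demo (order_id NUMBER, order_date DATE)
       ILM ADD POLICY ROW STORE COMPRESS ADVANCED SEGMENT AFTER 90 DAYS OF NO MODIFICATION;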

How Automatic Data Optimization (ADO) works

Automatic Data Optimization (ADO) is an ILM strategy that uses activity tracking, provided by Heat Map, to determine whether data is still being accessed.

Heat Map provides the ability to track and mark data as it goes through lifecycle changes. Heat Map can track data at the following levels:

  • Data accesses at the segment level
  • Data modifications at the segment level and row level (aggregated to the block level)

ADO allows you to create policies that use Heat Map statistics to compress and move data when necessary. ADO automatically evaluates and executes policies that perform compression and storage tiering actions.

The following outline steps can be used to implement and use ADO policies:

  • Enable Heat Map at the database level. Heat Map tracks data access at the segment level and data modification at the segment and row level.
  • Create ADO policies.
  • Evaluate and execute ADO policies. By default, segment-level ADO policies are evaluated every day during the maintenance window. The DBA can also create a custom schedule for evaluating ADO policies, or trigger an evaluation manually, as shown in the sketch below. Row-level policies are evaluated by the MMON background process every 15 minutes.
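A hedged sketch of custom scheduling and manual evaluation (DBMS_ILM_ADMIN and DBMS_ILM are the documented packages; TEST.SALES is the hypothetical table from the scenario later in this article, and you should verify the parameter constants against your release):

-- Illustrative: set the ILM background evaluation interval (in minutes)
SQL> EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.EXECUTION_INTERVAL, 15);

-- Illustrative: evaluate and execute policies on one table immediately
SQL> DECLARE
       v_task_id NUMBER;
     BEGIN
       DBMS_ILM.EXECUTE_ILM(owner => 'TEST', object_name => 'SALES', task_id => v_task_id);
     END;
     /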

Implementing an ILM Strategy With Heat Map and ADO

Heat Map provides data access tracking at the segment level and data modification tracking at the segment and row level. Enable it database-wide with:

ALTER SYSTEM SET HEAT_MAP = ON;        
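Once tracking is enabled, the collected statistics can be inspected; for example (an illustrative query against the DBA_HEAT_MAP_SEGMENT view described later in this article):

-- Illustrative: last write/read times and scan activity per segment
SQL> SELECT object_name, segment_write_time, segment_read_time, full_scan
       FROM dba_heat_map_segment
      WHERE owner = 'TEST';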

Create ADO policies.

A segment level policy executes only one time. After the policy executes successfully, it is disabled and is not evaluated again. However, you can explicitly enable the policy again. A row level policy continues to execute and is not disabled after a successful execution.
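For example, to re-enable a segment-level policy after it has executed (a sketch; the policy name, such as P1, is assigned by Oracle and can be looked up in DBA_ILMPOLICIES):

-- Illustrative: re-enable an executed segment-level policy on a table
SQL> ALTER TABLE TEST.SALES ILM ENABLE POLICY P1;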

Levels at which you can create ADO policies

  • Tablespace level – A default ADO policy can be defined at the tablespace level; it applies to all segments existing in that tablespace and to new segments created in it.
  • Group level – When a group-level ADO policy is defined and a table is eligible for the specific ADO action defined in the policy, the same action is performed on all dependent objects. If the action is compression and the table has LOB columns, all SecureFiles LOBs are compressed as well.
  • Segment level – Applies to tables and table partitions. If no policy is defined at the segment level and a default policy exists at the tablespace level, the tablespace-level policy applies to the segment; a segment-level policy overrides the tablespace-level policy.
  • Row level – Can be created only for tracking modifications; row-level policies cannot be based on creation or access operations. (Sketches of segment- and row-level policies follow this list.)
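Hedged sketches of policies at two of these scopes (TEST.SALES is the hypothetical table from the scenario below; verify the exact clause syntax in the ilm_policy_clause documentation):

-- Illustrative: segment-level policy, compresses the whole segment
SQL> ALTER TABLE TEST.SALES ILM ADD POLICY ROW STORE COMPRESS ADVANCED SEGMENT AFTER 30 DAYS OF NO MODIFICATION;

-- Illustrative: row-level policy, compresses blocks whose rows have not been modified
SQL> ALTER TABLE TEST.SALES ILM ADD POLICY ROW STORE COMPRESS ADVANCED ROW AFTER 30 DAYS OF NO MODIFICATION;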

Operations to track

  • Creation – the action is taken once the specified time condition is met after the data is created
  • Access – the action is taken once the specified time condition is met after the data was last accessed
  • Modification – the action is taken once the specified time condition is met after the data was last modified

Conditions under which a policy takes effect

  • Based on a time period, for example: after 3 days, after 1 week, after 1 year
  • Tablespace usage above a threshold (see the sketch below)
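The tablespace-usage thresholds are managed through DBMS_ILM_ADMIN (a hedged sketch; TBS_PERCENT_USED and TBS_PERCENT_FREE are documented parameter constants, and the values shown are illustrative):

-- Illustrative: begin tiering when a tablespace is 85% full, until 25% is free
SQL> EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_USED, 85);
SQL> EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_FREE, 25);

-- Illustrative: a tiering policy without a time condition reacts to these thresholds
SQL> ALTER TABLE TEST.SALES ILM ADD POLICY TIER TO ILM_LOW_COST;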

Actions an ADO policy can take

  • Compression
  • Data movement – moving data to a different storage tier
  • Both compression and movement

The scope of an ADO policy can be specified for a group of related objects or at the level of a segment or row, using the keywords GROUP, ROW, or SEGMENT.

The default mappings for compression that can be applied to group policies are listed here; a sketch of a group-level policy follows the list:

  • COMPRESS ADVANCED on a heap table maps to standard compression for indexes and LOW for LOB segments.
  • COMPRESS FOR QUERY LOW/QUERY HIGH on a heap table maps to standard compression for indexes and MEDIUM for LOB segments.
  • COMPRESS FOR ARCHIVE LOW/ARCHIVE HIGH on a heap table maps to standard compression for indexes and HIGH for LOB segments.
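For instance (an illustrative sketch; per the mappings above, ADVANCED table compression would map to standard index compression and LOW for LOB segments, and the exact compression clause should be verified for your release):

-- Illustrative: compress the table and its dependent objects together
SQL> ALTER TABLE TEST.SALES ILM ADD POLICY COMPRESS ADVANCED GROUP AFTER 60 DAYS OF NO ACCESS;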

Implementing multiple policies on segment

A segment can be subject to multiple Automatic Data Optimization (ADO) policies, reflecting a logical approach to handling data as it ages, with distinct actions based on different time frames and conditions. For instance, you might apply a basic level of compression within a week of data insertion, transition to more advanced compression after a month without modification, and employ the highest compression level, or move the data to low-cost storage, after a prolonged period of inactivity such as a year.

However, there are specific rules that should be adhered to when implementing these policies:

Rule 1) Policies should be applied to the same operation or statistic (creation, modification, or access). If a segment-level policy is established on a table based on access, ensure that any other policies on that segment are also based on access operations.

Rule 2) Compression levels should increase with time. Do not reduce the compression level as the time condition grows. For example, setting columnar compression after 7 days of no access and row store compression (a lower compression level than columnar) after 30 days of no access would violate this rule.

A simple Oracle Database ILM implementation scenario

In this example, we implement a scenario to show how Automatic Data Optimization (ADO) works for a partitioned table named SALES in the schema TEST, with policies based on access and modification timelines. We define three activity categories: less access/modification after 7 days, rare access/modification after 30 days, and no access/modification after 120 days. Two storage tiers, ILM_HIGH_COST and ILM_LOW_COST, ranging from high to low performance, are created. Here is the sequence of SQL commands to set up the ADO policies:

-- Enable Heat Map tracking
SQL> ALTER SYSTEM SET HEAT_MAP = ON;

-- Assign unlimited quota on storage tiers to the user 'test'
SQL> ALTER USER test QUOTA UNLIMITED ON ILM_HIGH_COST;
SQL> ALTER USER test QUOTA UNLIMITED ON ILM_LOW_COST;

-- Note: a tablespace must be read/write for ADO to move segments into it, so
-- ILM_LOW_COST is not set to read-only up front; the READ ONLY option in the
-- tiering policy below applies read-only status only after the data has moved.

-- Implement ADO policies on the SALES table
SQL> ALTER TABLE TEST.SALES ILM ADD POLICY ROW STORE COMPRESS BASIC SEGMENT AFTER 7 DAYS OF NO MODIFICATION;
SQL> ALTER TABLE TEST.SALES ILM ADD POLICY ROW STORE COMPRESS ADVANCED SEGMENT AFTER 30 DAYS OF NO MODIFICATION;
SQL> ALTER TABLE TEST.SALES ILM ADD POLICY TIER TO ILM_LOW_COST READ ONLY SEGMENT AFTER 120 DAYS OF NO MODIFICATION;        

With this set of policies, the data in the SALES table will be automatically managed based on the specified access and modification timelines. As time progresses and data usage patterns change, ADO will optimize the storage tier and compression levels accordingly.

To check the details of Information Lifecycle Management (ILM) policies in your Oracle database, you can use several views that provide information about heat maps, data movement policies, evaluation details, objects, parameters, policies, results, and tasks. Here are the relevant views, with example queries after the list:

1. DBA_HEAT_MAP_SEGMENT: This view provides details about the heat map at the segment level.

2. DBA_HEAT_MAP_SEG_HISTOGRAM: This view gives information about the historical heat map data for segments.

3. V$HEAT_MAP_SEGMENT: A dynamic performance view providing real-time information about the heat map at the segment level.

4. DBA_ILMDATAMOVEMENTPOLICIES: Contains details about data movement policies in the database.

5. DBA_ILMEVALUATIONDETAILS: Provides information about the evaluation details of ILM policies.

6. DBA_ILMOBJECTS: Contains information about the objects associated with ILM policies.

7. DBA_ILMPARAMETERS: Contains details about the parameters defined for ILM policies.

8. DBA_ILMPOLICIES: Contains information about the ILM policies defined in the database.

9. DBA_ILMRESULTS: Provides details about the results of ILM evaluations.

10. DBA_ILMTASKS: Contains information about the tasks associated with ILM policies.
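For example (illustrative queries; the column lists are abbreviated, so check the full view descriptions in your release):

-- Illustrative: list policies and the objects they are attached to
SQL> SELECT policy_name, policy_type, enabled FROM dba_ilmpolicies;
SQL> SELECT policy_name, object_name, enabled FROM dba_ilmobjects WHERE object_owner = 'TEST';

-- Illustrative: see which objects the last evaluation selected for execution
SQL> SELECT task_id, policy_name, object_name, selected_for_execution FROM dba_ilmevaluationdetails;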

Resources and links

Managing Data in Oracle Database With ILM

Oracle 12c Learning Series: Automatic Data Optimization – ADO

ilm_policy_clause

DBMS_ILM, DBMS_ILM_admin packages

ILM storage tiering

Custom ILM ADO policy conditions using PL/SQL

ILM – is it possible to mix ADO policies for compression and storage?

