
2 Overlooked Cases That May Produce Large Amounts of Archived Logs

At times, you may notice that the archived log destination is growing unreasonably fast; in the worst case, it fills up so quickly that the database becomes unavailable. You might think the growth is caused by newly imported data, and mostly that's true. But not in these two cases:

  1. A long data digestion process, in which data is added, copied, moved, and deleted repeatedly. In each digestion cycle, the overall volume of data may grow only slightly, but every step of the process changes tables and indexes, generating several times as much redo as the net growth suggests. As a result, the archived logs can grow very fast. (A query to spot the churning segments follows this list.)

    Furthermore, if you have triggers on these tables auditing all activity, the redo will multiply even faster and you will be in trouble very soon.

  2. Materialized views periodically refreshed by Force Refresh. Force Refresh means that whenever a Fast Refresh cannot be performed, a Complete Refresh is used instead. Logically this is very safe and serviceable, but in practice a materialized view may always fall back to a Complete Refresh and never Fast Refresh at all, for one reason or another. Every Complete Refresh deletes all the current data and reloads almost 99% of the same data from the master table, over and over again, generating many unnecessary archived logs. (A query to check the last refresh type follows this list.)

    For more information about Force Refresh, you can refer to the Oracle documentation: Materialized View Concepts and Architecture
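
If you suspect the first case, you can rank segments by block changes since instance startup to see which tables and indexes are being churned. A minimal sketch (the top-10 cutoff is arbitrary):

-- Top 10 segments by block changes: a rough indicator of redo-heavy objects
select *
  from (select owner, object_name, object_type, value as db_block_changes
          from v$segment_statistics
         where statistic_name = 'db block changes'
         order by value desc)
 where rownum <= 10;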
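
If you suspect the second case, the data dictionary records how each materialized view was last refreshed; seeing LAST_REFRESH_TYPE stuck at COMPLETE on a force-refreshed view is the warning sign. A minimal sketch:

-- List force-refreshed materialized views and their last actual refresh type
select owner, mview_name, refresh_method, last_refresh_type, last_refresh_date
  from dba_mviews
 where refresh_method = 'FORCE';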

Suggestions

My suggestion is to temporarily set NOLOGGING on the objects that generate the most redo. Keep in mind that NOLOGGING reduces redo mainly for direct-path operations such as direct-path inserts, CREATE TABLE AS SELECT, and index rebuilds; conventional DML is still fully logged. Used together with direct-path loads, it can mitigate the problem.
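
For example, a minimal sketch around a bulk load step (the table names stage_data and source_data are hypothetical):

SQL> alter table stage_data nologging;

-- A direct-path insert generates minimal redo while NOLOGGING is set
SQL> insert /*+ APPEND */ into stage_data select * from source_data;
SQL> commit;

-- Restore normal logging once the bulk step is done
SQL> alter table stage_data logging;

Be aware that data loaded under NOLOGGING cannot be recovered from backups taken before the load, so take a fresh backup afterwards.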

If you're pretty sure that all the data in a table should be discarded, do not use DELETE FROM; use TRUNCATE TABLE instead. DELETE generates redo and undo for every row, while TRUNCATE generates almost none. Since there are very limited ways to recover a truncated table, be careful while truncating.
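
For example (the table name stage_data is hypothetical):

-- DELETE logs redo and undo for every row it removes
SQL> delete from stage_data;

-- TRUNCATE is DDL and generates almost no redo
SQL> truncate table stage_data;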

The Data Pump utility impdp supports the TABLE_EXISTS_ACTION parameter, which can skip, append to, truncate, or replace existing tables, whereas the legacy imp utility does not. You can take advantage of it to truncate tables right on the impdp command line.
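
For example, a sketch of such a command line (the directory object, dump file, and table names are hypothetical):

$ impdp system directory=dpump_dir dumpfile=scott_emp.dmp tables=scott.emp table_exists_action=truncate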

NOLOGGING will not work when the database plays the primary role in a Data Guard environment, because Data Guard forces the primary database to log all changes so that every change can be transported to the standby database completely. Altering objects to NOLOGGING in such a situation is therefore in vain. The database-level FORCE_LOGGING status can be checked as follows:
SQL> select force_logging from v$database;

FOR
---
YES

Disabling the triggers before importing data is also a wise choice.

SQL> alter trigger trigger_name disable;
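
If many triggers are involved, you can generate the DISABLE statements from the data dictionary instead of typing them one by one; a minimal sketch (the schema name SCOTT is hypothetical):

-- Produce one ALTER TRIGGER ... DISABLE statement per trigger in the schema
select 'alter trigger ' || owner || '.' || trigger_name || ' disable;'
  from dba_triggers
 where owner = 'SCOTT';

Remember to enable the triggers again with ALTER TRIGGER ... ENABLE after the import completes.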
