Reduce Redo
Hi Folks
RDBMS: 11gR1
We have a finance controlling database in ARCHIVELOG mode which performs huge calculations from time to time. The size of the DB is 10 GB, but when such a report runs it generates ~100 GB of redo in a couple of hours. Our goal is to reduce the redo generation, but we cannot adapt the DML (for direct path load etc.). There are 18 tables (including indexes) involved, so we recreated them as global temporary tables. The redo generation decreased by 30%, but the reports took 20% longer, which is too long.
My question is: is there a way to reduce the redo generation for these reports?
Something like this for the reporting sessions:
alter session set disablelogging=true;
but in a less dangerous way? :)
What we would try next is to create a second database in NOARCHIVELOG mode for reporting and fetch the results via a view over a database link to the first database.
Thanks a lot for any suggestions.
Regards, oviwan
I liked your approach for the first fix but wonder why it took longer - overhead from loading the temp tables? Swapping (probably not, since it was only 20% longer)? Missing or unnecessary indexes on the GTTs?
Might be worth a minute or two to think about what is happening before abandoning the GTT approach. If you can figure that out without too much trouble and fix the problem, the GTT approach might still be workable.
You'll get overhead when transferring the data through the database link; if the second database is on the same box, would an unload followed by a direct path load be more efficient?
Some loading ideas
* direct path inserts (APPEND hint if loading heap tables through insert/select) - see the sketch after this list
* if you're going to populate empty tables, create the indexes after loading the data
* if you're loading through SQL, use set-based commands (insert/select if possible); avoid the context switching of row-by-row cursor loops
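A minimal sketch of the direct path idea above, assuming a heap staging table you are allowed to recreate (all names here are hypothetical, not the poster's actual schema):

-- NOLOGGING + APPEND gives a direct path load with minimal redo for the
-- table data (assuming the database is not in FORCE LOGGING mode).
alter table report_work nologging;

insert /*+ append */ into report_work (id, amount)
select id, amount from report_source;

commit;  -- a direct path insert must be committed before the table is queried again

-- Per the second idea: build indexes only after the load.
create index report_work_ix on report_work (id) nologging;

Keep in mind that NOLOGGING direct path loads leave the affected blocks unrecoverable until the next backup.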
Similar Messages
-
Reducing REDO generation from a current data refresh process
Hello,
I need to resolve an issue where a schema is maintained with one delete followed by tons of bulk inserts. The problem is that the vast majority of deleted rows are reinserted as is. This process deletes and reinserts about 1,175,000 rows of data!
The delete clause is:
- delete from table where term >= '200705';
The data before '200705' is very stable and doesn't need to be refreshed.
The table is 9,709,797 rows.
Here is an excerpt of cardinalities for each term code:
TERM NB_REGS
200001 117130
200005 23584
200009 123167
200101 115640
200105 24640
200109 121908
200201 117516
200205 24477
200209 125655
200301 120222
200305 26678
200309 129541
200401 123875
200405 27283
200409 131232
200501 124926
200505 27155
200509 130725
200601 122820
200605 27902
200609 129807
200701 121121
200705 27699
200709 129691
200801 120937
200805 29062
200809 130251
200901 122753
200905 27745
200909 135598
201001 127810
201005 29986
201009 142268
201101 133285
201105 18075
This kind of operation generates a LOT of redo: on average 25 GB per day.
What are the best options available to us to reduce redo generation without changing the current process too much?
- make the tables NOLOGGING? (with mandatory use of the APPEND hint?)
- use a global temporary table for staging and merge against the real table?
- use partitions and truncate the reloaded ones? (though this does not reduce the redo generated by the subsequent inserts...?)
This does not have to be transactional.
We use 10gR2 on Windows 64 bits.
Thanks
Bruno
Yes, you got it, these are terms (Summer 2007, beginning in May).
Is the perverse effect of truncating and then inserting in direct path mode pushing the high water mark up day after day, while leaving unused space in the truncated partitions? Maybe we should not REUSE STORAGE on truncation...
This data can easily be recovered from the datamart that pushes it, which means we can use NOLOGGING and direct path mode without any permanent loss of data.
Should I have one partition for each term, or only one for the stable terms and one for the refreshed terms?
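A minimal sketch of the truncate-and-reload option discussed above, assuming the table is range partitioned with the volatile terms isolated in their own partition (all names are hypothetical):

-- Empty the volatile partition without generating row-level redo.
alter table regs truncate partition p_current drop storage;

-- Direct path reload of just the volatile terms; with the table NOLOGGING
-- this minimizes redo for the data (indexes still log unless they are set
-- unusable and rebuilt afterwards).
insert /*+ append */ into regs
select * from datamart_regs
where term >= '200705';

commit;
-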
os:x86_64 x86_64 x86_64 GNU/Linux
oracle:9.2.0.6
running : Data guard
Problem : Redo space wait is very high
init.ora parameters
*.background_dump_dest='/u01/app/oracle/admin/PBPR01/bdump'
*.compatible='9.2.0'
*.control_files='/s410/oradata/PBPR01/control01.ctl','/s420/oradata/PBPR01/control02.ctl','/s430/oradata/PBPR01/control03.ctl'
*.core_dump_dest='/u01/app/oracle/admin/PBPR01/cdump'
*.cursor_space_for_time=true
*.db_block_size=8192
*.db_cache_size=576000000
*.db_domain='cc.com'
*.db_file_multiblock_read_count=16
*.db_files=150
*.db_name='PBPR01'
*.db_writer_processes=1
*.dbwr_io_slaves=2
*.disk_asynch_io=false
*.fast_start_mttr_target=1800
*.java_pool_size=10485760
*.job_queue_processes=5
*.log_archive_dest_1='LOCATION=/s470/oraarch/PBPR01'
*.log_archive_dest_3='service=DR_PBPR01 LGWR ASYNC=20480'
*.log_archive_format='PBPR01_%t_%s.arc'
*.log_archive_start=true
*.log_buffer=524288
*.log_checkpoints_to_alert=true
*.max_dump_file_size='500000'
*.object_cache_max_size_percent=20
*.object_cache_optimal_size=512000
*.open_cursors=500
*.optimizer_mode='CHOOSE'
*.processes=500
*.pga_aggregate_target=414187520
*.replication_dependency_tracking=false
*.undo_management=AUTO
*.undo_retention=10800
*.undo_tablespace=UNDOTBS1
*.undo_suppress_errors=TRUE
*.session_cached_cursors=20
*.shared_pool_size=450000000
*.user_dump_dest='/u01/app/oracle/admin/PBPR01/udump'
SGA :
SQL> show sga
Total System Global Area 1108839248 bytes
Fixed Size 744272 bytes
Variable Size 520093696 bytes
Database Buffers 587202560 bytes
Redo Buffers 798720 bytes
SQL>
I created log groups with 2 members each, sized 25 MB.
The redo space waits show as:
SQL> SELECT name, value
FROM v$sysstat
WHERE name = 'redo log space requests';
NAME VALUE
redo log space requests 152797
This value runs between 140,000 and 160,000.
An excerpt from one of the trace files:
[oracle@hipclora6b bdump]$ cat PBPR01_lns0_23689.trc
Dump file /u01/app/oracle/admin/PBPR01/bdump/PBPR01_lns0_23689.trc
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
ORACLE_HOME = /u01/app/oracle/product/9.2.0.6
System name: Linux
Node name: hipclora6b.clickipc.hipc.clickcommerce.com
Release: 2.4.21-37.EL
Version: #1 SMP Wed Sep 7 13:32:18 EDT 2005
Machine: x86_64
Instance name: PBPR01
Redo thread mounted by this instance: 1
Oracle process number: 34
Unix process pid: 23689, image: [email protected]
*** SESSION ID:(82.51071) 2008-04-14 23:40:04.122
*** 2008-04-14 23:40:04.122 46512 kcrr.c
NetServer 0: initializing for LGWR communication
NetServer 0: connecting to KSR channel
: success
NetServer 0: subscribing to KSR channel
: success
*** 2008-04-14 23:40:04.162 46559 kcrr.c
NetServer 0: initialized successfully
*** 2008-04-14 23:40:04.172 46819 kcrr.c
NetServer 0: Request to Perform KCRRNSUPIAHM
NetServer 0: connecting to remote destination DR_PBPR01
*** 2008-04-14 23:40:04.412 46866 kcrr.c
NetServer 0: connect status = 0
A sample alert log:
Thread 1 advanced to log sequence 275496
Current log# 1 seq# 275496 mem# 0: /s420/oradata/PBPR01/redo01a.log
Current log# 1 seq# 275496 mem# 1: /s420/oradata/PBPR01/redo01b.log
Tue Apr 15 09:10:03 2008
ARC0: Evaluating archive log 4 thread 1 sequence 275495
ARC0: Archive destination LOG_ARCHIVE_DEST_3: Previously completed
ARC0: Beginning to archive log 4 thread 1 sequence 275495
Creating archive destination LOG_ARCHIVE_DEST_1: '/s470/oraarch/PBPR01/PBPR01_1_275495.arc'
Tue Apr 15 09:10:03 2008
Beginning global checkpoint up to RBA [0x43428.3.10], SCN: 0x0000.3c1594fd
Completed checkpoint up to RBA [0x43428.2.10], SCN: 0x0000.3c1594fa
Completed checkpoint up to RBA [0x43428.3.10], SCN: 0x0000.3c1594fd
Tue Apr 15 09:10:03 2008
ARC0: Completed archiving log 4 thread 1 sequence 275495
Tue Apr 15 09:29:15 2008
LGWR: Completed archiving log 1 thread 1 sequence 275496
Creating archive destination LOG_ARCHIVE_DEST_3: 'DR_PBPR01'
LGWR: Beginning to archive log 5 thread 1 sequence 275497
Beginning log switch checkpoint up to RBA [0x43429.2.10], SCN: 0x0000.3c15bc33
Tue Apr 15 09:29:16 2008
ARC1: Evaluating archive log 1 thread 1 sequence 275496
ARC1: Archive destination LOG_ARCHIVE_DEST_3: Previously completed
ARC1: Beginning to archive log 1 thread 1 sequence 275496
Creating archive destination LOG_ARCHIVE_DEST_1: '/s470/oraarch/PBPR01/PBPR01_1_275496.arc'
Tue Apr 15 09:29:16 2008
Thread 1 advanced to log sequence 275497
Current log# 5 seq# 275497 mem# 0: /s420/oradata/PBPR01/redo05a.log
Current log# 5 seq# 275497 mem# 1: /s420/oradata/PBPR01/redo05b.log
Tue Apr 15 09:29:16 2008
ARC1: Completed archiving log 1 thread 1 sequence 275496
Log file size
SQL> select GROUP#,MEMBERS ,sum(bytes)/(1024*1024) from v$log group by
2 GROUP#,MEMBERS;
GROUP# MEMBERS SUM(BYTES)/(1024*1024)
1 2 25
2 2 25
3 2 25
4 2 25
5 2 25
Please give your views on what can be done to reduce the redo space waits.
Below are my suggestions:
Increase the log buffer to between 5 MB and 15 MB.
Defer the commit: COMMIT_WRITE=NOWAIT,BATCH.
You can also increase your redo log files, but read the following.
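A hedged sketch of those two suggestions in SQL (note that COMMIT_WRITE only exists from 10gR2 onward, so it would not apply to this 9.2 database; the values are illustrative):

-- Increase the log buffer (static parameter; requires an instance restart).
alter system set log_buffer=10485760 scope=spfile;

-- 10gR2+ only: asynchronous, batched commits trade durability for speed.
alter session set commit_write='BATCH,NOWAIT';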
Sizing Redo Logs with Oracle 10g
Oracle has introduced a Redo Logfile Sizing Advisor that recommends a size for the redo logs that limits excessive log switches, incomplete and excessive checkpoints, log archiving issues, DBWR performance problems and excessive disk I/O. All of these issues cause transactions to bottleneck on redo and degrade performance. While many DBAs' first thought is the throughput of the transaction base, not many consider the recovery time required in relation to the amount of redo generated or the actual size of the redo log groups.
With the introduction of Oracle's Mean Time to Recovery features, DBAs can specify through the FAST_START_MTTR_TARGET initialization parameter how long a crash recovery should take. Oracle will then try its best to issue the proper checkpoints during normal operation to meet this target. Since the size of the redo logs and the checkpointing of data play a key role in Oracle's ability to recover within the desired time frame, Oracle now uses the value of FAST_START_MTTR_TARGET to suggest an optimal redo log size. In fact, setting FAST_START_MTTR_TARGET is what enables the redo logfile sizing advisor; if you do not set it, Oracle will not provide a suggestion for your redo logs. If you have no real recovery time requirement, you should at least set it to its maximum value of 3600 seconds (one hour) so you can still take advantage of the advisory.
After setting the FAST_START_MTTR_TARGET initialization parameter, a DBA need only query the V$INSTANCE_RECOVERY view for the OPTIMAL_LOGFILE_SIZE column (in megabytes) and then rebuild the redo log groups to this recommendation.
A simple query to show the optimal size for the redo logs:
SQL> SELECT OPTIMAL_LOGFILE_SIZE
  2  FROM V$INSTANCE_RECOVERY;
OPTIMAL_LOGFILE_SIZE
--------------------
                  64
A few notes about setting FAST_START_MTTR_TARGET
• Specify a value in seconds (0-3600) within which you want Oracle to be able to perform recovery.
• Is overridden by LOG_CHECKPOINT_INTERVAL:
Since LOG_CHECKPOINT_INTERVAL requests a checkpoint after a specified number of redo blocks has been written, and FAST_START_MTTR_TARGET basically tries to size the redo logs so that a checkpoint occurs when they switch, you can easily see that these two parameters have conflicting interests. You will need to unset LOG_CHECKPOINT_INTERVAL if you wish to use the redo log sizing advisor and have checkpoints occur on log switches. This is how it was recommended back in the v7 days, and I really can't see any reason to do otherwise.
• Is overridden by LOG_CHECKPOINT_TIMEOUT:
LOG_CHECKPOINT_TIMEOUT controls the amount of time in between checkpoints if a log switch or the amount of redo generated has not yet triggered a checkpoint. Since our focus is now on Mean Time to Recovery (MTTR) this parameter is no longer of concern because we are asking Oracle to determine when to checkpoint based on our crash recovery requirements.
• Is overridden by FAST_START_IO_TARGET:
Actually, the FAST_START_IO_TARGET parameter is deprecated and you should switch over to the FAST_START_MTTR_TARGET parameter.
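Putting the 10g advice above together, a minimal sketch (values illustrative only; the OPTIMAL_LOGFILE_SIZE column does not exist on this thread's 9.2 database):

-- Unset the conflicting checkpoint parameters, then set the MTTR target.
alter system set log_checkpoint_interval=0 scope=both;
alter system set log_checkpoint_timeout=0 scope=both;
alter system set fast_start_mttr_target=3600 scope=both;

-- After a representative workload, ask the advisor (result is in MB):
select optimal_logfile_size from v$instance_recovery;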
Thanks -
Reduce redo when shrinking a large partitioned LOB table?
Hi,
Oracle 10.2.0.5 - Solaris 10 - Dataguard
We have a large (30 TB) partitioned table with two columns, ID and BODY. It is range partitioned on ID and hash subpartitioned, with a million records per range partition and between 500 GB and 1 TB of data per partition.
We never modify the LOB, but we do delete around 40% of the rows over time. After a partition is full (the ID sequence is greater than the partition bound) its data becomes read only. Due to this partitioning pattern, the space freed by the deletes is rarely reused by other inserts.
Looking at one of the partitions (one partition = one tablespace), we have a datafile size of 1100 GB, a segment size of 1095 GB, and a DBMS_LOB.GETLENGTH total of 370 GB - so roughly 725 GB of "free space".
While we can use something like
alter table test_lob modify partition p1 lob (body) (shrink space cascade);
this generates a lot of redo, which we then have to ship to the standby and apply.
What other methods could be used to reclaim this space with reduced / no redo?
Thanks
Mark
Fran,
As I said, we're using Data Guard - so force logging. I'm looking for an approach that avoids the redo generation rather than just turning it off.
I'm currently wondering if swapping partitions and transportable tablespaces might work, so I can do the work in a non-Data Guard database, then swap it back in and just copy the datafile across. The data is read only now.
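A rough sketch of that idea, with hypothetical names and glossing over the transportable tablespace mechanics (a thought experiment, not a tested procedure):

-- Assumes partition P1's data has first been copied to a scratch,
-- non-Data Guard database. There, rebuild the rows compactly with no redo:
create table p1_compact nologging as
select id, body from test_lob_p1_copy;

-- Transport the tablespace holding p1_compact back to the primary
-- (expdp TRANSPORT_TABLESPACES plus a datafile copy), then swap it in;
-- the exchange itself is a dictionary-only operation, so almost no redo:
alter table test_lob exchange partition p1 with table p1_compact
  including indexes without validation;
-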
Hi,
We have a problem with redo generation. For the last few days, redo generation has been higher than normal, with no changes at the application level. I don't know where to start. I tried comparing AWR reports but didn't get anywhere.
1. Is it possible to find how much redo is generated for a DML statement, segment by segment (table segment, index segment), when it is executed?
For example: the table M_MARCH has 19 columns and 6 indexes. Another table, M_Report, has 59 columns and 5 indexes. The query combines both tables.
We need to find out whether the indexes are really needed or not.
2. Is there any other way to reduce redo generation?
Br,
Rajesh
High redo generation can be of two types:
1. During a specific duration of the day.
2. Sudden increase in the archive logs observed.
In both cases, the first thing to check is whether any modifications were made either at the database level (modifying parameters, maintenance operations performed, ...) or at the application level (deployment of a new application, modification of code, increase in users, ...).
To know the exact reason for the high redo, we need information about the redo activity and the details of the load. The following information needs to be collected for the duration of high redo generation.
1] To know the trend of log switches, the queries below can be used.
SQL> alter session set NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
Session altered.
SQL> select trunc(first_time, 'HH'), count(*)
  2  from v$loghist
  3  group by trunc(first_time, 'HH')
  4  order by trunc(first_time, 'HH');
TRUNC(FIRST_TIME,'HH   COUNT(*)
-------------------- ----------
25-MAY-2008 20:00:00          1
26-MAY-2008 12:00:00          1
26-MAY-2008 13:00:00          1
27-MAY-2008 15:00:00          2
28-MAY-2008 12:00:00          1   <- indicates 1 log switch from 12 PM to 1 PM
28-MAY-2008 18:00:00          1
29-MAY-2008 11:00:00         39
29-MAY-2008 12:00:00        135
29-MAY-2008 13:00:00        126
29-MAY-2008 14:00:00        135   <- indicates 135 log switches from 2-3 PM
29-MAY-2008 15:00:00        112
We can also get information about the log switches from the alert log (by looking for the messages 'Thread 1 advanced to log sequence' and counting them for the duration) or from an AWR report.
2] If you are on 10g or a higher version and have an AWR license, collect an AWR report for the problematic time; otherwise go for a statspack report.
a) AWR Report
-- Create an AWR snapshot when you are able to reproduce the issue:
SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
-- After 30 minutes, create a new snapshot:
SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
-- Now run $ORACLE_HOME/rdbms/admin/awrrpt.sql
b) Statspack Report
SQL> connect perfstat/<Password>
SQL> execute statspack.snap;
-- After 30 minutes
SQL> execute statspack.snap;
SQL> @?/rdbms/admin/spreport
In the AWR/statspack report, look for the queries with the highest gets per execution. Check "Redo size" in the "Load Profile" section and compare it with a non-problematic duration.
3] We need to mine the archive logs generated during the time frame of high redo generation.
-- Use the DBMS_LOGMNR.ADD_LOGFILE procedure to create the list of logs to be analyzed:
SQL> execute DBMS_LOGMNR.ADD_LOGFILE('<filename>', options => dbms_logmnr.new);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE('<file_name>', options => dbms_logmnr.addfile);
-- Start the logminer
SQL> execute DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> select operation, seg_owner, seg_name, count(*)
     from v$logmnr_contents
     group by seg_owner, seg_name, operation;
Please refer to the article below if you have any problems using LogMiner.
Note 62508.1 - The LogMiner Utility
We cannot get the redo size using LogMiner; we can only get the user, operation and schema responsible for the high redo.
4] Run the query below to find the sessions generating high redo at any specific time.
col program for a10
col username for a10
select to_char(sysdate,'hh24:mi'), username, program, a.sid, a.serial#, b.name, c.value
from v$session a, v$statname b, v$sesstat c
where b.STATISTIC# = c.STATISTIC#
  and c.sid = a.sid
  and b.name like 'redo%'
order by value;
This gives us all the statistics related to redo. We are most interested in "redo size" (the total amount of redo generated, in bytes).
This will give us the SID of the problematic session.
In the query output, look for the statistics with the highest values; they will give a fair idea of the problem.
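On question 1: one common sketch is to read your own session's "redo size" statistic immediately before and after the statement; the delta is that statement's redo. Repeating the test with individual indexes made unusable shows each index's contribution. (Table and column names below are just examples.)

select n.name, m.value
from v$mystat m, v$statname n
where m.statistic# = n.statistic#
  and n.name = 'redo size';

update m_march set some_col = some_col;  -- the DML under test

-- Run the first query again; the difference between the two values
-- is the redo (in bytes) generated by the UPDATE.
-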
Oracle 11.0.1.7:
This is a stupid question. If we don't care about recovering the database and are only testing for functionality, is there a way to not log anything to the redo logs? The reason is that we are seeing very slow writes to redo because of a resource issue, and we are stuck with that resource issue. So I was wondering if there is a way to eliminate logging to the redo logs. Are redo logs required if the DBWR process is writing from the buffer cache anyway? I understand that we can't expect any recovery of the data, but that is fine. Can someone give more insight?
I also understand that altering a tablespace to NOLOGGING doesn't reduce redo generation for normal DML operations. Here is what I am seeing:
Foreground waits:
Event                        Waits  %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn  % DB time
log file sync              104,448           0                  745              7        1.0       54.2

Background waits:
Event                        Waits  %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn  % bg time
log file parallel write     86,978           0                  137              2        0.8       71.8
control file parallel writ   1,745           0                    7              4        0.0        3.6

Wait event histogram:
Event                       Total Waits   <1ms   <2ms  <4ms  <8ms  <16ms  <32ms  <=1s  >1s
LGWR wait for redo copy             422   99.5     .2    .2
SQL*Net message to client         1936K  100.0
SQL*Net more data from cli          57K   97.4    1.1    .8    .6     .1     .0    .0   .0
-
Improving redo log writer performance
I have a database on RAC (2 nodes)
Oracle 10g
Linux 3
2 servers PowerEdge 2850
I'm tuning my database with "Spotlight". I have already received this alert:
"The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold. "
The servers are not in RAID 5.
How can I improve redo log writer performance?
Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
Therefore, redo log devices should be placed on fast devices.
Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
To reduce redo write time see Improving redo log writer performance.
See Also:
Tuning Contention - Redo Log Files
Tuning Disk I/O - Archive Writer
Some comments on the section that was pulled from Wikipedia. There is some confusion in the market, as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with flash hard disk drives. Flash disks are one type of solid state disk; they would be a bad solution for redo acceleration (as I will attempt to describe below), though they can be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage medium. You may decide to discount my advice because I work for one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles, and many more customers, on using SSD to accelerate Oracle.
> Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.
Do you honestly think this is practical and usable advice, Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages. Quoting (http://en.wikipedia.org/wiki/Solid_state_disk):
# Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.
Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles would you have to spread your redo logs across to get the performance you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission-critical databases and a huge return can be made by accelerating Oracle.
# Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.
Comment: This statement is true. Per hard disk drive versus per individual solid state disk, you can typically get higher storage density with a hard disk drive. However, if your goal is redo log acceleration, storage capacity is not your bottleneck; write performance is. Keep in mind that, just as with any storage media, you can deploy an array of solid state disks that provides terabytes of capacity (with either DDR or flash).
# Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.
Comment: If you lose a hard drive holding your redo logs, the last thing you are likely to do is have a disk restoration company partially restore your data; you ought to be rebuilding the failed disk from your mirror or RAID. Similarly, with solid state disks (flash or DDR) we recommend host-based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
# Vulnerability to certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges, compared to normal HDDs (which store the data inside a Faraday cage).
Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID protected hard disk drives for data backup. The memory is ECC protected and Chipkill protected.
# Slower than conventional disks on sequential I/O.
Comment: Most flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory, which also affect flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
# Limited write cycles - Typical flash storage will wear out after 100,000-300,000 write cycles, while high endurance flash storage is often marketed with an endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device rather than rewriting files in place.
Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
> Looking at many of your postings to Oracle Forums thus far Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it.
> .. and that is not a very nice approach when people post real problems wanting real world practical advice and suggestions.
Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system could see a serious performance increase, we would be happy to put you on our evaluation program so that you can try it at no cost. -
Reduce flashback log - disable flashback for a table
Hi,
We have a table which we use only for logging. The information in it is not essential, but there is a lot (given our scale of data :p) of it: 5,000,000 rows per day.
So we have made the table NOLOGGING (reduces redo => archived redo),
and the inserts use /*+ append */ (reduces undo).
Now we want to reduce the flashback logs too.
Is there a way to disable flashback for a given table?
thanks
Nicolas
Hi,
First I would use:
alter table tablename nologging;
so you won't generate a lot of redo.
http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3001.htm#CJAHHIBI
I'm not sure you can disable flashback at the table level.
You can do it at the tablespace level.
Hope this helped..
Bye
Acr -
How can I resize the redo logs?
Hi, I want to resize the redo logs. I have tried the following way:
1] First, my current redo logs are:
GROUP# STATUS MEMBER SIZE
3 ONLINE /oradata/xyz/redo03.log 100M
2 ONLINE /oradata/xyz/redo02.log 100M
1 ONLINE /oradata/xyz/redo01.log 100M
I want to reduce this size, so I tried the following:
2] Create group 4 and add a member:
ALTER DATABASE ADD LOGFILE GROUP 4 ('/oradata/xyz/redo04.log') SIZE 30M;
then
alter system switch logfile;
In this way I created the remaining two new redo logs.
After creating them I deleted the old redo logs. Is this the correct method for resizing the redo logs?
Check Note: 1035935.6 - Example of How To Resize the Online Redo Logfiles
https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=1035935.6
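For reference, a hedged sketch of the full sequence (group numbers and sizes are illustrative, not from the note itself):

-- 1. Add new, smaller groups.
ALTER DATABASE ADD LOGFILE GROUP 4 ('/oradata/xyz/redo04.log') SIZE 30M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/oradata/xyz/redo05.log') SIZE 30M;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/oradata/xyz/redo06.log') SIZE 30M;

-- 2. Switch until none of the old groups is CURRENT, then checkpoint
--    so no old group is still ACTIVE (check v$log).
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;

-- 3. Drop the old groups, then remove their files at the OS level.
ALTER DATABASE DROP LOGFILE GROUP 1;
-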
Archive log generation and performance issue
Hi there,
I am facing a problem with archive log files which is significantly degrading database performance.
Please go through the details below.
There are some long-running transactions (DML) performed in our database, and during this time massive numbers of archive log files are generated.
Here the problem comes: this operation runs for about 1 hour, and during this time database performance is very slow; even logging in to the application takes users a long time.
There is enough undo tablespace and undo retention configured for the database.
I don't understand why it makes such a bad impact on database performance.
----- What could be the reason for this performance degradation? -----
----- Is there any way to find which user sessions and transactions are generating so many archive log files? -----
Your quick response will be highly appreciated.
To resolve your problem with performance degradation, the first thing to do is to collect more information while the degradation is happening.
You can do that by running AWR or statspack reports for the specified time (as said in the post before) or by checking views like v$session_wait or v$system_event. Then search the report for where you are losing time, or find the expensive queries.
Run AWR or statspack reports and post the information about wait events; then you will probably get more precise help. You can also post your Oracle version, host, optimizer parameters and similar relevant information.
The more information you provide, the better help you'll get.
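For example, a generic first look at system-wide waits (just a sketch; any event with a dominant time_waited deserves a closer look):

select event, total_waits, time_waited
from v$system_event
where wait_class <> 'Idle'   -- wait_class exists from 10g onward
order by time_waited desc;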
By the way, do you see "checkpoint not complete" messages in the alert log during the excessive redo generation?
You can also check whether the application can reduce redo generation by using NOLOGGING operations. If you have a transaction that deletes a whole table, you could use truncate instead.
Regards,
Marko Sutic
Edited by: msutic on Mar 1, 2009 12:11 AM -
My partitioned table has 150 million records with an average row length of 444. I need to add 8 million more records from another, non-partitioned table; they will fit into 2 partitions of the first table.
I have set the database to no force logging, disabled all foreign keys, and I am using the APPEND hint to add the rows. I believe this won't generate redo for the table.
Now, to reduce the redo from the indexes, I am planning to set them unusable and then skip the unusable indexes (any unique index will remain usable, as Oracle needs it to enforce integrity). I can do this for the partitioned indexes and rebuild only the required partitions.
But I have around 12 global single-column indexes (with a max length of 25 bytes). So my question is: if I rebuild an index offline (since an online rebuild scans the full table - Doc ID 272762.1), will it scan the full table or the existing index, which is in an unusable state?
Or is there any other approach? I suppose I could try the partition exchange option, but I would still need to deal with those indexes.
Rebuilding an unusable index scans the table rather than the original index. Also, it cannot be rebuilt online.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1669403535933#11145054166877
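A minimal sketch of the unusable-index load pattern described above (all names hypothetical):

-- Mark the affected local index partitions unusable and let the load skip them.
alter index big_tab_ix1 modify partition p_2011 unusable;
alter session set skip_unusable_indexes = true;

insert /*+ append */ into big_tab select * from staging_tab;
commit;

-- Rebuild just the touched partitions afterwards.
alter index big_tab_ix1 rebuild partition p_2011 nologging;
-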
Why is the archive log size increasing during a merge?
My database is running in archive log mode.
Someone is running an Oracle MERGE statement; it is still running.
He will issue a commit after the operation.
During this period the redo log files keep growing.
My question is: why is the size of the archive logs increasing along with the redo log files?
I thought archive log files should only be generated after the commit (maybe that is wrong).
Please suggest...
Edited by: 855516 on Mar 13, 2012 11:18 AM
855516 wrote:
my database is running in archive log mode.
someone is running oracle merge statement. still it is running.
He will issue commit after the operation.
in that period redolog file increasing now.
my question is why size of archive log file increasing with redolog file.
i know that after commit archive log file should generate. (may be it is wrong)
No, this is not correct - archive logs are not generated only after a commit. A merge statement causes inserts (if the data is not already present) or updates (if it is). Obviously these operations will generate lots of redo if the amount of data being processed is high.
If you feel that this operation is causing excessive redo, then a root cause analysis should be done.
For that, use LogMiner (an excellent tool that provides a segment-level breakdown of redo). V$LOGMNR_CONTENTS has redo block and redo byte address columns associated with the current redo change.
There are some guidelines for reducing redo (which may vary by environment):
1) Check whether there are unwanted indexes on the tables referenced in the merge. If yes, removing them could bring down the redo.
2) Use global temporary tables to reduce redo (if there is a need to keep data only temporarily in a session) - see the sketch below.
3) Use NOLOGGING if possible (but be aware of its implications).
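A minimal sketch of guideline 2, assuming the merge can stage its source rows in a session-private table first (all names hypothetical):

-- Global temporary tables generate no redo for their data
-- (only for the undo they create).
create global temporary table merge_stage (
  id     number,
  amount number
) on commit delete rows;

insert into merge_stage select id, amount from source_feed;

merge into target_tab t
using merge_stage s on (t.id = s.id)
when matched then update set t.amount = s.amount
when not matched then insert (id, amount) values (s.id, s.amount);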
Hope this helps -
Archive logs every 15 minutes (Oracle 11g 64-bit EE on Linux RHEL 4)
In our production database we have very few transactions - maybe a few MB in a whole day - but it constantly generates archive logs of 50 MB each every 15 minutes (sometimes every 14), consuming 4 GB of space per day for archive logs, which is way above what we expect.
I have checked archive_lag_target and its value is 0.
Any clue why it is creating a 50 MB archive log file every 14-15 minutes?
It's easy enough to reduce the redo log file size without downtime: just add new, smaller redo log files, switch the logfile a couple of times, and drop the old redo log files.
However, if the redo logs are filling up before they switch, then this will probably only make matters worse.
If the redo logs are switching before they are full, then maybe you also need to consider the log_checkpoint_interval and log_checkpoint_timeout settings.
If the redo logs are filling up before they switch, then use the techniques suggested by a couple of the other posters to track down the guilty SQL.
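A generic sketch for checking whether the logs really fill before switching (if the MB written per log is far below the 50 MB log size, something other than volume is forcing the switches):

show parameter log_checkpoint

select sequence#, first_time,
       round(blocks * block_size / 1024 / 1024) as mb_written
from v$archived_log
order by sequence#;
-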
Hello guys,
a little question about the LOG_BUFFER parameter.
I always thought that modified data was immediately written to the redo log files by LGWR whenever a value changes. But the documentation for the LOG_BUFFER parameter says:
LOG_BUFFER specifies the amount of memory (in bytes) that Oracle uses when buffering redo entries to a redo log file. Redo log entries contain a record of the changes that have been made to the database block buffers. The LGWR process writes redo log entries from the log buffer to a redo log file.
In general, larger values for LOG_BUFFER reduce redo log file I/O, particularly if transactions are long or numerous. In a busy system, a value 65536 or higher is reasonable.
http://saturn.uab.es/server.920/a96536/ch1100.htm
So I think that if the instance and DB crash while redo log entries are cached in the SGA (not yet written to the redo files), those entries are lost and no recovery can be done.
Or did I misunderstand the documentation on that parameter? But then how can I/O be reduced, if the data is written immediately to the redo log files...
Thanks
Regards
Stefan
Ahh ok ...
But one last piece of information about the log_buffer:
Here is a statement from an article:
LGWR will clean out the buffer when the log buffer is 1/3 full, when 1 MB of redo has been buffered, or when a commit occurs. If we change log_buffer to 10 MB, will it help improve performance?
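One hedged way to judge whether the log buffer is actually too small is to look for sessions having to wait for buffer space, e.g.:

-- A steadily growing value suggests the log buffer (or LGWR I/O) is a bottleneck.
select name, value
from v$sysstat
where name in ('redo buffer allocation retries', 'redo log space requests');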
OK now... the log_buffer is in the SGA, and I activated Automatic Memory Management (sga_target is set).
Now I saw this strange thing...
Oracle AMM only sizes the following SID.* parameters (example values from my database); sga_target and log_buffer I had set by hand:
SID.__db_cache_size=2785017856
SID.__java_pool_size=16777216
SID.__large_pool_size=16777216
SID.__shared_pool_size=838860800
SID.__streams_pool_size=0
*.sga_target=3670016000
*.log_buffer=1048576
I set log_buffer to 1048576 in my spfile, but if I run "show parameter" I get the following:
SQL> show parameter log_buffer
NAME TYPE VALUE
log_buffer integer 14254080
So Oracle sizes the log_buffer with AMM ... but didn't write it down to the spfile like the other caches (db_cache_size, java_pool_size, and so on...)
Kind of strange...
Regards
Stefan -
Physical standby without ALTER DATABASE FORCE LOGGING
Hi,
Is it possible to use a physical standby database without executing ALTER DATABASE FORCE LOGGING on the primary side?
Can I use ALTER TABLESPACE ... FORCE LOGGING instead?
I want to have one tablespace with the NOLOGGING option turned on, to reduce redo traffic for some operations.
I cannot check this because I don't have enough servers to build a standby configuration.
YuriAP wrote:
Hi,
Is it possible to use a physical standby database without executing ALTER DATABASE FORCE LOGGING on the primary side?
YES
Can I use ALTER TABLESPACE ... FORCE LOGGING instead?
YES
ALTER TABLESPACE <tablespace name> FORCE LOGGING;
I want to have one tablespace with nologging option turned on to reduce redo traffic for some operations.
I cannot check this because I don't have enough servers to build a standby configuration.
http://download.oracle.com/docs/cd/B10500_01/server.920/a96521/create.htm#1022863
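For completeness, a small sketch for verifying the logging attributes afterwards (standard dictionary views):

-- Database-wide force logging (should be NO in this setup):
select force_logging from v$database;

-- Per-tablespace logging attributes:
select tablespace_name, logging, force_logging from dba_tablespaces;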