Discussion about "Automation of resizing redo logs"
Hi
I've automated the resizing of redo logs from a PL/SQL procedure. It works very well, but I'd like to minimize the number of log switches it needs.
Example: starting from 4 redo log groups.
I need to create 4 new redo log groups with the desired size; OK. In the final step I need to drop the old groups. How should I proceed so that the smallest possible number of log switches is needed before the database is left with only my 4 new redo log groups?
Actually, as I said at the beginning, the routine works fine, but starting from 4 redo log groups it needs 20 switches to complete the whole algorithm!
My target databases are 8i, 9i or 10g.
Thanks in advance,
Regards
Den
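The usual minimal-switch cycle can be sketched as follows (group numbers 5-8, the paths and the 200M target size are illustrative, not from the original post):

```sql
-- 1. Add the new, larger groups alongside the old ones
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/redo05.log') SIZE 200M;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/redo06.log') SIZE 200M;
ALTER DATABASE ADD LOGFILE GROUP 7 ('/u01/oradata/redo07.log') SIZE 200M;
ALTER DATABASE ADD LOGFILE GROUP 8 ('/u01/oradata/redo08.log') SIZE 200M;
-- 2. Switch until v$log shows CURRENT in one of the new groups
ALTER SYSTEM SWITCH LOGFILE;
-- 3. Checkpoint so the remaining old groups go from ACTIVE to INACTIVE
ALTER SYSTEM CHECKPOINT;
-- 4. Drop the old groups once v$log shows them INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE DROP LOGFILE GROUP 2;
ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE DROP LOGFILE GROUP 4;
```

With this ordering, only a handful of switches (at most one per old group) plus a checkpoint should be needed, rather than 20.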
I have a primary database and a standby database. The archived redo logs are applied to the standby database every hour.
What is the DB version?
Why do you want to shut down the standby database?
You can either 1) cancel MRP, or 2) set log_archive_dest_state_2='defer' on the primary.
You do not need to shut down the standby database.
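The second option can be sketched as follows (assuming log_archive_dest_2 is the destination pointing at the standby):

```sql
-- On the primary: pause redo shipping to the standby destination
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = 'DEFER';
-- ... perform the standby maintenance ...
-- Re-enable shipping afterwards; gap resolution catches the standby up
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = 'ENABLE';
```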
Suppose I want to shut down the standby database; what procedure do I need to follow?
1) Cancel MRP:
SQL> alter database recover managed standby database cancel;
2) Shutdown Standby
3) startup mount
4) start MRP
SQL> alter database recover managed standby database disconnect from session;
At many sites I have come across the advice that I need to cancel managed recovery before shutting down the standby database.
You do not need to cancel MRP; please read what I have written above.
Example: if the current apply of archived redo logs covers 12:55 PM to 1:05 PM, what happens if I issue SHUTDOWN IMMEDIATE at 1 PM? Also, what happens if I issue SHUTDOWN IMMEDIATE after cancelling managed recovery? Suppose I am going to start my standby database again at 4 PM; what about the redo logs generated at 2 PM, 3 PM and 4 PM? Will all of those redo logs be applied when I start the standby database at 4 PM?
Recovery is performed based on SCN. So let's suppose:
Sequence: 100
FIRST_CHANGE: 20000
NEXT_CHANGE: 21000
If your MRP was stopped at sequence 100, your last applied SCN would be 21000. Whenever you restart MRP, it will look for SCN 21001, which exists in sequence 101, and recovery will be performed from there.
The recovery concept is the same for standby and primary databases.
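The sequence-to-SCN mapping described above can be checked with a query like this (a sketch; v$archived_log records FIRST_CHANGE# and NEXT_CHANGE# per sequence):

```sql
-- Which SCN range does each archived sequence cover?
SELECT sequence#, first_change#, next_change#
FROM   v$archived_log
WHERE  thread# = 1
ORDER  BY sequence#;
```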
Also, please let me know why managed recovery should be cancelled before shutting down the standby database.
To unplug safely: when you issue a shutdown, MRP is interrupted, so cancel it properly first and then shut down.
Hope this clears it up. :)
Similar Messages
-
Hi Guys,
How can I resize redo log files? Do I need to recreate the control file?
Regards
Anshuman
That would mean
alter database add logfile group
and
alter database drop logfile group.
You might need to
alter system switch logfile
in between, as the logfile group might be in use.
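Put together, a minimal sketch of those three statements (group number 4, the path and the size are illustrative):

```sql
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/redo04.log') SIZE 200M;
ALTER SYSTEM SWITCH LOGFILE;   -- move CURRENT off the old group
ALTER SYSTEM CHECKPOINT;       -- let the old group become INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 1;
```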
Could you please try to be more accurate? The OP might think you can just rm the logfiles!!!
Sybrand Bakker
Senior Oracle DBA -
How can I resize redo logs?
Hi, I want to know how I can resize the redo logs.
I have tried to resize the redo log files in the following way:
1] First, my current redo logs are:
GROUP# STATUS MEMBER SIZE
3 ONLINE /oradata/xyz/redo03.log 100M
2 ONLINE /oradata/xyz/redo02.log 100M
1 ONLINE /oradata/xyz/redo01.log 100M
I want to reduce this size, and I have tried the following:
1] First I created group 4 and added a member:
ALTER DATABASE ADD LOGFILE GROUP 4 ('/oradata/xyz/redo04.log') SIZE 30M;
then
alter system switch logfile;
In this way I created the remaining two redo log groups.
After creating them I deleted the old redo logs. Is this the correct method for reducing the redo log size?
Check Note 1035935.6 - Example of How To Resize the Online Redo Logfiles:
https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=1035935.6 -
Resizing redo log files.
Hello All,
I am using Oracle RAC 11.2.0.3 with ASM.
I need your help to resize my redo logs. I know how to do it on a single-instance Oracle database, but my question concerns Oracle RAC and ASM with Oracle Managed Files.
Below is some info based on the result returned by this query:
select l.group#,l.thread#, f.member, l.archived, l.status, (bytes / 1024 / 1024) fsize
from v$log l, v$logfile f where f.group# = l.group#
order by 1, 2
1 1 +FRA/istprod/onlinelog/group_1.257.787008381 NO CURRENT 50
1 1 +DATA/istprod/onlinelog/group_1.261.787008381 NO CURRENT 50
2 1 +FRA/istprod/onlinelog/group_2.258.787008383 YES INACTIVE 50
2 1 +DATA/istprod/onlinelog/group_2.262.787008381 YES INACTIVE 50
3 2 +DATA/istprod/onlinelog/group_3.265.787008427 NO CURRENT 50
3 2 +FRA/istprod/onlinelog/group_3.259.787008427 NO CURRENT 50
4 2 +DATA/istprod/onlinelog/group_4.266.787008427 YES INACTIVE 50
4 2 +FRA/istprod/onlinelog/group_4.260.787008429 YES INACTIVE 50
I have the below questions since I have RAC and ASM with Oracle Managed Files.
1. Should I connect first to instance 1 and do the resizing for thread 1, and then connect to instance 2 and do the resizing for thread 2?
2. Because of ASM should I use the below syntax when adding a redo log?
alter database add logfile THREAD 1 group 4 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 100M;
3. When forcing a checkpoint, should I use the "global" syntax as below?
ALTER SYSTEM CHECKPOINT GLOBAL;
4. Any other notes to take into consideration? Is there any document that helps with resizing redo logs in my case (RAC + ASM + Oracle Managed Files)?
Regards,
NB wrote:
I have the below questions since I have RAC and ASM with Oracle Managed Files.
1. Should I connect first to instance 1 and do the resizing for thread 1, and then connect to instance 2 and do the resizing for thread 2?
Yes, that would be appropriate.
2. Because of ASM should I use the below syntax when adding a redo log?
alter database add logfile THREAD 1 group 4 ('+DATA(ONLINELOG)','+FRA(ONLINELOG)') SIZE 100M;
Yes.
3. When forcing a checkpoint, should I use the "global" syntax as below?
ALTER SYSTEM CHECKPOINT GLOBAL;
Yes.
4. Any other notes to take into consideration? Is there any document that helps with resizing redo logs in my case (RAC + ASM + Oracle Managed Files)?
Not really something that comes to mind right away. But you have very small log files at the moment and you are now adding (or planning) files of about 100M. You may want to check that the size you have chosen is large enough not to cause you any checkpointing issues later on.
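One way to judge whether the chosen size is adequate is to look at how often log switches occur; a sketch using v$log_history over the last day:

```sql
-- Log switches per hour over the last 24 hours; frequent switches
-- (e.g. more than a few per hour at peak) suggest the logs are too small
SELECT TRUNC(first_time, 'HH') AS hr, COUNT(*) AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TRUNC(first_time, 'HH')
ORDER  BY 1;
```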
HTH
Aman.... -
Resizing redo log files on a 3 node RAC with single node standby database
Hi
On a 3-node 11g RAC system, I have to resize the redo logs on the primary database from 50M to 100M. I was planning to do the following steps:
SQL> select group#,thread#,members,status from v$log;
GROUP# THREAD# MEMBERS STATUS
1 1 3 INACTIVE <-- whenever INACTIVE, a logfile group can be dropped
2 1 3 CURRENT and re-created at the new size; switch logfile changes the current group
3 1 3 INACTIVE
4 2 3 INACTIVE
5 2 3 INACTIVE
6 2 3 CURRENT
7 3 3 INACTIVE
8 3 3 INACTIVE
9 3 3 CURRENT
9 rows selected.
SQL> alter database drop logfile group 1;
Database altered.
SQL> ALTER DATABASE ADD LOGFILE THREAD 1
GROUP 1 (
'/PROD/redo1/redo01a.log',
'/PROD/redo2/redo01b.log',
'/PROD/redo3/redo01c.log'
) SIZE 100M reuse;
Database altered.
However, I am not sure what needs to be done for the standby. standby_file_management is set to AUTO and it is a single-instance standby.
SQL> select group#,member from v$logfile where type='STANDBY';
GROUP#
MEMBER
10
/PROD/flashback/PROD/onlinelog/o1_mf_10_7b44gy67_.log
11
/PROD/flashback/PROD/onlinelog/o1_mf_11_7b44h7gy_.log
12
/PROD/flashback/PROD/onlinelog/o1_mf_12_7b44hjcr_.log
Please let me know.
Thanks
Sumathy
Hello;
For the online redo and standby redo logs, this won't help:
standby_file_management is set to auto
On the standby, cancel recovery, then drop and recreate the online redo and/or standby redo logs.
Then start recovery again.
Example (I have a habit of removing the old file at the OS level to avoid REUSE and conflicts):
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='MANUAL';
alter database add standby logfile group 4
('/u01/app/oracle/oradata/orcl/standby_redo04.log') size 100m;
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='AUTO';
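To complete the example, recovery would then be restarted on the standby (a sketch of the "start recovery again" step mentioned above):

```sql
-- Restart managed recovery after the standby logs have been recreated
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```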
Notes worth reviewing :
Online Redo Logs on Physical Standby [ID 740675.1]
Error At Standby Database Ora-16086: Standby Database Does Not Contain Available Standby Log Files [ID 1155773.1]
Example of How To Resize the Online Redo Logfiles [ID 1035935.6]
Best Regards
mseberg -
Question about how Oracle manages Redo Log Files
Good morning,
Assuming a configuration that consists of 2 redo log groups (Group A and B), each group consisting of 2 disks (Disks A1 & A2 for Group A and Disks B1 and B2 for group B). Further, let's assume that each redo log file resides by itself in a disk storage device and that the device is dedicated to it. Therefore in the above scenario, there are 4 disks, one for each redo log file and, each disk contains nothing else other than a redo log file. Furthermore, let's assume that the database is in ARCHIVELOG mode and that the archive files are stored on yet another different set of devices.
sort of graphically:
GROUP A GROUP B
A1 B1
A2 B2
The question is: when the disks that comprise Group A are filled and Oracle switches to the disks in Group B, can the disks in Group A be taken offline, maybe even physically removed from the system if necessary, without affecting the proper operation of the database? Can the archiver process be temporarily delayed until the removed disks are brought back online, or is the DBA forced to wait until the archiver process has finished creating a copy of the redo log file in the archive?
Thank you for your help,
John.
Hello,
Dropping Log Groups
To drop an online redo log group, you must have the ALTER DATABASE system privilege. Before dropping an online redo log group, consider the following restrictions and precautions:
* An instance requires at least two groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.)
* You can drop an online redo log group only if it is inactive. If you need to drop the current group, first force a log switch to occur.
* Make sure an online redo log group is archived (if archiving is enabled) before dropping it. To see whether this has happened, use the V$LOG view.
SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
GROUP# ARC STATUS
1 YES ACTIVE
2 NO CURRENT
3 YES INACTIVE
4 YES INACTIVE
Drop an online redo log group with the SQL statement ALTER DATABASE with the DROP LOGFILE clause.
The following statement drops redo log group number 3:
ALTER DATABASE DROP LOGFILE GROUP 3;
When an online redo log group is dropped from the database, and you are not using the Oracle Managed Files feature, the operating system files are not deleted from disk. Rather, the control files of the associated database are updated to drop the members of the group from the database structure. After dropping an online redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files.
When using Oracle-managed files, the cleanup of operating systems files is done automatically for you.
Your database won't be affected, because you can operate with two redo log groups: the minimum number of redo log groups required in a database is two, since the LGWR (log writer) process writes to the redo log files in a circular manner. With only two groups you cannot drop either one, so if you want to take one group offline, first add a third group, force a switch so it becomes the current group, and then remove the one you want offline.
Please refer to:
http://download.oracle.com/docs/cd/B10500_01/server.920/a96521/onlineredo.htm#7438
Kind regards
Mohamed
Oracle DBA -
About automatic disappearance of Redo log file
I had a free Oracle9i Release 1 (9.0.1) CD and I installed Oracle9i on my PC.
When using the Database Configuration Assistant to create a database (I chose not to create a database during the installation of 9i), after the clone of the database it starts the database, and an error comes up.
It says there was an error writing to the redo01.log file. I checked that file: it existed in the related folder, but then it disappeared; at one point I had the redo01, 02 and 03 log files. It confused me.
Does anyone give something on that?
Thanks.
Well, seriously, you need to read the basic Oracle documents.
To give short answers to your question.
Redo logs are required for instance and crash recovery of your system.
You need to have a minimum of two redo groups with a minimum of one redo member in each group. They are written in a circular fashion, one after another. If you maintain your database in archivelog mode, before a filled redo group member is rewritten/reused, it is archived by the ARCH process to an archive file, which can be used for database recovery.
Every database requires at least two redo groups, and you can't drop below that minimum.
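The log mode and the two-group minimum can be checked with queries like these (a sketch):

```sql
-- Is the database archiving filled logs before reuse?
SELECT log_mode FROM v$database;       -- ARCHIVELOG or NOARCHIVELOG
-- How many groups exist, and which one is LGWR currently writing?
SELECT group#, members, status FROM v$log;
```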
Jaffar -
Need to understand when redo log file content is written to datafiles
Hi all
I have a question about when the contents of the redo log files are written to the datafiles.
Supposing that the database is in NOARCHIVELOG mode and all redo log files are filled, the official Oracle database documentation says: *a filled redo log file is available
after the changes recorded in it have been written to the datafiles*, which would mean that we just need to have all the redo log files filled to "*commit*" changes to the database.
Thanks for help
Edited by: rachid on Sep 26, 2012 5:05 PM
rachid wrote:
the official oracle database documentation says that: a filled redo log file is available after the changes recorded in it have been written to the datafiles
It helps if you include a URL to the page where you found this quote (if you were using the online HTML manuals).
The wording is poor and should be modified to something like:
<blockquote>
+"a filled online redo log file is available for re-use after all the data blocks that have been changed by change vectors recorded in the log file have been written to the data files"+
</blockquote>
Remember if a data block that is NOT an undo block has been changed by a transaction, then an UNDO block has been changed at the same time, and both change vectors will be in the redo log file. The redo log file cannot, therefore, be re-used until the data block and the associated UNDO block have been written to disc. The change to the data block can thus be rolled back (uncommitted changes can be written to data files) because the UNDO is also available on disc if needed.
If you find the manuals too fragmented to follow you may find that my book, Oracle Core, offers a narrative description that is easier to comprehend.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Author: <b><em>Oracle Core</em></b> -
Whole Database Online + redo log backup: How Long?
Hi gurus,
There's something I want to ask about online backup with redo log.
If I'm correct, a redo log file is created every time there's an operation that changes a datafile, right?
If I schedule this type of backup every day, it means it will back up all datafiles, along with the redo log files written while transactions take place, at my pre-defined hours, right?
Then here's what I'm confused about. When will this backup process finish? Is it when all the datafiles have been backed up? Then what about the redo log files? If there are users that keep making transactions (and therefore making changes to datafiles), new redo log files will be created on disk, and that will prevent the backup process from finishing. When will this cycle come to an end?
Please confirm whether my grasp of Oracle online backup is correct, and provide an explanation to satisfy my curiosity.
Thanks gurus,
Edited by: Bobby Gunawan on Jan 6, 2009 10:16 AM
Hello Fidel,
I am feeling honored.
> Keep in mind that the only thing where a DBA cannot make mistakes is restore/recovery and this depends on your backups
In general I would say you are right, but I have already seen one case where this statement is not true.
Some time ago I got a call from a colleague whose database had crashed; the online redo log files were corrupted and the recovery was not working.
If you don't look really closely at this point, you are in trouble and cannot continue the recovery. Let me explain this with an example.
This demonstration was done on Oracle 10.2.0.4, but it works on every other version too.
Let's simulate a crash
SQL> shutdown abort;
Corrupt/delete a specific redolog file
SQL> startup
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: '/oracle/TST/oradata/redolog/redo03.log'
ORA-27037: unable to obtain file status
OK - so far so good; let's check the groups and files
SQL> select GROUP#, STATUS, MEMBER from v$logfile where TYPE = 'ONLINE';
GROUP# STATUS MEMBER
1 /oracle/TST/oradata/redolog/redo01.log
2 /oracle/TST/oradata/redolog/redo02.log
3 /oracle/TST/oradata/redolog/redo03.log
SQL> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 95 52428800 1 NO CURRENT 5117817 06-JAN-09
3 1 94 52428800 1 YES ACTIVE 5117814 06-JAN-09
2 1 93 52428800 1 YES ACTIVE 5112855 15-DEC-08
What's the situation?
The online redolog file of group 3 is lost/corrupted, and this group is needed to perform a complete recovery (see status ACTIVE).
But you are lucky, because group 3 has already been archived - so you can perform a complete recovery.
Now let's perform a complete recovery, but with the UNTIL clause (because we need to jump between the online and archived redologfiles):
SQL> recover database until cancel;
ORA-00279: change 5116194 generated at 01/06/2009 21:06:48 needed for thread 1
ORA-00289: suggestion : /oracle/TST/oraarch/TST_1_93_6b8c0516_666969185.arc
ORA-00280: change 5116194 for thread 1 is in sequence #93
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
ORA-00279: change 5117814 generated at 01/06/2009 21:08:47 needed for thread 1
ORA-00289: suggestion : /oracle/TST/oraarch/TST_1_94_6b8c0516_666969185.arc
ORA-00280: change 5117814 for thread 1 is in sequence #94
ORA-00278: log file '/oracle/TST/oraarch/TST_1_93_6b8c0516_666969185.arc' no longer needed for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
ORA-00279: change 5117817 generated at 01/06/2009 21:08:51 needed for thread 1
ORA-00289: suggestion : /oracle/TST/oraarch/TST_1_95_6b8c0516_666969185.arc
ORA-00280: change 5117817 for thread 1 is in sequence #95
ORA-00278: log file '/oracle/TST/oraarch/TST_1_94_6b8c0516_666969185.arc' no longer needed for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/oracle/TST/oradata/redolog/redo01.log
Log applied.
Media recovery complete.
Now execute an OPEN NORESETLOGS accidentally (maybe the DBA thinks it is not necessary because of the complete recovery) and try an OPEN RESETLOGS afterwards:
SQL> alter database open noresetlogs;
alter database open noresetlogs
ERROR at line 1:
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: '/oracle/TST/oradata/redolog/redo03.log'
ORA-27037: unable to obtain file status
SQL> alter database open resetlogs;
alter database open resetlogs
ERROR at line 1:
ORA-01139: RESETLOGS option only valid after an incomplete database recovery
So now you are lost: you can't execute an UNTIL CANCEL recovery anymore, which would be needed to perform a successful OPEN RESETLOGS.
So in this special case you can make mistakes, and then you have to restore the whole database, perform the same recovery again, and end with OPEN RESETLOGS.
Just for information
Regards
Stefan -
Dataguard Solution for standby redo log file groups
Respected Experts,
My database version is 10.2.0.1.0 and the OS is Red Hat 5. I want to create a standby database using RMAN.
Can anyone help me with the full steps? I'm also confused about the number of standby redo log file members
that need to be created.
Thanks and Regards
Monoj Das
My database version is 10.2.0.1.0 and the OS is Red Hat 5. I want to create a standby database using RMAN.
To configure the standby, you can either use 'duplicate target database for standby'
or
1) restore standby controlfile
2) mount standby database
3) restore database
and configure the standby parameters, then start MRP; that will do.
http://docs.oracle.com/cd/B19306_01/server.102/b14239/create_ps.htm
Can anyone help me with the full steps? I'm also confused about the number of standby redo log file members that need to be created.
It depends which parameter you want to use. If you specify log_archive_dest_2='service ARCH ' then there is no need to create any standby redo log file groups.
If you use log_archive_dest_2='service LGWR ', transport is done in terms of redo and you need standby redo log files on the standby database; this is real-time.
When you use LGWR, less data is lost if an online redo log file is lost, which is why it is recommended.
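With LGWR transport, standby redo logs are added on the standby with statements like this (a sketch; the group number and path are illustrative, and the size should match the online redo log size):

```sql
-- One standby redo log group; repeat for (logs per thread + 1) groups
ALTER DATABASE ADD STANDBY LOGFILE GROUP 10
  ('/u01/oradata/stby_redo10.log') SIZE 50M;
```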
HTH. -
What's the point of redo logs?
Why does Oracle bother writing everything to redo logs? If it's going to write data changes to the disk, why not just write them once to the data files and be done with it? What's the point of doing it twice? And if it's a redundancy thing, why not mirror the data disks?
Hemant K Chitale wrote:
How would you backup a database while it is in use ? You can't lock all the datafiles to prevent writes to them. Yet, transactions may be updating different blocks in different datafiles even as the backup is in progress. Say your backup starts with datafile 1 (or even datafiles 1,2,3,4 in parallel) at time t0. By time t5, it has copied 20% of the datafile to tape or alternate disk backup location. Along comes a transaction that updates the 100th block (somewhere within the 10-11% range) of datafile 1 and also the 60th block of datafile 5. Meanwhile, the backup continues running, already having taken a prior image of the 100th block and not being aware that the block has been changed. At time t25 it completes datafile 1 (or datafiles 1,2,3,4) and starts backing up datafile 5. Now, when it copies the 60th block of datafile 5, it (the backup utility) doesn't know that this block is inconsistent with the backup image of the 100th block of datafile 1.
Instead of 1 transaction imagine 100 or 1000 transactions occurring while the backup is running.
Surely, Oracle must be able to regenerate a consistent image of the whole database when it is restored ?
That is what the Redo stream provides. The Redo stream is written to Archivelogs so that it can be backed up -- no Archivelog file is "in flux" (particularly if you use RMAN to backup the Archivelogs as well !).
Had Oracle been merely writing to the datafiles alone, without a Redo stream, there is no way it could recreate a consistent database -- whether after Crash Recovery OR after Media Recovery.Interesting point about how redo logs facilitate backups. So what you're saying is that the redo logs help keep the data in the actual data files in a consistent state by only writing full transactions to them at a time. Presumably Oracle will either write out the redo log data to the data files before a backup or will at least prevent the redo logs from writing to the data files during a backup. I always wondered how databases got around that problem of keeping the system available for writing during a backup. I wonder how SQL Server does it.
Hemant K Chitale wrote:
Now, approach this from another angle. A database consists of 10 or 100 or 500 datafiles. You have 10 or 100 or 1000 sessions issuing COMMITs to complete their transactions, which could be of 1 row or 100 rows or 1million rows, each transaction of a different size. Should the 1000 sessions be forced to wait while Oracle writes all those updated blocks to disk in different datafiles -- how many blocks can it write in "an instant" ?
But what if Oracle manages to write much less information -- the bare minimum (called "change vectors") to re-play every transaction to a single file serially ? That would be much faster. Imagine writing to 500 datafiles concurrently, having to open the file, progess to the required block address and update the block, for each block changed in each file VERSUS writing much lesser information serially to a single file -- if the file is full, switch to another file, but keep writing serially.As to your second point, I don't really have a good enough understanding about the format of redo logs vs. the data files to follow you totally. Are you saying that it takes more time to write to the data files because you have to find the proper place in the B-Tree before you can write to it? And that doing that is slower than just opening the redo log and always appending new information to the very end? Maybe so, but it seems like all transactions having to write to a single redo log in serial would slow things down since there would be a ton of contention for one file. Whereas with the data files, you could potentially have several transactions writing to different files simultaneously (provided you hardware would support doing that). And it seems to me like a change vector would contain a lot more information than a field value, but, like I said, I'm not really familiar with the format. -
Hi,
I am running Oracle 11g on Red Hat with a data block size of 8K
I have a question on the efficiency of redo logs.
I have an application that extracts data from one DB and updates a second DB based on this data (not a copy). I have the option of doing it as a single mass insert such as the one below. My question is whether this is more efficient (purely in terms of redo log generation) than having discrete single-row inserts. The reason for my question is that the current ETL process design is to drop the table and do a mass re-insert, and there have been some concerns raised about the impact on redo log generation. I am of the belief that it will be efficient, as there will only be one set of log entries per block impacted.
Picking your brains and gaining from your knowledge is much appreciated.
Cheers,
Daryl
INSERT ALL
INTO TMP_DIM_EXCH_RT
(EXCH_WH_KEY,
EXCH_NAT_KEY,
EXCH_DATE, EXCH_RATE,
FROM_CURCY_CD,
TO_CURCY_CD,
EXCH_EFF_DATE,
EXCH_EFF_END_DATE,
EXCH_LAST_UPDATED_DATE)
VALUES (1, 1, '28-AUG-2008', 109.49, 'USD', 'JPY', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008')
INTO TMP_DIM_EXCH_RT VALUES (2, 1, '28-AUG-2008', .54, 'USD', 'GBP', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008')
INTO TMP_DIM_EXCH_RT VALUES (3, 1, '28-AUG-2008', 1.05, 'USD', 'CAD', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008')
INTO TMP_DIM_EXCH_RT VALUES (4, 1, '28-AUG-2008', .68, 'USD', 'EUR', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008')
INTO TMP_DIM_EXCH_RT VALUES (5, 1, '28-AUG-2008', 1.16, 'USD', 'AUD', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008')
INTO TMP_DIM_EXCH_RT VALUES (6, 1, '28-AUG-2008', 7.81, 'USD', 'HKD', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008')
SELECT * FROM dual;
darylo wrote:
I have the option of doing it as a single mass insert such as the one below. My question is whether this is more efficient (purely in terms of redo log generation) than having discrete single-row inserts.
A single mass insert is much more efficient in terms of redo usage, and almost everything else.
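A comparison like the statistics below can be reproduced by sampling the session's own 'redo size' statistic before and after each run (a sketch; requires SELECT privilege on the v$ views):

```sql
-- Current session's cumulative redo bytes; run before and after the
-- insert workload and take the difference
SELECT s.value AS redo_bytes
FROM   v$mystat s
JOIN   v$statname n ON n.statistic# = s.statistic#
WHERE  n.name = 'redo size';
```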
{message:id=4337747}
Run1 - Single insert, Run2 - multiple inserts for same data.
Name Run1 Run2 Diff
STAT...Heap Segment Array Inse 554 13 -541
STAT...free buffer requested 222 909 687
STAT...redo subscn max counts 221 990 769
STAT...redo ordering marks 44 871 827
STAT...calls to kcmgas 46 873 827
STAT...free buffer inspected 133 982 849
LATCH.object queue header oper 625 2,624 1,999
LATCH.simulator hash latch 223 6,005 5,782
STAT...redo entries 1,258 100,643 99,385
STAT...HSC Heap Segment Block 555 100,014 99,459
STAT...session cursor cache hi 5 100,010 100,005
STAT...opened cursors cumulati 7 100,014 100,007
STAT...execute count 7 100,014 100,007
STAT...session logical reads 2,162 102,988 100,826
STAT...db block gets 1,853 102,780 100,927
STAT...db block gets from cach 1,853 102,780 100,927
STAT...recursive calls 33 101,092 101,059
STAT...db block changes 1,873 201,552 199,679
LATCH.cache buffers chains 7,176 507,892 500,716
STAT...undo change vector size 240,500 6,802,736 6,562,236
STAT...redo size 1,566,136 24,504,020 22,937,884 -
Confused about standby redo log groups
hi masters,
I am a little bit confused about creating redo log groups for a standby database. As per the documentation, the number of standby redo log groups depends on the following equation:
(maximum number of logfiles for each thread + 1) * maximum number of threads
but I don't know where to find the threads; actually I would like to understand threads in depth.
How do I find the current thread?
thanks and regards
VD
Is it really possible that we can install standby and primary on the same host?
Yes, it's possible, and I have done it many times on the same machine.
As for your confusion about the spfile: I agree the documentation recommends you use an spfile, but that is for DG Broker handling, and only matters if you go with DG Broker in the future.
Using an spfile is not an integral step for a primary and standby database implementation; you can go with a pfile, though using an spfile is good practice. In any case, always keep the pfile from which you created the spfile. As I said, make the entry in the pfile and then mount your standby database with this pfile, or create the spfile from this pfile after adding these parameters to it; I say this because you might otherwise be adding these parameters from the SQL prompt.
1. Logs are not getting transferred (even though I configured the listener using Net Manager).
2. Logs are not getting archived at the standby directory.
3. 'ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION' never completes its recovery.
4. When I tried to open the database, it always said the system datafile is not from a sufficiently old backup.
5. I tried 'alter database recover managed standby database cancel' also.
Read your alert log file and paste the latest log here.
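On the thread question from the original post: each RAC instance writes its own redo thread, and the threads can be listed with a query like this (a sketch):

```sql
-- One row per redo thread; INSTANCE shows which instance owns it
SELECT thread#, status, instance FROM v$thread;
```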
Khurram -
Resizing online and standby redo log in dataguard setup.
In 10gR2 Data Guard, I would like to increase the redo log size from 50M to 100M.
on primary
standby_file_management=manual
added online redo log groups with 100M
switched the logfile
dropped the old ones and re-added them with 100M
deleted the logs added in step 2
same for standby redo logs.
On standby
was able to resize standby redo logs.
but I cannot resize the online redo logs; their status is CLEARING or CLEARING_CURRENT.
Please comment. Thanks.
I assume you just had to wait until the primary switched out of that online log, so that it became inactive at the standby as well? We track where the primary is by marking the online redo log files at the standby as CLEARING_CURRENT, so you can tell where the primary was at any given moment.
Make sure you create new standby redo log files at the Primary and Standby to match the new online redo log file size.
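A quick check that the standby redo logs match the new online log size (a sketch, run on either site):

```sql
-- Standby redo logs with their sizes in MB; compare against v$log
SELECT group#, thread#, bytes/1024/1024 AS mb, status
FROM   v$standby_log;
```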
Larry -
Hi Experts
I need to resize my online redo logs; at present the size is 100MB. Can you please guide me and give me the steps so I can accomplish this activity successfully?
my DB Version is 10g(10.2.0.3) and O/S is RHEL AS 4.6
Regards,
To change the size of online logfiles, you need to drop them and recreate them with the new size. However, you cannot drop the CURRENT or an ACTIVE log group; to avoid this you can use 'alter system switch logfile' and 'alter system checkpoint'. In the following example you will see this.
C:\Documents and Settings\Administrator>set ORACLE_SID=TEST
C:\Documents and Settings\Administrator>sqlplus "/as sysdba"
SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jul 29 16:42:04 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> select group||' - '||member from v$logfile;
select group||' - '||member from v$logfile
ERROR at line 1:
ORA-00936: missing expression
SQL> select group#||' - '||member from v$logfile;
GROUP#||'-'||MEMBER
1 - D:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\REDO01.LOG
3 - D:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\REDO03.LOG
2 - D:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\REDO02.LOG
SQL> desc v$log;
Name Null? Type
GROUP# NUMBER
THREAD# NUMBER
SEQUENCE# NUMBER
BYTES NUMBER
MEMBERS NUMBER
ARCHIVED VARCHAR2(3)
STATUS VARCHAR2(16)
FIRST_CHANGE# NUMBER
FIRST_TIME DATE
SQL> select group#||' - '||status from v$log;
GROUP#||'-'||STATUS
1 - INACTIVE
2 - INACTIVE
3 - CURRENT
SQL> alter database drop logfile group 1;
Database altered.
SQL> select group#||' - '||status from v$log;
GROUP#||'-'||STATUS
2 - INACTIVE
3 - CURRENT
SQL> alter database add logfile group 1 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\R
EDO_01.LOG' size 50M;
Database altered.
SQL> select group#||' - '||status from v$log;
GROUP#||'-'||STATUS
1 - UNUSED
2 - INACTIVE
3 - CURRENT
SQL> alter database drop logfile group 2;
Database altered.
SQL> alter database add logfile group 2 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\R
EDO_02.LOG' size 50M;
Database altered.
SQL> select group#||' - '||status from v$log;
GROUP#||'-'||STATUS
1 - UNUSED
2 - UNUSED
3 - CURRENT
SQL> alter system switch logfile;
System altered.
SQL> alter system switch logfile;
System altered.
SQL> select group#||' - '||status from v$log;
GROUP#||'-'||STATUS
1 - ACTIVE
2 - CURRENT
3 - ACTIVE
SQL> alter system checkpoint;
System altered.
SQL> select group#||' - '||status from v$log;
GROUP#||'-'||STATUS
1 - INACTIVE
2 - CURRENT
3 - INACTIVE
SQL> alter database drop logfile group 3;
Database altered.
SQL> alter database add logfile group 3 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\R
EDO_03.LOG' size 50M;
Database altered.
SQL>