GG SCN
Hi,
Could you please clarify the following questions:
1. Once supplemental log data is enabled in the database, does that supplemental log data consume more disk space than the actual redo log file size?
2. How does the Extract process fetch the changed data from the online redo log files?
3. We have the SCN in the database, so we can start the Extract process based on the SCN (CSN in GG); why do we need the RBA? Where is the RBA actually generated, and in what scenarios can we use the RBA detail?
4. What is the difference between SCN and RBA?
Please kindly clarify the above queries; that would be very helpful for understanding the internal functionality of GG.
Because of the need for supplemental logging, you will have more log switching. The files are still about the same size (when archiving, for example, Oracle places additional information in the log files, so even though you specify 100M as the size, you may see something slightly different). Using some numbers: suppose your online redo log (ORL) previously held 1000 committed txns when "full." With supplemental logging, it may now switch at around 800 txns.
Files are files are files. There are utilities to read data in database files (BBED, for example). Reading the contents of a transaction log is no different. The log contents contain not only the SQL, but also who made the change and when. Extract simply reads/extracts transactions of interest, i.e., those tables you identified in the Extract parameter file with the TABLE parameter. Extract (GoldenGate, to be more precise) extracts committed transactions, not every transaction with changes (committed txns are around 40% or so overall).
RBA allows you to start at a more specific point within a trail file. Trail files contain unstructured data. The RBA is like a pointer or address within the file; it is the same idea used to reposition through a data file (if you ever use BBED).
Think of statement, statement, statement, then commit: one SCN, but multiple RBAs within the trail.
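The "one commit SCN, many records" point can be seen from plain SQL as well. A minimal sketch, assuming a scratch table you are free to create; ROWDEPENDENCIES makes ORA_ROWSCN track SCNs per row rather than per block:

```sql
-- Illustrative only: three statements, one commit, one commit SCN.
create table scn_demo (id number) rowdependencies;

insert into scn_demo values (1);
insert into scn_demo values (2);
insert into scn_demo values (3);
commit;

-- All three rows report the same commit SCN; their redo records still
-- occupy distinct RBAs in the log (and distinct RBAs in the trail).
select id, ora_rowscn from scn_demo;
```

The SCN answers "when did this commit happen, logically"; the RBA answers "where, physically, does this record sit in the file".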
Similar Messages
-
In case of control file failure, how does the CREATE CONTROLFILE command get the SCN?
I picked the following lines from
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:5033895918209
======================================================
1. We can use the 'alter database rename ' at mount stage to rename any datafile. Or is it not possible to rename the system datafile like this? Why?
2. What happens to the SCN information in the controlfile when a controlfile is recreated? How will the database sync the SCN with that of the datafiles?
If I issue a 'backup controlfile to <file>' at 8 am and then restore that controlfile binary backup at 10 am and try to open the database, it will give me a "control file old" error. I understand that this is because the SCN is not in sync. But if I issue a 'backup controlfile to trace' at 8 am and use that script to recreate a new controlfile at 10 am, why don't I get the error? Where does it get the SCN information then?
So what is the use of taking a binary copy of the controlfile? It looks like having a 'backup controlfile to trace' script is better than a binary backup. Do you agree? Why/why not?
Followup August 16, 2002 - 2pm US/Eastern:
1) you could but I just always did it with the create controlfile statement.
When moving system -- I do it that way
When moving ANY OTHER tablespace -- i just offline it, move the files, rename the files online it.
2) it just happens.
The control file you create will read the files to figure out what is up.
I agree, I've never used a binary controlfile backup myself.
=========================================================
My question - in point 2 above: where does it get the SCN information, and how does the control file sync the SCN with the data files?
"1. The CREATE CONTROLFILE reads SCNs from the datafiles. If the
database was last shut down cleanly, all the datafiles are "non-fuzzy" and have the same
SCN (as of the shutdown checkpoint). If the database or some of the files are from
a hot backup, you cannot open the database because the SCN of some files is
older (lower) than others; that is why a RECOVER (DATABASE or DATAFILE) is
required.
See http://web.singnet.com.sg/~hkchital/Incomplete_Recovery_with_BackupControlfile.doc
2. I'm not sure I agree with Tom Kyte's response
"I agree, I've never used a binary controlfile backup myself. "
to the question
"So what is the use of taking a binary copy of the controlfile. Looks like having a 'backup controlfile to trace' script is better than a binary backup. Do you agree? Why/why not?"
If you have lost your database (storage/filesystem failure) and all your datafiles are lost,
you cannot simply do a CREATE CONTROLFILE from a Trace -- because the
CREATE CONTROLFILE has to read and verify all the datafiles specified in the
CREATE statement. If you have an RMAN Repository, you can use that to restore
your database files, but otherwise the RMAN information about backups and backup sets
is in the binary controlfile.
That is why it is important to take binary controlfile backups either manually or
using RMAN or using CONFIGURE CONTROLFILE AUTOBACKUP ON. -
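In practice that usually means leaving autobackup on. A minimal RMAN sketch (the format string and path are illustrative; the same commands appear in the backup scripts quoted later in this thread):

```sql
-- Run inside RMAN, connected to the target (and catalog, if used).
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/backup/%F';

-- A manual binary copy is also possible from SQL*Plus:
-- ALTER DATABASE BACKUP CONTROLFILE TO '/u01/backup/ctrl_pm.ctl' REUSE;
```

With autobackup on, the controlfile (and spfile) are re-backed up automatically after each backup and after structural changes, so the RMAN repository data survives a total loss of the live controlfiles.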
How to get the last SCN number from catalog database
Hi All,
I have a catalog database where my PROD database is registered. Every day at 12 AM, RMAN takes a hot backup of PROD.
Now I want to create an auxiliary database using the last RMAN backup; for this I want to restore using the SCN from the catalog views.
Please help me to get the SCN number from the RC_ views.
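One hedged way to pull a usable SCN from the recovery catalog, assuming the backup you want is the most recent one for the target; RC_BACKUP_DATAFILE is a standard catalog view, but verify the columns against your catalog version:

```sql
-- Highest datafile checkpoint SCN recorded in the catalog for PROD's backups.
select max(checkpoint_change#) as until_scn
from rc_backup_datafile
where db_name = 'PROD';
```

That SCN (or a slightly later one covered by backed-up archive logs) can then feed a SET UNTIL SCN clause in the DUPLICATE run.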
Regards,
Bikram

asifkabirdba wrote:
Current SCN:
Use the dbms_flashback package to get the current SCN. This value will be used during instantiation at the destination site, as well as by RMAN when duplicating the database.
SET SERVEROUTPUT ON
DECLARE
  until_scn NUMBER;
BEGIN
  until_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
  DBMS_OUTPUT.PUT_LINE('Until SCN: ' || until_scn);
END;
/
Regards
Asif Kabir

Hello,
I am a bit confused:
SELECT CURRENT_SCN FROM V$DATABASE;
6272671324
and from your package
SET SERVEROUTPUT ON
DECLARE
  until_scn NUMBER;
BEGIN
  until_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
  DBMS_OUTPUT.PUT_LINE('Until SCN: ' || until_scn);
END;
/
Until SCN: 6272671267
Why are they different, and why is the first result lower than yours? Can you explain, please? Thank you.
Ugur MIHCI -
How can I generate a large SCN for the database
Hi,
On databases where the SCN is larger than 2^32, my application seems to behave in a weird way. I am trying to reproduce this in my test setup, but I am unable to pump the SCN up to such a large value.
I am trying this on RHEL5 64-bit and WIN2003 64-bit with Oracle 11gR2, using a script which does some table creation, deletion, etc.
Does anyone know a method to increase the SCN value? This would be of great help.
--Amith

Simply selecting a row from v$database, in a loop, will increase the current SCN. Don't ask me why this happens, as I do not have an answer for it.
SQL> select current_scn from v$database;
CURRENT_SCN
2944216219
1 row selected.
SQL> select current_scn from v$database;
CURRENT_SCN
2944216221
1 row selected.
SQL> select current_scn from v$database;
CURRENT_SCN
2944216222
1 row selected.
SQL> -
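If querying V$DATABASE in a loop is too slow, committing in a loop advances the SCN more directly, since each commit consumes at least one SCN. A hedged sketch; scn_burner is an illustrative one-row table you would create first:

```sql
-- Setup (illustrative):
-- create table scn_burner (n number);
-- insert into scn_burner values (0);
-- commit;

begin
  for i in 1 .. 100000 loop
    update scn_burner set n = n + 1;
    commit;  -- each commit advances the SCN
  end loop;
end;
/
```

Comparing CURRENT_SCN from V$DATABASE before and after the loop shows the jump; run it repeatedly (or with a larger loop count) to approach the range you need to test.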
How can I determine the minimum SCN number I need to restore up to?
Say I have a full database backup and I know I have file inconsistency; I want to know the minimum time or SCN I need to roll forward to in order to be able to open the database.
For example: I do a database restore.
restore database ;
RMAN> sql 'alter database open read only';
sql statement: alter database open read only
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of sql command on default channel at 03/16/2009 15:00:04
RMAN-11003: failure during parse/execution of SQL statement: alter database open read only
ORA-16004: backup database requires recovery
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u01/oradata/p1/system01.dbf'
I need to apply archive log files. All references I find for ORA-01194 state the solution is to "apply more logs until the file is consistent." But how many logs, or more appropriately, up to what time or SCN? How does one determine what time or SCN is required to get all files consistent?
I thought this query might provide the answer, but it doesn't
select max(checkpoint_change#)
from v$datafile_header
MAX(CHECKPOINT_CHANGE#)
7985876903
--It applies a bit more redo, but not enough to make my datafiles consistent.
recover database until SCN=7985876903 ;
Starting recover at 03/16/09 15:04:54
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
using channel ORA_DISK_5
using channel ORA_DISK_6
using channel ORA_DISK_7
using channel ORA_DISK_8
starting media recovery
channel ORA_DISK_1: starting archive log restore to default destination
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=18436
channel ORA_DISK_1: reading from backup piece /temp-oracle/backup/hot/p1/20090315/hourly.arch_P1_47353_681538638_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/temp-oracle/backup/hot/p1/20090315/hourly.arch_P1_47353_681538638_1 tag=TAG20090315T041716
channel ORA_DISK_1: restore complete, elapsed time: 00:02:26
archive log filename=/u01/app/oracle/flash_recovery_area/P1/archivelog/2009_03_16/o1_mf_1_18436_4vxd81yc_.arc thread=1 se quence=18436
Oracle Error:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u01/oradata/p1/system01.dbf'
I've discovered I need to apply archive logs until this query reports all datafiles as FUZZY=NO, but this only works by guessing at some time period to roll forward to, then checking the FUZZY column, and trying again. Is there a way to know that I have to roll forward to a specific SCN in order for all my datafiles to be consistent?
select file#
, status
, checkpoint_change#
, checkpoint_time
, FUZZY
, RECOVER
,LAST_DEALLOC_SCN
from v$datafile_header
order by checkpoint_time
Thanks,
Jason

The minimum point in time is the time when the last backup piece for datafiles in that backup was completed.
Your alert.log should show the redo log sequence number at that time.
You can query V$ARCHIVED_LOG and get the FIRST_CHANGE# of the first archived log generated after that backup piece completed.
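That lookup can be sketched against V$ARCHIVED_LOG on the target; the completion time below is illustrative and would be replaced by the actual end time of the backup:

```sql
-- FIRST_CHANGE# of the earliest log archived after the backup finished:
-- redo up to at least this SCN must be applied before OPEN RESETLOGS.
select min(first_change#) as min_until_scn
from v$archived_log
where completion_time >= to_date('2009-03-15 05:00:00', 'YYYY-MM-DD HH24:MI:SS');
```

Recovering to (or past) that SCN covers the redo generated while the hot backup was still running, which is what leaves the datafiles fuzzy.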
A LIST BACKUP; in RMAN should also show you the SCNs at the time of the backups.
You can also use TIMESTAMP_TO_SCN (SCN_TO_TIMESTAMP does the reverse) -- e.g.
select timestamp_to_scn(to_timestamp('15-MAR-09 09:24:01','DD-MON-RR HH24:MI:SS')) from dual;
will return an approximation of the SCN.
Hemant K Chitale
http://hemantoracledba.blogspot.com
Edited by: Hemant K Chitale on Mar 17, 2009 9:41 AM
added the LIST BACKUP command from RMAN. -
How to create a single SCN [Inbound Delivery] 'VL31n' for multiple POs
How to create a single SCN [Inbound Delivery] 'VL31n' for multiple purchase orders with the help of a BAPI or BDC recording in version 4.6B.
Manually it is possible, but how is it possible in the background, i.e., through BDC recording or BAPIs?
[As we do not have the option of creating one SCN [Inbound Delivery] for multiple purchase orders in 'VL31n'.]
Please provide the needful information.

These mpeg2 clips do not have audio.
I simply want to create a script that can read in the files, append to each other and then export to an .mov format.
I want the process to be called from a command line that will open QT, run the script, read in the files, append and export.
Thanks.
G5 and Mac Pro Mac OS X (10.4.8) PC's and Windows -
Tips & Tricks to manage time to contribute in SCN
Hi SCN Mates,
The question of managing time to contribute in SCN and be in the top list of contributors is always a challenge, I wonder how other guys are managing it along with project work and personal life.
Through this thread I would like to ask all top contributors to share few tips on how do you manage your time to contribute in SCN and be in top?
I know with dedication and knowledge, at same time everyone will have their office work at priority after family/personal time.
Also contributing for an hour daily is also not possible because you never know whether you get ideas within an hour ;-).
Anyway, share your tips & tricks...
Thanks,
Umashanakr

People who become Top Contributors fall into two main categories:
Category 1 - There is no need to know anything about SAP at all, but since they are SAP professionals at the end of the day, they know enough to get points. They spend all their time on SCN. They must follow every single member of HCL, must go to each blog, like it, and write "thank you". They read every question posted on the forum, whether they know anything about it or not, and reply to it. They have excellent Google skills, so they google the answer and post links in reply to every question. They could vary their approach by becoming active in the Coffee Corner as well. Soon they will have a million points and be a Top Contributor.
Unfortunately, this method is not for those who have family/work/other commitments which are more important.
Category 2 - These people have built up a tremendous amount of knowledge after years of practising in the business. They are genuine experts. They do not come into SCN with the sheer intent of accumulating points. However, when they do come in, and see fit to respond to a post, it is invariably the correct answer or very helpful, that they get marked accordingly. These people post thoughtful blogs or comments or documents or discussions, which are useful and appreciated by the community and they get marked accordingly. Sooner or later, without really trying, but by simply practising their profession, these people accumulate enough points to become Top Contributors.
These people do not have a set strategy intended to accumulate points, but they do have a set strategy of being good and knowledgeable at their job. They manage their work/life balance in a way that suits them to achieve this goal.
I am afraid there is no fixed way to achieve lofty goals with little or no effort.
There is a 3rd category: people who are pretty decent in their knowledge, sporadically answer questions when they feel like it, and randomly get points for their responses. They will never become Top Contributors; however, they are not too worried about it, and they do get to spend time with their families and other interests. This is a technicality that need not detain us.
YMMV
Hope this helps. -
"ORA-01203 - wrong creation SCN" got during copy of a db on another machine
Hello colleagues,
I copy a database from one machine to a second one using this procedure:
I set each tablespace (data and temp) in backup mode
I copy the datafiles (data and temp)
I copy the control file
I copy archived redo logs
On the second machine I try to start up the database with the command
SQL> @/usr/Systems/1359HA_9.0.0_Master/HA_EOMS_1_9.0.0_Master/tmp/oracle/CACHE/apply_redo.sql;
ORACLE instance started.
Total System Global Area 423624704 bytes
Fixed Size 2044552 bytes
Variable Size 209718648 bytes
Database Buffers 209715200 bytes
Redo Buffers 2146304 bytes
Database mounted.
alter database recover automatic from '/usr/Systems/1359HA_9.0.0_Master/HA_EOMS_1_9.0.0_Master/data/warm_repl/WarmArchive/CACHE' database until cancel using backup controlfile
but the following errors are returned:
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-01110: data file 1: '/cache/db/db01/system_1.dbf'
ORA-01122: database file 1 failed verification check
ORA-01110: data file 1: '/cache/db/db01/system_1.dbf'
ORA-01203: wrong incarnation of this file - wrong creation SCN
You can see the mount command and the resulting error above.
What can I do to troubleshoot the problem?
thanks for the support
Enrico
The complete copy procedure is the following:
#!/bin/ksh
# Step 2 -- Verifying the DBMS ARCHIVELOG mode
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
spool ${ORACLE_TMP_DIR}/archive.log
archive log list;
spool off
EOF
grep NOARCHIVELOG ${ORACLE_TMP_DIR}/archive.log >/dev/null 2>&1
# Step 3 -- Creating DB_filenames.conf / DB_controfile.conf fles
[ -f ${ORACLE_TMP_DIR}/DB_filenames.conf ] && rm -f ${ORACLE_TMP_DIR}/DB_filenames.conf
[ -f ${ORACLE_TMP_DIR}/DB_controfile.conf ] && rm -f ${ORACLE_TMP_DIR}/DB_controfile.conf
[ -f ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf ] && rm -f ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
set linesize 600;
spool ${ORACLE_TMP_DIR}/DB_filenames.conf
select 'TABLESPACE=',tablespace_name from sys.dba_data_files;
select 'FILENAME=',file_name from sys.dba_data_files;
select 'LOGFILE=',MEMBER from v\$logfile;
spool off
EOF
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
set linesize 600;
spool ${ORACLE_TMP_DIR}/DB_controfile.conf
select name from v\$controlfile;
spool off
EOF
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
set linesize 600;
spool ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf
select 'TABLESPACE=',tablespace_name from sys.dba_temp_files;
select 'FILENAME=',file_name from sys.dba_temp_files;
spool off
EOF
note "Executing cp ${ORACLE_TMP_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf ..."
cp ${ORACLE_TMP_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf
[ $? -ne 0 ] && error "Error executing cp ${ORACLE_TMP_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf!"\
&& LocalExit 1
chmod ug+x ${INSTANCE_DATA_DIR}/DB_filenames.conf
note "Executing cp ${ORACLE_TMP_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf ..."
cp ${ORACLE_TMP_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf
[ $? -ne 0 ] && error "Error executing cp ${ORACLE_TMP_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf!"\
&& LocalExit 1
chmod ug+x ${INSTANCE_DATA_DIR}/DB_controfile.conf
note "Executing cp ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf ..."
cp ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf
[ $? -ne 0 ] && error "Error executing cp ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf!"\
&& LocalExit 1
chmod ug+x ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf
set -a
set -A arr_tablespace `grep "^TABLESPACE=" ${INSTANCE_DATA_DIR}/DB_filenames.conf | awk '{ print \$2 }'`
index=`grep "^TABLESPACE" ${INSTANCE_DATA_DIR}/DB_filenames.conf | wc -l`
backup_status=0
i=0
while [ $i -lt $index ]
do
note "tablespace=${arr_tablespace[$i]}"
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
set linesize 600;
spool ${ORACLE_TMP_DIR}/tablespace.log
select 'FILENAME=',file_name from sys.dba_data_files where tablespace_name='${arr_tablespace[$i]}';
spool off
alter tablespace ${arr_tablespace[$i]} end backup;
spool ${ORACLE_TMP_DIR}/backup_tablespace.log
alter tablespace ${arr_tablespace[$i]} begin backup;
spool off
EOF
set -A arr_filename `grep "^FILENAME=" ${ORACLE_TMP_DIR}/tablespace.log | awk '{ print \$2 }'`
index1=`grep "^FILENAME" ${ORACLE_TMP_DIR}/tablespace.log | wc -l`
h=0
while [ $h -lt $index1 ]
do
name=`basename ${arr_filename[$h]}`
note "filename = ${arr_filename[$h]}"
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
host compress -c ${arr_filename[$h]} > ${BACKUP_AREA}/$name.Z
EOF
h=`expr $h + 1`
done
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
spool ${ORACLE_TMP_DIR}/backup_tablespace.log
alter tablespace ${arr_tablespace[$i]} end backup;
spool off
EOF
i=`expr $i + 1`
done
[ $backup_status -eq 1 ] && LocalExit 1
set -a
set -A arr_tablespace `grep "^TABLESPACE=" ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf | awk '{ print \$2 }'`
index=`grep "^TABLESPACE" ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf | wc -l`
i=0
while [ $i -lt $index ]
do
note "tablespace=${arr_tablespace[$i]}"
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
set linesize 600;
spool ${ORACLE_TMP_DIR}/tablespace.log
select 'FILENAME=',file_name from sys.dba_temp_files where tablespace_name='${arr_tablespace[$i]}';
spool off
EOF
set -A arr_filename `grep "^FILENAME=" ${ORACLE_TMP_DIR}/tablespace.log | awk '{ print \$2 }'`
index1=`grep "^FILENAME" ${ORACLE_TMP_DIR}/tablespace.log | wc -l`
h=0
while [ $h -lt $index1 ]
do
name=`basename ${arr_filename[$h]}`
note "filename = ${arr_filename[$h]}"
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
host compress -c ${arr_filename[$h]} > ${BACKUP_AREA}/$name.Z
EOF
h=`expr $h + 1`
done
i=`expr $i + 1`
done
# "log switch & controlfile backup"
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
spool ${ORACLE_TMP_DIR}/backup_controlfile.log
alter database backup controlfile to '${BACKUP_AREA}/ctrl_pm.ctl' reuse;
host chmod a+rw ${BACKUP_AREA}/ctrl_pm.ctl
alter system archive log current;
spool off
spool ${ORACLE_TMP_DIR}/archive_info.log
archive log list;
spool off
EOF
# Step 5 -- Copying the DBMS on the companion node
note "transferring archived redo log files from ACT to SBY host"
name=`grep 'Archive destination' ${ORACLE_TMP_DIR}/archive_info.log| awk '{ print \$3 }'`
set -A vett_logfiles `grep "^LOGFILE=" ${INSTANCE_DATA_DIR}/DB_filenames.conf | awk '{ print \$2 }'`
index=`grep "^LOGFILE" ${INSTANCE_DATA_DIR}/DB_filenames.conf | wc -l`
i=0
while [ $index -gt 0 ]
do
name=`basename ${vett_logfiles[$i]}`
###MOD001
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
host cp ${vett_logfiles[$i]} ${BACKUP_AREA}/$name
host chmod a+rw ${BACKUP_AREA}/$name
EOF
if [ $? -ne 0 ]; then
error "Error copying logfile on LOCAL_BACKUP_AREA"
LocalExit 1
fi
note "log_file=${vett_logfiles[$i]}"
index=`expr $index - 1`
i=`expr $i + 1`
done
note "Executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf 0 -k -ret 2 ..."
RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf 0 -k -ret 2
if [ $? -ne 0 ]; then
error "Error executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf 0 -ret 2!"
LocalExit 1
fi
note "Executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf 0 -k -ret 2 ..."
RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf 0 -k -ret 2
if [ $? -ne 0 ]; then
error "Error executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf 0 -ret 2!"
LocalExit 1
fi
note "Executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf 0 -k -ret 2 ..."
RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf 0 -k -ret 2
if [ $? -ne 0 ]; then
error "Error executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf 0 -k -ret 2!"
LocalExit 1
fi
note "Executing RemoteCopy ${COMPANION_HOSTNAME} ${BACKUP_AREA} ${RECOVER_AREA} 0 -k -ret 2 ..."
RemoteCopy ${COMPANION_HOSTNAME} ${BACKUP_AREA} ${RECOVER_AREA} 0 -k -ret 2

If the operating system is the same:
Working Machine
================
Shutdown the database and copy everything
Copy the init.ora
Copy the dbf, ctl, and log files
Copy the bdump, udump etc..
On the second machine
==================
Copy your files to the same paths as the original, i.e.
C:\oracle..<dbname>\system.dbf
Start the database
If your paths on the second machine do not match the original, update this thread again
Michael
http://mikegeorgiou.blogspot.com -
In 10g, can we dump the control file and get SCN info?
Hi,
I have a question can any one help me on this.
In 9.2.0.5 we could dump the control file and get the SCN using the command :
alter session set events 'immediate trace name CONTROLF level 10';
In 10.1.0.4.0 the output of the dump has changed, and we do not get the SCN or any of the following information:
DATABASE ENTRY
CHECKPOINT PROGRESS RECORDS
EXTENDED DATABASE ENTRY
REDO THREAD RECORDS
LOG FILE RECORDS
DATA FILE RECORDS
RMAN CONFIGURATION RECORDS
LOG FILE HISTORY RECORDS
OFFLINE RANGE RECORDS
ARCHIVED LOG RECORDS
BACKUP SET RECORDS
BACKUP PIECE RECORDS
BACKUP DATAFILE RECORDS
Can I get similar output in 10g as 9i ?
with regards,
Dilip.

Hi.
What are you trying to achieve here? If you just want the current SCN, you can get it using one of these:
SQL> select current_scn from v$database;
CURRENT_SCN
8058824527
1 row selected.
SQL>
or
SQL> select dbms_flashback.get_system_change_number from dual;
GET_SYSTEM_CHANGE_NUMBER
8079317404
1 row selected.
SQL>

Cheers
Tim... -
How to find the timestamp and SCN in the standby database?
Hi,
I have Oracle 9.2.0.4 RAC with 2 nodes in production. The logs generated on these servers are manually moved to my standby database and applied there. To know the maximum log files applied in the standby database, I use the query below:
Select thread#,max(sequence#) from v$log_history group by thread#
In general I use the "recover standby database until cancel" command and then check the database with the above query to see whether all the logs were applied.
If I use time-based or SCN-based recovery in the standby database, i.e., "recover standby database until time <time>" or "recover standby database until change <scn number>", then after completion of the recovery, apart from the "Media recovery complete" message or checking the alert log, is there any way to query the standby database so that I can identify the time or SCN up to which the archived redo log files were applied?

Hi Sridhar,
There should be some view which has applied-SCN information. There is one more option I can suggest: create a heartbeat table in production with two columns, scn and timestamp. Update this table every minute. From the standby db you can query this table and get a fair idea of the applied SCN and timestamp.
While exporting, you can export using flashback_scn, taking the value from the heartbeat table on the standby.
This heartbeat table is commonly used in Streams environments. Just see if this helps you.
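The heartbeat idea above can be sketched as follows; table and refresh-job details are illustrative:

```sql
-- On the primary: one row, refreshed every minute (e.g. by a DBMS_JOB).
create table heartbeat (scn number, ts timestamp);

insert into heartbeat
values (dbms_flashback.get_system_change_number, systimestamp);
commit;

-- On the standby (opened read-only, or after pausing recovery):
-- the row you see tells you the primary SCN/time that has been applied.
select scn, ts from heartbeat;
```

Because the row only becomes visible on the standby once the redo carrying it has been applied, its scn/ts values bound the applied SCN and timestamp without parsing the alert log.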
hth,
http://borndba.com -
RMAN-05556: not all datafiles have backups that can be recovered to SCN
Oracle 11.2.0.2 SE-One
Oracle Linux 5.6 x86-64
Weekly refresh of a test db from prod, using rman DUPLICATE DATABASE, failed with “RMAN-05556: not all datafiles have backups that can be recovered to SCN”
Background Summary:
Weekly inc 0 backup of production starts on Sunday at 0100, normally completes around 1050. Includes backups of archivelogs
Another backup of just archivelogs runs on Sunday at 1200, normally completes NLT 1201.
On the test server, the refresh job starts on Sunday at 1325. In the past this script used a set until time \"to_date('`date +%Y-%m-%d` 11:55:00','YYYY-MM-DD hh24:mi:ss')\"; -- hard-coded for ‘today at 11:55’.
For a variety of reasons I decided to replace this semi-hard coding of the UNTIL with a value determined by querying the RMAN catalog for the completion time of the inc 0 backup. This tested out just fine in my VirtualBox lab, even when I deliberately drove some updates and log switches while the backup was running. But the first time it ran live, I got the error reported above.
Details:
The key part of the inc 0 backup is this (run from a shell script)
export BACKUP_LOC=/u01/backup/dbprod
$ORACLE_HOME/bin/rman target=/ catalog rman/***@rmcat<<EOF
configure backup optimization on;
configure default device type to disk;
configure retention policy to recovery window of 2 days;
crosscheck backup;
crosscheck archivelog all;
delete noprompt force obsolete;
delete noprompt force expired backup;
delete noprompt force expired archivelog all;
configure controlfile autobackup on;
configure controlfile autobackup format for device type disk to '$BACKUP_LOC/%d_%F_ctl.backup';
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '$BACKUP_LOC/%U.rman' MAXPIECESIZE 4096 M;
sql "alter system archive log current";
show all;
backup as compressed backupset archivelog all delete all input format "$BACKUP_LOC/%U.alog";
backup as compressed backupset incremental level 0 database tag tag_dbprod;
sql "alter system archive log current";
backup as compressed backupset archivelog all delete all input format "$BACKUP_LOC/%U.alog";
list recoverable backup;
EOF
The archivelog-only backup (runs at noon) looks like this:
export BACKUP_LOC=/u01/backup/dbprod
$ORACLE_HOME/bin/rman target=/ catalog rman/***@rmcat<<EOF
configure backup optimization on;
configure default device type to disk;
configure retention policy to recovery window of 2 days;
crosscheck backup;
crosscheck archivelog all;
delete noprompt force obsolete;
delete noprompt force expired backup;
delete noprompt force expired archivelog all;
configure controlfile autobackup on;
configure controlfile autobackup format for device type disk to '$BACKUP_LOC/%d_%F_ctl.backup';
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '$BACKUP_LOC/%U.rman' MAXPIECESIZE 4096 M;
sql "alter system archive log current";
show all;
backup as compressed backupset archivelog all delete all input format "$BACKUP_LOC/%U.alog";
list recoverable backup;
EOF
And the original refresh looked like this:
>> a step to ftp the backups from the prod server to the test server, and some other housekeeping <<, then
cd /backup/dbtest
echo "connect catalog rman/***@rmcat" > /backup/dbtest/dbtest_refresh.rman
echo "connect target sys/*******@dbprod" >> /backup/dbtest/dbtest_refresh.rman
echo "connect auxiliary /" >> /backup/dbtest/dbtest_refresh.rman
echo "run {" >> /backup/dbtest/dbtest_refresh.rman
echo "set until time \"to_date('`date +%Y-%m-%d` 11:55:00','YYYY-MM-DD hh24:mi:ss')\";" >> /backup/dbtest/dbtest_refresh.rman
echo "duplicate target database to DBTEST;" >> /backup/dbtest/dbtest_refresh.rman
echo "}" >> /backup/dbtest/dbtest_refresh.rman
So, my mod to the refresh was
bkup_point=`sqlplus -s rman/***@rmcat <<EOF1
set echo off verify off feedback off head off pages 0 trimsp on
select to_char(max(completion_time),'yyyy-mm-dd hh24:mi:ss')
from rc_backup_set_details
where db_name='DBPROD'
and backup_type='D'
and incremental_level=0
exit
EOF1`
cd /backup/dbtest
echo "connect catalog rman/***@rmcat" > /backup/dbtest/dbtest_refresh.rman
echo "connect target sys/*******@dbprod" >> /backup/dbtest/dbtest_refresh.rman
echo "connect auxiliary /" >> /backup/dbtest/dbtest_refresh.rman
echo "run {" >> /backup/dbtest/dbtest_refresh.rman
echo "set until time \"to_date('${bkup_point}','YYYY-MM-DD hh24:mi:ss')\";" >> /backup/dbtest/dbtest_refresh.rman
echo "duplicate target database to DBTEST;" >> /backup/dbtest/dbtest_refresh.rman
echo "}" >> /backup/dbtest/dbtest_refresh.rman
Now the fun begins.
First, an echo in the refresh script confirmed the ‘bkup_point’:
=======================================================
We will restore to 2013-08-25 10:41:38
=======================================================
Internally, rman reset the ‘until’ as follows:
executing command: SET until clause
Starting Duplicate Db at 25-Aug-2013 15:35:44
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=162 device type=DISK
contents of Memory Script:
set until scn 45633141350;
Examining the result of LIST BACKUP (the last step of all of my rman scripts) the full backup shows this:
BS Key Type LV Size Device Type Elapsed Time Completion Time
5506664 Full 61.89M DISK 00:00:03 25-Aug-2013 02:11:32
BP Key: 5506678 Status: AVAILABLE Compressed: NO Tag: TAG20130825T021129
Piece Name: /u01/backup/dbprod/DBPROD_c-3960114099-20130825-00_ctl.backup
SPFILE Included: Modification time: 24-Aug-2013 22:33:08
SPFILE db_unique_name: DBPROD
Control File Included: Ckp SCN: 45628880455 Ckp time: 25-Aug-2013 02:11:29
BS Key Type LV Size Device Type Elapsed Time Completion Time
5507388 Incr 0 206.03G DISK 08:30:00 25-Aug-2013 10:41:30
List of Datafiles in backup set 5507388
File LV Type Ckp SCN Ckp Time Name
1 0 Incr 45628880495 25-Aug-2013 02:11:38 +SMALL/dbprod/datafile/system.258.713574775
>>>>>>>>> snip lengthy list <<<<<<<<<
74 0 Incr 45628880495 25-Aug-2013 02:11:38 +SMALL/dbprod/event_i2.dbf
Backup Set Copy #1 of backup set 5507388
Device Type Elapsed Time Completion Time Compressed Tag
DISK 08:30:00 25-Aug-2013 10:41:36 YES TAG_DBPROD
List of Backup Pieces for backup set 5507388 Copy #1
BP Key Pc# Status Piece Name
5507391 1 AVAILABLE /u01/backup/dbprod/eeoi55iq_1_1.rman
>>>>>>>>>>>>> snip lengthy list <<<<<<<<<<<
5507442 52 AVAILABLE /u01/backup/dbprod/eeoi55iq_52_1.rman
Notice the slight difference in time between what is reported in the LIST BACKUP and what was reported by my query to the catalog.
Continuing with the backup list, the second archivelog backup in the script generated six backupsets. The fifth set showed:
BS Key Size Device Type Elapsed Time Completion Time
5507687 650.19M DISK 00:02:18 25-Aug-2013 10:54:53
BP Key: 5507694 Status: AVAILABLE Compressed: YES Tag: TAG20130825T104156
Piece Name: /u01/backup/dbprod/ekoi643j_1_1.alog
List of Archived Logs in backup set 5507687
Thrd Seq Low SCN Low Time Next SCN Next Time
1 1338518 45632944587 25-Aug-2013 05:58:18 45632947563 25-Aug-2013 05:58:20
>>>>>>>>>>>>> snip lengthy list <<<<<<<<<<<
1 1338572 45633135750 25-Aug-2013 10:08:21 45633140240 25-Aug-2013 10:08:24
1 1338573 45633140240 25-Aug-2013 10:08:24 45633141350 25-Aug-2013 10:30:06
1 1338574 45633141350 25-Aug-2013 10:30:06 45633141705 25-Aug-2013 10:41:51
1 1338575 45633141705 25-Aug-2013 10:41:51 45633141725 25-Aug-2013 10:41:55
Notice the availability of the archivelogs including the referenced scn.
Investigation of the ftp portion of the refresh script confirmed that all backup pieces were copied from the prod server.
So what am I overlooking? (I have reverted to the original script to get the refresh completed.)
HemantKChitale wrote:
So, technically, you only need the database and archivelogs backed up by the database script and not the noon run of the archivelog backup.
backup as compressed backupset archivelog all delete all input format "$BACKUP_LOC/%U.alog";
backup as compressed backupset incremental level 0 database tag tag_dbprod;
sql "alter system archive log current";
backup as compressed backupset archivelog all delete all input format "$BACKUP_LOC/%U.alog";
Yet, why does backupset 5 of the noon archivelog backup show archivelogs from 10:30 to 10:40 if they had been deleted by the database backup script, which has a delete input? It is as if the database backup script did NOT delete the archivelogs and the noon run was the one to back up the archivelogs (again?)
No, that is from the morning full backup. Note the 'Completion Time' of 25-Aug-2013 10:54:53.
However, the error message seems to point to a datafile. Why would reverting the recovery point to 11:55 make a difference, I wonder.
As do I.
Also puzzling to me are the times associated with the completion of the backups. I don't recall ever having to scrutinize a backup listing this closely, so I'm sure it's just a matter of filling in some gaps in my understanding, but I noticed this. The backup report (list backup;) shows this for the inc 0 backup:
BS Key  Type LV Size    Device Type Elapsed Time Completion Time
5507388 Incr 0  206.03G DISK        08:30:00     25-Aug-2013 10:41:30 ------- NOTE THE COMPLETION TIME ----
List of Datafiles in backup set 5507388
File LV Type Ckp SCN     Ckp Time             Name
1    0  Incr 45628880495 25-Aug-2013 02:11:38 +SMALL/dbprod/datafile/system.258.713574775
------ SNIP ------
74   0  Incr 45628880495 25-Aug-2013 02:11:38 +SMALL/dbprod/event_i2.dbf
Backup Set Copy #1 of backup set 5507388
Device Type Elapsed Time Completion Time      Compressed Tag
DISK        08:30:00     25-Aug-2013 10:41:36 YES        TAG_DBPROD ------- NOTE THE COMPLETION TIME ----
List of Backup Pieces for backup set 5507388 Copy #1
BP Key  Pc# Status    Piece Name
5507391 1   AVAILABLE /u01/backup/dbprod/eeoi55iq_1_1.rman
------ SNIP ------
5507442 52  AVAILABLE /u01/backup/dbprod/eeoi55iq_52_1.rman
Then the autobackup of the control file immediately following:
BS Key  Type LV Size   Device Type Elapsed Time Completion Time
5507523 Full    61.89M DISK        00:00:03     25-Aug-2013 10:41:47 ------- NOTE THE COMPLETION TIME ----
BP Key: 5507587 Status: AVAILABLE Compressed: NO Tag: TAG20130825T104144
Piece Name: /u01/backup/dbprod/DBPROD_c-3960114099-20130825-01_ctl.backup
SPFILE Included: Modification time: 25-Aug-2013 05:57:15
SPFILE db_unique_name: DBPROD
Control File Included: Ckp SCN: 45633141671 Ckp time: 25-Aug-2013 10:41:44
Then the archivelog backup immediately following (remember, this created a total of 5 backupsets; I'm showing number 4):
BS Key  Size    Device Type Elapsed Time Completion Time
5507687 650.19M DISK        00:02:18     25-Aug-2013 10:54:53 ------- NOTE THE COMPLETION TIME ----
BP Key: 5507694 Status: AVAILABLE Compressed: YES Tag: TAG20130825T104156
Piece Name: /u01/backup/dbprod/ekoi643j_1_1.alog
List of Archived Logs in backup set 5507687
Thrd Seq     Low SCN     Low Time             Next SCN    Next Time
1    1338518 45632944587 25-Aug-2013 05:58:18 45632947563 25-Aug-2013 05:58:20
------ SNIP ------
1    1338572 45633135750 25-Aug-2013 10:08:21 45633140240 25-Aug-2013 10:08:24
1    1338573 45633140240 25-Aug-2013 10:08:24 45633141350 25-Aug-2013 10:30:06
1    1338574 45633141350 25-Aug-2013 10:30:06 45633141705 25-Aug-2013 10:41:51
1    1338575 45633141705 25-Aug-2013 10:41:51 45633141725 25-Aug-2013 10:41:55
and the controlfile autobackup immediately following:
BS Key  Type LV Size   Device Type Elapsed Time Completion Time
5507984 Full    61.89M DISK        00:00:03     25-Aug-2013 10:55:07 ------- NOTE THE COMPLETION TIME ----
BP Key: 5508043 Status: AVAILABLE Compressed: NO Tag: TAG20130825T105504
Piece Name: /u01/backup/dbprod/DBPROD_c-3960114099-20130825-02_ctl.backup
SPFILE Included: Modification time: 25-Aug-2013 05:57:15
SPFILE db_unique_name: DBPROD
Control File Included: Ckp SCN: 45633142131 Ckp time: 25-Aug-2013 10:55:04
and yet, querying the rman catalog
SQL> select to_char(max(completion_time),'yyyy-mm-dd hh24:mi:ss')
2 from rc_backup_set_details
3 where db_name='DBPROD'
4 and backup_type='D'
5 and incremental_level=0
6 ;
TO_CHAR(MAX(COMPLET
2013-08-25 10:41:38
SQL>
which doesn't match (to the second) the completion time of either the full backup or the associated controlfile autobackup.
Hemant K Chitale
I hope this posts in a readable, understandable manner. I really struggled with the 'enhanced editor', which I normally use: when I pasted in blocks from the rman report, it kept trying to make some sort of table structure out of it. I guess I'll have to follow that up with a question in the Community forum. -
Not getting SCN details in Log Miner
Oracle 11g
Windows 7
Hi DBA's,
I am not getting the SCN details in Log Miner. Below are the steps:
SQL> show parameter utl_file_dir
NAME TYPE VALUE
utl_file_dir string
SQL> select name,issys_modifiable from v$parameter where name ='utl_file_dir';
NAME ISSYS_MOD
utl_file_dir FALSE
SQL> alter system set utl_file_dir='G:\oracle11g' scope=spfile;
System altered.
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 1071333376 bytes
Fixed Size 1334380 bytes
Variable Size 436208532 bytes
Database Buffers 629145600 bytes
Redo Buffers 4644864 bytes
Database mounted.
Database opened.
SQL> show parameter utl_file_dir
NAME TYPE VALUE
utl_file_dir string G:\oracle11g\logminer_dir
SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
SUPPLEME
NO
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.
SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
SUPPLEME
YES
SQL> /* Minimum supplemental logging is now enabled. */
SQL>
SQL> alter system switch logfile;
System altered.
SQL> select g.group# , g.status , m.member
2 from v$log g, v$logfile m
3 where g.group# = m.group#
4 and g.status = 'CURRENT';
GROUP# STATUS
MEMBER
1 CURRENT
G:\ORACLE11G\ORADATA\MY11G\REDO01.LOG
SQL> /* start fresh with a new log file which is the group 1.*/
SQL> create table scott.test_logmnr
2 (id number,
3 name varchar2(10)
4 );
Table created.
SQL> BEGIN
2 DBMS_LOGMNR_D.build (
3 dictionary_filename => 'logminer_dic.ora',
4 dictionary_location => 'G:\oracle11g');
5 END;
6 /
PL/SQL procedure successfully completed.
SQL> /*
SQL> This has recorded the dictionary information into the file
SQL> "G:\oracle11g\logminer_dic.ora".
SQL> */
SQL> conn scott/
Connected.
SQL> insert into test_logmnr values (1,'TEST1');
1 row created.
SQL> insert into test_logmnr values (2,'TEST2');
1 row created.
SQL> commit;
Commit complete.
SQL> select * from test_logmnr;
ID NAME
1 TEST1
2 TEST2
SQL> update test_logmnr set name = 'TEST';
2 rows updated.
SQL> select * from test_logmnr;
ID NAME
1 TEST
2 TEST
SQL> commit;
Commit complete.
SQL> delete from test_logmnr;
2 rows deleted.
SQL> commit;
Commit complete.
SQL> select * from test_logmnr;
no rows selected
SQL> conn / as sysdba
Connected.
SQL> select g.group# , g.status , m.member
2 from v$log g, v$logfile m
3 where g.group# = m.group#
4 and g.status = 'CURRENT';
GROUP# STATUS MEMBER
1 CURRENT G:\ORACLE11G\ORADATA\MY11G\REDO01.LOG
SQL> begin
2 dbms_logmnr.add_logfile
3 (
4 logfilename => 'G:\oracle11g\oradata\my11g\REDO01.LOG',
5 options => dbms_logmnr.new
6 );
7 end;
8 /
PL/SQL procedure successfully completed.
SQL> select filename from v$logmnr_logs;
FILENAME
G:\oracle11g\oradata\my11g\REDO01.LOG
SQL> BEGIN
2 -- Start using all logs
3 DBMS_LOGMNR.start_logmnr (
4 dictfilename => 'G:\oracle11g\logminer_dic.ora');
5
6 END;
7 /
PL/SQL procedure successfully completed.
SQL> DROP TABLE myLogAnalysis;
Table dropped.
SQL> create table myLogAnalysis
2 as
3 select * from v$logmnr_contents;
Table created.
SQL> begin
2 DBMS_LOGMNR.END_LOGMNR();
3 end;
4 /
PL/SQL procedure successfully completed.
SQL> set lines 1000
SQL> set pages 500
SQL> column scn format a6
SQL> column username format a8
SQL> column seg_name format a11
SQL> column sql_redo format a33
SQL> column sql_undo format a33
SQL> select scn , seg_name , sql_redo , sql_undo
2 from myLogAnalysis
3 where username = 'SCOTT'
4 AND (seg_owner is null OR seg_owner = 'SCOTT');
SCN SEG_NAME
SQL_REDO
SQL_UNDO
set transaction read write;
commit;
set transaction read write;
########## TEST_LOGMNR insert into "SCOTT"."TEST_LOGMNR" delete from "SCOTT"."TEST_LOGMNR"
("ID","NAME") values ('1','TEST1' where "ID" = '1' and "NAME" = 'T
EST1' and ROWID = 'AAARjeAAEAAAAD
PAAA';
########## TEST_LOGMNR insert into "SCOTT"."TEST_LOGMNR" delete from "SCOTT"."TEST_LOGMNR"
("ID","NAME") values ('2','TEST2' where "ID" = '2' and "NAME" = 'T
EST2' and ROWID = 'AAARjeAAEAAAAD
PAAB';
commit;
set transaction read write;
########## TEST_LOGMNR update "SCOTT"."TEST_LOGMNR" set update "SCOTT"."TEST_LOGMNR" set
"NAME" = 'TEST' where "NAME" = 'T "NAME" = 'TEST1' where "NAME" = '
EST1' and ROWID = 'AAARjeAAEAAAAD TEST' and ROWID = 'AAARjeAAEAAAAD
PAAA';
PAAA';
########## TEST_LOGMNR update "SCOTT"."TEST_LOGMNR" set update "SCOTT"."TEST_LOGMNR" set
"NAME" = 'TEST' where "NAME" = 'T "NAME" = 'TEST2' where "NAME" = '
EST2' and ROWID = 'AAARjeAAEAAAAD
Kindly type
Desc v$logmnr_contents
Please notice the SCN is a *number* column, not varchar2.
By using format a6 you are forcing Oracle to display a number that is too wide as characters. Hence the ######.
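A minimal sketch of the fix, in the same build-a-script style used elsewhere in this digest (the /tmp file path is a placeholder): give SCN a numeric format wide enough for an 11-digit value instead of a6.

```shell
# Write a corrected report script. SCN is a NUMBER, so use a numeric
# format instead of "column scn format a6" (which causes the ######).
cat > /tmp/logmnr_report.sql <<'EOF'
set lines 1000 pages 500
column scn format 999999999999999
column seg_name format a11
column sql_redo format a33
column sql_undo format a33
select scn, seg_name, sql_redo, sql_undo
  from myLogAnalysis
 where username = 'SCOTT'
   and (seg_owner is null or seg_owner = 'SCOTT');
EOF
cat /tmp/logmnr_report.sql
```

Running this file from SQL*Plus should show the full SCN values in place of the pound signs.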
Sybrand Bakker
Senior Oracle DBA -
SCN document: 'Insert Image' greyed out at a certain stage/size
I noticed a similar issue to the one Tammy reported in the discussion/thread Cannot insert or upload an image.
After having inserted text and several images to a draft document, the 'Insert Image' icon became greyed out at a certain stage/size.
I did some tests following the document Solution for Error: Cannot Insert Images on SCN Content, but the behavior remained the same. Only after deleting the last inserted image in the document > Save Draft > Edit again was the 'Insert Image' icon no longer greyed out, which argues for a size restriction.
The same HTML coding as in the document, put into a blog, does not grey out the 'Insert Image' icon. But I found out that after clicking the 'Insert Image' icon > 'From your Computer' tab, 'Browse' is greyed out. Strangely, and arguing against a size limit, when I choose 'Uploaded Images' I can select images and still insert them into the blog.
Does someone have similar experiences or can assist?
Thank you, Barbara
We have a recent discussion in SCN support.
He has the same problem. Have a look at the discussion Issue with 'Insert Image' button while creating document in SCN.
I suggest splitting your document into two parts, part 1 and part 2. -
Data Concurrency and Consistency ( SCN , DATA block)
Hi guys, I am getting very confused about how Oracle implements consistency/multiversioning with regard to the SCN in a data block and the transaction list in the data block.
I will list out what I know so you guys can gauge where I am.
When a SELECT statement is issued, the SCN for the select query is determined. Then blocks with a higher SCN are rebuilt from the RBS.
Q1) The SCN in the block implied here - is it different from the SCNs in the transaction list of the block? Where is this SCN stored? Where is the transaction list stored? How is the SCN of the block related to the SCNs in the transaction list of the block?
Q2) Can someone tell me what happens to the block SCN and the transaction list of the block when a transaction starts to update a row in the block?
Q3) If the block SCN reflects the latest change made to the block, and if the SCN of the block is higher than the SCN of the SELECT query, it means the block has changed since the start of the SELECT query, but it doesn't mean that the row (data) the SELECT query requires has changed.
Therefore, why can't Oracle just check whether the row has changed and, if it has, rebuild the block from the RBS?
Q4) When Oracle compares the block SCN, does it only scan the block SCN, or does it also search through the transaction list, or both? And why?
Q5) Is the transaction SCN the same as the transaction ID? Which is stored in the RBS, the transaction SCN or the ID?
Q6) In short, I am confused about the relationship between the block SCN and the transaction list SCNs, their location and usage, their role when doing a SELECT, and their link with the RBS.
Any gurus care to give me a clearer view of what is actually happening?
Hi Aman
Hmm, agreed. So when commit is issued, what happens at that time?
Simply put:
- The SCN for the transaction is determined.
- The transaction is marked as committed in the undo header (the commit SCN is also stored in the undo header).
- If fast cleanout takes place, the commit SCN is also stored in the ITL. If not, the ITL (i.e. the modified data blocks) are not modified.
So at commit, Oracle will replace the begin SCN in the ITL with this SCN, and this will tell that the block is finally committed, is it?
The ITL does not contain the begin SCN. The undo header (specifically the transaction table) contains it.
I am lost here. In the ITL, is the SCN the transaction SCN or the commit SCN?
As I just wrote, the ITL contains (if the cleanout occurred) the commit SCN.
This sounds like high RBA information? What is RBA?
Commit SCN: this is the SCN associated with a committed transaction.
Begin SCN: this is the SCN at which a transaction started.
Transaction SCN: as I wrote, IMO, this is the same as the commit SCN.
Also, please explain what exactly the ITL stores?
If you print an ITL slot, you see the following information:
BBED> print ktbbhitl[0]
struct ktbbhitl[0], 24 bytes @44
struct ktbitxid, 8 bytes @44
ub2 kxidusn @44 0x0009
ub2 kxidslt @46 0x002e
ub4 kxidsqn @48 0x0000fe77
struct ktbituba, 8 bytes @52
ub4 kubadba @52 0x00800249
ub2 kubaseq @56 0x3ed6
ub1 kubarec @58 0x4e
ub2 ktbitflg @60 0x2045 (KTBFUPB)
union _ktbitun, 2 bytes @62
b2 _ktbitfsc @62 0
ub2 _ktbitwrp @62 0x0000
ub4 ktbitbas @64 0x06f4c2a3
- ktbitxid --> XID, the transaction holding the ITL slot
- ktbituba --> UBA, used to locate the undo information
- ktbitflg --> flags (active, committed, cleaned out, ...)
- _ktbitfsc --> free space generated by this transaction in this block
- _ktbitwrp+ktbitbas --> commit SCN
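As an aside on that last line of the legend: the full commit SCN is the wrap and base combined, scn = wrap * 2^32 + base. A sketch using the values printed above (this arithmetic convention is an assumption about the on-disk encoding; treat it as illustrative):

```shell
# Combine the ITL's SCN wrap (high-order part) and base (low 32 bits)
# into a single SCN value: scn = wrap * 2^32 + base.
wrp=0x0000      # _ktbitwrp from the ITL slot above
bas=0x06f4c2a3  # ktbitbas from the ITL slot above
echo $(( (wrp << 32) + bas ))   # prints 116703907
```

With a zero wrap, the commit SCN is simply the base converted to decimal.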
HTH
Chris -
Trying to find the last committed SCN
My developers have told me that schema refreshes (export/import) done via DataPump (without using either of the FLASHBACK_* parameters) have caused some sequences to be out of sync with the data in their associated tables. My research into the DataPump documentation directs me to the FLASHBACK_SCN parameter, but the only method I find in documentation to find an SCN is to use DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER to get the current SCN. However, to ensure we resolve the aforementioned issue of out-of-sync objects, I would prefer to use the last committed SCN. Unfortunately, I can find no documentation to suggest how I might derive that value. Can anyone here point me in the right direction for this, or perhaps even provide a solution?
Thank you in advance for your attention, consideration, and enlightened feedback!
Mark C (a.k.a. user621573)
Sequences are database objects but are not really protected by transactions: if you roll back your transaction, that doesn't roll back sequence numbers. I'm not sure that using SCN parameters is the solution for Data Pump.
There are known limitations when using old export/import.
I cannot find documentation about Data Pump limitations when dealing with sequences.
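Not from the thread, but a common workaround sketch for the out-of-sync symptom: after the refresh, recreate each affected sequence just past the table maximum. The sequence name orders_seq and table orders are hypothetical placeholders, and generating the SQL to a file mirrors the script-building style used earlier in this digest.

```shell
# Generate a post-refresh resync script: for each out-of-sync sequence,
# recreate it starting past the highest key actually imported.
cat > /tmp/resync_seq.sql <<'EOF'
declare
  v_max number;
begin
  -- placeholder table/sequence names; repeat per affected sequence
  select nvl(max(id), 0) + 1 into v_max from orders;
  execute immediate 'drop sequence orders_seq';
  execute immediate 'create sequence orders_seq start with ' || v_max;
end;
/
EOF
cat /tmp/resync_seq.sql
```

Run the generated file in the target schema after the import; this sidesteps the FLASHBACK_SCN question entirely at the cost of recreating the sequences (any grants on them would need to be reapplied).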