Filename of tablespaces
When I created the tablespaces in 11g, I forgot to type the .DBF extension in the filename. For example, instead of USERS_VR.DBF, I typed USERS_VR only. When I looked at the physical datafiles created in the filesystem, those files showed type "file" instead of "DBF file". Does it matter? Are they still valid Oracle datafiles even though the .DBF extension is missing?
Thanks.
Andy
Yes, they are still valid; the extension doesn't matter. The extension is really only of use when you are browsing the file system, so you know what the files are for.
HTH!
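If you do want the .DBF extension after all, the file can be renamed later with the tablespace offline. A minimal sketch, assuming a tablespace named USERS_VR and a hypothetical Unix path:

```sql
-- Take the tablespace offline so its datafile is no longer in use
ALTER TABLESPACE users_vr OFFLINE NORMAL;
-- Rename at the OS level (HOST runs an OS command from SQL*Plus; path is assumed)
HOST mv /u01/oradata/ORCL/USERS_VR /u01/oradata/ORCL/USERS_VR.DBF
-- Record the new name in the controlfile, then bring the tablespace back online
ALTER DATABASE RENAME FILE '/u01/oradata/ORCL/USERS_VR'
                        TO '/u01/oradata/ORCL/USERS_VR.DBF';
ALTER TABLESPACE users_vr ONLINE;
```

This is purely cosmetic; Oracle identifies datafiles by their headers and controlfile entries, not by extension.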
Similar Messages
-
"ORA-01203 - wrong creation SCN" got during copy of a db on another machine
Hello colleagues,
I copy a database from one machine to a second one using this procedure:
I set each tablespace (data and temp) in backup mode
I copy the datafiles (data and temp)
I copy the control file
I copy archived redo logs
On the second machine I try to start up the database, but I get the following errors:
SQL> @/usr/Systems/1359HA_9.0.0_Master/HA_EOMS_1_9.0.0_Master/tmp/oracle/CACH
E/apply_redo.sql;
ORACLE instance started.
Total System Global Area 423624704 bytes
Fixed Size 2044552 bytes
Variable Size 209718648 bytes
Database Buffers 209715200 bytes
Redo Buffers 2146304 bytes
Database mounted.
alter database recover automatic from '/usr/Systems/1359HA_9.0.0_Master/HA_EOMS_1_9.0.0_Master/data/warm_rep
l/WarmArchive/CACHE' database until cancel using backup controlfile
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-01110: data file 1: '/cache/db/db01/system_1.dbf'
ORA-01122: database file 1 failed verification check
ORA-01110: data file 1: '/cache/db/db01/system_1.dbf'
ORA-01203: wrong incarnation of this file - wrong creation SCN
Above you can see the mount command and the error that was returned.
What can I do to troubleshoot the problem?
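One way to narrow down an ORA-01203 with the database mounted is to compare the creation SCN the controlfile expects against what each datafile header actually contains; a sketch (standard v$ views, no objects from this procedure assumed):

```sql
-- Rows returned are files whose on-disk header was created at a different SCN
-- than the (backup) controlfile records, which is what ORA-01203 complains about
SELECT d.file#, d.name,
       d.creation_change# AS controlfile_scn,
       h.creation_change# AS header_scn
FROM   v$datafile d
JOIN   v$datafile_header h ON h.file# = d.file#
WHERE  d.creation_change# <> h.creation_change#;
```

A typical cause is a datafile that was dropped and recreated after the controlfile copy was taken, so the copied controlfile and the copied datafile describe different incarnations of the file.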
thanks for the support
Enrico
The complete copy procedure is the following:
#!/bin/ksh
# Step 2 -- Verifying the DBMS ARCHIVELOG mode
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
spool ${ORACLE_TMP_DIR}/archive.log
archive log list;
spool off
EOF
grep NOARCHIVELOG ${ORACLE_TMP_DIR}/archive.log >/dev/null 2>&1
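The grep above only tests the log mode and then discards the result; a sketch of acting on it (the message text is illustrative, not from the original script):

```shell
# Abort the hot backup when the database is not in ARCHIVELOG mode,
# instead of discarding the grep result
if grep NOARCHIVELOG ${ORACLE_TMP_DIR}/archive.log >/dev/null 2>&1; then
    echo "Database is in NOARCHIVELOG mode - hot backup requires ARCHIVELOG" >&2
    exit 1
fi
```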
# Step 3 -- Creating DB_filenames.conf / DB_controfile.conf files
[ -f ${ORACLE_TMP_DIR}/DB_filenames.conf ] && rm -f ${ORACLE_TMP_DIR}/DB_filenames.conf
[ -f ${ORACLE_TMP_DIR}/DB_controfile.conf ] && rm -f ${ORACLE_TMP_DIR}/DB_controfile.conf
[ -f ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf ] && rm -f ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
set linesize 600;
spool ${ORACLE_TMP_DIR}/DB_filenames.conf
select 'TABLESPACE=',tablespace_name from sys.dba_data_files;
select 'FILENAME=',file_name from sys.dba_data_files;
select 'LOGFILE=',MEMBER from v\$logfile;
spool off
EOF
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
set linesize 600;
spool ${ORACLE_TMP_DIR}/DB_controfile.conf
select name from v\$controlfile;
spool off
EOF
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
set linesize 600;
spool ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf
select 'TABLESPACE=',tablespace_name from sys.dba_temp_files;
select 'FILENAME=',file_name from sys.dba_temp_files;
spool off
EOF
note "Executing cp ${ORACLE_TMP_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf ..."
cp ${ORACLE_TMP_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf
[ $? -ne 0 ] && error "Error executing cp ${ORACLE_TMP_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf!"\
&& LocalExit 1
chmod ug+x ${INSTANCE_DATA_DIR}/DB_filenames.conf
note "Executing cp ${ORACLE_TMP_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf ..."
cp ${ORACLE_TMP_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf
[ $? -ne 0 ] && error "Error executing cp ${ORACLE_TMP_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf!"\
&& LocalExit 1
chmod ug+x ${INSTANCE_DATA_DIR}/DB_controfile.conf
note "Executing cp ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf ..."
cp ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf
[ $? -ne 0 ] && error "Error executing cp ${ORACLE_TMP_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf!"\
&& LocalExit 1
chmod ug+x ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf
set -a
set -A arr_tablespace `grep "^TABLESPACE=" ${INSTANCE_DATA_DIR}/DB_filenames.conf | awk '{ print \$2 }'`
index=`grep "^TABLESPACE" ${INSTANCE_DATA_DIR}/DB_filenames.conf | wc -l`
backup_status=0
i=0
while [ $i -lt $index ]
do
note "tablespace=${arr_tablespace[$i]}"
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
set linesize 600;
spool ${ORACLE_TMP_DIR}/tablespace.log
select 'FILENAME=',file_name from sys.dba_data_files where tablespace_name='${arr_tablespace[$i]}';
spool off
alter tablespace ${arr_tablespace[$i]} end backup;
spool ${ORACLE_TMP_DIR}/backup_tablespace.log
alter tablespace ${arr_tablespace[$i]} begin backup;
spool off
EOF
set -A arr_filename `grep "^FILENAME=" ${ORACLE_TMP_DIR}/tablespace.log | awk '{ print \$2 }'`
index1=`grep "^FILENAME" ${ORACLE_TMP_DIR}/tablespace.log | wc -l`
h=0
while [ $h -lt $index1 ]
do
name=`basename ${arr_filename[$h]}`
note "filename = ${arr_filename[$h]}"
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
host compress -c ${arr_filename[$h]} > ${BACKUP_AREA}/$name.Z
EOF
h=`expr $h + 1`
done
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
spool ${ORACLE_TMP_DIR}/backup_tablespace.log
alter tablespace ${arr_tablespace[$i]} end backup;
spool off
EOF
i=`expr $i + 1`
done
[ $backup_status -eq 1 ] && LocalExit 1
set -a
set -A arr_tablespace `grep "^TABLESPACE=" ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf | awk '{ print \$2 }'`
index=`grep "^TABLESPACE" ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf | wc -l`
i=0
while [ $i -lt $index ]
do
note "tablespace=${arr_tablespace[$i]}"
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
set linesize 600;
spool ${ORACLE_TMP_DIR}/tablespace.log
select 'FILENAME=',file_name from sys.dba_temp_files where tablespace_name='${arr_tablespace[$i]}';
spool off
EOF
set -A arr_filename `grep "^FILENAME=" ${ORACLE_TMP_DIR}/tablespace.log | awk '{ print \$2 }'`
index1=`grep "^FILENAME" ${ORACLE_TMP_DIR}/tablespace.log | wc -l`
h=0
while [ $h -lt $index1 ]
do
name=`basename ${arr_filename[$h]}`
note "filename = ${arr_filename[$h]}"
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
host compress -c ${arr_filename[$h]} > ${BACKUP_AREA}/$name.Z
EOF
h=`expr $h + 1`
done
i=`expr $i + 1`
done
# "log switch & controlfile backup"
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
spool ${ORACLE_TMP_DIR}/backup_controlfile.log
alter database backup controlfile to '${BACKUP_AREA}/ctrl_pm.ctl' reuse;
host chmod a+rw ${BACKUP_AREA}/ctrl_pm.ctl
alter system archive log current;
spool off
spool ${ORACLE_TMP_DIR}/archive_info.log
archive log list;
spool off
EOF
# Step 5 -- Copying the DBMS on the companion node
note "transferring archived redo log files from ACT to SBY host"
name=`grep 'Archive destination' ${ORACLE_TMP_DIR}/archive_info.log| awk '{ print \$3 }'`
set -A vett_logfiles `grep "^LOGFILE=" ${INSTANCE_DATA_DIR}/DB_filenames.conf | awk '{ print \$2 }'`
index=`grep "^LOGFILE" ${INSTANCE_DATA_DIR}/DB_filenames.conf | wc -l`
i=0
while [ $index -gt 0 ]
do
name=`basename ${vett_logfiles[$i]}`
###MOD001
$ORACLE_HOME/bin/sqlplus /nolog << EOF
connect / as sysdba
host cp ${vett_logfiles[$i]} ${BACKUP_AREA}/$name
host chmod a+rw ${BACKUP_AREA}/$name
EOF
if [ $? -ne 0 ]; then
error "Error copying logfile on LOCAL_BACKUP_AREA"
LocalExit 1
fi
note "log_file=${vett_logfiles[$i]}"
index=`expr $index - 1`
i=`expr $i + 1`
done
note "Executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf 0 -k -ret 2 ..."
RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf 0 -k -ret 2
if [ $? -ne 0 ]; then
error "Error executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_filenames.conf ${INSTANCE_DATA_DIR}/DB_filenames.conf 0 -ret 2!"
LocalExit 1
fi
note "Executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf 0 -k -ret 2 ..."
RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf 0 -k -ret 2
if [ $? -ne 0 ]; then
error "Error executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf ${INSTANCE_DATA_DIR}/DB_TEMP_filenames.conf 0 -ret 2!"
LocalExit 1
fi
note "Executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf 0 -k -ret 2 ..."
RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf 0 -k -ret 2
if [ $? -ne 0 ]; then
error "Error executing RemoteCopy ${COMPANION_HOSTNAME} ${INSTANCE_DATA_DIR}/DB_controfile.conf ${INSTANCE_DATA_DIR}/DB_controfile.conf 0 -k -ret 2!"
LocalExit 1
fi
note "Executing RemoteCopy ${COMPANION_HOSTNAME} ${BACKUP_AREA} ${RECOVER_AREA} 0 -k -ret 2 ..."
RemoteCopy ${COMPANION_HOSTNAME} ${BACKUP_AREA} ${RECOVER_AREA} 0 -k -ret 2
If the operating system is the same:
Working Machine
================
Shutdown the database and copy everything
Copy the init.ora
Copy the dbf, ctl, and log files
Copy the bdump, udump etc..
On the second machine
==================
Copy your files to the same path as the original, i.e.
C:\oracle..<dbname>\system.dbf
Start the database
If the paths on the second machine do not match the original, update this thread again.
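In shell terms, the cold-copy steps above might be sketched like this (the directory names and the helper itself are assumptions, not part of the original procedure):

```shell
# Cold-clone copy sketch: with the instance cleanly shut down,
# datafiles, controlfiles and online redo logs can be copied as-is
cold_copy() {
    src=$1
    dst=$2
    mkdir -p "$dst" || return 1
    # all three file types must travel together for a consistent cold clone
    for f in "$src"/*.dbf "$src"/*.ctl "$src"/*.log; do
        [ -f "$f" ] && cp "$f" "$dst/"
    done
    return 0
}
# e.g.: cold_copy /u01/oradata/ORCL /u01/oradata/CLONE
```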
Michael
http://mikegeorgiou.blogspot.com -
Patch Problem... patch no 6930112....
Hi,
Has anyone tried to apply patch 6930112, "APPS ADAPTER NOT WORKING WITH USER OTHERTHAN THE "APPS"", on Linux (E-Business Suite 11.5.10)? When I try to apply it, it fails at the point shown in the logs below:
Creating FND_INSTALL_PROCESSES table...
Connecting to APPLSYS......Connected successfully.
Running adtasktim.sql ..
Connected.
PL/SQL procedure successfully completed.
Commit complete.
Already created fnd_install_processes table
Already created FND_INSTALL_PROCESSES_U1 index.
Connecting to APPS......Connected successfully.
Connecting to APPLSYS......Connected successfully.
CREATE TABLE AD_DEFERRED_JOBS( phase number not null,
file_product varchar2(10) not null, subdirectory varchar2(30) not null,
filename varchar2(30) not null, arguments varchar2(1996),
start_time date, restart_time date, elapsed_time number,
restart_count number, defer_count number) TABLESPACE APPS_TS_TX_DATA
initrans 100 storage(initial 4K next 4K)
CREATE UNIQUE INDEX AD_DEFERRED_JOBS_U1 on AD_DEFERRED_JOBS
(phase,file_product,subdirectory,filename,arguments) TABLESPACE
APPS_TS_TX_IDX storage(initial 4K next 4K)
GRANT ALL ON AD_DEFERRED_JOBS TO APPS WITH GRANT OPTION
Connecting to APPS......Connected successfully.
Connecting to APPLSYS......Connected successfully.
INSERT INTO fnd_install_processes (worker_id, control_code, status,
context, pdi_product, pdi_username, command, file_product, subdirectory,
filename, phase, install_group_num, arguments,
symbolic_arguments,machine_name) VALUES (0, 'W', 'W', 'UNDEF', 'UNDEF',
'UNDEF', 'UNDEF', 'UNDEF', 'UNDEF', 'UNDEF', 0, 0,
rpad('-',100,'-'),rpad('-',100,'-'),'UNDEF')
AutoPatch error:
The following ORACLE error:
ORA-00942: table or view does not exist
occurred while executing the SQL statement:
INSERT INTO fnd_install_processes (worker_id, control_code, status,
context, pdi_product, pdi_username, command, file_product, subdirectory,
filename, phase, install_group_num, arguments,
symbolic_arguments,machine_name) VALUES (0, 'W', 'W', 'UNDEF', 'UNDEF',
'UNDEF', 'UNDEF', 'UNDEF', 'UNDEF', 'UNDEF', 0, 0,
rpad('-',100,'-'),rpad('-',100,'-'),'UNDEF')
Error running SQL and EXEC commands in parallel
You should check the file
/apps/apps11i/visappl/admin/VIS/log/adpatch.log
for errors.
Hi Birender,
I tried doing the same, but to no avail; my error is still the same.
Did you try to restart the patch from the beginning?
Do you get the same error when trying to apply any other patch?
The ORA-00942 error indicates that adpatch cannot access fnd_install_processes table and/or the synonym. Did you check if this table was created under APPLSYS schema and you can select from it as apps user?
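A quick way to answer that check is to look the object up in the dictionary; a sketch, with the object name taken from the error above:

```sql
-- Expect a TABLE owned by APPLSYS plus a SYNONYM owned by APPS;
-- a missing row here would explain the ORA-00942
SELECT owner, object_type
FROM   dba_objects
WHERE  object_name = 'FND_INSTALL_PROCESSES'
ORDER  BY owner;
```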
I don't understand the phrase "Error running SQL and EXEC commands in parallel". Is it that both the EXEC process and SQL are trying to update the same table(s) and they obtain an exclusive lock?
This error could appear for different reasons (such as hanging sessions, performance issues, etc.). Have a look at the following notes for an explanation and some examples:
Note: 756063.1 - How to Deal with the adpatch Performance issue : Slow, Hang or Crash ?
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=756063.1
Note: 272940.1 - Error Running Sql And Exec Commands In Parallel
https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=272940.1
Regards,
Hussein -
Syntax to resize datafile in 10g
syntax to resize datafile in 10g
the reason is
in 10g if you want to create a tablespace the syntax is
create tablespace table_name datafile 10M;
No, actually. The syntax for creating a tablespace is exactly the same today as it was in 1999: create tablespace X datafile '/path/filename' size 10M.
It is, however, possible to omit both the path and size clause if you want to use Oracle Managed Files (OMF):
alter system set db_create_file_dest='/path';
create tablespace X;
And then it's possible to add back the size clause if you don't like the default 100M autoextend on which OMF gives you:
create tablespace X datafile size 10M;
or
create tablespace X datafile size 10m autoextend off;
or
create tablespace X datafile size 10m autoextend on next 10m maxsize 400m;
Point is, the syntax for creating a tablespace has many different variations and options, but the basic syntax hasn't changed a bit. The other point is: what has creating a tablespace got to do with what you originally asked about, which was resizing an existing datafile?
to add datafile
alter tablespace table_name add datafile;
it will add datafile with default size and with name
according to its naming convention.
Yes, but you are not obliged to let that happen.
alter system set db_create_file_dest='/path';
alter tablespace X add datafile '/different/path/myownfilename' size 37M;
...you switch on OMF and then decide you don't want to use it for one particular file. The presence/existence of OMF is an addition of features if you want to use them. It doesn't take anything away from you and if you want to specify all the parts of the create or alter tablespace clause yourself, you can do so, no sweat -at which point, your syntax will look incredibly like what you would have issued in 8i or 9i days.
Analogy time: yes, today, you can build homes out of steel, concrete, carbon-reinforced composites, whereas in the 16th century you might have used timber, wattle and daub. But a house still has rooms, chimney flues, windows, doors. The house I live in would be recognisable to Shakespeare as a house. And what he lived in would be something I could live in too.
Yeah, well: maybe analogies aren't all they're cracked up to be! But the underlying truth is that Oracle gives you new features in new versions and using them can be highly convenient and useful. Nevertheless, if you understand the underlying principles,the old stuff is still there, still recognisable, still usable.
That's why I am asking for the syntax to resize a datafile in 10g; I had run the above SQL with no errors.
Again, I am a little at a loss understanding why the fact that the syntax for creating a tablespace has new options should cause you to think anything weird has happened to the syntax for resizing a datafile.
As others I think have already mentioned: there always were and still remain only three ways of making a tablespace bigger:
add a datafile
resize an existing datafile
switch on autoextension of an existing datafile
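Before picking one of these three, it can help to see what you currently have; a sketch against the standard dictionary view (the tablespace name is an assumption):

```sql
-- Current size and autoextend settings for each datafile of the tablespace
SELECT file_name,
       bytes    / 1024 / 1024 AS size_mb,
       autoextensible,
       maxbytes / 1024 / 1024 AS max_mb
FROM   dba_data_files
WHERE  tablespace_name = 'USERS';
```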
alter tablespace X add datafile ['/path/filename'][size 10m];
alter database datafile '/path/filename' resize 54m;
alter database datafile '/path/filename' autoextend on [next Xm] [maxsize Ym];
None of that syntax is different from what you'd use in version 7. Yes, some commands have optional clauses -and some of the clauses which are optional in 10g were compulsory in 7 or 8. But the general syntax is identical, still. -
Hi,
does anyone knows what are representations of X$KCCDI table columns names ?
Do we have any official docs on this ?
I don't have Metalink access; is there some info?
I am not sure, Tony, but I guess the column dimlm should be telling us something. However, the output is not what I guess you are looking for. Here is what I see from my database, 10201 on Windows XP Prof SP2.
SQL> select dimlm from x$kccdi;
DIMLM
3
SQL> select dimdm from x$kccdi;
DIMDM
1
SQL> select dindf from x$kccdi;
DINDF
5
SQL>
Now I guess this doesn't show the maximum numbers but the numbers currently in use for the respective type of data. As in my db I have:
SQL> select count(*) from v$logfile;
COUNT(*)
3
SQL>
SQL> select * from V$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS
FIRST_CHANGE# FIRST_TIM
1 1 11 52428800 1 NO INACTIVE
827289 04-MAY-09
2 1 12 52428800 1 NO CURRENT
847513 04-MAY-09
3 1 10 52428800 1 NO INACTIVE
790268 03-MAY-09
Also for the data files:
SQL> select count(*) from V$datafile;
COUNT(*)
5
SQL>
So this structure IMO shows information about what we have at the moment. To prove my point, I created one more tablespace with a single datafile and checked again.
SQL> create tablespace test_ts datafile 'e:\testts.dbf' size 1m ;
Tablespace created.
SQL> select dindf from x$kccdi;
DINDF
6
SQL> drop tablespace test_ts including contents and datafiles;
Tablespace dropped.
SQL> select dindf from x$kccdi;
DINDF
5
SQL>
So I guess I am correct.
Now you said that from V$controlfile_record_section, the info about the maxlogmembers is not shown. That's correct as I have this,
SQL> select type, records_total from V$controlfile_record_section;
TYPE RECORDS_TOTAL
DATABASE 1
CKPT PROGRESS 11
REDO THREAD 8
REDO LOG 16
DATAFILE 100
FILENAME 2298
TABLESPACE 100
TEMPORARY FILENAME 100
RMAN CONFIGURATION 50
LOG HISTORY 292
OFFLINE RANGE 163
ARCHIVED LOG 28
BACKUP SET 409
BACKUP PIECE 200
BACKUP DATAFILE 282
BACKUP REDOLOG 215
DATAFILE COPY 223
BACKUP CORRUPTION 371
COPY CORRUPTION 409
DELETED OBJECT 818
PROXY COPY 249
BACKUP SPFILE 454
DATABASE INCARNATION 292
FLASHBACK LOG 2048
RECOVERY DESTINATION 1
INSTANCE SPACE RESERVATION 1055
REMOVABLE RECOVERY FILES 1000
RMAN STATUS 141
THREAD INSTANCE NAME MAPPING 8
MTTR 8
DATAFILE HISTORY 57
STANDBY DATABASE MATRIX 10
GUARANTEED RESTORE POINT 2048
RESTORE POINT 2083
34 rows selected.
SQL>
And here is the trace file info:
CREATE CONTROLFILE REUSE DATABASE "ORCL10" NORESETLOGS NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
Okay, I guess I didn't give anything useful, but I shall try to dig deeper, and if I find something I shall update you.
Update:
I am not sure, but it is possible that you are right about the column from X$KCCDI (I am just reading my demo again). But still, just have a look at X$KCCRT. As this deals with Redo Thread information, it should also have the information about the log members. Check the column RTNLF from it. I have not edited my original reply as I want you to see it and correct me if I am wrong somewhere.
HTH
Aman....
Edited by: Aman.... on May 4, 2009 5:43 PM -
UNDOTBS1 and SYSAUX Tablespace Filename Mismatch in Oracle 11g XE
Hi,
We have installed Oracle 11g XE 32 bit on Windows and found that the tablespace names and filenames are mismatched for UNDOTBS1 and SYSAUX.
If you run following query:
SELECT TABLESPACE_NAME, FILE_NAME FROM DBA_DATA_FILES;
You will get the output below:
TABLESPACE_NAME FILE_NAME
============================================================================
USERS C:\ORACLE\ORACLE11GXE\APP\ORACLE\ORADATA\XE\USERS.DBF
UNDOTBS1 C:\ORACLE\ORACLE11GXE\APP\ORACLE\ORADATA\XE\SYSAUX.DBF
SYSAUX C:\ORACLE\ORACLE11GXE\APP\ORACLE\ORADATA\XE\UNDOTBS1.DBF
SYSTEM C:\ORACLE\ORACLE11GXE\APP\ORACLE\ORADATA\XE\SYSTEM.DBFNotice the difference between UNDOTBS1 and SYSAUX.
UNDOTBS1 tablespace has filename SYSAUX.DBF while
SYSAUX tablespace has filename UNDOTBS1.DBF
Is this a bug or just a wrong name mapping?
Will this affect the internal behavior of the tablespaces as well, for the undo tablespace and the APEX installation?
Regards,
Sohil Bhavsar.
Error: ORA-03297 ... related to this mismatch of UNDOTBS1 with SYSAUX
Actually it's related to trying to resize a file smaller than the existing extents in use, so yes, restores aren't the only place that could cause confusion from the "incorrect" datafile names.
Fixing it could be done in one pass if you don't mind adjusting filenames so they don't clash, i.e. shut down your database instance and move the files at the OS, startup mount and rename the files in the database.
Don't use the services applet; the instance has to be shut down with sqlplus. Leave the database service running. If you try to open the database with the OS files renamed but not corrected in the instance, the database won't open if there is trouble with the undo datafile.
sqlplus /nolog
conn /as sysdba;
... connected ...
col name format a60
col tsname format a10
set lines 120
select t.name tsname, d.name, d.BYTES / 1024 / 1024 mb from v$tablespace t, v$datafile d where t.ts# = d.ts#;
system .../system.dbf <n>
shutdown immediate;
... database closed ...
exit
cd <datafile directory>
move sysaux.dbf undotbs01.dbf
move UNDOTBS1.DBF sysaux.dbf
# or use the file explorer GUI after shutdown
sqlplus /nolog
conn /as sysdba;
... connected to idle instance ...
startup mount;
... SGA, instance size info ...
alter database rename file 'C:\...<full path>\SYSAUX.DBF' to 'C:\...<full path>\UNDOTBS01.DBF';
alter database rename file 'C:\...<full path>\UNDOTBS1.DBF' to 'C:\...<full path>\SYSAUX.DBF';
select t.name tsname, d.name, d.BYTES / 1024 / 1024 mb from v$tablespace t, v$datafile d where t.ts# = d.ts#;
... make sure you get what was asked for ...
TSNAME NAME MB
SYSTEM C:\ORACLEXE\APP\ORACLE\ORADATA\XE\SYSTEM.DBF 360
SYSAUX C:\ORACLEXE\APP\ORACLE\ORADATA\XE\SYSAUX.DBF 660
UNDOTBS1 C:\ORACLEXE\APP\ORACLE\ORADATA\XE\UNDOTBS01.DBF 25
USERS C:\ORACLEXE\APP\ORACLE\ORADATA\XE\USERS.DBF 100
alter database open;
... database altered ... -
System tablespace space not regained when objects are dropped
Mine is Oracle 10g 10.2 on Windows.
I am importing an export file into a user. It takes some amount of space in SYSTEM and another tablespace. When I drop the user, the space in the SYSTEM tablespace does not come back. ANY IDEA WHY?
BEFORE IMPORT
SQL> select sum(bytes)/1024/1024 from dba_segments where owner='SYSTEM';
SUM(BYTES)/1024/1024
22.1875
SQL> select sum(bytes)/1024/1024 from dba_segments where owner='SYS';
SUM(BYTES)/1024/1024
544.1875
SQL> select sum(bytes)/1024/1024 from dba_segments where segment_name='SOURCE$';
SUM(BYTES)/1024/1024
41
I use the following commands to import
SQL>create user <username> identified by <password> default tablespace <tsname> quota unlimited on <tsname>;
SQL>grant create session,imp_full_database to <username>;
imp system file=filename.dmp log=logname.log fromuser=<username> touser=<username> statistics=none
AFTER IMPORT
SQL> select sum(bytes)/1024/1024 from dba_segments where segment_name='SOURCE$';
SUM(BYTES)/1024/1024
53
SQL> select sum(bytes)/1024/1024 from dba_segments where owner='SYSTEM';
SUM(BYTES)/1024/1024
22.1875
SQL> select sum(bytes)/1024/1024 from dba_segments where owner='SYS';
SUM(BYTES)/1024/1024
728.375
AFTER DROPPING THE USER/SCHEMA
SQL> select sum(bytes)/1024/1024 from dba_segments where owner='SYS';
SUM(BYTES)/1024/1024
728.375
SQL> select sum(bytes)/1024/1024 from dba_segments where owner='SYSTEM';
SUM(BYTES)/1024/1024
22.1875
SQL> select sum(bytes)/1024/1024 from dba_segments where segment_name='SOURCE$';
SUM(BYTES)/1024/1024
53
I even tried deleting the objects first and then dropping the user
SQL> delete from source$ where obj# in(select object_id from dba_objects where owner='USERNAME');
211252 rows deleted.
SQL> commit;
Commit complete.
SQL> drop user USERNAME cascade;
User dropped.
The space used by the schema in the SYSTEM tablespace is not coming back.
Hi user509593!
Adding objects to a tablespace requires space in that tablespace. This space is managed in segments and extents. If an extent is fully used (that means 100% usage), a new extent is added to the segment. Oracle uses a mechanism called the "High Water Mark" to mark the last used extent.
Your problem is that Oracle doesn't move this High Water Mark back when you drop objects from a tablespace. Once an extent is marked as used, it remains marked as used.
Before Adding Objects:
u = used Extent
x = free Extent
| = High Water Mark
uuuuuuxxxxx
...........|
After Adding Objects:
uuuuuuuxxxx
............|
After dropping objects:
uuuuuuuxxxx
............|
The only chance to get your "unused" space back is to reorganize your tablespace. But before you reorganize anything, please read the documentation so you know about the costs and traps that come with reorganization.
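To confirm whether the drop at least returned the extents to free space (the datafile itself never shrinks on its own), you can query dba_free_space; a sketch:

```sql
-- Free space inside SYSTEM after the drop; compare before and after
SELECT tablespace_name,
       SUM(bytes) / 1024 / 1024 AS free_mb
FROM   dba_free_space
WHERE  tablespace_name = 'SYSTEM'
GROUP  BY tablespace_name;
```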
Hope this helps!
Regarding Maxsize of Undo Tablespace
Dear experts,
While executing a procedure I got an error:
Error In Insertion..ORA-30036: unable to extend segment by 16384 in undo tablespace 'UNDOTBS1'
Then I increased the size of the undo tablespace, and again I got an error:
ORA-01144: File size (7680000 blocks) exceeds maximum of 4194303 blocks
Please give me an answer as soon as possible.
Thanks.
1) Resize your datafile to 4194303 * db_block_size:
alter database datafile < path/filename > resize <4194303 * db_block_size> ;
you find db_block_size by:
sqlplus /nolog
SQL> connect / as sysdba
SQL> show parameter db_block_size
or by simply have a look in the pfile (init<SID>.ora) in $ORACLE_HOME/dbs
2) add another file to the undo tablespace:
SQL>alter tablespace undotbs1 add datafile <path/filename> size <n> M;
a tablespace may have up to 1022 datafiles.
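The 4194303-block ceiling translates into bytes only once you know db_block_size; a quick sketch of the arithmetic, assuming the common 8K block size:

```shell
# Maximum smallfile datafile size in bytes = 4194303 blocks * db_block_size.
# 8192 is an assumed block size; check yours with 'show parameter db_block_size'.
max_bytes=`expr 4194303 \* 8192`
max_mb=`expr $max_bytes / 1048576`
echo "$max_bytes bytes"
echo "$max_mb MB"
```

With an 8K block size the ceiling works out to just under 32 GB, which is why the 7680000-block resize above was rejected.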
hope this helps
roman -
Restore - filename for datafile is missing in the control file
I am practicing RMAN recovery procedures and I encountered a problem that I do not know how to solve.
1) I took a hot backup of my test database.
2) Create tablespace TS_DATA_TEMP
3) Deleted tablespace TS_DATA which had 2 datafiles
4) Created tablespace TS_DATA with 1 datafile
5) shutdown database
6) logged into RMAN in nomount
7) restore controlfile from autobackup;
8) alter database mount;
9) restore database;
RMAN> restore database;
Starting restore at 22-AUG-08
using channel ORA_DISK_1
the filename for datafile 5 is missing in the control file
skipping datafile 1; already restored to file /u02/oradata/EDM91/system01.dbf
skipping datafile 2; already restored to file /u02/oradata/EDM91/undotbs01.dbf
skipping datafile 6; already restored to file /u02/oradata/EDM91/index
skipping datafile 3; already restored to file /u02/oradata/EDM91/sysaux01.dbf
skipping datafile 7; already restored to file /u03/oradata/EDM91/TS_INDEX01.dbf
skipping datafile 9; already restored to file /u03/oradata/EDM91/TS_LOB01.dbf
skipping datafile 4; already restored to file /u02/oradata/EDM91/users01.dbf
restore not done; all files readonly, offline, or already restored
Finished restore at 22-AUG-08
10) tried to recover
Starting recover at 22-AUG-08
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 08/22/2008 15:04:22
RMAN-06094: datafile 5 must be restored
SQL> select name from v$datafile where file#=5;
NAME
/u04/oradata/EDM91/TS_DATA.dbf
12) alter database backup controlfile to trace
DATAFILE
'/u02/oradata/EDM91/system01.dbf',
'/u02/oradata/EDM91/undotbs01.dbf',
'/u02/oradata/EDM91/sysaux01.dbf',
'/u02/oradata/EDM91/users01.dbf',
'/u04/oradata/EDM91/TS_DATA.dbf',
'/u02/oradata/EDM91/index',
'/u03/oradata/EDM91/TS_INDEX01.dbf',
'/u04/oradata/EDM91/TS_DATA02.dbf',
'/u03/oradata/EDM91/TS_LOB01.dbf'
CHARACTER SET UTF8
So I'm not sure what the problem is. Datafile TS_DATA.dbf is listed in the control file, but I'm getting the error: the filename for datafile 5 is missing in the control file.
Suggestions on what's wrong with the control file?
I found my mistake; a newbie error.
I did not know I needed to include in my restore command the 'until' option when doing incomplete recovery. I only included the 'until' option in the recover command:
original command
restore database;
recover database until sequence =123;
should be
restore database until sequence = 123;
recover database until sequence =123;
I usually run the command using a RUN block with a SET UNTIL SEQUENCE but this case I used the single commands separately.
RUN
{
SET UNTIL SEQUENCE 123;
RESTORE DATABASE;
RECOVER DATABASE;
} -
Migrating SYSTEM tablespace from DMTS to LMTS in Oracle 9.2.0.7
Migrating SYSTEM tablespace from DMTS to LMTS in Oracle 9.2.0.7 using
brspace -f dbcreate
SAP version: 4.6C
Oracle: 9.2.0.7
OS: AIX 5.3
BRTools: 6.40(42) /** 6.40(10) or (12) will be sufficient according to SAP ***/
IMPORTANT ***************************************
MUST DO:
1. Create a Full Backup of your system
2. Test your Restore and recovery of your backup.
3. Have a copy of all your tablespaces names on hand
4. Know your SYS and SYSTEM passwords
5. Run CheckDB in DB13 to ensure it is completed successfully with no warnings. This reduce the chance of hitting errors in the process
6. Ensure your UNDO tablespace is big enough
7. OSS 400241 Problems with ops$ or sapr3 connect to Oracle
NOTE: OSS 706625(Read this note)
The migration from a dictionary-managed SYSTEM tablespace to a locally-managed tablespace using the PL/SQL procedure DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL is not supported in the SAP environment.
In UNIX, logon as ora<sid>
run command: brspace -f dbcreate
This command triggers a menu. There are seven (7) steps to complete the whole process. Do them in sequence, from step 1 to step 7, faithfully. In Step 1, ensure that your settings for PSAPTEMP, PSAPUNDO, etc. (details such as filenames) are correct. The rest I left as default and they are fine. Do not change the redo log group count from 8 to 4 even if you only have 4 redo groups; if you do, you might need to restore the system! If the seven steps complete without errors (warnings are acceptable), congrats. Perform a backup again.
Problems I encountered that caused me to restore system:
1./ Problem: I changed the redo group count from 8 to 4, and at a later stage, after the tablespaces and files were dropped, the system prompted me that 4 is not acceptable! I couldn't go back at that point, so a restore was performed.
Solution: Leave the default value 8 as it is
2./ Problem: I was using a wireless network; the network broke, and thus the process broke.
Solution: This process is user-interactive and requires you to input confirmations along the way, so do it over a LAN.
3./ In the process of dropping tablespace PSAP<SID>, I encountered:
BR0301E SQL error -604 at location BrTspDrop-2
ORA-00601: error occurred at recursive SQL level 1
ORA-01555: snapshot too old: rollback segment number 22 with name "_SYSSMU22$" too small
Solution: I have not fixed this yet, but I think it is because my PSAPUNDO is too small (800 MB), so I will increase it to a bigger value, e.g. 5 GB.
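A larger undo tablespace can be created and switched to before retrying; a minimal sketch, assuming the datafile path below is illustrative (the 5 GB size matches the follow-up later in this thread):

```sql
-- Create a bigger undo tablespace and make it the active one;
-- the datafile path is an example only.
CREATE UNDO TABLESPACE PSAPUNDO2
  DATAFILE '/oracle/DVT/sapdata1/undo_2/undo2.data1' SIZE 5G;
ALTER SYSTEM SET undo_tablespace = PSAPUNDO2 SCOPE = BOTH;
```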
4. Problem: Unable to start SAP after a successful migration. OPS$ user problem.
Solution: Log on as <sid>adm and run R3trans -x in a directory where <sid>adm has read/write permission. R3trans -x creates a file called trans.log. Read the details and refer to OSS 400241.
Result: I have successfully performed this on one (1) system and am currently doing it on another, but encountered Problem 3. I will update this further if there are more findings.
REFERENCE:
OSS 748434 New BRSPACE function "dbcreate" - recreate database
OSS 646681 Reorganizing tables with BRSPACE
OSS 541538 FAX: Reorganizations
The current one I am implementing is a development system. The database is less than 100 GB. 800 MB of PSAPUNDO is sufficient for our development usage.
Follow up on Problem 3:
I created another undo tablespace, PSAPUNDO2 (undodata.dbf), with a size of 5 GB. I switched the undo tablespace to PSAPUNDO2 and took PSAPUNDO (undo.data1) offline. With PSAPUNDO2 online and PSAPUNDO offline, I started brspace -f dbcreate and encountered the error below at Step 2 (Export user tablespaces):
BR0301E SQL error -376 at location BrStattabCreate-3
ORA-00376: file 17 cannot be read at this time
ORA-01110: data file 17: '/oracle/DVT/sapdata1/undo_1/undo.data1'
ORA-06512: at "SYS.DBMS_STATS", line 5317
ORA-06512: at line 1
I aborted the process and verified that SAP is able to run with these settings. I started CheckDB in DB13 and it showed me these messages:
BR0301W SQL error -376 at location brc_dblog_open-5
ORA-00376: file 17 cannot be read at this time
ORA-01110: data file 17: '/oracle/DEV/sapdata1/undo_1/undo.data1'
BR0324W Insertion of database log header failed
I don't understand. I have already switched the undo tablespace from PSAPUNDO to PSAPUNDO2, so why does the message above still appear? Once I put PSAPUNDO back online, CheckDB completes successfully without warnings.
I ran show parameter undo_tablespace and the result is PSAPUNDO2 (5 GB).
So exactly, what's going on? Can anyone advise?
===============================================
I have managed to clear the message in DB13 after dropping the PSAPUNDO tablespace including contents and datafiles. This is mentioned in OSS Note 600141, p. 8, as below:
Note: You cannot just set the old rollback-tablespace PSAPROLL to offline instead of deleting it properly. This results in ORA-00376 in connection with ORA-01110 error messages. PSAPROLL must remain ONLINE until it is deleted. (Oracle bug 3635653)
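In SQL terms, the sequence that cleared the warnings here amounts to the following sketch (tablespace name as used above; the old undo tablespace must be ONLINE when it is dropped, per the note on Oracle bug 3635653):

```sql
-- Bring the old undo tablespace back online, then drop it properly;
-- leaving it merely OFFLINE is what triggers ORA-00376 / ORA-01110.
ALTER TABLESPACE PSAPUNDO ONLINE;
DROP TABLESPACE PSAPUNDO INCLUDING CONTENTS AND DATAFILES;
```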
-
TIP 03: Transportable Tablespaces in 10g by Joel Pérez
Hi OTN Readers!
Every day I connect to the Internet, and one of the first things I do is open the OTN main page to look for new articles or news about Oracle technology. Then I open the main page of the OTN Forums and check which answers I can write to help people working with Oracle technology, and I decided to begin writing some threads to help DBAs and Developers learn the new features of 10g. I hope you can take advantage of them; they will be published here in this forum. For any comment you can write to me directly at: [email protected]
Please do not reply to this thread; if you have any question related to this, I recommend you open a new post. Thanks!
The tip of this thread is: Transportable Tablespaces
Joel Pérez
http://otn.oracle.com/experts
Step 9: Run this command to see all the options of the export utility:
C:\>
C:\>EXP HELP=Y
Export: Release 10.1.0.2.0 - Production on Fri Apr 23 19:48:30 2004
Copyright (c) 1982, 2004, Oracle. All rights reserved.
You can let Export prompt you for parameters by entering the EXP
command followed by your username/password:
Example: EXP SCOTT/TIGER
Or, you can control how Export runs by entering the EXP command followed
by various arguments. To specify parameters, you use keywords:
Format: EXP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: EXP SCOTT/TIGER GRANTS=Y TABLES=(EMP,DEPT,MGR)
or TABLES=(T1:P1,T1:P2), if T1 is partitioned table
USERID must be the first parameter on the command line.
Keyword Description (Default) Keyword Description (Default)
USERID username/password FULL export entire file (N)
BUFFER size of data buffer OWNER list of owner usernames
FILE output files (EXPDAT.DMP) TABLES list of table names
COMPRESS import into one extent (Y) RECORDLENGTH length of IO record
GRANTS export grants (Y) INCTYPE incremental export type
INDEXES export indexes (Y) RECORD track incr. export (Y)
DIRECT direct path (N) TRIGGERS export triggers (Y)
LOG log file of screen output STATISTICS analyze objects (ESTIMATE)
ROWS export data rows (Y) PARFILE parameter filename
CONSISTENT cross-table consistency(N) CONSTRAINTS export constraints (Y)
OBJECT_CONSISTENT transaction set to read only during object export (N)
FEEDBACK display progress every x rows (0)
FILESIZE maximum size of each dump file
FLASHBACK_SCN SCN used to set session snapshot back to
FLASHBACK_TIME time used to get the SCN closest to the specified time
QUERY select clause used to export a subset of a table
RESUMABLE suspend when a space related error is encountered(N)
RESUMABLE_NAME text string used to identify resumable statement
RESUMABLE_TIMEOUT wait time for RESUMABLE
TTS_FULL_CHECK perform full or partial dependency check for TTS
TABLESPACES list of tablespaces to export
TRANSPORT_TABLESPACE export transportable tablespace metadata (N)
TEMPLATE template name which invokes iAS mode export
Export terminated successfully without warnings.
Joel Pérez
http://otn.oracle.com/experts -
Hello everyone!!!
I'm using ORACLE XE for testing on my own XP machine (which has a 4K cluster size), and I've created a tablespace for LOBs with these settings:
CREATE TABLESPACE lobs_tablespace
DATAFILE '.....lobs_datafile.dbf'
SIZE 20M
AUTOEXTEND ON
NEXT 20M
MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
BLOCKSIZE 16K;
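A tablespace with a non-default 16K block size like this can only be created once a matching 16K buffer cache exists (the poster sets db_16k_cache_size further down); a minimal sketch, with an illustrative cache size:

```sql
-- A 16K-block tablespace requires db_16k_cache_size to be set first;
-- the value here is just an example.
ALTER SYSTEM SET db_16k_cache_size = 16M SCOPE = BOTH;
```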
so I created a test table with a LOB field:
CREATE TABLE test_table (
ID NUMBER,
FIELD1 VARCHAR2(50),
FIELD2 BLOB,
CONSTRAINT......
)
LOB (FIELD2) STORE AS LOB_FIELD2 (
TABLESPACE lobs_tablespace
STORAGE (INITIAL 5M NEXT 5M PCTINCREASE 0 MAXEXTENTS 99)
CHUNK 16384
NOCACHE NOLOGGING
INDEX LOB_FIELD2_IDX (
TABLESPACE lobs_tablespace_idxs));
where lobs_tablespace_idxs is created with blocksize of 16K
so at this point, since I'm doing some tests on functions, I tried to insert into this table with:
FOR i IN 1..10000 LOOP
fn_insert_into_table('description', 'filename');
END LOOP;
trying to insert a Word file of almost 5 MB, and the datafile lobs_datafile.dbf grew from its initial 50 MB to almost 5 GB...
I have some parameters settled as:
db_16K_cache_size=1028576
db_block_checking = false
undo_management = auto
db_block_size = 8192
sga_target = 128M
sga_max_size = 256M
so the question is: doing some calculus, 5 MB of a file * 10000 should be at max 60 MB... not 5 GB... so why did the datafile increase as much as it did? Is there something else I've missed?...
Thanks a lot to everyone! :-)
Hi,
I'm guessing that you'll need to do a bit of a re-org in order to free up the space.
You may well be able to do that just at the LOB level, rather than rebuilding the entire table.
There's stuff about that in Chapter 3 of the 10G App Developers Guide: Large Objects.
Of course, if the table is now empty, then you might as well just drop it and recreate it.
After that, you should be able to resize the datafile. -
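A sketch of the two usual re-org options at the LOB level, using the names from the question (whether SHRINK SPACE is available depends on the tablespace using automatic segment space management):

```sql
-- Option 1: shrink the LOB segment in place (requires ASSM).
ALTER TABLE test_table MODIFY LOB (FIELD2) (SHRINK SPACE);

-- Option 2: rebuild the LOB segment by moving it, then resize the file.
ALTER TABLE test_table MOVE LOB (FIELD2)
  STORE AS (TABLESPACE lobs_tablespace);
-- Afterwards the datafile can be resized; the target size is illustrative:
-- ALTER DATABASE DATAFILE '.....lobs_datafile.dbf' RESIZE 100M;
```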
TABLESPACE Issue with DEV_SOAINFRA
Hi,
I am getting the following error while using the function ora:readBinaryFromFile
ORA-01691: unable to extend lob segment DEV_SOAINFRA.SYS_LOB0000147944C00002$$ by 8192 in tablespace DEV_SOAINFRA.
I know that automatic extending of the DEV_SOAINFRA tablespace in em, or purging of old instance data, will resolve the issue, but is there any other way I can avoid storing instance data in SOAINFRA? This is because I receive large files through the FTP adapter and they are getting into the dehydration store. Is there any way I can avoid storing this data?
Thanks for your response,
1. In step 1 I am using the FTP adapter to copy from the source directory to the target directory (i.e., the file gets copied from remote location A to local location A) in a single invoke, and I have used the following type in the <interaction-spec className="oracle.tip.adapter.ftp.outbound.FTPIoInteractionSpec"> jca file.
2. After this I invoke a web service adapter which takes the files from the local location to remote location B. The elements inside the WSDL are filename, filepath and file content. For the file content I am using ora:readBinaryFromFile(). Also, I attached the MTOM policy to the WS adapter. Everything is working fine, but I am getting the tablespace issue frequently, as the attachments are getting stored in the attachment table of SOAINFRA.
Thanks, -
How to impdp when I do not have the source tablespace.
I have a situation where I may not know what a client's source tablespace is. I only know the source schema. What I need to be able to do is import the data into another schema (that's easy with remap_schema). What I need impdp to do is place the 'new' objects from the export file into the default tablespace of the destination user. I cannot use REMAP_TABLESPACE because I don't know what it is ahead of time, and there may be multiple tablespaces. I want EVERYTHING to go into the destination user's default tablespace. Does anyone know how to do this? Here's the PL/SQL that I'm currently using:
DECLARE
dph NUMBER;
BEGIN
dph := dbms_datapump.open(operation => 'IMPORT', job_mode =>'SCHEMA',job_name => 'test15_IMPORT');
dbms_datapump.add_file(handle => dph,filename => 'test_client_backup.dmp', directory => upper('CLIENT_FILES_DIR'),filetype=>1);
dbms_datapump.add_file(handle => dph,filename =>'clientimpdp'||to_char(sysdate,'ddmmyyyyHHMMSS')||'.log' ,directory =>upper('CLIENT_FILES_DIR'),filetype=>3);
dbms_datapump.metadata_filter (handle => dph, name =>'SCHEMA_EXPR',value => 'IN(''TEST_SOURCE_SCHEMA'')');
DBMS_DATAPUMP.METADATA_REMAP(dph,'REMAP_SCHEMA',upper('TEST_SOURCE_SCHEMA'),upper('TEST_DEST_SCHEMA'));
dbms_datapump.set_parameter(handle => dph,name =>'TABLE_EXISTS_ACTION', value =>'REPLACE');
DBMS_DATAPUMP.METADATA_FILTER(dph,'EXCLUDE_PATH_EXPR','=''GRANT''');
DBMS_DATAPUMP.METADATA_FILTER(dph,'EXCLUDE_PATH_EXPR','like''%GRANT''');
DBMS_DATAPUMP.metadata_filter (handle => dph, name => 'EXCLUDE_PATH_EXPR', VALUE => '=''USER''');
DBMS_DATAPUMP.metadata_filter (handle => dph, name => 'EXCLUDE_PATH_EXPR', VALUE => '=''TABLESPACE''');
DBMS_DATAPUMP.metadata_filter (handle => dph, name => 'EXCLUDE_PATH_EXPR', VALUE => '=''TABLESPACE_QUOTA''');
DBMS_DATAPUMP.metadata_filter (handle => dph, name => 'EXCLUDE_PATH_EXPR', VALUE => '=''SYSTEM_GRANT''');
DBMS_DATAPUMP.metadata_filter (handle => dph, name => 'EXCLUDE_PATH_EXPR', VALUE => '=''ROLE_GRANT''');
DBMS_DATAPUMP.metadata_filter (handle => dph, name => 'EXCLUDE_PATH_EXPR', VALUE => '=''DEFAULT_ROLE''');
DBMS_DATAPUMP.metadata_filter (handle => dph, name => 'EXCLUDE_PATH_EXPR', VALUE => '=''PROCACT_SCHEMA''');
dbms_datapump.start_job(dph);
dbms_datapump.detach(dph);
--EXCEPTION
--WHEN OTHERS THEN
--dbms_output.put_line('Error:' || sqlerrm || ' on Job-ID:' || dph);
END;
I can't really understand your code. But if the source tablespace is not available in the target database, then the default behaviour is that it will automatically go to the default tablespace of that user. Say, for example, you export the data from tablespace TB1 in the source. When you try to import it, it will automatically be imported into TB2 (the default tablespace for the user in the target database) if there is no TB1, or if the user has no quota on TB1.
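One way to make that quota-fallback behaviour deterministic is to remove the destination user's quota everywhere except the default tablespace before the import; a minimal sketch (schema and tablespace names are the placeholders used above, and dest_default_ts is hypothetical):

```sql
-- With no quota on any other tablespace, imported segments fall back
-- to the user's default tablespace; repeat the QUOTA 0 line per tablespace.
ALTER USER test_dest_schema QUOTA 0 ON users;
ALTER USER test_dest_schema DEFAULT TABLESPACE dest_default_ts;
ALTER USER test_dest_schema QUOTA UNLIMITED ON dest_default_ts;
-- Note: this only works if the user does not hold the
-- UNLIMITED TABLESPACE system privilege.
```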
-
TRANSPORTABLE TABLESPACE IN 8.1
Product: ORACLE SERVER
Date written: 2004-08-16
Oracle8i provides a feature that lets you take the datafiles that make up a tablespace, move them, and plug them into another database.
SCOPE
In Standard Edition (8i through 10g), only the import side of transportable tablespaces is supported.
USES
1. When large volumes of data must flow within an enterprise information system - for example, from an OLTP database to a data warehouse, or from a data warehouse to a data mart - before 8.1 you would try to speed the job up with SQL*Loader direct path loads or parallel DML. With the Transportable Tablespace feature in 8.1, the job can be completed in roughly the time it takes to copy the datafiles to the new system.
2. Data that is changed and managed centrally but used at branch offices can be distributed on CD-ROM and the like. For example, a tablespace holding product specifications and prices can be changed and stored centrally, distributed, and plugged into the branch databases for use by an order system.
3. Content providers can ship their contents as transportable tablespaces that customers plug directly into their own databases.
CHARACTERISTICS AND RESTRICTIONS
1. All data in the tablespace is moved.
2. Media recovery is supported.
3. The source and target databases must
- run on the same OS
- be Oracle8i (8.1) or later
- use the same block size
- use the same character set
PROCEDURE
1. Set the tablespace to read only.
This guarantees that no changes are made to the tablespace while the files are being copied.
2. Export the metadata from the source database.
This dumps the dictionary information for the tablespace and the objects in it.
3. Move the tablespace's datafiles to the target system.
4. Move the export dump file.
5. Import the metadata into the target database.
6. If needed, set the tablespace back to read-write mode afterwards.
SAMPLE
Source database: dbA
Target database: dbB
Tablespace to move: TRANS_TS (made up of /u01/data/trans_ts01.dbf and /u01/data/trans_ts02.dbf)
1. On dbA, set TRANS_TS to read only:
alter tablespace TRANS_TS read only ;
2. On dbA, export the metadata:
exp sys/manager file=trans.dmp transport_tablespace=y
tablespaces=trans_ts triggers=n constraints=n
If the version is 8.1.6 or later, use exp \'sys/manager as sysdba\' instead of exp system/manager.
Set transport_tablespace (Y or N) to Y.
tablespaces names the tablespaces to be transported.
triggers and constraints specify whether the triggers and constraints on the tables in the tablespace are included.
3. Binary-copy the two TRANS_TS datafiles to the system hosting dbB.
4. Binary-copy the dump file exported in step 2 to the system hosting dbB.
5. Import the metadata into dbB:
imp sys/manager file=trans.dmp transport_tablespace=y
datafiles=/disk1/trans_ts01.dbf,/disk2/trans_ts02.dbf
Again, from 8.1.6 on, write \'sys/manager as sysdba\' instead of sys/manager.
Set transport_tablespace (Y or N) to Y.
The datafile names refer to the filenames as copied onto the dbB system.
6. If needed, set the tablespace to read-write mode:
alter tablespace TRANS_TS read write ;
TRANSPORT SET
The set of tablespaces you want to transport must be self-contained.
If the tablespace set contains a partitioned table, all partitions of that table must be inside the set; similarly, the data of LOB columns must reside in the set together with the table data. A set in which all related objects are present like this is called self-contained.
A tablespace set that is not self-contained cannot be transported.
To check whether a transport tablespace set is self-contained, use the DBMS_TTS.TRANSPORT_SET_CHECK procedure.
For example, running
DBMS_TTS.TRANSPORT_SET_CHECK(ts_list=>'A,B,C',incl_constraints=>TRUE)
records in the TRANSPORT_SET_VIOLATIONS view whether the transport tablespace set made up of the three tablespaces A, B, and C is self-contained.
If incl_constraints is set, referential (foreign key) constraints are also checked for self-containment.
you can do the following:
1. export all objects in your tablespace, by using option tables=(x,y,...)
2. drop your tablespace including contents
3. create new tablespace with a desired name using the same datafiles names (you can physically delete them first or use a REUSE option)
4. create objects you exported in the new tablespace.
5. perform your import
By creating the objects first, you guarantee that the import will populate the tables in the tablespace you need.
This is just a general plan; you have to clarify and confirm all the details.
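The plan above can be sketched roughly as follows; the exp/imp calls are shown as comments, and all names, paths and sizes are placeholders:

```sql
-- 1. Export the objects first (at the OS prompt):
--      exp scott/tiger file=objs.dmp tables=(x,y)
-- 2. Drop the tablespace with its contents.
DROP TABLESPACE old_ts INCLUDING CONTENTS;
-- 3. Recreate it under the desired name, reusing the datafile names
--    (REUSE overwrites the existing files).
CREATE TABLESPACE new_ts
  DATAFILE '/u01/data/old_ts01.dbf' SIZE 100M REUSE;
-- 4./5. Recreate the objects in the new tablespace, then import:
--      imp scott/tiger file=objs.dmp ignore=y
```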