Startup restrict for export of a large database?
Hello,
The Oracle Admin guide suggests that one possible use for the "restricted" mode of an Oracle database is to do a consistent export of a large database.
But is this necessary, given that the option CONSISTENT=Y exists in the exp tool? I understand that using CONSISTENT=Y may need a lot of undo space on a large database, but could there be any reason other than this to do an export in restricted mode rather than using the CONSISTENT=Y parameter?
I believe the primary reason is the one you mentioned: CONSISTENT=Y is going to need a lot of undo space on a busy, large database. Depending on your situation, it may not be feasible to allocate that much undo space.
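For illustration, a restricted-mode export might look like the sketch below. This is a hypothetical outline, not a tested procedure; paths, credentials and options are placeholders. With logons restricted, the data cannot change under the export, so CONSISTENT=Y and its undo usage are unnecessary.

```shell
# Hypothetical sketch: take the database to restricted mode, export
# without CONSISTENT=Y, then lift the restriction. Only users with the
# RESTRICTED SESSION privilege (e.g. SYSTEM) can connect meanwhile.
sqlplus -s "/ as sysdba" <<'EOF'
shutdown immediate
startup restrict
EOF

exp system/manager FULL=Y CONSISTENT=N FILE=/backup/full.dmp LOG=/backup/full.log

sqlplus -s "/ as sysdba" <<'EOF'
alter system disable restricted session;
EOF
```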
Similar Messages
-
Partner Function Restriction for Export License in GTS
Hi GTS Guru,
We are setting up an export license with license type IVLP. In order to restrict the forwarding agent, we check "Partner function" in the "Objects to be checked" in the definition of the license type.
And we also set up the appropriate partner group:
10 WE ship-to party
20 AG sold-to party
30 CR forwarding agent
In the license, we also set up the appropriate partners in the "partner" tab of the license.
Then we go back to ECC.
1. create a STO -> add the forwarding agent partner;
2. create the delivery -> it seems the forwarding agent partner was not copied from the STO; then we add the appropriate forwarding agent in the header of the delivery;
3. Run "/SAPSLL/MENU_LEGALR3" to transfer the outbound delivery from ECC to GTS.
4. But it was blocked at the license check
The detail error information is " Partner Function CR does not exist in the document"
You can download a file include all screen snapshot for the whole progress.
http://rapidshare.com/files/229733335/Log_For_SAP_Support.doc.html
Thanks for your help...
Hi Rajesh,
1. I subsequently can't create the transfer order for the delivery -> the issue log is "SAP GTS: Legal control: item () will not adopted"; that might be because the delivery was blocked by the export license.
2. I can see all 3 partners in the customs documents:
The detail is as following:
Sold-To Party for Export 4804 CN 320093 -> partner function: AG
Fwd. Agent (Export and Import) 33997 US EXDO -> partner function: SP
Ship-to Party for Export 4804 CN 320093 -> partner function: WE
The root cause for this issue might be that the partner function for 33997 is not CR but SP in this case.
Then I went to check the partner function definition in GTS; the description for SP is sold-to party.
Could you let me know what the description of SP is in your standard system?
The next step I am going to test is:
1. Change the partner group, remove CR, and add SP.
2. Update the export license: change CR to SP for 33997.
3. I also need to change the assignment of partner function from Feeder System;
4. Re-do the delivery transfer again.
Please help and advise whether this is the root cause for this issue.
Thanks for your help.
Edited by: Rick Guan on May 7, 2009 4:56 PM -
Is there any setting for exporting data from database to datasource?
Hi ALL
I am using a CRM 2007 system. I had activated the opportunities OLTP reports, but I am not able to get the data in the reports. I then checked the respective data source in RSA3 and it shows zero records.
Is there any procedure for getting data from database tables into the data source?
You can follow the same 3.x dataflow even though you upgrade.
The only differences:
3.x - would be an emulated DS
7.0 - RSDS
3.x - file path declared in the infopackage
7.0 - declared in the DS (and inherited by the infopackage)
Infosource - optional - works fine on both versions
When creating a new DS, it has to be RSDS; there is no way around that.
How can we suggest a new DBA OCE certification for very large databases?
What web site, or what phone number can we call to suggest creating a VLDB OCE certification.
The largest databases that I have ever worked with were barely over 1 trillion bytes.
Some people told me that the work of being a DBA totally changes when you have a VERY LARGE DATABASE.
I could guess that maybe some of the following configuration topics might be on it:
* Partitioning
* parallel
* bigger block size - DSS vs OLTP
* etc
Where could I send in a recommendation?
Thanks, Roger
I wish there were some details about the OCE data warehousing.
Look at the topics for 1Z0-515. Assume that the 'lightweight' topics will go (like Best Practices) and that there will be more technical topics added.
Oracle Database 11g Data Warehousing Essentials | Oracle Certification Exam
Overview of Data Warehousing
Describe the benefits of a data warehouse
Describe the technical characteristics of a data warehouse
Describe the Oracle Database structures used primarily by a data warehouse
Explain the use of materialized views
Implement Database Resource Manager to control resource usage
Identify and explain the benefits provided by standard Oracle Database 11g enhancements for a data warehouse
Parallelism
Explain how the Oracle optimizer determines the degree of parallelism
Configure parallelism
Explain how parallelism and partitioning work together
Partitioning
Describe types of partitioning
Describe the benefits of partitioning
Implement partition-wise joins
Result Cache
Describe how the SQL Result Cache operates
Identify the scenarios which benefit the most from Result Set Caching
OLAP
Explain how Oracle OLAP delivers high performance
Describe how applications can access data stored in Oracle OLAP cubes
Advanced Compression
Explain the benefits provided by Advanced Compression
Explain how Advanced Compression operates
Describe how Advanced Compression interacts with other Oracle options and utilities
Data integration
Explain Oracle's overall approach to data integration
Describe the benefits provided by ODI
Differentiate the components of ODI
Create integration data flows with ODI
Ensure data quality with OWB
Explain the concept and use of real-time data integration
Describe the architecture of Oracle's data integration solutions
Data mining and analysis
Describe the components of Oracle's Data Mining option
Describe the analytical functions provided by Oracle Data Mining
Identify use cases that can benefit from Oracle Data Mining
Identify which Oracle products use Oracle Data Mining
Sizing
Properly size all resources to be used in a data warehouse configuration
Exadata
Describe the architecture of the Sun Oracle Database Machine
Describe configuration options for an Exadata Storage Server
Explain the advantages provided by the Exadata Storage Server
Best practices for performance
Employ best practices to load incremental data into a data warehouse
Employ best practices for using Oracle features to implement high performance data warehouses -
Export/import - using TOAD FOR ORACLE and ORACLE DATABASE 10G EXPRESS or sqlplus
Hi all,
Could you please kindly help me?
I am using TOAD FOR ORACLE to export a table to flat file A. The tool only supports separating fields with spaces.
And I am using the web page of ORACLE DATABASE 10G EXPRESS to import the data from flat file A into another database. To load data into a table from a text file, the web page relies on commas to separate fields.
So could you give me any suggestion? I need to export data via TOAD FOR ORACLE and then import it into another database via the home page of ORACLE DATABASE 10G EXPRESS or sqlplus.
Thank you so much for your help!
Don't use TOAD for exporting your data. Use PL/SQL. Below is the code, given on AskTom, that does what you want.
create or replace function dump_csv( p_query     in varchar2,
                                     p_separator in varchar2 default ',',
                                     p_dir       in varchar2,
                                     p_filename  in varchar2 )
return number
is
    l_output      utl_file.file_type;
    l_theCursor   integer default dbms_sql.open_cursor;
    l_columnValue varchar2(2000);
    l_status      integer;
    l_colCnt      number default 0;
    l_separator   varchar2(10) default '';
    l_cnt         number default 0;
begin
    l_output := utl_file.fopen( p_dir, p_filename, 'w' );
    dbms_sql.parse( l_theCursor, p_query, dbms_sql.native );
    -- define up to 255 output columns; ORA-01007 tells us when to stop
    for i in 1 .. 255 loop
        begin
            dbms_sql.define_column( l_theCursor, i, l_columnValue, 2000 );
            l_colCnt := i;
        exception
            when others then
                if ( sqlcode = -1007 ) then
                    exit;
                else
                    raise;
                end if;
        end;
    end loop;
    dbms_sql.define_column( l_theCursor, 1, l_columnValue, 2000 );
    l_status := dbms_sql.execute(l_theCursor);
    loop
        exit when ( dbms_sql.fetch_rows(l_theCursor) <= 0 );
        -- the separator is empty before the first column of each row
        l_separator := '';
        for i in 1 .. l_colCnt loop
            dbms_sql.column_value( l_theCursor, i, l_columnValue );
            utl_file.put( l_output, l_separator || l_columnValue );
            l_separator := p_separator;
        end loop;
        utl_file.new_line( l_output );
        l_cnt := l_cnt + 1;
    end loop;
    dbms_sql.close_cursor(l_theCursor);
    utl_file.fclose( l_output );
    return l_cnt;
end dump_csv;
/
Here is the link to this thread on AskTom.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:95212348059 -
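The separator logic in dump_csv above (an empty separator before the first column, then the configured delimiter for the rest of the row) can be sketched in plain Python; here an in-memory SQLite database and invented table stand in for Oracle:

```python
import sqlite3

def dump_csv(cursor, query, separator=","):
    """Mirror dump_csv's row loop: start each row with an empty separator,
    then switch to the configured delimiter after the first column."""
    cursor.execute(query)
    lines = []
    for row in cursor.fetchall():
        sep, parts = "", []
        for col in row:
            parts.append(sep + str(col))
            sep = separator
        lines.append("".join(parts))
    return lines

# Demo data (invented names).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO emp VALUES (?, ?)", [(1, "KING"), (2, "SCOTT")])
lines = dump_csv(cur, "SELECT id, name FROM emp ORDER BY id", separator="|")
print(lines)  # ['1|KING', '2|SCOTT']
```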
SAP EHP Update for Large Database
Dear Experts,
We are planning for the SAP EHP7 update for our system. Please find the system details below
Source system: SAP ERP6.0
OS: AIX
DB: Oracle 11.2.0.3
Target System: SAP ERP6.0 EHP7
OS: AIX
DB: 11.2.0.3
RAM: 32 GB
The main concern here is the DB size. It is approximately 3 TB. I have already gone through forums and notes; it is mentioned that the DB size does not have any impact on the SAP EHP update using SUM. However, I am still thinking it will have an impact in the downtime phase.
Please advise on this.
Regards,
Raja. G
Hi Raja,
The main concern here is the DB size. It is approximately 3 TB. I have already gone through forums and notes; it is mentioned that the DB size does not have any impact on the SAP EHP update using SUM. However, I am still thinking it will have an impact in the downtime phase.
Although a 3 TB DB size may not have a direct impact on the upgrade process, the downtime of the system may vary with a larger database size.
Points to consider
1) DB backup before entering into downtime phase
2) Number of Programs & Tables stored in the database. ICNV Table conversions and XPRA execution will be dependent on these parameters.
Hope this helps.
Regards,
Deepak Kori -
Choosing a database among various databases for export/import
Hello,
I am using Forms 6i and I want to export several databases using Forms.
With regard to that, what technique/code could I use to select the database of my choice for export?
How could we get the name/service of a particular database among various databases using Forms 6i?
Could someone give me an idea/code for the above requirement?
Thanks
Amit
Why would you want to use Forms (a client tool) to import or export a database? Imp and exp are command-line tools meant to be run on the database server.
You will probably hit other problems, like different database versions. For every database version you need the correct imp and exp.
If you really want to use Forms, then just make a table that holds the names of the databases. -
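A minimal sketch of such a lookup table (all names invented); Forms could populate a list item from it and pass the chosen alias on to the export step:

```sql
-- Hypothetical lookup table of exportable databases for the Forms screen.
CREATE TABLE database_list (
  db_name   VARCHAR2(30) PRIMARY KEY,
  tns_alias VARCHAR2(64) NOT NULL
);
INSERT INTO database_list VALUES ('PROD', 'prod.mydomain');
INSERT INTO database_list VALUES ('TEST', 'test.mydomain');
```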
Hi Friends,
I'm actually starting to administer a large 10.2.0.1.0 database on Windows Server.
Do you guys have tips or docs on best practices for large databases? I mean as large as 2 TB of data.
I'm good at administering small and medium DBs, but some of them just got bigger and bigger!!!
Tks a lot
I would like to mention the links below:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/partconc.htm
http://download.oracle.com/docs/cd/B28359_01/server.111/b32024/vldb_backup.htm
For a couple of good pieces of advice and considerations for RMAN with a VLDB:
http://sosdba.wordpress.com/2011/02/10/oracle-backup-and-recovery-for-a-vldb-very-large-database/
Google "vldb AND RMAN in oracle"
Regards
Girish Sharma -
SQL Server Migration Assistant (SSMA) for Oracle okay for large database migrations?
All:
We don't have much experience with the SSMA (Oracle) tool and need some advice from those of you familiar with it. We must migrate an Oracle 11.2.0.3.0 database to SQL Server 2014. The Oracle database consists of approximately 25,000 tables and 30,000
views and related indices. The database is approximately 2.3 TB in size.
Is this do-able using the latest version of SSMA-Oracle? If so, how much horsepower would you throw at this to get it done?
Any other gotchas and advice appreciated.
Kindest Regards,
Bill
Bill Davidson
Hi Bill,
SSMA supports migrating large database of Oracle. To migrate Oracle database to SQL Server 2014, you could use the latest version:
Microsoft SQL Server Migration Assistant v6.0 for Oracle. Before the migration, you should pay attention to the points below.
1.The account that is used to connect to the Oracle database must have at least CONNECT permissions. This enables SSMA to obtain metadata from schemas owned by the connecting user. To obtain metadata for objects in other schemas and then convert objects
in those schemas, the account must have the following permissions: CREATE ANY PROCEDURE, EXECUTE ANY PROCEDURE, SELECT ANY TABLE, SELECT ANY SEQUENCE, CREATE ANY TYPE, CREATE ANY TRIGGER, SELECT ANY DICTIONARY.
2.Metadata about the Oracle database is not automatically refreshed. The metadata in Oracle Metadata Explorer is a snapshot of the metadata when you first connected, or the last time that you manually refreshed metadata. You can manually update metadata
for all schemas, a single schema, or individual database objects. For more information about the process, please refer to the similar article:
https://msdn.microsoft.com/en-us/library/hh313203(v=sql.110).
3.The account that is used to connect to SQL Server requires different permissions depending on the actions that the account performs as the following:
• To convert Oracle objects to Transact-SQL syntax, to update metadata from SQL Server, or to save converted syntax to scripts, the account must have permission to log on to the instance of SQL Server.
• To load database objects into SQL Server, the account must be a member of the sysadmin server role. This is required to install CLR assemblies.
• To migrate data to SQL Server, the account must be a member of the sysadmin server role. This is required to run the SQL Server Agent data migration packages.
• To run the code that is generated by SSMA, the account must have Execute permissions for all user-defined functions in the ssma_oracle schema of the target database. These functions provide equivalent functionality of Oracle system functions, and
are used by converted objects.
• If the account that is used to connect to SQL Server is to perform all migration tasks, the account must be a member of the sysadmin server role.
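If a single account is to handle all migration tasks, the setup boils down to sysadmin membership; a sketch with an invented login name:

```sql
-- Hypothetical login name; per the list above, sysadmin membership is what
-- SSMA requires for loading CLR assemblies and running the SQL Server Agent
-- data migration packages.
ALTER SERVER ROLE sysadmin ADD MEMBER [CORP\ssma_migration];
```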
For more information about the process, please refer to the similar article:
https://msdn.microsoft.com/en-us/library/hh313158(v=sql.110)
4.Metadata about SQL Server databases is not automatically updated. The metadata in SQL Server Metadata Explorer is a snapshot of the metadata when you first connected to SQL Server, or the last time that you manually updated metadata. You can manually update
metadata for all databases, or for any single database or database object.
5.If the engine being used is Server Side Data Migration Engine, then, before you can migrate data, you must install the SSMA for Oracle Extension Pack and the Oracle providers on the computer that is running SSMA. The SQL Server Agent service must also
be running. For more information about how to install the extension pack, see Installing Server Components (OracleToSQL). And when SQL Express edition is used as the target database, only client side data migration is allowed and server side data migration
is not supported. For more information about the process, please refer to the similar article:
https://msdn.microsoft.com/en-us/library/hh313202(v=sql.110)
For how to migrate Oracle Databases to SQL Server, please refer to the similar article:
https://msdn.microsoft.com/en-us/library/hh313159(v=sql.110).aspx
Regards,
Michelle Li -
Startup immediate failed after Startup restrict
Hi,
Last Saturday I had a strange issue on one of the production DBs; below are the alert log details. The DB was not shut down.
Fri Apr 23 23:23:03 2010
Thread 1 advanced to log sequence 915 (LGWR switch)
Current log# 3 seq# 915 mem# 0: /u02/dsk02/oradata/TRADE1/redo_03_01.log
Current log# 3 seq# 915 mem# 1: /u02/dsk03/oradata/TRADE1/redo_03_02.log
Sat Apr 24 01:17:09 2010
Thread 1 advanced to log sequence 916 (LGWR switch)
Current log# 4 seq# 916 mem# 0: /u02/dsk02/oradata/TRADE1/redo_04_01.log
Current log# 4 seq# 916 mem# 1: /u02/dsk03/oradata/TRADE1/redo_04_02.log
Sat Apr 24 01:30:01 2010
Starting background process EMN0
EMN0 started with pid=28, OS id=24150
Sat Apr 24 01:30:01 2010
Shutting down instance: further logons disabled
Sat Apr 24 01:32:53 2010
Stopping background process QMNC
Sat Apr 24 01:32:53 2010
Stopping background process CJQ0
Sat Apr 24 01:32:55 2010
Stopping background process MMNL
Sat Apr 24 01:32:56 2010
Stopping background process MMON
Sat Apr 24 01:32:57 2010
Shutting down instance (immediate)
License high water mark = 105
All dispatchers and shared servers shutdown
Sat Apr 24 01:33:06 2010
ALTER DATABASE CLOSE NORMAL
Sat Apr 24 01:34:01 2010
Shutting down instance (abort)
License high water mark = 105
Instance terminated by USER, pid = 28391
Sat Apr 24 01:34:04 2010
Starting ORACLE instance (restrict)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_10 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =36
LICENSE_MAX_USERS = 0
SYS auditing is disabled
ksdpec: called for event 13740 prior to event group initialization
Starting up ORACLE RDBMS Version: 10.2.0.4.0.
System parameters with non-default values:
processes = 300
__shared_pool_size = 503316480
__large_pool_size = 16777216
__java_pool_size = 33554432
__streams_pool_size = 16777216
sga_target = 4294967296
control_files = /u02/dsk01/oradata/TRADE1/control1.ctl, /u02/dsk02/oradata/TRADE1/control2.ctl, /u02/dsk03/oradata/TRADE1/control3.ctl
control_file_record_keep_time= 30
db_block_size = 8192
__db_cache_size = 2734686208
db_keep_cache_size = 838860800
db_writer_processes = 4
_db_block_hash_latches = 8192
compatible = 10.2.0.1.0
log_buffer = 144389632
db_file_multiblock_read_count= 8
db_recovery_file_dest = /u01/app/oracle/flash_recovery_area
db_recovery_file_dest_size= 2147483648
_disable_incremental_checkpoints= TRUE
undo_management = AUTO
undo_tablespace = undotbs1
undo_retention = 900
_kgl_large_heap_warning_threshold= 8388608
remote_login_passwordfile= EXCLUSIVE
db_domain =
global_names = FALSE
dispatchers = (PROTOCOL=TCP) (SERVICE=TRADE1XDB)
session_cached_cursors = 30
utl_file_dir = USR_LOG_DIR
job_queue_processes = 0
cursor_sharing = FORCE
background_dump_dest = /u01/app/oracle/admin/TRADE1/bdump
user_dump_dest = /u01/app/oracle/admin/TRADE1/udump
max_dump_file_size = 10240
core_dump_dest = /u01/app/oracle/admin/TRADE1/cdump
audit_file_dest = /u01/app/oracle/admin/TRADE1/adump
commit_write = NOWAIT
db_name = TRADE1
open_cursors = 500
optimizer_mode = ALL_ROWS
parallel_threads_per_cpu = 2
query_rewrite_enabled = TRUE
query_rewrite_integrity = ENFORCED
pga_aggregate_target = 1652555776
PMON started with pid=2, OS id=28619
PSP0 started with pid=3, OS id=28623
MMAN started with pid=4, OS id=28627
DBW0 started with pid=6, OS id=28631
DBW1 started with pid=7, OS id=28633
DBW2 started with pid=5, OS id=28637
DBW3 started with pid=8, OS id=28641
LGWR started with pid=9, OS id=28645
CKPT started with pid=10, OS id=28649
SMON started with pid=11, OS id=28651
RECO started with pid=13, OS id=28655
MMON started with pid=12, OS id=28657
Sat Apr 24 01:34:08 2010
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
MMNL started with pid=14, OS id=28661
Sat Apr 24 01:34:08 2010
starting up 1 shared server(s) ...
Oracle Data Guard is not available in this edition of Oracle.
Sat Apr 24 01:34:08 2010
ALTER DATABASE MOUNT
Sat Apr 24 01:34:12 2010
Setting recovery target incarnation to 4
Sat Apr 24 01:34:12 2010
Successful mount of redo thread 1, with mount id 1828538448
Sat Apr 24 01:34:12 2010
Database mounted in Exclusive Mode
Completed: ALTER DATABASE MOUNT
Sat Apr 24 01:34:12 2010
ALTER DATABASE OPEN
Sat Apr 24 01:34:12 2010
Beginning crash recovery of 1 threads
Sat Apr 24 01:34:12 2010
Started redo scan
Sat Apr 24 01:34:52 2010
Completed redo scan
934280 redo blocks read, 193963 data blocks need recovery
Sat Apr 24 01:34:56 2010
Started redo application at
Thread 1: logseq 916, block 2, scn 4605308115
Sat Apr 24 01:34:57 2010
Recovery of Online Redo Log: Thread 1 Group 4 Seq 916 Reading mem 0
Mem# 0: /u02/dsk02/oradata/TRADE1/redo_04_01.log
Mem# 1: /u02/dsk03/oradata/TRADE1/redo_04_02.log
Sat Apr 24 01:43:34 2010
Completed redo application
Sat Apr 24 01:44:14 2010
Completed crash recovery at
Thread 1: logseq 916, block 934282, scn 4605502953
193963 data blocks read, 193546 data blocks written, 934280 redo blocks read
Sat Apr 24 01:44:17 2010
Thread 1 advanced to log sequence 917 (thread open)
Thread 1 opened at log sequence 917
Current log# 1 seq# 917 mem# 0: /u02/dsk02/oradata/TRADE1/redo_01_01.log
Current log# 1 seq# 917 mem# 1: /u02/dsk03/oradata/TRADE1/redo_01_02.log
Successful open of redo thread 1
Sat Apr 24 01:44:17 2010
SMON: enabling cache recovery
Sat Apr 24 01:44:24 2010
Successfully onlined Undo Tablespace 1.
Sat Apr 24 01:44:24 2010
SMON: enabling tx recovery
Sat Apr 24 01:44:24 2010
Database Characterset is WE8ISO8859P1
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=18, OS id=1122
Sat Apr 24 01:44:35 2010
Completed: ALTER DATABASE OPEN
Sat Apr 24 01:49:15 2010
db_recovery_file_dest_size of 2048 MB is 0.36% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Sat Apr 24 02:52:29 2010
MMNL absent for 4702 secs; Foregrounds taking over
MMNL absent for 18728 secs; Foregrounds taking over
MMNL absent for 22819 secs; Foregrounds taking over
MMNL absent for 24866 secs; Foregrounds taking over
MMNL absent for 26913 secs; Foregrounds taking over
MMNL absent for 28959 secs; Foregrounds taking over
Sat Apr 24 11:27:13 2010
ALTER SYSTEM SET event='10046 trace name context forever, level 12' SCOPE=SPFILE;
System State dumped to trace file
System State dumped to trace file /u01/app/oracle/admin/TRADE1/udump/trade1_ora_3288.trc
System State dumped to trace file /u01/app/oracle/admin/TRADE1/udump/trade1_ora_3288.trc
System State dumped to trace file /u01/app/oracle/admin/TRADE1/udump/trade1_ora_3288.trc
Sat Apr 24 11:28:45 2010
System State dumped to trace file /u01/app/oracle/admin/TRADE1/udump/trade1_ora_3288.trc
Sat Apr 24 11:28:45 2010
ALTER SYSTEM SET event='10046 trace name context off' SCOPE=SPFILE;
I was able to start up the DB properly in the morning, but the script failed during the following sequence:
shutdown abort
startup restrict
shutdown immediate
0-(TRD)@prdba001 backup_logs: cat backup_Sat.log
shutdownDb TRADE1
SQL*Plus: Release 10.2.0.4.0 - Production on Sat Apr 24 01:30:01 2010
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Release 10.2.0.4.0 - Production
sys@TRADE1> Shutdown Immediate failed - being brutal:
SQL*Plus: Release 10.2.0.4.0 - Production on Sat Apr 24 01:34:01 2010
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
idle> Connected to an idle instance.
idle> alter system checkpoint
ERROR at line 1:
ORA-01012: not logged on
idle> ORACLE instance shut down.
idle> ORACLE instance started.
Total System Global Area 4294967296 bytes
Fixed Size 2089176 bytes
Variable Size 570429224 bytes
Database Buffers 3573547008 bytes
Redo Buffers 148901888 bytes
Database mounted.
Shutdown Abort failed !!
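The failure above comes out of the script's watchdog idiom (a background sqlplus plus a timed `kill -9`). The pattern can be demonstrated standalone; in this sketch `sleep 5` stands in for the sqlplus call and the timeout is shortened to one second:

```shell
# Standalone sketch of the script's watchdog pattern; 'sleep 5' stands in
# for the long-running sqlplus call, with a 1-second timeout.
run_with_timeout() {
  timeout_secs=$1; shift
  "$@" &                                          # the guarded command
  cmd_pid=$!
  ( sleep "$timeout_secs"; kill -9 "$cmd_pid" 2>/dev/null ) &
  watchdog_pid=$!
  wait "$cmd_pid"; status=$?                      # capture BEFORE any kill
  kill -9 "$watchdog_pid" 2>/dev/null             # cancel the watchdog
  return "$status"
}

if run_with_timeout 1 sleep 5; then
  result="finished"
else
  result="timed out"
fi
echo "$result"
```

Note that the script instead tests `$?` right after `kill -9 $watchDogPid`, so it reports the watchdog kill's status rather than the sqlplus exit status; capturing the status immediately after `wait`, as above, avoids that.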
My Script details
#!/bin/ksh
oratabFile=/var/opt/oracle/oratab
archiveBase=/u01/app/oracle/backups/
logBase=/home/oracle/log/
DAT=`date +%Y%m%d%H`
case `hostname` in
prdba001) NFSdest=/oradumps/prdba001 ;;
prdba002) NFSdest=/oradumps/prdba002 ;;
uatf002)  NFSdest=/oradumps/uatf002 ;;
prdfa001) NFSdest=/oradumps/prdfa001/ ;;
prdfa002) NFSdest=/oradumps/prdfa002/ ;;
esac
function backupOneDatabase
{
    print Backing up database $ORACLE_SID
    setFilePaths
    backupDbNoCatalog
    if [[ -n "$NFSdest" ]]; then
        print creating tarball of backupDir
        cd $archiveDir
        tar cvfE - . | gzip --best | \
            dd of=$remoteArchiveFile;
    fi
}
function backupDbNoCatalog
{
    print
    print backupDb $ORACLE_SID
    print
    cat <<! | sqlplus /nolog
connect / as sysdba;
startup mount;
exit;
!
    cat <<! | rman
connect target / ;
backup as compressed backupset device type disk database format '$backupFormat';
configure retention policy to redundancy=1;
delete force noprompt obsolete;
exit;
!
    cat <<! | sqlplus /nolog
connect / as sysdba;
alter database backup controlfile to '$controlFile' ;
alter database open read only;
shutdown immediate;
create pfile='$pFile' from spfile;
exit;
!
}
function isDbUp
{
    if ps -aef | grep -v grep | grep ora_smon_$ORACLE_SID > /dev/null ; then
        print YES
    else
        print NO
    fi
}
function setFilePaths
{
    export archiveDir=$archiveBase/$ORACLE_SID
    if [[ ! -d $archiveDir ]] ; then
        mkdir -p $archiveDir
    fi
    export pFile=$archiveDir/init_${ORACLE_SID}.ora
    export controlFile=$archiveDir/control_${ORACLE_SID}.ctl
    export backupFormat=$archiveDir/%T_%p_%s
    if [[ -f $pFile ]] ; then
        mv $pFile $pFile.old
    fi
    if [[ -f $controlFile ]] ; then
        mv $controlFile $controlFile.old
    fi
    localhost=`hostname`;
    export remoteArchiveFile=${NFSdest}/${localhost}_${ORACLE_SID}_$DAT.tgz
}
function startupDb
{
    print starting up $ORACLE_SID
    cat <<! | sqlplus /nolog &
connect / as sysdba
startup open;
exit;
!
    startupPid=$!
    (sleep 120 ; kill -9 $startupPid ) &
    watchDogPid=$!
    wait $startupPid
    kill -9 $watchDogPid # >/dev/null 2>&1
    if [[ 0 != $? ]] ; then
        print startup failed
        exit 1
    fi
    print startup complete
}
function shutdownDb
{
    print shutdownDb $ORACLE_SID
    cat <<! | sqlplus / as sysdba &
shutdown immediate;
exit;
!
    shutdownPid=$!
    (sleep 240 ; kill -9 $shutdownPid ) &
    watchDogPid=$!
    wait $shutdownPid
    kill -9 $watchDogPid >/dev/null 2>&1
    if [[ 0 != $? ]] || [[ YES == $(isDbUp) ]] ; then
        print Shutdown Immediate failed - being brutal:
        cat <<! | sqlplus /nolog &
connect / as sysdba
alter system checkpoint;
shutdown abort;
startup restrict;
shutdown immediate;
exit;
!
        shutdownPid=$!
        (sleep 300 ; kill -9 $shutdownPid ) &
        watchDogPid=$!
        wait $shutdownPid
        kill -9 $watchDogPid >/dev/null 2>&1
        if [[ 0 != $? ]]; then
            print Shutdown Abort failed !!
            exit 1
        fi
    fi
    print shutdown complete
}
integer dbIndex
IFS=:
# shutdown all databases but remember which ones were up
dbIndex=0
grep -v "^[ ]*#" $oratabFile | while read ORACLE_SID ORACLE_HOME autoStart
do
if [[ -n ""$ORACLE_SID ]] ; then
dbNames[$dbIndex]=$ORACLE_SID
dbHomes[$dbIndex]=$ORACLE_HOME
dbIsRunning[$dbIndex]=$(isDbUp)
if [[ YES = ${dbIsRunning[$dbIndex]} ]] ; then
shutdownDb;
fi
(( dbIndex = dbIndex + 1 ))
fi
done
# backup databases
dbIndex=0
while (( $dbIndex < ${#dbNames[@]} ))
do
export ORACLE_SID=${dbNames[$dbIndex]}
export ORACLE_HOME=${dbHomes[$dbIndex]}
backupOneDatabase
(( dbIndex = dbIndex + 1 ))
done
# restart databases that were running before
dbIndex=0
while (( $dbIndex < ${#dbNames[@]} ))
do
if [[ YES = ${dbIsRunning[$dbIndex]} ]] ; then
export ORACLE_SID=${dbNames[$dbIndex]}
export ORACLE_HOME=${dbHomes[$dbIndex]}
startupDb
fi
(( dbIndex = dbIndex + 1 ))
done
print Deleting Old Backupfile
/usr/bin/find $NFSdest -name "*.tgz" -mtime +1 -exec rm {} \; >/tmp/delete.log
print Deleted Old Backupfile
print Finished at `date`;
exit; -
[Solved] if(Transaction specified for a non-transactional database) then
I am getting started with BDBXML 2.4.14 transactions and XQuery update functionality and I am having some difficulty with 'node insert ...' and transactions failing with 'Transaction specified for a non-transactional database'
Thanks for helping out.
Setup:
I have coded up a singleton manager for the XmlManger with a ThreadLocal holding the transaction and a query method to execute XQueries. The setup goes like this:
environmentConfig = new EnvironmentConfig();
environmentConfig.setRunRecovery(true);
environmentConfig.setTransactional(true);
environmentConfig.setAllowCreate(true);
environmentConfig.setInitializeCache(true);
environmentConfig.setTxnMaxActive(0);
environmentConfig.setInitializeLocking(true);
environmentConfig.setInitializeLogging(true);
environmentConfig.setErrorStream(System.err);
environmentConfig.setLockDetectMode(LockDetectMode.MINWRITE);
environmentConfig.setJoinEnvironment(true);
environmentConfig.setThreaded(true);
xmlManagerConfig = new XmlManagerConfig();
xmlManagerConfig.setAdoptEnvironment(true);
xmlManagerConfig.setAllowAutoOpen(true);
xmlManagerConfig.setAllowExternalAccess(true);
xmlContainerConfig = new XmlContainerConfig();
xmlContainerConfig.setAllowValidation(false);
xmlContainerConfig.setIndexNodes(true);
xmlContainerConfig.setNodeContainer(true);
// initialize
instance.xmlManager = new XmlManager(instance.getEnvironment(),
        instance.getXmlManagerConfig());
instance.xmlContainer = instance.xmlManager.openContainer(
        containerName, instance.getXmlContainerConfig());
private ThreadLocal<XmlTransaction> transaction = new ThreadLocal<XmlTransaction>();

public XmlTransaction getTransaction() throws Exception {
    if (transaction.get() == null) {
        XmlTransaction t = xmlManager.createTransaction();
        log.info("Transaction created, id: " + t.getTransaction().getId());
        transaction.set(t);
    } else if (log.isDebugEnabled()) {
        log.debug("Reusing transaction, id: "
                + transaction.get().getTransaction().getId());
    }
    return transaction.get();
}
private XmlQueryContext createQueryContext(String docName) throws Exception {
    XmlQueryContext context = xmlManager.createQueryContext(
            XmlQueryContext.LiveValues, XmlQueryContext.Lazy);
    List<NamespacePrefix> namespacePrefixs = documentPrefixes.get(docName);
    // declare ddi namespaces
    for (NamespacePrefix namespacePrefix : namespacePrefixs) {
        context.setNamespace(namespacePrefix.getPrefix(),
                namespacePrefix.getNamespace());
    }
    return context;
}
public XmlResults xQuery(String query) throws Exception {
    XmlQueryExpression xmlQueryExpression = null;
    XmlQueryContext xmlQueryContext = getQueryContext(docName);
    try {
        xmlQueryExpression = xmlManager.prepare(getTransaction(), query,
                xmlQueryContext);
        log.info(query.toString());
    } catch (Exception e) {
        if (xmlQueryContext != null) {
            xmlQueryContext.delete();
        }
        throw new DDIFtpException("Error prepare query: " + query, e);
    }
    XmlResults rs = null;
    try {
        rs = xmlQueryExpression.execute(getTransaction(), xmlQueryContext);
    }
    // catch deadlock and implement retry
    catch (Exception e) {
        throw new DDIFtpException("Error on query execute of: " + query, e);
    } finally {
        if (xmlQueryContext != null) {
            xmlQueryContext.delete();
        }
        xmlQueryExpression.delete();
    }
    return rs;
}
<?xml version="1.0" encoding="UTF-8"?>
<Test version="0.1">
<Project id="test-project" agency="dda">
<File id="large-doc.xml" type="ddi"/>
<File id="complex-doc.xml" type="ddi"/>
</Project>
<Project id="2nd-project" agency="test.org"/>
</Test>
Problem:
All the queries are run through the xQuery method and I do delete the XmlResults afterwards. How do I get around the 'Transaction specified for a non-transactional database' error? What are the transactions doing? How do I get state information out of a transaction? What am I doing wrong here?
1 First I insert a node:
Transaction created, id: -2147483647
Adding document: large-doc.xml to xml container
Reusing transaction, id: -2147483647
Working doc: ddieditor.xml
Root element: Test
Reusing transaction, id: -2147483647
insert nodes <Project id="JUnitTest" agency="test.org"></Project> into doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test
Reusing transaction, id: -2147483647
2 Then do a query:
Reusing transaction, id: -2147483647
doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project/@id
Reusing transaction, id: -2147483647
3 The same query again:
Reusing transaction, id: -2147483647
doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project/@id
Reusing transaction, id: -2147483647
4 Delete a node:
Reusing transaction, id: -2147483647
delete node for $x in doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project where $x/@id = '2nd-project' return $x
Reusing transaction, id: -2147483647
5 Then an error on query:
Reusing transaction, id: -2147483647
doc("dbxml:/ddieditor.dbxml/ddieditor.xml")/Test/Project/@id
Reusing transaction, id: -2147483647
Transaction specified for a non-transactional database
com.sleepycat.dbxml.XmlException: Error: Invalid argument, errcode = DATABASE_ERROR
at com.sleepycat.dbxml.dbxml_javaJNI.XmlResults_hasNext(Native Method)
at com.sleepycat.dbxml.XmlResults.hasNext(XmlResults.java:136)
Message was edited by:
jannikj

Ok, got it solved by increasing the locks, lockers, and mutexes. I also increased the log buffer size:
environmentConfig = new EnvironmentConfig();
// general environment
environmentConfig.setAllowCreate(true);
environmentConfig.setRunRecovery(true); // light recovery on startup
//environmentConfig.setRunFatalRecovery(true); // heavy recovery on startup
environmentConfig.setJoinEnvironment(true); // reuse of environment: ok
environmentConfig.setThreaded(true);
// log subsystem
environmentConfig.setInitializeLogging(true);
environmentConfig.setLogAutoRemove(true);
environmentConfig.setLogBufferSize(128 * 1024); // default 32KB
environmentConfig.setInitializeCache(true); // shared memory region
environmentConfig.setCacheSize(250 * 1024 * 1024); // 250MB cache
// transaction
environmentConfig.setTransactional(true);
environmentConfig.setTxnMaxActive(0); // live forever, no timeout
// locking subsystem
environmentConfig.setInitializeLocking(true);
environmentConfig.setMutexIncrement(22);
environmentConfig.setMaxMutexes(200000);
environmentConfig.setMaxLockers(200000);
environmentConfig.setMaxLockObjects(200000); // default 1000
environmentConfig.setMaxLocks(200000);
// deadlock detection
environmentConfig.setLockDetectMode(LockDetectMode.MINWRITE);
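One arithmetic pitfall with byte-count settings like setCacheSize: in Java, an all-int product such as 2500 * 1024 * 1024 is evaluated in int arithmetic and silently overflows before being widened to long, so any size above 2 GB needs a long operand. A small self-contained illustration (the numbers are examples only, not the poster's actual settings):

```java
public class CacheSizeDemo {
    public static void main(String[] args) {
        // int arithmetic wraps around silently above Integer.MAX_VALUE (~2 GB)
        long wrong = 2500 * 1024 * 1024;   // evaluated as int, then widened
        long right = 2500L * 1024 * 1024;  // long arithmetic from the start

        System.out.println(wrong); // negative: the int product overflowed
        System.out.println(right); // 2621440000
    }
}
```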
The Oracle docs give only limited information about the impact of these settings and their options. Can you point me in a direction where I can find some written answers, or share hands-on experience? -
Data pump export full RAC database in window single DB by network_link
Hi Experts,
I have a Windows 32-bit 10.2 database.
I am trying to export a full RAC database (350 GB, same version as the Windows DB) into the single-instance Windows database over a database link.
The expdp syntax is:
expdp salemanager/********@sale FULL=y DIRECTORY=dataload NETWORK_LINK=sale.net DUMPFILE=sale20100203.dmp LOGFILE=salelog20100203.log
I created a dblink fixed to instance 3. It ran for two days and then displayed:
ORA-31693: Table data object "SALE_AUDIT"."AU_ITEM_IN" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEPOPULATE callout
ORA-01555: snapshot too old: rollback segment number with name "" too small
ORA-02063: preceding line from sale.net
I stopped the export and checked the alert log of the Windows target.
I saw messages such as:
kupprdp: master process DM00 started with pid=16, OS id=4444
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_FULL_02', 'SYSTEM', 'KUPC$C_1_20100202235235', 'KUPC$S_1_20100202235235', 0);
Tue Feb 02 23:56:12 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=17, OS id=4024
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_FULL_01', 'SALE', 'KUPC$C_1_20100202235612', 'KUPC$S_1_20100202235612', 0);
kupprdp: worker process DW01 started with worker id=1, pid=18, OS id=2188
to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_FULL_01', 'SALE');
In the RAC instance alert log I saw messages such as:
SELECT /*+ NO_PARALLEL ("KU$") */ "ID","RAW_DATA","TRANSM_ID","RECEIVED_UTC_DATE","RECEIVED_FROM","ACTION","ORAUSER","ORADATE" FROM RELATIONAL("SALE_AUDIT"."AU_ITEM_IN") "KU$"
How do I fix this error?
Should I add more undo tablespace in RAC instance 3, or in the Windows database?
Thanks
Jim
Edited by: user589812 on Feb 4, 2010 10:15 AM

I usually increase undo space. Is your undo retention set smaller than the time it takes to run the job? If so, I would start there; if not, then I would look at the space. You were in the process of exporting data when the job failed, which is what I would have expected. Basically, Data Pump wants to export each table consistent to itself. Let's say that one of your tables is partitioned and has a large partition and a smaller partition. Data Pump attempts to export the larger partitions first and remembers the SCN for each. When the smaller partitions are exported, it uses that SCN to get the data as it looked when the first partition was exported. If you don't have partitioned tables, do you know whether some of the tables in the export job (I know it's full, so that includes just about all of them) are having data added to them or removed from them? I can't think of anything else that would need undo while exporting data.
Dean -
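If per-table self-consistency is not enough and the whole export must reflect one moment in time, Data Pump can pin every table to a single SCN via its FLASHBACK_SCN (or FLASHBACK_TIME) parameter. A sketch with placeholder credentials, directory object, and SCN value:

```shell
# Look up a current SCN first, e.g. in SQL*Plus:
#   SQL> SELECT current_scn FROM v$database;

# Then run the export pinned to that SCN (values below are placeholders):
expdp system/password FULL=y \
      DIRECTORY=dataload \
      DUMPFILE=full_consistent.dmp \
      LOGFILE=full_consistent.log \
      FLASHBACK_SCN=1234567
```

Note that the whole job then reads through undo back to the chosen SCN, so it has the same undo-space implications as CONSISTENT=Y had in the old exp tool.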
Best approach to archival of large databases?
I have a large database (~300 gig) and have a data/document retention requirement that requires me to take a backup of the database once every six months to be retained for 5 years. Other backups only have to be retained as long as operationally necessary, but twice a year, I need these "reference" backups to be available, should we need to restore the data for some reason - usually historical research for data that extends beyond what's currently in the database.
What is the best approach for making these backups? My initial thought would be to do a full export of the database, as this frees me from any dependencies on software versions, etc. However, an export takes a VERY long time. I can manage it by doing multiple concurrent exports by tablespace - this can be completed in under a day. Or I can back up the software directory plus the database files in a cold backup.
Or is RMAN well-suited for this? So far, I've only used RMAN for my operational-type backups - for short-term data recovery needs.
What are other people doing?

Thanks for your input. How would I do this? My largest table is in monthly partitions, each in its own tablespace. Would the process have to be something like: alter table exchange partition-to-be-rolled-off with non-partitioned-table, then export that tablespace?
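The exchange-partition idea in the question could look roughly like the sketch below. The table, partition, and tablespace names are hypothetical, and the standalone table must match the partition's column structure exactly for the exchange to succeed:

```shell
sqlplus / as sysdba <<'SQL'
-- Hypothetical names: swap the partition to be rolled off into a
-- standalone table with the same structure (a metadata-only operation).
ALTER TABLE sales EXCHANGE PARTITION sales_2009_01
      WITH TABLE sales_2009_01_archive;
SQL

# Then export only the exchanged table (placeholder credentials/directory):
expdp system/password TABLES=sales_2009_01_archive \
      DIRECTORY=dataload DUMPFILE=sales_2009_01.dmp
```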
-
IPhoto 11 9.2.1 export to large MPEG4 fail
I have been exporting many slideshows to the default large MPEG4 format. All of a sudden, the exported movies are blank after the first few slides (white frames), although the music track is there. It will export to other sizes and to QuickTime format. I have done a rebuild, a reboot, and deleted the plist. The problem persists.
Thanks, Valerie

I don't use a theme. A slideshow has around 30-40 slides with music; about 3-4 minute shows.
First, I used the iPhoto install that came with the new Air three months ago. No problem exporting many slideshows. Then the blank slides started appearing when I exported to large format (m4v). It seemed to work with other sizes and with QuickTime format (mov) for a while, then mov format started producing blanks too. If I rebuilt the database and repaired permissions every time I opened the program, I could export mov.
Went to Apple store and reinstalled iPhoto plus all updates. The first couple of slideshows exported fine, then the blanks appeared. Tried to rebuild/repair, but even this isn't working this time. Typically, the first 10 slides export, the rest is blank (white), although music is exported entirely.
Thanks. -
Export/Import of Database to increase the number of datafiles
My BW system is close to 400 GB and it only has 3 datafiles. Since we have 8 CPU cores, I need 8 datafiles. What is the best way to export/import the database in order to achieve 8 datafiles?
With a BW system that size you can probably get away with it. You most likely do at least a few full loads, so all that data will be evenly distributed when you drop and reload. If you can clean up some of your PSAs and log tables, you can probably shrink the 3 files down a little anyway. If you do little maintenance tasks like that every few weeks, after locking auto-growth, you can probably shave 2-3 GB off each file each week. Do that for a few months and your large files will lose 20 GB while the other ones start to grow. Rebuilding indexes also helps with that. You will be surprised how fast they level out that way.
With regard to performance you should be fine. I certainly wouldn't do it at 10 am on a Tuesday though :-). You can probably get away with it over a weekend. It will take basically no time at all to create them, and very little IO. If you choose to clean out the 3 existing ones, that will take some time. I have found it takes about 1-3 hours to shrink a 150 GB datafile down to 100 GB; that was with 8 CPUs, 30 GB of RAM, and a SAN that I don't fully understand.