Suppress Redo in an imp Process in a 9i database
Hi,
I want to import a 1 terabyte table. Oracle imp is estimated to take 12 days, so I want to suppress redo generation to increase speed.
How can I achieve this in a 9i database?
thanks a lot
Wolle
Change the table to NOLOGGING, which reduces the amount of redo generated. Also follow the other options already provided.
Create indexes in multiple sessions with NOLOGGING and PARALLEL to speed up the process.
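A minimal sketch of the NOLOGGING suggestion (table and index names are placeholders; note that NOLOGGING suppresses redo only for direct-path operations, so conventional-path imp inserts still generate redo):

```sql
-- Reduce redo during the load:
ALTER TABLE big_table NOLOGGING;

-- Build indexes after the data load, without logging and in parallel:
CREATE INDEX big_table_ix1 ON big_table (col1) NOLOGGING PARALLEL 4;

-- Restore logging (and take a fresh backup) once the import completes,
-- since NOLOGGING changes are not recoverable from the redo stream:
ALTER TABLE big_table LOGGING;
ALTER INDEX big_table_ix1 LOGGING;
```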
Similar Messages
-
Does buffer cache size matter during the imp process?
Hi,
sorry for the maybe naive question, but I can't imagine why Oracle needs a buffer cache (larger = better) during inserts only (an imp process with no index creation).
As far as I know, the insert is done via the PGA (direct insert).
Please clarify this for me.
DB is 10.2.0.3 if that matters :).
Regards.
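For context (a general fact about classic exp/imp, not from this thread): imp performs conventional-path inserts, which do pass through the buffer cache; only direct-path operations bypass it. A sketch of the distinction, with placeholder table names:

```sql
-- Conventional insert: rows pass through the buffer cache, so a larger
-- cache can help (this is what classic imp does).
INSERT INTO target_tab SELECT * FROM source_tab;
COMMIT;

-- Direct-path insert: blocks are formatted in the PGA and written above
-- the high-water mark, largely bypassing the buffer cache.
INSERT /*+ APPEND */ INTO target_tab SELECT * FROM source_tab;
COMMIT;
```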
Greg
Surprising result: I tried closing the db handles with DB_NOSYNC and performance
got worse. Using a 32 MB cache, it took about twice as long to run my test:
15800 seconds using DB->close(DB_NOSYNC) vs 8200 seconds using DB->close(0).
Here is some data from db_stat -m when using DB_NOSYNC:
40MB 1KB 900B Total cache size
1 Number of caches
1 Maximum number of caches
40MB 8KB Pool individual cache size
0 Maximum memory-mapped file size
0 Maximum open file descriptors
0 Maximum sequential buffer writes
0 Sleep after writing maximum sequential buffers
0 Requested pages mapped into the process' address space
26M Requested pages found in the cache (70%)
10M Requested pages not found in the cache (10811882)
44864 Pages created in the cache
10M Pages read into the cache (10798480)
7380761 Pages written from the cache to the backing file
3452500 Clean pages forced from the cache
7380761 Dirty pages forced from the cache
0 Dirty pages written by trickle-sync thread
10012 Current total page count
5001 Current clean page count
5011 Current dirty page count
4099 Number of hash buckets used for page location
47M Total number of times hash chains searched for a page (47428268)
13 The longest hash chain searched for a page
118M Total number of hash chain entries checked for page (118169805)
It looks like not flushing the cache regularly is forcing a lot more
dirty pages (and fewer clean pages) from the cache. Forcing a
dirty page out is slower than forcing a clean page out, of course.
Is this result reasonable?
I suppose I could try to sync less often than I have been, but more often
than never to see if that makes any difference.
When I close or sync one db handle, I assume it flushes only that portion
of the dbenv's cache, not the entire cache, right? Is there an API I can
call that would sync the entire dbenv cache (besides closing the dbenv)?
Are there any other suggestions?
Thanks,
Eric -
No offline redo logs found for processing
Hi!
I have very simple question/problem.
I have installed two Oracle systems on the same host on Windows.
I have installed the Oracle software only once.
If I try to execute a full online backup + redo logs, only the online redo logs are backed up. After this I get the error BR0017E: Offline redo log file 'C:\oracle\DEV\102\RDBMS\ARC01435_0627659303.001' not found.
If I try only to back up redo logs, I receive the warning "no offline redo logs found for processing".
I tried to implement some helpful SAP notes, such as 490976.
E.g. I have changed the file arch<sid>.log, but I still get the same warning.
The environment seems to be ok.
It would be great if someone has experience dealing with this problem...
Thanks
Hello Eric!
Many thanks for your response.
Here is the ENV command output from SM49:
ClusterLog=C:\WINDOWS\Cluster\cluster.log
ComSpec=C:\WINDOWS\system32\cmd.exe
CPIC_MAX_CONV=10
DBMS_TYPE=ora
dbs_ora_schema=SAPSR3
dbs_ora_tnsname=DEV
FP_NO_HOST_CHECK=NO
JAVA_HOME=C:\j2sdk1.4.2_15-x64
NLS_LANG=AMERICAN_AMERICA.UTF8
NUMBER_OF_PROCESSORS=8
ORACLE_HOME=C:\oracle\SSM\102
ORACLE_SID=DEV
OS=Windows_NT
Path=C:\oracle\SSM\102\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\Intel\DMIX;C:\j2sdk1.4.2_15-
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH
PERL5LIB=C:\oracle\SSM\102\perl\5.8.3\lib\MSWin32-X64-multi-thread;C:\oracle\SSM\102\perl\5.8.3\lib;C:\oracle\SSM\102\perl\5.8.3
PPID=1952
PROCESSOR_ARCHITECTURE=AMD64
PROCESSOR_IDENTIFIER=EM64T Family 6 Model 15 Stepping 7, GenuineIntel
PROCESSOR_LEVEL=6
PROCESSOR_REVISION=0f07
PROMPT=$P$G
SAPDATA_HOME=H:\oracle\DEV
SAPEXE=F:\usr\sap\DEV\SYS\exe\uc\NTAMD64
SAPLOCALHOST=sapdev
SAPSYSTEMNAME=DEV
SIDE_INFO=F:\usr\sap\DEV\DVEBMGS02\data\sideinfo.DAT
SystemDrive=C:
SystemRoot=C:\WINDOWS
TEMP=C:\WINDOWS\TEMP
TMP=C:\WINDOWS\TEMP
TNS_ADMIN=F:\usr\sap\DEV\SYS\profile\oracle
windir=C:\WINDOWS
kind regards
Thom -
Full database exp/imp between RAC and single database
Hi Experts,
we have a 4-node Oracle 10gR2 RAC database on Linux. I am trying to duplicate the RAC database into a single-instance Windows database.
Both databases are the same version. During the import, I needed to create 4 undo tablespaces to keep the imp process going.
How can I keep just one undo tablespace in the single-instance database?
Does anyone have experience of exp/imp from a RAC database into a single-instance database to share with me?
Thanks
Jim
Edited by: user589812 on Nov 13, 2009 10:35 AM
Jim,
I also want to know: can we add exclude=tablespace on the impdp command for a full database exp/imp?
You can't use exclude=tablespace on exp/imp. It is for Data Pump expdp/impdp only.
I am very interested in your recommendation.
But for a full database impdp, how do I exclude a table during the full database import? May I have an example for this case?
I used expdp for a full database export, but I got an error in the expdp log: ORA-31679: Table data object "SALE"."TOAD_PLAN_TABLE" has long columns, and longs can not be loaded/unloaded using a network link
Having LONG columns in a table means that it can't be exported/imported over a network link. To exclude this table, you can use the exclude expression:
expdp user/password exclude=TABLE:"= 'SALES'" ...
This will exclude all tables named SALES. If you have that table in schema scott and also in schema blake, it will exclude both of them. The error you are getting is not fatal, but that table will not be exported/imported.
the final message as
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
F:\ORACLEBACKUP\SALEFULL091113.DMP
Job "SYSTEM"."SYS_EXPORT_FULL_01" completed with 1 error(s) at 16:50:26
Yes, the fact that it did not export one table does not make the job fail; it continues exporting all other objects.
I dropped the database that generated the expdp dump file,
recreated a blank database, and then ran impdp again.
But I got lots of error as
ORA-39151: Table "SYSMAN"."MGMT_ARU_OUI_COMPONENTS" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
ORA-39151: Table "SYSMAN"."MGMT_BUG_ADVISORY" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
...
ORA-31684: Object type TYPE_BODY:"SYSMAN"."MGMT_THRESHOLD" already exists
ORA-39111: Dependent object type TRIGGER:"SYSMAN"."SEV_ANNOTATION_INSERT_TR" skipped, base object type VIEW:"SYSMAN"."MGMT_SEVERITY_ANNOTATION" already exists
and last line as
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 2581 error(s) at 11:54:57
Yes, even though you think you have an empty database, if you have installed any apps, they may have created tables that also exist in your dumpfile. If you know that you want the tables from the dumpfile rather than the existing ones in the database, then you can use this on the impdp command:
impdp user/password table_exists_action=replace ...
If a table that is being imported already exists, Data Pump will detect this, drop the table, then re-create it. All of its dependent objects will then be created. If you don't, then the table and all of its dependent objects will be skipped (which is the default).
There are 4 options with table_exists_action
replace - I described above
skip - default, means skip the table and dependent objects like indexes, index statistics, table statistics, etc
append - keep the existing table and append the data to it, but skip dependent objects
truncate - truncate the existing table and add the data from the dumpfile, but skip dependent objects.
Hope this helps.
Dean -
I am in the process of expanding a database of chemistry journal articles. These materials are ideally acquired in two formats when both are available: PDF and HTML. To oversimplify, PDFs are for the user to read, and derivatives of the HTML versions are for the computer to read. Both formats are, of course, readily recognized and indexed by Spotlight.
Journal articles have two essential components with regard to a database: the topical content of the article itself, and the cited references to other scientific literature. While a PDF merely lists these references, the HTML version has, in addition, links to the cited items. Each link URL contains the digital object identifier (DOI) for the item it points to. A DOI is a unique string that points to one and only one object, and can be quite useful if rendered in a manner that enables indexing by Spotlight. Embedded URLs are, of course, ignored by Spotlight. As a result, HTML-formatted articles must be processed so that URLs are openly displayed as readable text before Spotlight will recognize them. Conversion to DOC format using MS Word, followed by conversion to RTF using TextEdit, accomplishes this, but is quite labor-intensive.
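The manual Word/TextEdit conversion described above could alternatively be scripted. A minimal sketch that extracts each file's link URLs into a plain-text sidecar file that Spotlight can index (the `articles/` and `urls_out/` directory names are placeholders):

```shell
# Batch-extract href URLs from HTML articles, producing one output text
# file per input file, so the DOIs become indexable plain text.
mkdir -p urls_out
for f in articles/*.html; do
  grep -Eo 'href="[^"]+"' "$f" \
    | sed 's/^href="//; s/"$//' \
    > "urls_out/$(basename "$f" .html).txt"
done
```

This also gives the one-to-one input-to-output mapping per file, since the loop body runs once per document.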
In the last few months, I have added about 3,500 articles to this collection, which means that any procedure for rendering URLs must be automated and able to process large batches of documents with minimal user oversight. This procedure needs to generate a separate file for each HTML document processed. Trials using Automator's "Get Specified Finder Items" and "Get Selected Finder Items", as well as "Ask For Finder Items" (along with "Get URLs From Web Pages"), give unsatisfactory results. When provided with multiple input documents, these three commands generate output in which the URLs from multiple input items are merged into a single block, which yields a single file when "Create New Word Document" is the subsequent step. A one-to-one, input-file-to-output-file result can be obtained by processing one file at a time, but this requires manual selection of each item and one-at-a-time processing. What I need is a command that accepts multiple input documents but processes them one at a time, generating a separate output for each file processed. Is there a way for Automator to do this?
Hi,
With the project all done, I'm preparing for the presentation. I managed to get my hands on an HD beamer for the night (Epson TW2000) and am planning to do the presentation in HD.
That of course brought up some problems. I posted a thread which I'll repost here. Sorry for the repost, I normally do not intend to do this, but since this thread is actually about the same thing, I'd like to ask the same question here. The end version is in After Effects, but that doesn't actually alter the question. It's about export:
"I want to export my AE project of approx 30 min containing several HD files to a Blu Ray disc. The end goal is to project the video in HD quality using the Epson EMP-TW2000 projector. This projector is HD compatible.
To project the video I need to connect the beamer to a computer capable of playing a heavy HD file (1), OR burn the project to a BRD (2) and play it using a BRplayer.
I prefer option 2, so my question is: which would be the preferred export preset?
Project specs:
- 1920x1080 sq pix (16:9)
- 25 fps
- my imported video files (Prem.Pro sequences) are also 25 fps and are Progressive (!)
To export to a BRD-compatible format, I encounter a big problem: my project files are 25 fps and progressive, and I believe that the only Blu-ray preset displaying 1920x1080 at 25 fps requires an INTERLACED video (I viewed the presets found on this forum, in this thread)... There is also a progressive format, BUT that requires 30 fps (29.97).
So, is there one parameter that can be changed without changing the content of the video, and if yes, which one (either the interlacing or the fps)?
I'm not very familiar with the whole Blu-ray thing; I hope that someone can help me out."
Please give it a look.
Thanks,
Jef -
BPEL process initiated by a database adapter causing problems in HA environ
We have a High Availability architecture configuration (active-active) for the BPEL system in our production environment.
The BPEL servers are clustered in the middle tier of the architecture and RAC is used in the database tier of the architecture.
We have a BPEL process which is initiated by a database adapter.
This BPEL process polls a source database, reads the data, writes it into a JMS Topic and marks flag as read back to the table.
A Message Driven Bean(MDB) listens to the JMS Topic, and writes the data in the destination database.
We have the polling process and the MDB deployed to both the nodes of the BPEL server.
We have noticed that some records are getting processed by both the nodes.
This is resulting in duplicate records at the destination tables.
Kindly advise how we can resolve this issue.
The BPEL servers are configured in an active-active topology and RAC is used in the database tier of the architecture.
The BPEL servers are not clustered. A load balancer is used in front of the two nodes of the BPEL servers. -
Processing changes in a Sybase database table
Can someone help me understand the best way to process changes on a database table using XI?
I'm currently doing this using webMethods, which generates an XML document whenever a row is inserted in the database via its adapter notification mechanism. Then I can process this document by calling a web service.
Thx Naveen,
e.g. I insert a customer record in the Sybase system; I would like the XI interface to insert the same customer in SAP or another database system, and this should happen as a result of the insert in the Sybase table.
The other way I can already achieve this is by having XI look at a Sybase table periodically, see if new rows have been added since it last looked, and then process those rows accordingly. I don't want to use this approach, since it keeps looking at set intervals and most of the time there won't be anything to process. Also, when there is something to process, it will not be processed instantaneously but only when XI next looks at the table.
This is the whole debate about pushing data vs. polling for changes. I would like to push the info at the time data is inserted in the Sybase table instead of having to poll for it.
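One common way to realize the push approach is an insert trigger that writes each new row into an "outbox" table for the integration layer to consume, instead of polling the base table. A sketch in Sybase T-SQL (table and column names are hypothetical):

```sql
-- Hypothetical outbox pattern: the trigger records each inserted customer,
-- so downstream processing sees only new work rather than rescanning
-- the whole customer table on a timer.
CREATE TRIGGER trg_customer_insert
ON customer
FOR INSERT
AS
  INSERT INTO customer_outbox (cust_id, created_at)
  SELECT cust_id, getdate()
  FROM inserted
```

The consumer then reads and deletes (or flags) rows from customer_outbox, which keeps polling cheap even if a polling interval is still involved.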
appreciate your response. -
Surprisingly, two processes running for one database
Hi all,
This morning, on one of our database servers, we saw two processes running for one database.
COBLpo05 11949 1187 0 Mar 25 ? 37:50 ora_smon_BLINDEA1
COBLpo05 7789 1187 0 Mar 25 ? 5:35 ora_smon_BLINDEA1
Can anybody suggest what the reason behind this is and what to do in this case?
Thanks in advance.
The ORACLE_HOME may have been set with two different values (e.g. an extra slash) before the same db was started twice.
Please have a look at this article from Ivan Kartik:
http://ivan.kartik.sk/index.php?show_article=40
And depending on your OS, you can check which ORACLE_HOME is used by each process:
How to Check the Environment Variables for an Oracle Process
Doc ID: 373303.1
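On Linux, one way to make that per-process check without the MOS note is to read /proc/&lt;pid&gt;/environ directly. A sketch ($$ stands in for a PID; substitute the ora_smon PIDs from the ps output above, e.g. 11949 and 7789, and compare the two):

```shell
# Show the environment a running process was started with (Linux /proc).
# Run once per smon PID and diff the ORACLE_HOME values.
pid=$$   # placeholder PID; replace with the ora_smon PID
tr '\0' '\n' < "/proc/$pid/environ" | grep -E '^ORACLE_(HOME|SID)=' \
  || echo "no ORACLE_* variables set for pid $pid"
```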
Nicolas.
added the metalink note ref.
Edited by: N. Gasparotto on Sep 4, 2009 11:34 AM -
Redo apply is not done in physical standby database
Please help me to resolve this issue. tnsping works, but I am getting the following output in the Data Guard configuration.
From primary database,
SQL> select max(sequence#), thread# from v$archived_log group by thread#;
MAX(SEQUENCE#) THREAD#
124 1
94 2
SQL> col DESTINATION for a40
SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ# -
FROM V$ARCHIVE_DEST_STATUS -
WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';
SELECT DEST_ID "ID",STATUS "DB_status",DESTINATION "Archive_dest",ERROR "Error" FROM V$ARCHIVE_DEST WHERE DEST_ID =2;
select error from v$archive_dest_status where error is not null;
DESTINATION STATUS ARCHIVED_THREAD#
ARCHIVED_SEQ#
VALID 1
124
bddipdrs ERROR 0
0
SQL> SQL> SQL>
ID DB_status
Archive_dest
Error
2 ERROR
bddipdrs
ORA-16198: Timeout incurred on internal channel during remote
archival
From standby database,
SQL> select max(sequence#), thread# from v$archived_log group by thread#;
col DESTINATION for a40
no rows selected
SQL> SQL> SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ# -
FROM V$ARCHIVE_DEST_STATUS -
WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';
SELECT DEST_ID "ID",STATUS "DB_status",DESTINATION "Archive_dest",ERROR "Error" FROM V$ARCHIVE_DEST WHERE DEST_ID =2;
DESTINATION STATUS ARCHIVED_THREAD#
ARCHIVED_SEQ#
VALID 0
0
bddipdrs VALID 0
0
VALID 0
0
SQL> SQL> SQL>
ID DB_status
Archive_dest
Error
2 VALID
bddipdrs
SQL> SQL> select error from v$archive_dest_status where error is not null;
no rows selected
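As a further diagnostic for the missing sequences, these standard Data Guard views can be queried on the standby (a suggestion, not from the original thread):

```sql
-- Any archived-log gap the standby knows about:
SELECT thread#, low_sequence#, high_sequence#
FROM v$archive_gap;

-- State of the RFS/MRP processes on the standby (is media recovery
-- actually running, and at which sequence?):
SELECT process, status, thread#, sequence#
FROM v$managed_standby;
```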
From standby database,
<msg time='2010-09-02T21:30:25.422+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DRS-DB-01' host_addr='192.168.105.101' module='oracle@DC-DB-01 (TNS V1-V3)'
pid='13010'>
<txt>RFS[2]: Possible network disconnect with primary database
</txt>
</msg>
<msg time='2010-09-02T21:31:19.259+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DRS-DB-01' host_addr='192.168.105.101' module='oracle@DC-DB-01 (TNS V1-V3)'
pid='13014'>
<txt>RFS[6]: Possible network disconnect with primary database
</txt>
</msg>
<msg time='2010-09-02T21:31:19.265+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DRS-DB-01' host_addr='192.168.105.101' module=''
pid='13014'>
<txt>Deleted Oracle managed file +RECOVERY/bddipdrs/archivelog/2010_09_02/thread_2_seq_90.289.728688311
</txt>
</msg>
<msg time='2010-09-02T21:32:10.314+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DRS-DB-01' host_addr='192.168.105.101' module='oracle@DC-DB-01 (TNS V1-V3)'
pid='13020'>
<txt>RFS[7]: Possible network disconnect with primary database
</txt>
</msg>
<msg time='2010-09-02T21:32:59.423+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DRS-DB-01' host_addr='192.168.105.101' module='oracle@DC-DB-01 (TNS V1-V3)'
pid='13026'>
<txt>RFS[10]: Possible network disconnect with primary database
</txt>
</msg>
<msg time='2010-09-02T21:32:59.430+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DRS-DB-01' host_addr='192.168.105.101' module=''
pid='13026'>
<txt>Deleted Oracle managed file +RECOVERY/bddipdrs/archivelog/2010_09_02/thread_1_seq_107.288.728688311
</txt>
</msg>
<msg time='2010-09-02T21:33:00.664+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DRS-DB-01' host_addr='192.168.105.101' module=''
pid='13519'>
<txt>Fetching gap sequence in thread 1, gap sequence 107-122
</txt>
</msg>
<msg time='2010-09-02T21:33:45.941+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DRS-DB-01' host_addr='192.168.105.101' module='oracle@DC-DB-01 (TNS V1-V3)'
pid='13012'>
<txt>RFS[3]: Possible network disconnect with primary database
</txt>
</msg>
<msg time='2010-09-02T21:33:45.947+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DRS-DB-01' host_addr='192.168.105.101' module=''
pid='13012'>
<txt>Deleted Oracle managed file +RECOVERY/bddipdrs/archivelog/2010_09_02/thread_2_seq_89.301.728688311
</txt>
</msg>
From primary database,
<msg time='2010-09-02T21:34:23.966+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9533'>
<txt>Reclaiming FAL entry from dead process [pid 5732]
</txt>
</msg>
<msg time='2010-09-02T21:34:23.976+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9519'>
<txt>Reclaiming FAL entry from dead process [pid 5720]
</txt>
</msg>
<msg time='2010-09-02T21:34:23.992+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9511'>
<txt>Reclaiming FAL entry from dead process [pid 5734]
</txt>
</msg>
<msg time='2010-09-02T21:34:23.999+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9523'>
<txt>Reclaiming FAL entry from dead process [pid 5724]
</txt>
</msg>
<msg time='2010-09-02T21:34:24.002+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9513'>
<txt>Reclaiming FAL entry from dead process [pid 5730]
</txt>
</msg>
<msg time='2010-09-02T21:34:24.011+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9517'>
<txt>Reclaiming FAL entry from dead process [pid 5728]
</txt>
</msg>
<msg time='2010-09-02T21:34:24.013+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9509'>
<txt>Reclaiming FAL entry from dead process [pid 5738]
</txt>
</msg>
<msg time='2010-09-02T21:34:24.014+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9531'>
<txt>Reclaiming FAL entry from dead process [pid 5740]
</txt>
</msg>
<msg time='2010-09-02T21:34:24.031+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9529'>
<txt>Reclaiming FAL entry from dead process [pid 5742]
</txt>
</msg>
<msg time='2010-09-02T21:34:24.048+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9539'>
<txt>Reclaiming FAL entry from dead process [pid 5746]
</txt>
</msg>
<msg time='2010-09-02T21:34:24.058+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9547'>
<txt>Reclaiming FAL entry from dead process [pid 5750]
</txt>
</msg>
<msg time='2010-09-02T21:34:24.768+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='5697'>
<txt>ARCt: Archival started
</txt>
</msg>
<msg time='2010-09-02T21:34:24.768+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='5697'>
<txt>ARC3: STARTING ARCH PROCESSES COMPLETE
</txt>
</msg>
<msg time='2010-09-02T21:34:25.798+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='5744'>
<txt>ARCq: Standby redo logfile selected for thread 1 sequence 122 for destination LOG_ARCHIVE_DEST_2
</txt>
</msg>
<msg time='2010-09-02T21:34:25.815+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='5695'>
<txt>ARC2: Standby redo logfile selected for thread 2 sequence 88 for destination LOG_ARCHIVE_DEST_2
</txt>
</msg>
<msg time='2010-09-02T21:34:26.535+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9515'>
<txt>ARCd: Standby redo logfile selected for thread 1 sequence 110 for destination LOG_ARCHIVE_DEST_2
</txt>
</msg>
<msg time='2010-09-02T21:34:27.174+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9529'>
<txt>ARCk: Standby redo logfile selected for thread 1 sequence 120 for destination LOG_ARCHIVE_DEST_2
</txt>
</msg>
<msg time='2010-09-02T21:34:27.402+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9533'>
<txt>ARCm: Standby redo logfile selected for thread 1 sequence 112 for destination LOG_ARCHIVE_DEST_2
</txt>
</msg>
<msg time='2010-09-02T21:34:27.498+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9511'>
<txt>ARCb: Standby redo logfile selected for thread 1 sequence 114 for destination LOG_ARCHIVE_DEST_2
</txt>
</msg>
<msg time='2010-09-02T21:34:28.205+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9527'>
<txt>ARCj: Standby redo logfile selected for thread 1 sequence 111 for destination LOG_ARCHIVE_DEST_2
</txt>
</msg>
<msg time='2010-09-02T21:34:28.592+06:00' org_id='oracle' comp_id='rdbms'
client_id='' type='UNKNOWN' level='16'
host_id='DC-DB-01' host_addr='192.168.100.101' module=''
pid='9531'>
<txt>ARCl: Standby redo logfile selected for thread 1 sequence 119 for destination LOG_ARCHIVE_DEST_2
</txt>
</msg>Hi,
I am currently facing the same issue with same set of errors from primary and standby.
Mine is a 11gR2 2 node RAC on RHEL5 with a single node standby.
However, in my case, whenever I switch a logfile on the primary database, only 17 MB or less is shipped per archive, although the redo log size is 500 MB.
Any help would be really appreciated.
Thanks,
Rahul Singh. -
Lost seq of redo log due to corruption and cannot recover database.
Hi!
The db I am working on is a test database running 10.2.0.3 on OEL5. Unfortunately, due to some human error, we lost redo log sequence 1_28_xxxxxx.redo. As this was a non-critical db, we didn't plan any backups for it... and now whenever I try to open the db I get the error:
SQL> alter database open;
alter database open
ERROR at line 1:
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
SQL> alter database open resetlogs;
alter database open resetlogs
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/opt/app/oracle/oradata/tadb1/system01.dbf'
SQL> recover until cancel
ORA-00279: change 510956 generated at 08/31/2010 22:00:17 needed for thread 1
ORA-00289: suggestion :
/opt/app/oracle/oradata/tadb1/archive/1_28_728336713.dbf
ORA-00280: change 510956 for thread 1 is in sequence #28
SQL> recover database until time '31-AUG-2010 22:00:00';
ORA-00283: recovery session canceled due to errors
ORA-00314: log 1 of thread 1, expected sequence# 28 doesn't match 0
ORA-00312: online log 1 thread 1: '/opt/app/oracle/oradata/tadb1/redo01.log'
Is there a way to open the database!?
Thanks,
AB007
Sorry for the late response guys... I had called it a night earlier... well, I tried your suggestion... but the database still can't recover -
SQL> recover database using backup controlfile until cancel;
ORA-00279: change 510958 generated at 09/02/2010 23:56:37 needed for thread 1
ORA-00289: suggestion : /opt/app/oracle/oradata/tadb1/archive/1_1_728697397.dbf
ORA-00280: change 510958 for thread 1 is in sequence #1
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/opt/app/oracle/oradata/tadb1/system01.dbf'
ORA-01112: media recovery not started
SQL> alter database open resetlogs;
alter database open resetlogs
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced
ALERT LOG
ALTER DATABASE RECOVER database using backup controlfile until cancel
Fri Sep 3 10:14:22 2010
Media Recovery Start
WARNING! Recovering data file 1 from a fuzzy file. If not the current file
it might be an online backup taken without entering the begin backup command.
WARNING! Recovering data file 2 from a fuzzy file. If not the current file
it might be an online backup taken without entering the begin backup command.
WARNING! Recovering data file 3 from a fuzzy file. If not the current file
it might be an online backup taken without entering the begin backup command.
WARNING! Recovering data file 4 from a fuzzy file. If not the current file
it might be an online backup taken without entering the begin backup command.
parallel recovery started with 2 processes
ORA-279 signalled during: ALTER DATABASE RECOVER database using backup controlfile until cancel ...
Fri Sep 3 10:14:25 2010
ALTER DATABASE RECOVER CANCEL
ORA-1547 signalled during: ALTER DATABASE RECOVER CANCEL ...
Fri Sep 3 10:14:26 2010
ALTER DATABASE RECOVER CANCEL
ORA-1112 signalled during: ALTER DATABASE RECOVER CANCEL ...
Fri Sep 3 10:14:43 2010
alter database open resetlogs
Fri Sep 3 10:14:43 2010
RESETLOGS is being done without consistancy checks. This may result
in a corrupted database. The database should be recreated.
RESETLOGS after incomplete recovery UNTIL CHANGE 510958
Resetting resetlogs activation ID 2129258410 (0x7ee9e7aa)
Online log /opt/app/oracle/oradata/tadb1/redo02.log: Thread 1 Group 2 was previously cleared
Online log /opt/app/oracle/oradata/tadb1/redo03.log: Thread 1 Group 3 was previously cleared
Fri Sep 3 10:14:45 2010
Setting recovery target incarnation to 3
Fri Sep 3 10:14:45 2010
Assigning activation ID 2129271722 (0x7eea1baa)
Thread 1 opened at log sequence 1
Current log# 1 seq# 1 mem# 0: /opt/app/oracle/oradata/tadb1/redo01.log
Successful open of redo thread 1
Fri Sep 3 10:14:45 2010
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Fri Sep 3 10:14:45 2010
SMON: enabling cache recovery
Fri Sep 3 10:14:45 2010
Errors in file /opt/app/oracle/admin/tadb1/udump/tadb1_ora_5949.trc:
ORA-00600: internal error code, arguments: [4000], [6], [], [], [], [], [], []
Fri Sep 3 10:14:45 2010
Errors in file /opt/app/oracle/admin/tadb1/udump/tadb1_ora_5949.trc:
ORA-00704: bootstrap process failure
ORA-00704: bootstrap process failure
ORA-00600: internal error code, arguments: [4000], [6], [], [], [], [], [], []
Fri Sep 3 10:14:45 2010
Error 704 happened during db open, shutting down database
USER: terminating instance due to error 704
Instance terminated by USER, pid = 5949
ORA-1092 signalled during: alter database open resetlogs... -
IMP-00010 when importing a database
Hello,
I've exported a database using
expdp system/password DIRECTORY=data_pump_dir DUMPFILE=full_export.dmp FULL=y
because I then formatted the HDD and wanted to import the exported database later. I used Oracle 10gR2, and now I have installed the same 10gR2 (not an earlier version).
When I tried to import the .DMP file, I got:
IMP-00010: not a valid export file, header failed verification
IMP-00000: Import terminated unsuccessfully
First I typed imp at the command prompt. Then I entered the username and password (I had first re-created the user which originally exported the database). Then I entered the path of the EXPDAT.DMP file, including the file name.
Is it necessary to import using Oracle Client 10g? I don't have a client installed when importing; I work just on my local machine.
Regards,
Yes, it's the same .dmp.
Here is the export log (when i exported, i saved the log)
Export: Release 10.2.0.1.0 - Production on Sunday, 28 June, 2009 13:35:32
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Personal Oracle Database 10g Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "ROGER"."SYS_EXPORT_SCHEMA_01": roger/********
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 8.312 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCOBJ
. . exported "ROGER"."ANGAJATI" 2.470 MB 17 rows
. . exported "ROGER"."ELEVI" 57.00 KB 14 rows
. . exported "ROGER"."PONTAJE" 7.148 KB 6 rows
. . exported "ROGER"."PHOTOS" 67.30 KB 1 rows
. . exported "ROGER"."BIN_DOCS" 5.679 KB 2 rows
. . exported "ROGER"."CK_PRODUCT_ELEMENTS" 56.60 KB 1090 rows
. . exported "ROGER"."WU_TEST_TABLE" 5.273 KB 1 rows
. . exported "ROGER"."ABSENTE" 7.507 KB 13 rows
. . exported "ROGER"."ACTIVITATI_PARTENERI" 5.953 KB 1 rows
. . exported "ROGER"."ALTE_ACTIVITATI" 7.648 KB 3 rows
. . exported "ROGER"."AMANATI" 6.578 KB 2 rows
. . exported "ROGER"."ANGAJAT_FUNCTIE" 7.25 KB 23 rows
. . exported "ROGER"."AN_SCOLAR" 6.601 KB 12 rows
. . exported "ROGER"."CLASA" 5.656 KB 38 rows
. . exported "ROGER"."CLASA_PROFIL" 6.429 KB 13 rows
. . exported "ROGER"."CORIGENTE" 6.585 KB 1 rows
. . exported "ROGER"."DOTARI_SALI" 6.171 KB 13 rows
. . exported "ROGER"."ELEVI_ACTIVITATI" 5.601 KB 1 rows
. . exported "ROGER"."ELEVI_CLASA" 5.757 KB 13 rows
. . exported "ROGER"."FESTIV_PREMIERE" 6.617 KB 3 rows
. . exported "ROGER"."FUNCTII" 5.429 KB 12 rows
. . exported "ROGER"."JUDET" 6.242 KB 41 rows
. . exported "ROGER"."LOCALITATE" 7.578 KB 108 rows
. . exported "ROGER"."MATERII" 6.046 KB 17 rows
. . exported "ROGER"."MEDII_ANUALE" 6.257 KB 1 rows
. . exported "ROGER"."MEDII_ANUALE_MATERII" 6.335 KB 4 rows
. . exported "ROGER"."MEDII_SEMESTRIALE_MATERII" 6.429 KB 9 rows
. . exported "ROGER"."NOTE" 7.718 KB 30 rows
. . exported "ROGER"."ORAR" 7.937 KB 38 rows
. . exported "ROGER"."PARTENERI" 6.695 KB 3 rows
. . exported "ROGER"."PROFESORI_ACTIVITATI" 5.328 KB 5 rows
. . exported "ROGER"."PROFIL" 5.703 KB 5 rows
. . exported "ROGER"."PROFIL_ANSCOLAR" 6.234 KB 21 rows
. . exported "ROGER"."RETINERI" 6.703 KB 1 rows
. . exported "ROGER"."SALA" 5.968 KB 23 rows
. . exported "ROGER"."SALARII" 8.125 KB 1 rows
. . exported "ROGER"."SEMESTRU" 6.835 KB 24 rows
. . exported "ROGER"."SPORURI" 7.421 KB 1 rows
. . exported "ROGER"."STRADA" 8.812 KB 115 rows
. . exported "ROGER"."TRANSE_SV" 5.710 KB 7 rows
. . exported "ROGER"."TRANSFERURI_EXTERNE" 6.265 KB 1 rows
. . exported "ROGER"."TRANSFERURI_INTERNE" 5.921 KB 1 rows
. . exported "ROGER"."MEDII_SEMESTRIALE" 0 KB 0 rows
. . exported "ROGER"."REPETENTE" 0 KB 0 rows
Master table "ROGER"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
Dump file set for ROGER.SYS_EXPORT_SCHEMA_01 is:
D:\ORACLE\PRODUCT\10.2.0\ADMIN\SCOALA\DPDUMP\EXPDAT.DMP
Job "ROGER"."SYS_EXPORT_SCHEMA_01" successfully completed at 13:36:37
So in my database I had other users created by me, which I think were not exported. -
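Since the log above shows the dump was created with expdp (Data Pump), it cannot be read by the classic imp utility; that mismatch is the usual cause of IMP-00010. A minimal sketch of the matching import (the schema name and dump file name come from the log; the directory is an assumption - adjust it to your environment):
impdp system/password DIRECTORY=data_pump_dir DUMPFILE=EXPDAT.DMP SCHEMAS=roger
Data Pump dumps can only be read by impdp, just as dumps made with the classic exp can only be read by imp.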
Using Imp utility to Import a database on 10g release 2
I am using the Imp utility on 10g release 2 to import from a 10g release 1 database. I have created a new tablespace and need the dmp to be imported into the new tablespace, but it gets imported into the SYSTEM tablespace instead. The command I use is
imp <User name>/<password> file=<file_name> log =<log name> full=Yes
The username and password I use are the ones I gave during installation for the system tables. When I specify the tablespace as imp <User name>/<password>@<tablespace>, it fails with "ORACLE error 12154 encountered".
Do I need to create a new user to get the import done? How do I create it?
The string after the @ sign is a TNS connect identifier (a database alias), not a tablespace, which is why ORA-12154 is raised. The contents of the dump will be imported into the default tablespace of the user that runs the import.
If your dump from Oracle 10G R1 is a complete database dump and you need to import the data of a specific user in the dump, then you can use fromuser= touser= options in the import command.
If your Oracle 10gR2 database is blank, then you need to create the user into which you can import the data. You can use a command like this:
CREATE USER sidney
IDENTIFIED BY out_standing1
DEFAULT TABLESPACE example
QUOTA 10M ON example
TEMPORARY TABLESPACE temp;
Note: sidney, out_standing1, example, temp are all arbitrary names. You need to match them to your environment. -
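For example, a sketch of the fromuser/touser form (scott is a placeholder for the schema in your dump, and sidney is the user created above; match both to your environment):
imp system/password file=full_export.dmp log=imp_user.log fromuser=scott touser=sidney
This imports only the named schema's objects from a full dump into the target user, whose default tablespace then receives the data.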
How to find out the process count for a database during a particular window
Hi Team,
I want to find out the maximum process count reached for a database for an interval. Say between 1:00 to 2:00 AM IST. Please help
Database version:11.2.0.2
OS: AIX 6.1
Check DBA_HIST_RESOURCE_LIMIT. Your information will be there.
Similar information is available in DBA_HIST_SYSSTAT.
Look for the stat names logons cumulative / logons current - the total number of logons since the instance started / the total number of current logons.
You can join dba_hist_snapshot on snap_id and use begin_interval_time and end_interval_time to see historic process utilization.
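A sketch of that join, assuming a standard AWR setup (the timestamps are placeholders for your 1:00-2:00 AM window):
SELECT s.begin_interval_time, r.resource_name, r.max_utilization
  FROM dba_hist_resource_limit r
  JOIN dba_hist_snapshot s
    ON r.snap_id = s.snap_id
   AND r.dbid = s.dbid
   AND r.instance_number = s.instance_number
 WHERE r.resource_name = 'processes'
   AND s.begin_interval_time >= TIMESTAMP '2013-06-01 01:00:00'
   AND s.end_interval_time   <= TIMESTAMP '2013-06-01 02:00:00'
 ORDER BY s.begin_interval_time;
MAX_UTILIZATION gives the high-water mark of the processes resource within each snapshot interval.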
Regards,
Sreejith -
Getting error: ORA-04030: out of process memory with EBS database
Dear experts,
I am getting this error with the EBS 11i database during the running of the system specially in the rush hour of the work day leads to disconnect between the database server and client:
ORA-04030: out of process memory when trying to allocate 917536 bytes (joxcx callheap,ioc_allocate ufree)
OS: Windows Server 2003 32bit
Memory: 4GB RAM
Oracle database version: 9.2.0.6.0
db_cache_size=629145600
sga_max_size=1400897536
pga_aggregate_target=1073741824
sessions=800
processes=400
Thanks
This error is expected, especially when not enough memory is allocated and the system is heavily loaded, as seems to be the case here. Please see this MOS document for a possible resolution:
Master Note for Diagnosing OS Memory Problems and ORA-4030 [ID 1088267.1]
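Rough arithmetic with the parameters posted above also shows why a 32-bit Windows server runs out of address space (by default a 32-bit Windows process gets about 2 GB of user address space, and on Windows the SGA and server-side PGA live inside the single oracle.exe process):
sga_max_size         = 1400897536 bytes ~ 1.30 GB
pga_aggregate_target = 1073741824 bytes ~ 1.00 GB
combined target                        ~ 2.30 GB -- over the ~2 GB per-process limit
So under load ORA-04030 is expected; reducing the SGA/PGA targets or moving to a 64-bit platform are the usual ways out.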
HTH
Srini -
Retrieve processed message payload from database
Hi,
We need to retrieve the payload of all the messages which have been processed in last 1 month.
We do not have the message archiving configured currently.
Could you please let us know how I can retrieve the payload? We cannot do it manually as the volume is really huge.
Please help with suggestions.
Thanks,
Jane
If you are not archiving the messages, then I don't think you can get the payloads for the last month.
However, there is a service you can use to retrieve the payload:
PI/XI: how to get a PI message from java stack (AAE) in PI 7.11 from ABAP?
You can write a wrapper around it and download multiple payloads.