RMAN-11003: failure during parse/execution of SQL statement: alter session
Without making any changes, I have started getting the following error in the RMAN logs.
I didn't make any changes related to SORT_AREA_SIZE, but I am getting the error below.
Please help.
RMAN logs
=====================
connected to recovery catalog database
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of allocate command at 12/03/2007 22:00:01
RMAN-03014: implicit resync of recovery catalog failed
RMAN-03009: failure of full resync command on default channel at 12/03/2007 22:00:01
RMAN-11003: failure during parse/execution of SQL statement: alter session set sort_area_size=10485760
ORA-00068: invalid value 10485760 for parameter sort_area_size, must be between 0 and 1
Hi:
It seems that when you start RMAN it executes some commands (one of them an 'ALTER SESSION ...'). It appears to come from a batch script which has a bad value for SORT_AREA_SIZE. Find it and change it to a proper value, as the message indicates. If you can't find it, start RMAN by calling the executable directly ($ORACLE_HOME/bin/rman or %ORACLE_HOME%\bin\rman.exe).
Similar Messages
-
RMAN-10006: error running SQL statement: alter session set remote_dependenc
Backups are failing with the following error:
RMAN-00554: initialization of internal recovery manager package failed
RMAN-12001: could not open channel default
RMAN-10008: could not create channel context
RMAN-10002: ORACLE error: ORA-00096: invalid value SIGNATURE for parameter remote_dependencies_mode, must be from among MANUAL, AUTO
RMAN-10006: error running SQL statement: alter session set remote_dependencies_mode = signature
Not able to change to signature
SQL> alter session set remote_dependencies_mode=signature;
ERROR:
ORA-00096: invalid value SIGNATURE for parameter remote_dependencies_mode, must
be from among MANUAL, AUTO
I don't see MANUAL or AUTO as valid values for this parameter (http://download.oracle.com/docs/cd/B10501_01/server.920/a96536/ch1175.htm#1023124). DB version is 9.2.0.
Parameter type
String
Syntax
REMOTE_DEPENDENCIES_MODE = {TIMESTAMP | SIGNATURE}
Default value
TIMESTAMP
Parameter class
Dynamic: ALTER SESSION, ALTER SYSTEM
=======================================
I believe it could be because of the following bug:
"A PRE-PATCHED ORACLE IMAGE CAN BE INSTALLED IN MEMORY "
Refer: "https://metalink2.oracle.com/metalink/plsql/f?p=130:15:1613505143885559758::::p15_database_id,p15_docid,p15_show_header,p15_show_help,p15_black_frame,p15_font:BUG,4610411,1,1,1,helvetica"
I appreciate your effort in fixing this issue.
Edited by: user10610722 on Nov 25, 2008 4:37 PM -
Execution of SQL statement 'alter tablespace PSAPSR3
Dear masters,
I am trying to extend a tablespace in Oracle, but it is not succeeding and I have a problem;
maybe someone can help with this issue.
The error when adding a datafile to the tablespace:
BR0280I BRSPACE time stamp: 2014-01-06 10.27.31
BR0370I Directory /oracle/SID/sapreorg/semxnacf created
BR0280I BRSPACE time stamp: 2014-01-06 10.27.32
BR0319I Control file copy created: /oracle/SID/sapreorg/semxnacf/cntrlSID.old 99106816
BR0280I BRSPACE time stamp: 2014-01-06 10.27.32
BR1088I Extending tablespace PSAPSR3...
BR0280I BRSPACE time stamp: 2014-01-06 10.27.51
BR0301E SQL error -59 at location BrSqlExecute-1, SQL statement:
'/* BRSPACE */ alter tablespace PSAPSR3 add datafile '/oracle/SID/sapdata16/sr3_218/sr3.data218' size 4000M autoextend off'
ORA-00059: maximum number of DB_FILES exceeded
BR1017E Execution of SQL statement 'alter tablespace PSAPSR3 add datafile '/oracle/SID/sapdata16/sr3_218/sr3.data218' size 4000M autoextend off' failed
BR0669I Cannot continue due to previous warnings or errors - you can go back to repeat the last action
BR0280I BRSPACE time stamp: 2014-01-06 10.27.51
BR0671I Enter 'b[ack]' to go back, 's[top]' to abort:
regards,
amin
BR1088I Extending tablespace PSAPSR3...
BR0280I BRSPACE time stamp: 2014-01-06 10.27.51
BR0301E SQL error -59 at location BrSqlExecute-1, SQL statement:
'/* BRSPACE */ alter tablespace PSAPSR3 add datafile '/oracle/SID/sapdata16/sr3_218/sr3.data218' size 4000M autoextend off'
ORA-00059: maximum number of DB_FILES exceeded
$ oerr ora 59
00059, 00000, "maximum number of DB_FILES exceeded"
// *Cause: The value of the DB_FILES initialization parameter was exceeded.
// *Action: Increase the value of the DB_FILES parameter and warm start.
$ -
Time of execution of SQL statement
Hi,
I executed some DML statements and Select statements as user SYS for user U1. How can I find the time of execution of these statements?
Regards,
Mathew
You have a couple of options to time queries.
SET TIMING ON has already been mentioned. I'll add that spooling the session to a file helps keep those timings for later reference.
You can use DBMS_UTILITY.GET_TIME to get start and end times (in hundredths of a second) for timing PL/SQL evaluation. DBMS_PROFILER will time lines of PL/SQL code as they execute. Trace/tkprof has also already been mentioned, although tkprof times tend to vary a bit between reported and real times and don't take OS activity into account as much as the other timings do. -
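The start/end timing idea can also be sketched outside the database, in plain Java. This is a minimal sketch; `System.nanoTime` stands in for DBMS_UTILITY.GET_TIME, and the timed work is a placeholder:

```java
public class QueryTimer {
    // Capture a start tick, run the work, then report the elapsed
    // difference -- the same pattern as DBMS_UTILITY.GET_TIME before
    // and after a PL/SQL block.
    public static long timeMillis(Runnable work) {
        long start = System.nanoTime();   // monotonic start tick
        work.run();                       // the statement/work to measure
        long elapsedNanos = System.nanoTime() - start;
        return elapsedNanos / 1_000_000L; // convert to milliseconds
    }

    public static void main(String[] args) {
        // Placeholder work: a short sleep standing in for a query
        long ms = timeMillis(() -> {
            try { Thread.sleep(50); } catch (InterruptedException e) { }
        });
        System.out.println("elapsed ms: " + ms);
    }
}
```

As with SET TIMING ON plus spooling, writing the measured values somewhere durable keeps them for later comparison.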
This SQL statement will give me the results listed in the first table
SELECT Count([Accepts 2].Queue) AS CountOfQueue, Date.Date
FROM [Accepts 2] INNER JOIN [Date] ON FORMAT(Date.Date,"hh")=format([Accepts 2].TimeOfAccept,"hh")
WHERE ((([Accepts 2].TimeOfAccept) Between (#1/1/2002#) And ((#12/30/2002#))))
GROUP BY Date.Date;
I set up a table containing the 24 hours.
This query gives the number of cases per hour for the time period specified, like this:
CountOfQueue Date
1 12:00:00
10 15:00:00
2 16:00:00
1 17:00:00
2 18:00:00
But I want it to give me something like this
Count Of Queue Date
1 12:00:00
0 13:00:00
0 14:00:00
10 15:00:00
2 16:00:00
and so on and so forth, all the way up to 2300 hours.
Do you know a way to modify the query to do this,
or how to parse the query result set to populate an array? For hours that are not returned, simply put a zero in the relevant array position.
Thanking you in advance,
STEVE
Here's something that I hope will get you started:
Map map = new HashMap(); // you could use a TreeMap if you want to sort the results
for (int i = 0; i < 24; i++) {
    map.put(i + ":00:00", new Integer(0));
}
ResultSet rs = ...; // your result set
while (rs.next()) {
    map.put(rs.getString("date"), new Integer(rs.getInt("count")));
} -
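A fuller, runnable sketch of that zero-fill approach, with hypothetical rows standing in for the JDBC ResultSet (note the hour keys are zero-padded to match the "12:00:00" style the query returns):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HourlyCounts {
    // Pre-populate all 24 hour buckets with zero, then overwrite the
    // buckets the query actually returned -- missing hours stay 0.
    public static Map<String, Integer> zeroFill(Map<String, Integer> queryRows) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (int i = 0; i < 24; i++) {
            counts.put(String.format("%02d:00:00", i), 0); // default bucket
        }
        counts.putAll(queryRows);                          // overwrite with real counts
        return counts;
    }

    public static void main(String[] args) {
        // Hypothetical rows, standing in for rs.getString/rs.getInt
        Map<String, Integer> rows = new LinkedHashMap<>();
        rows.put("12:00:00", 1);
        rows.put("15:00:00", 10);
        Map<String, Integer> full = zeroFill(rows);
        System.out.println(full.get("13:00:00")); // 0
        System.out.println(full.get("15:00:00")); // 10
        System.out.println(full.size());          // 24
    }
}
```

A LinkedHashMap keeps the hours in insertion (chronological) order; a TreeMap would sort the keys instead.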
Connecting strings for execution as SQL-Statement
Hello to all.
I have the problem that I want to write a package which handles occurring errors as automatically as possible.
One part is to recompile invalid objects.
I can find all these objects by using the following cursor:
CURSOR curInvalidObjects
IS SELECT OBJECT_TYPE, OWNER, OBJECT_NAME
from dba_objects a
where STATUS = 'INVALID' and
OWNER = 'BESECKE' AND
OBJECT_TYPE in ( 'PACKAGE BODY',
'PACKAGE', 'FUNCTION', 'PROCEDURE',
'TRIGGER', 'VIEW' )
order by OWNER, OBJECT_TYPE, OBJECT_NAME;
After opening the cursor I can go through the records and recompile the objects with something like this:
'alter ' || recInvalidObjects.OBJECT_TYPE || ' '
|| recInvalidObjects.OWNER || '.' ||
recInvalidObjects.OBJECT_NAME || ' compile';
But that does not work. This way I can't get an executable SQL statement; it just becomes a string, but it's not executable.
I think it's a simple problem, but I tried to find anything about executable strings in the documentation I have and could not find anything. So can anybody give me a short hint on how to create an executable statement?
Thanks a lot.
Susanne Saalmann
If you just recompile without taking the correct order of compilation into account, you will have to run your statement a couple of times. This script solves that:
SET HEADING OFF
SET FEEDBACK OFF
SET PAGES 9999
SET TIMING OFF
SET TERMOUT ON
COLUMN noprn NOPRINT
SPOOL comp.sql
SELECT 'ALTER '||
DECODE( o.object_type, 'PACKAGE BODY', 'PACKAGE', o.object_type)||
' '||decode(o.object_type,'JAVA CLASS','"',null)||
o.object_name || decode(o.object_type,'JAVA CLASS','"',null)||
' COMPILE '||
DECODE( o.object_type, 'PACKAGE BODY', 'BODY;', ';'),
COUNT( d.name ) noprn
FROM user_objects o,
user_dependencies d
WHERE o.object_name = d.referenced_name(+)
AND o.object_type = d.referenced_type(+)
AND o.status = 'INVALID'
GROUP BY o.object_name, o.object_type
ORDER BY noprn DESC;
SPOOL OFF
SET HEADING ON
SET FEEDBACK ON
SET PAGES 14
START comp.sql
SHOW USER
SELECT object_type, status, count(*)
FROM user_objects
GROUP BY object_type, status; -
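Back on the original question of making the concatenated string executable: inside PL/SQL the usual tool is EXECUTE IMMEDIATE, which runs a dynamically built statement string. The string assembly itself can be sketched in plain Java with a hypothetical helper (the object type, owner and name stand in for the cursor columns; all names are illustrative):

```java
public class CompileStatementBuilder {
    // Build an ALTER ... COMPILE statement from the pieces a cursor
    // over DBA_OBJECTS would supply. A PACKAGE BODY is compiled via
    // ALTER PACKAGE ... COMPILE BODY, mirroring the DECODEs in the
    // SQL*Plus script above.
    public static String build(String objectType, String owner, String name) {
        if ("PACKAGE BODY".equals(objectType)) {
            return "ALTER PACKAGE " + owner + "." + name + " COMPILE BODY";
        }
        return "ALTER " + objectType + " " + owner + "." + name + " COMPILE";
    }

    public static void main(String[] args) {
        System.out.println(build("PROCEDURE", "BESECKE", "MY_PROC"));
        // ALTER PROCEDURE BESECKE.MY_PROC COMPILE
        System.out.println(build("PACKAGE BODY", "BESECKE", "MY_PKG"));
        // ALTER PACKAGE BESECKE.MY_PKG COMPILE BODY
    }
}
```

In the PL/SQL loop, each generated string would then be executed with EXECUTE IMMEDIATE (or DBMS_SQL on very old releases).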
Execute sql command "ALTER SESSION SET..."
How can I execute the command "ALTER SESSION SET NLS_DATE_FORMAT = 'MM/DD/YYYY'" with the JDBC API?
And by default?
Bye
Ste
I'm not sure that you want to do this in Java. I would imagine that NLS_DATE_FORMAT would be set by the Oracle DBA, and shouldn't be changed by a Java app via JDBC.
Besides, when you query the database for a date it's returned as a java.sql.Date, regardless of the settings in the database. Once you have that, you can format it according to your wishes by using java.text.DateFormat and java.text.SimpleDateFormat.
I thought the point was that the JDBC driver took care of ensuring that you got java.sql.Date objects back from queries, regardless of the NLS_DATE_FORMAT setting in the database. -
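That client-side formatting can be sketched like so (the date value is hypothetical; the point is that formatting happens in Java, not via the session's NLS settings):

```java
import java.text.SimpleDateFormat;

public class DateFormatting {
    // Format a java.sql.Date on the client with SimpleDateFormat
    // instead of altering the session's NLS_DATE_FORMAT.
    public static String toUsFormat(java.sql.Date d) {
        return new SimpleDateFormat("MM/dd/yyyy").format(d);
    }

    public static void main(String[] args) {
        // A date as JDBC would hand it back, regardless of NLS settings
        System.out.println(toUsFormat(java.sql.Date.valueOf("2008-11-25"))); // 11/25/2008
    }
}
```

The same idea extends to parsing user input: convert strings to java.sql.Date in the application, then bind them with PreparedStatement.setDate, so the database never sees a format-dependent string.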
Conversion Error During the Execution of SQL Server Job
Hello,
I have a production SQL server and a test SQL server that mimics the production server. On the production server all the jobs which do a variety of things using Transact SQL and SSIS packages work flawlessly. However one of the jobs on the test server fails
at a step and issues the following error:
executed as user: DomainName\administrator. Conversion failed when converting datetime from character string. [SQLSTATE 22007] (Error 241).
The step failed.
Here is the code contained in the step's Command window (the same exact code that works fine on the production server):
declare @Now datetime
declare @DB_Date datetime
declare @Trip_Err datetime
select @Now = convert(varchar,getdate(),101)
select @DB_Date = (select convert(varchar,max(ProcessDate),101) from I_Loans)
if @now <> @DB_Date
begin
truncate table I_Loans
end
else
select @Trip_err = 'error'
I don’t see anything preventing this code from executing. However, I am not a programmer. My SQL Server version is below:
Microsoft SQL Server 2005 - 9.00.5324.00 (Intel X86)
Aug 24 2012 14:24:46 Copyright (c) 1988-2005 Microsoft Corporation
Standard Edition on Windows NT 6.0 (Build 6002: Service Pack 2)
Any advice or solutions will be greatly appreciated. Please let me know if I need to provide more information.
Thank you,
Dave
David Young
Could you run this query from Management Studio in test?
select convert(varchar,max(ProcessDate),101) from I_Loans
It could be something to do with a ProcessDate value that has issues converting.
Regards, Ashwin Menon My Blog - http:\\sqllearnings.com -
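As a hedged aside on the code above: since @Trip_err is declared datetime, the branch `select @Trip_err = 'error'` raises exactly this conversion error (Msg 241) whenever @now equals @DB_Date; it looks like a deliberate trip-wire, so the test server may simply be hitting the branch that production doesn't. More generally, varchar/datetime round-trips are fragile when regional formats differ. A small Java sketch of that ambiguity (the date strings are hypothetical):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class AmbiguousDates {
    // Parse the same string under two regional conventions; a mismatch
    // shows why implicit varchar<->datetime round-trips are fragile.
    public static boolean sameDate(String s) {
        try {
            Date us = new SimpleDateFormat("MM/dd/yyyy").parse(s); // like style 101
            Date eu = new SimpleDateFormat("dd/MM/yyyy").parse(s);
            return us.equals(eu);
        } catch (ParseException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(sameDate("01/06/2014")); // false: Jan 6 vs Jun 1
        System.out.println(sameDate("06/06/2014")); // true: unambiguous
    }
}
```

In T-SQL the equivalent defensive move is comparing datetime values directly (e.g. with DATEDIFF or CAST to date) instead of round-tripping through varchar.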
First execution of sql statements is slow every morning
Dears,
we are running an oracle11g database (HP-UX Itanium) and have the following problem:
Every morning the first execution of statements is very slow.
After the first execution the statements are running fine.
Does anyone have an idea where this can come from?
Is it possible that the cache (shared pool, etc.) is flushed every night (for example when new statistics are generated, or something else)?
Regards,
Ilja
I think you are close to answering your question.
As you know, Oracle 11g has an automated job to run performance stats every night at approx. 10:00pm (until 2:00am).
This is run by the dbms_scheduler.
This could be causing the shared_pool to be flushed because it certainly uses it a lot. I have to manually flush the shared_pool every night in one of my databases before this job runs otherwise I get an ORA-01461.
But, what I'm surprised is that you have this problem only in the morning.
It seems you would want to pin your SQL in memory and perhaps set a profile for your execution.
You don't bounce your database every night, do you? -
RMAN-05501: RMAN-11003 ORA-00353: log corruption near block 2048 change
Hi Gurus,
A few days ago I posted an issue I ran into while recreating my Data Guard setup.
The main issue was that while duplicating the target from the active database I got these errors during the recovery process.
The restore process went fine; RMAN copied the datafiles very well, but stopped at the moment of starting the recovery process on the auxiliary DB.
Yesterday I took one last try,
I followed the same procedure, the one described in all the Oracle docs, Google and so on ... it's not a secret, I guess.
Then I got the same issue, same errors.
I read something about archivelogs and block corruption and so on; I've tried so many things (registering the log, etc.), and then I read something about "cataloging the logfile",
and that's what I did.
But I was just connected to the target db.
contents of Memory Script:
set until scn 1638816629;
recover
standby
clone database
delete archivelog
executing Memory Script
executing command: SET until clause
Starting recover at 14-MAY-13
starting media recovery
archived log for thread 1 with sequence 32196 is already on disk as file /archives/CMOVP/stby/1_32196_810397891.arc
archived log for thread 1 with sequence 32197 is already on disk as file /archives/CMOVP/stby/1_32197_810397891.arc
archived log for thread 1 with sequence 32198 is already on disk as file /archives/CMOVP/stby/1_32198_810397891.arc
archived log for thread 1 with sequence 32199 is already on disk as file /archives/CMOVP/stby/1_32199_810397891.arc
archived log for thread 1 with sequence 32200 is already on disk as file /archives/CMOVP/stby/1_32200_810397891.arc
archived log for thread 1 with sequence 32201 is already on disk as file /archives/CMOVP/stby/1_32201_810397891.arc
archived log for thread 1 with sequence 32202 is already on disk as file /archives/CMOVP/stby/1_32202_810397891.arc
archived log for thread 1 with sequence 32203 is already on disk as file /archives/CMOVP/stby/1_32203_810397891.arc
archived log for thread 1 with sequence 32204 is already on disk as file /archives/CMOVP/stby/1_32204_810397891.arc
archived log for thread 1 with sequence 32205 is already on disk as file /archives/CMOVP/stby/1_32205_810397891.arc
archived log for thread 1 with sequence 32206 is already on disk as file /archives/CMOVP/stby/1_32206_810397891.arc
archived log for thread 1 with sequence 32207 is already on disk as file /archives/CMOVP/stby/1_32207_810397891.arc
archived log for thread 1 with sequence 32208 is already on disk as file /archives/CMOVP/stby/1_32208_810397891.arc
archived log for thread 1 with sequence 32209 is already on disk as file /archives/CMOVP/stby/1_32209_810397891.arc
archived log for thread 1 with sequence 32210 is already on disk as file /archives/CMOVP/stby/1_32210_810397891.arc
archived log for thread 1 with sequence 32211 is already on disk as file /archives/CMOVP/stby/1_32211_810397891.arc
archived log for thread 1 with sequence 32212 is already on disk as file /archives/CMOVP/stby/1_32212_810397891.arc
archived log for thread 1 with sequence 32213 is already on disk as file /archives/CMOVP/stby/1_32213_810397891.arc
archived log for thread 1 with sequence 32214 is already on disk as file /archives/CMOVP/stby/1_32214_810397891.arc
archived log for thread 1 with sequence 32215 is already on disk as file /archives/CMOVP/stby/1_32215_810397891.arc
archived log for thread 1 with sequence 32216 is already on disk as file /archives/CMOVP/stby/1_32216_810397891.arc
archived log for thread 1 with sequence 32217 is already on disk as file /archives/CMOVP/stby/1_32217_810397891.arc
archived log for thread 1 with sequence 32218 is already on disk as file /archives/CMOVP/stby/1_32218_810397891.arc
archived log for thread 1 with sequence 32219 is already on disk as file /archives/CMOVP/stby/1_32219_810397891.arc
archived log for thread 1 with sequence 32220 is already on disk as file /archives/CMOVP/stby/1_32220_810397891.arc
archived log for thread 1 with sequence 32221 is already on disk as file /archives/CMOVP/stby/1_32221_810397891.arc
archived log for thread 1 with sequence 32222 is already on disk as file /archives/CMOVP/stby/1_32222_810397891.arc
archived log for thread 1 with sequence 32223 is already on disk as file /archives/CMOVP/stby/1_32223_810397891.arc
archived log file name=/archives/CMOVP/stby/1_32196_810397891.arc thread=1 sequence=32196
released channel: prm1
released channel: stby1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 05/14/2013 01:11:33
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
ORA-00283: recovery session canceled due to errors
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile '/archives/CMOVP/stby/1_32196_810397891.arc'
ORA-00283: recovery session canceled due to errors
ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 2048 change 1638686297 time 05/13/2013 22:42:03
ORA-00334: archived log: '/archives/CMOVP/stby/1_32196_810397891.arc'
################# What I did: ################################
rman target /
RMAN>catalog archivelog '/archives/CMOVP/stby/1_32196_810397891.arc';
Then I connected to the target and auxiliary again: rman target / catalog rman/rman@rman auxiliary
and I ran the last content of the failing memory script: RMAN> run
set until scn 1638816629;
recover
standby
clone database
delete archivelog
And the DB started the recovery process, and my standby completed the recovery very well with the message "Recovery Finished" (or "Terminated"/"Completed").
Then I could configure Data Guard.
And I checked the process, and the log apply was on and running fine, no gaps, perfect!
How?! Just by cataloging a "supposedly corrupted" archive log!
If you have any ideas, it would be great to understand this.
Rgds
Carlos
okKarol wrote: [...]
Hi,
Can you change the standby database archive destination from /archives/CMOVP/stby/ to another disk?
I think this problem is on your disk.
Mahir
P.S. I remember your previous thread, too -
Hi,
Db :11.2.0.1
Os : Aix 6
We are doing a refresh from production to a test server using RMAN. We started the instance and restored the control file.
We got the following error when executing the restore script.
RMAN> connect target *
2> RUN
3> {
4> allocate channel t1 type 'SBT_TAPE' parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo_iit6.opt)';
5> allocate channel t2 type 'SBT_TAPE' parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo_iit6.opt)';
6> allocate channel t3 type 'SBT_TAPE' parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo_iit6.opt)';
7> allocate channel t4 type 'SBT_TAPE' parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo_iit6.opt)';
8> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/system01.dbf' TO '+P7_DATAGROUP01/P7/system01.dbf';
9> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/sysaux01.dbf' TO '+P7_DATAGROUP01/P7/sysaux01.dbf';
10> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/undotbs01.dbf' TO '+P7_DATAGROUP01/P7/undotbs01.dbf';
11> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/users01.dbf' TO '+P7_DATAGROUP01/P7/users01.dbf';
12> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/tools01.dbf' TO '+P7_DATAGROUP01/P7/tools01.dbf';
13> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/two_kb_tbs.dbf' TO '+P7_DATAGROUP01/P7/two_kb_tbs.dbf';
14> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/wwfdata01.dbf' TO '+P7_DATAGROUP01/P7/wwfdata01.dbf';
15> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/wwfindex01.dbf' TO '+P7_DATAGROUP01/P7/wwfindex01.dbf';
16> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/ntwksdata01.dbf' TO '+P7_DATAGROUP01/P7/ntwksdata01.dbf';
17> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/ntwksindex01.dbf' TO '+P7_DATAGROUP01/P7/ntwksindex01.dbf';
18> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/emadata01.dbf' TO '+P7_DATAGROUP01/P7/emadata01.dbf';
19> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/emaindex01.dbf' TO '+P7_DATAGROUP01/P7/emaindex01.dbf';
20> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/scpodata01.dbf' TO '+P7_DATAGROUP01/P7/scpodata01.dbf';
21> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/scpoindex01.dbf' TO '+P7_DATAGROUP01/P7/scpoindex01.dbf';
22> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/igpdata01.dbf' TO '+P7_DATAGROUP01/P7/igpdata01.dbf';
23> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/ikeadata01.dbf' TO '+P7_DATAGROUP01/P7/ikeadata01.dbf';
24> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/ikeaindex01.dbf' TO '+P7_DATAGROUP01/P7/ikeaindex01.dbf';
25> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/fedata01.dbf' TO '+P7_DATAGROUP01/P7/fedata01.dbf';
26> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/indx01.dbf' TO '+P7_DATAGROUP01/P7/indx01.dbf';
27> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/prefstat01.dbf' TO '+P7_DATAGROUP01/P7/prefstat01.dbf';
28> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/support01.dbf' TO '+P7_DATAGROUP01/P7/support01.dbf';
29> SET NEWNAME FOR DATAFILE '+IIT6_DATAGROUP01/iit6/tmpscpix01.dbf' TO '+P7_DATAGROUP01/P7/tmpscpix01.dbf';
30> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo01a.log'' TO ''+P7_REDOGROUP01/P7/redo01a.log'' ";
31> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo02a.log'' TO ''+P7_REDOGROUP01/P7/redo02a.log'' ";
32> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo03a.log'' TO ''+P7_REDOGROUP01/P7/redo03a.log'' ";
33> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo01b.log'' TO ''+P7_REDOGROUP01/P7/redo01b.log'' ";
34> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo02b.log'' TO ''+P7_REDOGROUP01/P7/redo02b.log'' ";
35> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo03b.log'' TO ''+P7_REDOGROUP01/P7/redo03b.log'' ";
36> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo04a.log'' TO ''+P7_REDOGROUP01/P7/redo04a.log'' ";
37> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo04b.log'' TO ''+P7_REDOGROUP01/P7/redo04b.log'' ";
38> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo05a.log'' TO ''+P7_REDOGROUP01/P7/redo05a.log'' ";
39> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo05b.log'' TO ''+P7_REDOGROUP01/P7/redo05b.log'' ";
40> RESTORE DATABASE until time "to_date('2012-10:28-11:15:40','YYYY-MM-DD-HH24:MI:SS')";
41> SWITCH DATAFILE ALL;
42> release channel t1;
43> release channel t2;
44> release channel t3;
45> release channel t4;
46> }
47>
connected to target database: iit6 (DBID=3947283088, not open)
using target database control file instead of recovery catalog
allocated channel: t1
channel t1: SID=4975 device type=SBT_TAPE
channel t1: Data Protection for Oracle: version 5.5.2.1
allocated channel: t2
channel t2: SID=5427 device type=SBT_TAPE
channel t2: Data Protection for Oracle: version 5.5.2.1
allocated channel: t3
channel t3: SID=5879 device type=SBT_TAPE
channel t3: Data Protection for Oracle: version 5.5.2.1
allocated channel: t4
channel t4: SID=6331 device type=SBT_TAPE
channel t4: Data Protection for Oracle: version 5.5.2.1
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
sql statement: ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo01a.log'' TO ''+P7_REDOGROUP01/P7/redo01a.log''
sql statement: ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo02a.log'' TO ''+P7_REDOGROUP01/P7/redo02a.log''
sql statement: ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo03a.log'' TO ''+P7_REDOGROUP01/P7/redo03a.log''
sql statement: ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo01b.log'' TO ''+P7_REDOGROUP01/P7/redo01b.log''
released channel: t1
released channel: t2
released channel: t3
released channel: t4
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of sql command on default channel at 10/28/2012 12:47:47
RMAN-11003: failure during parse/execution of SQL statement: ALTER DATABASE RENAME FILE '+IIT6_REDOGROUP01/iit6/redo01b.log' TO '+P7_REDOGROUP01/P7/redo01b.log'
ORA-01511: error in renaming log/data files
ORA-01516: nonexistent log file, data file, or temporary file "+IIT6_REDOGROUP01/iit6/redo01b.log"
Recovery Manager complete.
We have the following disk groups on the target machine:
P7_ARCHGROUP01/
P7_DATAGROUP01/
P7_REDOGROUP01/
P7_REDOGROUP02/
Any suggestions?
Thanks
30> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo01a.log'' TO ''+P7_REDOGROUP01/P7/redo01a.log'' ";
31> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo02a.log'' TO ''+P7_REDOGROUP01/P7/redo02a.log'' ";
32> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo03a.log'' TO ''+P7_REDOGROUP01/P7/redo03a.log'' ";
33> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo01b.log'' TO ''+P7_REDOGROUP01/P7/redo01b.log'' ";
34> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo02b.log'' TO ''+P7_REDOGROUP01/P7/redo02b.log'' ";
35> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo03b.log'' TO ''+P7_REDOGROUP01/P7/redo03b.log'' ";
36> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo04a.log'' TO ''+P7_REDOGROUP01/P7/redo04a.log'' ";
37> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo04b.log'' TO ''+P7_REDOGROUP01/P7/redo04b.log'' ";
38> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo05a.log'' TO ''+P7_REDOGROUP01/P7/redo05a.log'' ";
39> SQL "ALTER DATABASE RENAME FILE ''+IIT6_REDOGROUP01/iit6/redo05b.log'' TO ''+P7_REDOGROUP01/P7/redo05b.log'' ";

Check v$logfile to see the actual location of each member, then use:
SQL> ALTER DATABASE RENAME FILE '<old path>' TO '<new path>';
After doing this, comment out the lines above and run the script again. -
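As a sketch of that check (the paths below are just the ones from this thread; substitute whatever v$logfile actually reports):

```sql
-- List the redo members the control file currently knows about;
-- the RENAME must use exactly these old paths, or ORA-01516 is raised.
SELECT group#, member FROM v$logfile ORDER BY group#;

-- Then rename each member to its new location, one at a time, e.g.:
ALTER DATABASE RENAME FILE '+IIT6_REDOGROUP01/iit6/redo01b.log'
                        TO '+P7_REDOGROUP01/P7/redo01b.log';
```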
Rman : 'failure during compilation of command'
Hello friends,
I am new to Rman backup.
However, I attempted one for the first time, with a script file 'full_backup.sh'. I am running Oracle 8i on a Unix server.
Rman did run and some files have been created in the backup directory. But towards the end, there was this message.
RMAN-03022: compiling command: backup
RMAN-03026: error recovery releasing channel resources
RMAN-08031: released channel: c1
RMAN-08031: released channel: c2
RMAN-08031: released channel: c3
ERROR MESSAGE STACK FOLLOWS
RMAN-03002: failure during compilation of command
RMAN-03013: command type: backup
RMAN-06089: archived log /home2/oracle/arch/arch_1_14932.arc not found or out of sync with catalog
I would like to know
1. whether I have successfully completed backup using the RMAN script
2. What is the significance of the above message.
3. Of course, before ever running RMAN I had deleted some archived log files, of which the last one could be /arch_1_14932.arc.
please help,
regards,
thomaskprakash

Hi,
Can you paste the content of the script on the board? Without it I am not in a position to comment on anything specific. For instance, I guess that you have some archivelogs missing, and that is the reason RMAN came up with these messages.
thanks
Alok -
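If the archivelogs really are gone from disk, one way to bring the catalog back in sync is a crosscheck (a hedged sketch; on 8i the maintenance-channel CHANGE syntax applies, on 9i and later CROSSCHECK/DELETE EXPIRED):

```
RMAN> allocate channel for maintenance type disk;
RMAN> change archivelog all crosscheck;    # 8i: marks missing logs as EXPIRED

# On 9i and later the equivalent would be:
# RMAN> crosscheck archivelog all;
# RMAN> delete expired archivelog all;
```

After that, RMAN-06089 should no longer be raised for logs that were deleted at the OS level.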
Invisible index getting accessed during query execution
Hello Guys,
There is a strange problem , I am encountering . I am working on tuning the performance of one of the concurrent request in our 11i ERP System having database 11.1.0.7
I had enabled oradebug trace for the request and generated tkprof output from it. For the query below, which is taking time, I found that in the generated trace the wait event is "db file sequential read" on the PO_LINES_N10 index, but in the generated tkprof a full table scan of PO_LINES_ALL is happening, as that table is 600 MB in size.
Below is the query ,
===============
UPDATE PO_LINES_ALL A
SET A.VENDOR_PRODUCT_NUM = (SELECT SUPPLIER_ITEM FROM APPS.IRPO_IN_BPAUPDATE_TMP C WHERE BATCH_ID = :B1 AND PROCESSED_FLAG = 'P' AND ACTION = 'UPDATE' AND C.LINE_ID =A.PO_LINE_ID AND ROWNUM = 1 AND SUPPLIER_ITEM IS NOT NULL),
LAST_UPDATE_DATE = SYSDATE
===============
Index PO_LINES_N10 is on the column LAST_UPDATE_DATE; logically the index should not be used for such a query, as the indexed column is not in the SELECT or WHERE clause.
Also, why is there a discrepancy between the tkprof and the trace generated for the same query?
So, I decided to make the index PO_LINES_N10 INVISIBLE, but the index is still being accessed in the trace file.
I have also checked the parameter below, which is FALSE, so the optimizer should not make use of invisible indexes during query execution.
SQL> show parameter invisible
NAME TYPE VALUE
optimizer_use_invisible_indexes boolean FALSE
Any clue regarding this?
Thanks and Regards,
Prasad
Edited by: Prasad on Jun 15, 2011 4:39 AM

Hi Dom,
Sorry for the late reply, but yes, an update statement is trying to update that index even if it's invisible.
Also, it seems the performance issue started appearing when this index got created, so I have dropped the index in a test environment, run the concurrent program again with oradebug level 12 trace enabled, and found a bit of improvement in the results.
With index dropped -> 24 records/min got processed
With index -> 14 records/min got processed
So, I am looking forward to going without this index in production too, but before that I have concerns regarding the tkprof output. Can we further improve the performance of this query?
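For context, this matches the documented behavior: making an index invisible only hides it from the optimizer (when optimizer_use_invisible_indexes is FALSE); every DML statement still maintains it, which is why it keeps appearing in the trace. A minimal illustration (table and index names here are made up):

```sql
CREATE TABLE demo_t (id NUMBER, val NUMBER);
CREATE INDEX demo_t_ix ON demo_t(val) INVISIBLE;

-- The optimizer will not consider demo_t_ix for this query...
SELECT * FROM demo_t WHERE val = 1;

-- ...but this UPDATE must still maintain demo_t_ix,
-- so its blocks can still show up in wait events and traces.
UPDATE demo_t SET val = val + 1;
```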
Please find the below tkprof with and without index .
====================
Sql statement
====================
UPDATE PO_LINES_ALL A SET A.VENDOR_PRODUCT_NUM = (SELECT SUPPLIER_ITEM FROM
APPS.IRPO_IN_BPAUPDATE_TMP C
WHERE
BATCH_ID = :B1 AND PROCESSED_FLAG = 'P' AND ACTION = 'UPDATE' AND C.LINE_ID =
A.PO_LINE_ID AND ROWNUM = 1 AND SUPPLIER_ITEM IS NOT NULL),
LAST_UPDATE_DATE = SYSDATE
=========================
TKPROF with Index for the above query ( processed 643 records )
=========================
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 2499.64 2511.99 98158 645561632 13105579 1812777
Fetch 0 0.00 0.00 0 0 0 0
total 2 2499.64 2511.99 98158 645561632 13105579 1812777
=============================
TKPROF without Index for the above query ( processed 4452 records )
=============================
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 10746.96 10544.13 84125 3079376156 1870058 1816289
Fetch 0 0.00 0.00 0 0 0 0
total 2 10746.96 10544.13 84125 3079376156 1870058 1816289
=============================
Explain plan which is same in both the cases
=============================
Rows Row Source Operation
0 UPDATE PO_LINES_ALL (cr=3079377095 pr=84127 pw=0 time=0 us)
1816289 TABLE ACCESS FULL PO_LINES_ALL (cr=83175 pr=83026 pw=0 time=117690 us cost=11151 size=29060624 card=1816289)
0 COUNT STOPKEY (cr=3079292918 pr=20 pw=0 time=0 us)
0 TABLE ACCESS BY INDEX ROWID IRPO_IN_BPAUPDATE_TMP (cr=3079292918 pr=20 pw=0 time=0 us cost=4 size=22 card=1)
180368800 INDEX RANGE SCAN IRPO_IN_BPAUPDATE_N1 (cr=51539155 pr=3 pw=0 time=16090005 us cost=3 size=0 card=1)(object id 372721)
There is a large increase in CPU, so I would like to tune this query further. I have run a SQL Tuning task but did not get any recommendations for it.
In the trace I got the "db file scattered read" wait event for the table PO_LINES_ALL, but the disk reads are not high, so I am not sure caching the table (620 MB in size; is it even feasible to cache, with a 5 GB SGA and sga_target set?) would improve performance.
I have already gathered stats for the tables concerned and rebuilt the indexes.
Is there anything else that can be done to tune this query further and bring down the CPU and elapsed time?
Thanks a lot for your reply.
Thanks and Regards,
Prasad
Edited by: Prasad on Jun 28, 2011 3:52 AM
Edited by: Prasad on Jun 28, 2011 3:54 AM
Edited by: Prasad on Jun 28, 2011 3:56 AM -
RMAN-03002: failure of Duplicate Db command at 06/01/2013 16:47:33
sql statement: alter system reset db_unique_name scope=spfile
Oracle instance shut down
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 06/01/2013 16:47:33
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06025: no backup of archived log for thread 1 with sequence 8064 and starting SCN of 13048058320848 found to restore
I am trying to duplicate a database; at the end it fails with this error.
Can anyone help me fix this one?

It is preferable to set an UNTIL point (to the latest available archivelog) when doing a DUPLICATE. Otherwise, Oracle attempts a complete recovery and fails after the last archivelog it finds.
Hemant K Chitale -
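A sketch of what that advice looks like (the sequence number comes from the RMAN-06025 message above; SET UNTIL SEQUENCE n recovers up to but not including n, and the duplicate database name is hypothetical):

```
RMAN> RUN {
  SET UNTIL SEQUENCE 8064 THREAD 1;    # recover through sequence 8063
  DUPLICATE TARGET DATABASE TO dupdb;
}
```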
Execution of SQL query too slow
Hey all!!!
I'm using 10g express edition with apex 3.1, when I run this query:
select asig.idasig IDASIG, asig.idAsig "Código Asignatura", asig.idasig ID,substr(asig.codigoasig||' '||asig.nombre,0,40) "Asignatura", p.nombre||' '||p.apellidos "Responsable", t.usuario "Técnico", substr(ea.estado,0,22) "Estado Asignatura", ec.estado "Estado Certificado", cert.idestado
from certificado cert, asignatura asig, histasig ha, profesor p, tecnico t, estado ea, estado ec
where cert.idasig = asig.idasig and
p.idprof = asig.idprof and
t.idtecnico = asig.idtecnico and
(cert.idasig, cert.fecha) in (select idasig, max(fecha) from certificado group by idasig) and
ec.idestado = cert.idestado and
ha.idasig = asig.idasig and
(ha.idasig, ha.fecha) in (select idasig, max(fecha) from histasig group by idasig) and
ea.idestado = ha.idestado
It is too slow: the query takes approximately 149 s to return results.
Can someone give me an idea?
PS: We have another query that does almost the same thing, and with that one I get the results in less than 1 s.

These are the results:
TKPROF: Release 10.2.0.1.0 - Production on Mar Feb 10 10:35:58 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: C:\oraclexe\app\oracle\admin\XE\udump\xe_ora_4724.trc
Sort options: prsela exeela fchela
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
select asig.idasig ID1, asig.idAsig ID2, asig.idasig ID3,asig.codigoasig, asig.nombre, p.nombre,p.apellidos,t.usuario,ea.estado,ec.estado,cert.idestado
from certificado cert, asignatura asig, histasig ha, profesor p, tecnico t, estado ea, estado ec
where cert.idasig = asig.idasig and
p.idprof = asig.idprof and
t.idtecnico = asig.idtecnico and
(cert.idasig, cert.fecha) in (select idasig, max(fecha) from certificado group by idasig) and
ec.idestado = cert.idestado and
ha.idasig = asig.idasig and
(ha.idasig, ha.fecha) in (select idasig, max(fecha) from histasig group by idasig) and
ea.idestado = ha.idestado
call count cpu elapsed disk query current rows
Parse 1 0.15 0.14 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 64 102.17 102.43 0 369673 0 937
total 66 102.32 102.57 0 369673 0 937
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 36
Rows Row Source Operation
937 NESTED LOOPS (cr=369673 pr=0 pw=0 time=102241172 us)
937 NESTED LOOPS (cr=368672 pr=0 pw=0 time=103609539 us)
937 NESTED LOOPS (cr=5030 pr=0 pw=0 time=97377 us)
937 NESTED LOOPS (cr=3082 pr=0 pw=0 time=72064 us)
937 NESTED LOOPS (cr=2081 pr=0 pw=0 time=55194 us)
947 HASH JOIN RIGHT SEMI (cr=44 pr=0 pw=0 time=22573 us)
947 VIEW VW_NSO_1 (cr=17 pr=0 pw=0 time=4100 us)
947 HASH GROUP BY (cr=17 pr=0 pw=0 time=3151 us)
1659 INDEX FAST FULL SCAN PK_CERTIFICADO (cr=17 pr=0 pw=0 time=1728 us)(object id 15811)
1659 MERGE JOIN (cr=27 pr=0 pw=0 time=11173 us)
39 TABLE ACCESS BY INDEX ROWID ESTADO (cr=10 pr=0 pw=0 time=286 us)
39 INDEX FULL SCAN PK_ESTADO (cr=5 pr=0 pw=0 time=95 us)(object id 15790)
1659 SORT JOIN (cr=17 pr=0 pw=0 time=7693 us)
1659 INDEX FAST FULL SCAN PK_CERTIFICADO (cr=17 pr=0 pw=0 time=42 us)(object id 15811)
937 TABLE ACCESS BY INDEX ROWID ASIGNATURA (cr=2037 pr=0 pw=0 time=31871 us)
947 INDEX UNIQUE SCAN PK_ASIGNATURA (cr=1011 pr=0 pw=0 time=16230 us)(object id 15797)
937 TABLE ACCESS BY INDEX ROWID TECNICO (cr=1001 pr=0 pw=0 time=14632 us)
937 INDEX UNIQUE SCAN PK_TECNICO (cr=64 pr=0 pw=0 time=6532 us)(object id 15777)
937 TABLE ACCESS BY INDEX ROWID PROFESOR (cr=1948 pr=0 pw=0 time=20712 us)
937 INDEX UNIQUE SCAN PK_PROFESOR (cr=1001 pr=0 pw=0 time=11554 us)(object id 15795)
937 INDEX RANGE SCAN PK_HISTASIG (cr=363642 pr=0 pw=0 time=102276685 us)(object id 15809)
937 FILTER (cr=361692 pr=0 pw=0 time=101967826 us)
64914543 HASH GROUP BY (cr=361692 pr=0 pw=0 time=139395915 us)
94619691 INDEX FAST FULL SCAN PK_HISTASIG (cr=361692 pr=0 pw=0 time=94752646 us)(object id 15809)
937 TABLE ACCESS BY INDEX ROWID ESTADO (cr=1001 pr=0 pw=0 time=39308 us)
937 INDEX UNIQUE SCAN PK_ESTADO (cr=64 pr=0 pw=0 time=15348 us)(object id 15790)
alter session set sql_trace true
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 1 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 36
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.15 0.14 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 64 102.17 102.43 0 369673 0 937
total 67 102.32 102.57 0 369673 0 937
Misses in library cache during parse: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 0 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
2 user SQL statements in session.
0 internal SQL statements in session.
2 SQL statements in session.
Trace file: C:\oraclexe\app\oracle\admin\XE\udump\xe_ora_4724.trc
Trace file compatibility: 10.01.00
Sort options: prsela exeela fchela
1 session in tracefile.
2 user SQL statements in trace file.
0 internal SQL statements in trace file.
2 SQL statements in trace file.
2 unique SQL statements in trace file.
147 lines in trace file.
134 elapsed seconds in trace file.

We had seen that the possible problems are here:
937 INDEX RANGE SCAN PK_HISTASIG (cr=363642 pr=0 pw=0 time=102276685 us)(object id 15809)
64914543 HASH GROUP BY (cr=361692 pr=0 pw=0 time=139395915 us)
94619691 INDEX FAST FULL SCAN PK_HISTASIG (cr=361692 pr=0 pw=0 time=94752646 us)(object id 15809)

The time spent is too high, but we don't see how we can solve this problem. Any ideas?
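One common rewrite for this pattern (an untested sketch, not a drop-in replacement): the row-source stats show the `(idasig, max(fecha))` IN-subquery on HISTASIG being re-evaluated many times; analytic functions let each table be read once instead:

```sql
SELECT ...  -- same select list as the original query
FROM (SELECT c.*, ROW_NUMBER() OVER (PARTITION BY idasig ORDER BY fecha DESC) rn
        FROM certificado c) cert,
     (SELECT h.*, ROW_NUMBER() OVER (PARTITION BY idasig ORDER BY fecha DESC) rn
        FROM histasig h) ha,
     asignatura asig, profesor p, tecnico t, estado ea, estado ec
WHERE cert.rn = 1          -- latest certificado row per idasig
  AND ha.rn = 1            -- latest histasig row per idasig
  AND cert.idasig = asig.idasig
  AND ha.idasig   = asig.idasig
  AND p.idprof    = asig.idprof
  AND t.idtecnico = asig.idtecnico
  AND ec.idestado = cert.idestado
  AND ea.idestado = ha.idestado;
```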