Oracle 10g XE startup time
Hello,
Correct me if I'm wrong, but I've looked for this kind of topic on this forum and couldn't find any.
On my Dell Inspiron laptop (Windows XP) with 2 GB RAM and a dual-core Intel Centrino @ 2GHz, it takes about 50 to 60 seconds for the "OracleServiceXE" service to start...
Is this the expected average startup time? If not, how can I disable as many startup-time-consuming features as possible?
I'd like to be able to use OracleXE as a lightweight database like MySQL or Postgresql.
Thanks for your tips !
Fred
Edited by: user502327 on Sept 6, 2009 15:15
One of the biggest issues with XE startup is XE shutdown.
When XE starts up, it checks whether it crashed the last time it ran. If it crashed, it goes through a very lengthy crash recovery process.
Unfortunately most people do not shut down the database before shutting down their machine. The 'normal' Windows shutdown seems to crash the database to get it stopped quickly.
On my Dell D430 with Vista Business and 2GB RAM, after a crash, my XE starts up in about 55 seconds. After a proper DB shutdown, it seems to come back up in about 17-20 seconds.
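If that matches what you're seeing, one workaround is to stop the service cleanly before powering off. A minimal sketch for Windows, assuming the default 10g XE service names (adjust if yours differ):

```bat
rem stop_xe.bat - run before shutting the machine down (needs admin rights)
rem OracleServiceXE / OracleXETNSListener are the default 10g XE service names
net stop OracleServiceXE
net stop OracleXETNSListener
```

Running this by hand (or as a scheduled shutdown script) avoids the crash-recovery pass on the next start.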
Similar Messages
-
Oracle 10g automatic startup in Solaris 10
Good afternoon,
I've installed Oracle 10g Enterprise Edition Server on a server with Solaris 10. I'm trying to configure the automatic startup/shutdown. I've changed dbstart script. I've prepared dbora script and I've created the necessary link to this script in the rc*.d directories.
When I launch dbora start or dbora stop everything is ok, but when I reboot my machine nothing happens.
My dbora script is as follow:
#!/bin/sh
ORA_HOME=/u01/app/oracle/product/10.2.0/Db_1
ORA_OWNER=oracle
if [ ! -f $ORA_HOME/bin/dbstart ]
then
echo "Oracle startup: cannot start"
exit
fi
case "$1" in
'start')
su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl start"   # start the listener first (the stop case already stops it)
su - $ORA_OWNER -c "$ORA_HOME/bin/dbstart"
su - $ORA_OWNER -c "$ORA_HOME/bin/emctl start dbconsole"
su - $ORA_OWNER -c "$ORA_HOME/bin/isqlplusctl start"
;;
'stop')
su - $ORA_OWNER -c "$ORA_HOME/bin/isqlplusctl stop"
su - $ORA_OWNER -c "$ORA_HOME/bin/emctl stop dbconsole"
su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl stop"
su - $ORA_OWNER -c "$ORA_HOME/bin/dbshut"
;;
esac
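When start/stop work by hand but nothing happens at boot, the usual suspects are the script's execute permission and the rc links themselves. A sketch of the expected layout, demonstrated in a scratch directory since the real one needs root (on the actual system the base directory is /etc, and run levels/sequence numbers may differ per site):

```shell
# Demonstrate the init.d/rc*.d layout dbora relies on (scratch dir stands in for /etc)
BASE=$(mktemp -d)
mkdir -p "$BASE/init.d" "$BASE/rc3.d" "$BASE/rc0.d"
printf '#!/bin/sh\n' > "$BASE/init.d/dbora"
chmod 750 "$BASE/init.d/dbora"                 # the script must be executable
ln -s ../init.d/dbora "$BASE/rc3.d/S99dbora"   # S-link: runs 'dbora start' entering run level 3
ln -s ../init.d/dbora "$BASE/rc0.d/K10dbora"   # K-link: runs 'dbora stop' on shutdown
ls -l "$BASE/rc3.d" "$BASE/rc0.d"
```

If the links exist but the script is not executable, the boot sequence silently skips it.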
Thanks in advance.
Best regards,
Mabel
I think that I've found the problem but I need your help. The problem is related to the listener: the file listener.ora doesn't exist!!! In fact the listener starts perfectly when I type the command lsnrctl start, but I don't understand why this file doesn't appear :-O. And of course the file sqlnet.ora doesn't exist either.
I've created these files and I'm going to check. But could you tell me how to configure sqlnet.ora to get the file listener.trc?
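On the listener.trc question: listener tracing is normally switched on in listener.ora rather than sqlnet.ora. A sketch, assuming the default listener name LISTENER (the directory path is a placeholder; a listener restart or reload picks the change up):

```
TRACE_LEVEL_LISTENER = ADMIN
TRACE_FILE_LISTENER = listener.trc
TRACE_DIRECTORY_LISTENER = /u01/app/oracle/product/10.2.0/Db_1/network/trace
```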
Thanks in advance.
Regards,
Mabel -
What is the best way to migrate oracle 7.1 to oracle 10g with minimal time
Hi
We are going to migrate data from Oracle 7.1 to Oracle 10g. In 10g we have already planned a partitioned-table strategy, and we want to add 2 fields to each and every table.
What is the best mechanism to take the data from 7.1, add the 2 fields, and possibly re-arrange the order of fields in a table?
Please suggest; my database size is 50GB and I want to finish this job within 24 hours.
thanks
Prashant
DBA, India
Hi Linc,
Thank you for your suggestion.
Export then import makes sense. Do you know of any "How to..." articles for exporting from Tiger?
Thanks again,
Tim Kern -
I want to connect Oracle 10g Developer runtime forms to an Oracle database
Hey,
Would anybody tell me how I can connect Oracle Developer 10g forms to an Oracle 10g database?
I am waiting
bye
Try asking this on the Oracle Forms forum:
http://forums.oracle.com/forums/category.jspa?categoryID=19
In any case you need to set up a SQL*Net connection from your client to your server. Use the Network Configuration Utility that comes with the Oracle Forms installation. -
JDBC connection to Oracle 10g RAC periodically times out
I've been banging my head against the wall for months now and can't figure out why this is and what's causing it.
We have 6x CF8 servers in our environment. 3 of which work perfectly and the other 3 have the following problem. All 6 machines were installed at the same time and followed the exact same installation plan.
When I configure Oracle RAC data source, some of the machines time-out connecting to Oracle from time-to-time.
Config:
Solaris 9 on both CF and Oracle
CF8 Enterprise with the latest updater.
Apache 2 (not that it's relevant)
6 machines, load-balanced (not clustered), identical install and configuration.
data source config:
JDBC URL: jdbc:macromedia:oracle://10.0.0.3:1521;serviceName=dbname.ourdomain.com;AlternateServers= (10.0.0.4:1521);LoadBalancing=true
DRIVER CLASS: macromedia.jdbc.MacromediaDriver
The problem:
Every few minutes, CF starts hanging requests that deal with a specific RAC only data source. After about 30 seconds, all requests bail and generate this error in cfserver.log:
A non-SQL error occurred while requesting a connection from dbsource.
Timed out trying to establish connection
This happens with any RAC data source on the "bad" servers, while the "good" servers don't have this problem. The "bad" servers don't have any problems with direct (non-RAC) Oracle data sources.
Already tried:
Moving server connections around on a switch (ruling out a bad switch port)
Copying driver from the healthy server (but it's the same installer anyway)
Changed from RAC to normal Oracle type data source - works perfectly. So at the moment I have 3 servers connecting to a specific oracle instance and the other 3 connecting to RAC.
Tried googling and searching forums and even Oracle metalink - nothing I could see relevant to this.
It's a shame that after spending a ton of money on CF8 upgrades and Oracle RAC, we can't really utilize fail-over on the database connection.
Any takers?
Thanks,
Henry
I have the following in my CLASSPATH:
C:\Ora10g1\product\10.2.0\db_1\jdbc\lib\jdbc.jar;
C:\Ora10g1\product\10.2.0\db_1\jdbc\lib\ojdbc14.jar;
C:\Ora10g1\product\10.2.0\db_1\jlib\jndi.jar;
C:\Ora10g1\product\10.2.0\db_1\jlib\orai18n.jar;
Still 'Cannot find type 'oracle.jdbc.pool.OracleDataSource'
Thanks -
Oracle 10g Database startup problem Please help
i am trying to start database but getting error;
SQL> startup
ORACLE instance started.
Total System Global Area 281018368 bytes
Fixed Size 788776 bytes
Variable Size 229373656 bytes
Database Buffers 50331648 bytes
Redo Buffers 524288 bytes
Database mounted.
ORA-16038: log 2 sequence# 44103 cannot be archived
ORA-19809: limit exceeded for recovery files
ORA-00312: online log 2 thread 1: 'D:\ORADATA\ASDB\REDO02.LOG'
Please guide and help
Thanks,
Waheed.
The error stack is interpreted as follows:
ORA-16038: log string sequence# string cannot be archived
Cause: An attempt was made to archive the named file, but the file could not be archived. Examine the secondary error messages to determine the cause of the error.
Action: No action is required
ORA-19809: limit exceeded for recovery files
Cause: The limit for recovery files specified by the DB_RECOVERY_FILE_DEST_SIZE was exceeded.
Action: The error is accompanied by 19804. See message 19804 for further details.
ORA-19804: cannot reclaim string bytes disk space from string limit
Cause: Oracle cannot reclaim disk space of specified bytes from the DB_RECOVERY_FILE_DEST_SIZE limit.
Action: There are five possible solutions: 1) Take frequent backup of recovery area using RMAN. 2) Consider changing RMAN retention policy. 3) Consider changing RMAN archivelog deletion policy. 4) Add disk space and increase DB_RECOVERY_FILE_DEST_SIZE. 5) Delete files from recovery area using RMAN.
ORA-00312: online log string thread string: 'string'
Cause: This message reports the filename for details of another message.
Action: Other messages will accompany this message. See the associated messages for the appropriate action to take.
DB_RECOVERY_FILE_DEST_SIZE specifies (in bytes) the hard limit on the total space to be used by target database recovery files created in the flash recovery area. You should increase this value.
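A quick way to see how close you are to the limit before changing it is v$recovery_file_dest. A hedged sketch (run as SYSDBA; the 10G value below is only an example, size it to your disk):

```sql
-- how much of the flash recovery area is used / reclaimable
SELECT space_limit, space_used, space_reclaimable
FROM   v$recovery_file_dest;

-- raise the limit if the disk has room
ALTER SYSTEM SET db_recovery_file_dest_size = 10G SCOPE=BOTH;
```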
On the other hand, from the context I guessed it is a 10gRx database, but you should specify this on your thread, as well as OS details.
~ Madrid -
Oracle 10g dataguard Connect-time failover Parameter 15 minutes
Hello,
we running a system with DataGuard in Maximum Availability protection mode and one physical Standby Database. There are two connections between primary and standby. I configured listener and tnsnames to perform a Connect-time failover in case one connection is down. In first place it seemed to work fine but since it has happened that the primary always hangs for exactly 15 minutes until the failover is performed. (ALL redologs need archiving)
I think this must be the default value of a Parameter but reading docs for hours I can't find it.
Can anybody help me?
Best regards
Fritz
It's like this: the connection failover I'm talking about is only between the primary database and the standby database. I have two connections between them. That makes two addresses for the service (servicetostandby) specified in the log_archive_dest used by Data Guard. In my case the log_archive_dest parameter looks like this:
log_archive_dest_3 = 'SERVICE=servicetostandby OPTIONAL LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby REOPEN=10 NET_TIMEOUT=9'
The tnsnames.ora entry looks like this:
servicetostandby =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = address1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = address2)(PORT = 1521))
(FAILOVER = true)
(LOAD_BALANCE = false)
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = standby)
)
)
So if either address1 or address2 is down dataguard should use the other address to connect to the standby database.
Maybe because of the Maximum Availability protection mode the primary database does not perform if the standby database is not available; I'm not sure about that. But the fact is that if I power off the standby database, the primary database hangs for only 9 seconds. That's specified in the net_timeout attribute of the log_archive_dest parameter. So I'm quite sure that something in the Oracle Net service causes the problem.
By "the primary hangs" I mean: after the connection in use (say address1) went down, the redo log files of the primary database filled up and then it stopped for 15 minutes. The messages in the alert.log:
Thread 1 cannot allocate new log, sequence 1304
All online logs needed archiving
After 15 minutes the connection via address2 is established and everything works fine.
I hope I made myself clear this time and didn't confuse you totally.
Edited by: user633694 on Apr 14, 2009 8:13 AM -
Downloading & installing Oracle 10g
I have downloaded Oracle 10g about 4 times; when I try to install it, I get the error message "Error reading setup initialisation file". How do I overcome this? I am new to Oracle and I don't know if the problem is with the setup file or with my PC. I am running XP SP2, a Pentium 4 processor, 500 MB of RAM, and an 80 GB hard drive.
It looks like your setup disks are corrupt. It
should just start the Oracle Universal Installer.
Before any new attempt to download, verify that the zip file is correct. If you are using the regular WinZip, you may check the file using the Shift-T key combination, or if you are using zip.exe, you may verify it with
unzip -T zipfile
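The same verify-before-install idea works with a checksum. A self-contained sketch (the file below is a stand-in; with the real download you would compare against the checksum published on the download page):

```shell
# create a stand-in file, record its checksum, then verify it
printf 'stand-in for the downloaded installer' > setup.bin
md5sum setup.bin > setup.bin.md5
md5sum -c setup.bin.md5    # prints "setup.bin: OK" while the file is intact
```

If the check fails after a download, the transfer was corrupted and should be repeated.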
Let me know.
Sorry for taking some time to respond; I was trying to redo everything again, even the downloading!... I checked the file size from the Oracle site against the files I downloaded, and they are not equal. My downloads are not zip files but just the setup file; can this be the problem? Also, there is a pause during the download: it starts well, then there is a long pause, I don't know why; otherwise everything downloads easily and quickly. -
Oracle 10g DB doesn't start up automatically
Oracle 10g (10.2.0.1) DB doesn't start up automatically with the server.
Navigate to the Windows HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\oracle_home_name , I find the
ORA_SID_AUTOSTART already set to TRUE. In the Windows Service (local) tab , I find the OracleServiceSID
already appear there and the Startup_type already set to AUTOMATIC.
What is the reason?
Another issue: when I open the Windows Task Manager, the ORACLE.EXE process does not appear there.
In fact, the database is running.
Edited by: qkc on May 16, 2011 12:37 PM
It's a newly created database; I don't know its auto-start behaviour after creation.
2 days ago, I imported a schema and got the following error messages:
The following message is from alertSID.log file:
Errors in file c:\oracle\product\10.2.0\admin\millan\udump\millan_ora_828.trc:
The following message is from the trace file:
TKPROF: Release 10.2.0.1.0 - Production on Mon May 16 16:09:33 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: c:\oracle\product\10.2.0\admin\millan\udump\millan_ora_828.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
Trace file: c:\oracle\product\10.2.0\admin\millan\udump\millan_ora_828.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
0 user SQL statements in trace file.
0 internal SQL statements in trace file.
0 SQL statements in trace file.
0 unique SQL statements in trace file.
162 lines in trace file.
0 elapsed seconds in trace file.
Edited by: qkc on May 16, 2011 1:14 PM
Edited by: qkc on May 16, 2011 1:15 PM
Edited by: qkc on May 16, 2011 1:16 PM -
Caching configuration (Times Ten - Oracle 10g)
Hi all,
I have managed to setup Times Ten and get a cache group configured against a table in an oracle 10g instance.
That all works great. My question is: how do I define a set of cache groups and get TimesTen to load them all in automatically together? What I am saying is, I don't want to have to enter these cache group definitions manually via the ttIsql command line (as I have done this time).
Could someone please point me in the right direction with this? Also I would like to be able to set TT up such that the cache agent is automatically started. Is this possible?
Thanks,
Dan
Hi Dan,
In the same way that you can create cache groups through a ttIsql script you can also load them from a script using the LOAD CACHE GROUP and REFRESH CACHE GROUP commands.
However, let's remember that TimesTen is a persistent database, so you should not need to re-create the cache groups nor re-load them just because you have stopped and started TimesTen. The cache group definitions and data will still be present as they are persistently stored. Also, if you have a READONLY or USERMANAGED cache group using AUTOREFRESH, then the changes that occur in Oracle are captured and refreshed to the cache group(s) automatically.
As for starting the cache agent (or the repagent) automatically, this kind of depends on how you are managing the startup/shutdown of the cache datastore (i.e. what its ramPolicy is). You can set the cachePolicy for the datastore using ttAdmin as follows:
ttAdmin -cachePolicy always DSNname
This tells TimesTen that the cache agent should always be running for this datastore. However, a side effect of this is that the datastore will always be loaded in memory when the TimesTen instance is up (i.e. the TimesTen main daemon is running). If you want to be able to start and stop the datastore manually (or from a script) then you will likely be using a ramPolicy of manual and running some script to load/unload the datastore into/from memory. If so then just add the relevant ttAdmin -cacheStart|-cacheStop commands to that script.
Chris -
[help me] Oracle 10G + ASM "ORA-00376: file 5 cannot be read at this time"
We have used Oracle 10G R2 RAC + ASM on Redhat 4 (EMC cx700 Storage)
I found the errors below in the alert log, and I can't insert, update or delete data in the database.
Sun May 27 01:12:34 2007
SUCCESS: diskgroup ARCH was mounted
SUCCESS: diskgroup ARCH was dismounted
SUCCESS: diskgroup ARCH was mounted
SUCCESS: diskgroup ARCH was dismounted
SUCCESS: diskgroup ARCH was mounted
SUCCESS: diskgroup ARCH was dismounted
Sun May 27 01:19:11 2007
Errors in file /oracle/product/admin/DB/udump/db3_ora_15854.trc:
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
ORA-00372: file 5 cannot be modified at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
So:
I checked and recovered data file
SQL> select name,status,file# from v$datafile where status ='RECOVER';
NAME
STATUS FILE#
+DATA/db/datafile/undotbs3.257.617849281
RECOVER 5
RMAN> run {
allocate channel t1 type 'SBT_TAPE';
allocate channel t2 type DISK;
recover datafile 5;
recover completed.
SQL> alter database datafile 5 online;
But:
What is going on?
I checked the EMC storage and found no disk errors.
I checked the ASM alert log and found nothing.
I don't know what the problem is.
I had the same problem 2 days ago:
today the error is on the undo datafile of node 3;
2 days ago it was on the undo of node 4.
Please, anybody:
is this a bug, or is something else wrong?
Please advise.
trace file:
/oracle/product/admin/DB/udump/db3_ora_15854.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
ORACLE_HOME = /oracle/product/10.2.0/db
System name: Linux
Node name: db03.domain
Release: 2.6.9-42.ELsmp
Version: #1 SMP Wed Jul 12 23:32:02 EDT 2006
Machine: x86_64
Instance name: DB3
Redo thread mounted by this instance: 3
Oracle process number: 65
Unix process pid: 15854, image: [email protected]
*** SERVICE NAME:(DB) 2007-05-27 01:19:11.574
*** SESSION ID:(591.62658) 2007-05-27 01:19:11.574
*** 2007-05-27 01:19:11.574
ksedmp: internal or fatal error
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
ORA-00372: file 5 cannot be modified at this time
ORA-01110: data file 5: '+DATA/db/datafile/undotbs3.257.617849281'
Current SQL statement for this session:
INSERT INTO DATA_ALL VALUES (:B1 ,:B2 ,:B3 ,:B4 ,:B5 ,:B6 ,:B7 ,:B8 ,:B9 ,:B10 ,:B11 ,:B12 ,:B13 ,:B14 ,:B15 ,:B16 ,:B17 ,:B18 ,:B19 ,:B20 ,:B21 ,:B22 ,:B23 ,:B24 ,:B25 ,:B26 ,:B27 ,:B28 ,:B29 ,:B30 ,:B31 ,:B32 ,:B33 ,:B34 ,:B35 ,:B36 ,:B37 ,:B38 ,:B39 ,:B40 ,:B41 ,:B42 ,:B43 ,:B44 ,:B45 ,:B46 ,:B47 ,:B48 ,:B49 ,:B50 )
----- PL/SQL Call Stack -----
object line object
handle number name
0x21dc2b4b8 780 package body MGR.AC
0x21e4815b0 3 anonymous block
----- Call Stack Trace -----
calling call entry argument values in hex
location type point (? means dubious value)
ksedst()+31 call ksedst1() 000000000 ? 000000001 ?
7FBFFF7290 ? 7FBFFF72F0 ?
7FBFFF7230 ? 000000000 ?
ksedmp()+610 call ksedst() 000000000 ? 000000001 ?
7FBFFF7290 ? 7FBFFF72F0 ?
7FBFFF7230 ? 000000000 ?
ksupop()+3581 call ksedmp() 000000003 ? 000000001 ?
7FBFFF7290 ? 7FBFFF72F0 ?
7FBFFF7230 ? 000000000 ?
opiodr()+3899 call ksupop() 000000002 ? 000000001 ?
7FBFFF7290 ? 7FBFFF72F0 ?
7FBFFF7230 ? 000000000 ?
rpidrus()+198 call opiodr() 000000066 ? 000000006 ?
.......etc.............................
Message was edited by:
HunterX (Surachart Opun)
To me it looks like the undotbs on node3 is filled and not marking old undo as expired. Use this query to find out if it is labeling old undo as expired:
SELECT tablespace_name, status, COUNT(*) AS HOW_MANY
FROM dba_undo_extents
GROUP BY tablespace_name, status;
Another thing I noticed from your alert log is that it is only happening on undotbs3, which I assume is on node3.
1) try recreating the undotbs on node3
2) take node3 out of service (stop nodeapps, ASM, the instance and finally CRS on node3) and see if you can do DML on your database.
-Moid -
Oracle 10g: How to reduce the select * time
Hello,
We have 10 million entries in our database (Oracle 10g in windows 32 bit machine.)
The execution of 'select * ....' takes 3 to 4 hours. Is there any way to reduce this time?
Is any tool available which can read Oracle export data and produce the output in text file format?
or any idea ?
Thanks
With Regards
Hemant.
hem_kec wrote:
Hello EdStevens
Is that 3 to 4 hours scrolling data past your screen?
Answer: Oracle is taking 3-4 hours to produce the output.
OK, let me try again. Where is the output being directed? To the screen? To a file?
The reason I ask is that often people will say "It takes n minutes to run this query" when in fact Oracle is producing the result set in a few seconds and what is taking n minutes is to run the results past the screen.
You should take a statspack report while the query is running and see where it is spending its time.
That's a different problem. I assume by "export data" you mean a .dmp file created by exp or expdp? If so, what do you hope to achieve by outputting it in text format? What is the business problem to be solved?
Answer: Since the customer wants to read all 10 million entries, we are thinking that if we can dump (Oracle export) the data and use some tool (not Oracle) to read it, there is no wait for the customer.
As stated, a dmp file is an Oracle-proprietary binary file that exists solely to transport data across platforms/versions. It is not suitable for your purpose. You are far better off finding where the current query is spending its time than looking for some kludge. 10 million rows of data is still 10 million rows of data. Do you think extracting it from the db, storing it in some other format, and having some other tool read it off of disk in that format is going to be faster than simply selecting it from the db -- asking the db to do what it was designed to do?
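If a flat-file extract is truly required, SQL*Plus spooling gets there without the dmp detour; a minimal sketch (table name and spool path are placeholders, and writing 10 million rows still costs at least as much as the SELECT itself):

```sql
SET TERMOUT OFF FEEDBACK OFF HEADING OFF PAGESIZE 0 LINESIZE 2000 TRIMSPOOL ON
SPOOL /tmp/all_rows.txt
SELECT * FROM your_big_table;
SPOOL OFF
```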
Thanks
With Regards
Hemant. -
Procedure execution time difference in Oracle 9i and Oracle 10g
Hi,
My procedure takes 14 min on Oracle Database 10g Enterprise Edition Release 10.2.0.1.0.
The same procedure takes 1 min on Oracle Release 9.2.0.1.0.
1) The data is the same in both environments.
2) The number of records is the same: 485 rows for the cursor select statement.
3) Please guide me on how to reduce the procedure's time in Oracle 10g.
I have checked the explain plan for the cursor query; it is different in the two environments.
So I have analysed that the procedure is spending its time on the cursor FETCH INTO statement in Oracle 10g.
example:-
create or replace procedure myproc IS
CURSOR cur_list
IS select num
from tbl
where exists (select.......
v_num tbl.num%TYPE;
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION SET SQL_TRACE = TRUE';
EXECUTE IMMEDIATE 'ALTER SESSION SET TIMED_STATISTICS = TRUE';
OPEN cur_list;
LOOP
FETCH cur_list INTO v_num; ----- the procedure spends its time in this statement, but only for some list numbers; there are 485 list numbers
EXIT WHEN cur_list%NOTFOUND;
END LOOP;
TRACE file for oracle 10g is look like this:-
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.37 0.46 0 2 0 0
Fetch 486 747.07 730.14 1340 56500700 0 485
total 488 747.45 730.60 1340 56500702 0 485
ORACLE 9i EXPLAIN PLAN FOR cursor query:-
Plan
SELECT STATEMENT CHOOSECost: 2 Bytes: 144 Cardinality: 12
18 INDEX RANGE SCAN UNIQUE LISL.LISL_LIST_PK Cost: 2 Bytes: 144 Cardinality: 12
17 UNION-ALL
2 FILTER
1 TABLE ACCESS FULL SLD.P Cost: 12 Bytes: 36 Cardinality: 1
16 NESTED LOOPS Cost: 171 Bytes: 141 Cardinality: 1
11 NESTED LOOPS Cost: 169 Bytes: 94 Cardinality: 1
8 NESTED LOOPS Cost: 168 Bytes: 78 Cardinality: 1
6 NESTED LOOPS Cost: 168 Bytes: 62 Cardinality: 1
4 TABLE ACCESS BY INDEX ROWID SLD.L Cost: 168 Bytes: 49 Cardinality: 1
3 INDEX RANGE SCAN UNIQUE SLD.PK_L Cost: 162 Cardinality: 9
5 INDEX UNIQUE SCAN UNIQUE SLD.SYS_C0025717 Bytes: 45,760 Cardinality: 3,520
7 INDEX UNIQUE SCAN UNIQUE SLD.PRP Bytes: 63,904 Cardinality: 3,994
10 TABLE ACCESS BY INDEX ROWID SLD.P Cost: 1 Bytes: 10,480 Cardinality: 655
9 INDEX UNIQUE SCAN UNIQUE SLD.PK_P Cardinality: 9
15 TABLE ACCESS BY INDEX ROWID SLD.GRP_E Cost: 2 Bytes: 9,447 Cardinality: 201
14 INDEX UNIQUE SCAN UNIQUE SLD.PRP_E Cost: 1 Cardinality: 29
13 TABLE ACCESS BY INDEX ROWID SLD.E Cost: 2 Bytes: 16 Cardinality: 1
12 INDEX UNIQUE SCAN UNIQUE SLD.SYS_C0025717 Cost: 1 Cardinality: 14,078
ORACLE 10G EXPLAIN PLAN FOR cursor query:-
SELECT STATEMENT ALL_ROWSCost: 206,103 Bytes: 12 Cardinality: 1
18 FILTER
1 INDEX FAST FULL SCAN INDEX (UNIQUE) LISL.LISL_LIST_PK Cost: 2 Bytes: 8,232 Cardinality: 686
17 UNION-ALL
3 FILTER
2 TABLE ACCESS FULL TABLE SLD.P Cost: 26 Bytes: 72 Cardinality: 2
16 NESTED LOOPS Cost: 574 Bytes: 157 Cardinality: 1
14 NESTED LOOPS Cost: 574 Bytes: 141 Cardinality: 1
12 NESTED LOOPS Cost: 574 Bytes: 128 Cardinality: 1
9 NESTED LOOPS Cost: 573 Bytes: 112 Cardinality: 1
6 HASH JOIN RIGHT SEMI Cost: 563 Bytes: 315 Cardinality: 5
4 TABLE ACCESS FULL TABLE SLD.E Cost: 80 Bytes: 223,120 Cardinality: 13,945
5 TABLE ACCESS FULL TABLE SLD.GRP_E Cost: 481 Bytes: 3,238,582 Cardinality: 68,906
8 TABLE ACCESS BY INDEX ROWID TABLE SLD.L Cost: 2 Bytes: 49 Cardinality: 1
7 INDEX UNIQUE SCAN INDEX (UNIQUE) SLD.PK_L Cost: 1 Cardinality: 1
11 TABLE ACCESS BY INDEX ROWID TABLE SLD.P Cost: 1 Bytes: 16 Cardinality: 1
10 INDEX UNIQUE SCAN INDEX (UNIQUE) SLD.PK_P Cost: 0 Cardinality: 1
13 INDEX UNIQUE SCAN INDEX (UNIQUE) SLD.SYS_C0011870 Cost: 0 Bytes: 13 Cardinality: 1
15 INDEX UNIQUE SCAN INDEX (UNIQUE) SLD.PRP Cost: 0 Bytes: 16 Cardinality: 1
so Please guide me how to reduce the time in oracle 10g for procedure?
1) Is this envrionment setting parameter?
2) I have to tune the query? but which is executing fine on oracle 9i?
so how to decrease the execution time?
Thanks in advance.
SELECT l_nr
FROM x.ls b
WHERE b.cd = '01'
AND b.co_code = '001'
AND EXISTS (
SELECT T_L
FROM g.C
WHERE C_cd = '01'
AND C_co_code = '001'
AND C_flg = 'A'
AND C_eff_dt <= sysdate
AND C_end_dt >=
sysdate
AND C_type_code <> 1
AND C_type_code <> 1
AND targt_ls_type = 'C'
AND T_L <> 9999
AND T_L = b.l_nr
UNION ALL
SELECT l.T_L
FROM g.C C,
g.ep_e B,
g.ep ep,
g.e A,
g.lk_in l
WHERE l.cd = '01'
AND l.co_code = '001'
AND l.cd = C.C_cd
AND l.co_code = C.C_co_code
AND l.C_nbr = C.C_nbr
AND l.targt_ls_type = 'C'
AND lk_in_eff_dt <=
sysdate
AND lk_in_end_dt >=
( sysdate
+ 1
AND ( (logic_delte_flg = '0')
OR ( logic_delte_flg IN ('1', '3')
AND lk_in_eff_dt <> lk_in_end_dt
AND l.cd = ep.C_cd
AND l.co_code = ep.C_co_code
AND l.C_nbr = ep.C_nbr
AND l.ep_nbr = ep.ep_nbr
AND l.cd = A.e_cd
AND l.co_code = A.e_co_code
AND l.e_nbr = A.e_nbr
AND l.cd = B.cd
AND l.co_code = B.co_code
AND l.C_nbr = B.C_nbr
AND l.ep_nbr = B.ep_nbr
AND l.e_nbr = B.e_nbr
AND l.ep_e_rev_nbr = B.ep_e_rev_nbr
AND B.flg = 'A'
AND EXISTS (
SELECT A.e_nbr
FROM g.e A
WHERE A.e_cd = B.cd
AND A.e_co_code = B.co_code
AND A.e_nbr = B.e_nbr
AND A.e_type_code ^= 8)
AND C_type_code <> 10
AND C.C_type_code <> 13
AND l.T_L = b.l_nr)
--yes index is same -
Insert statement taking time on oracle 10g
Hi,
My procedure is taking time in the following statement after upgrading the database from Oracle 9i to Oracle 10g.
I am using Oracle version 10.2.0.4.0.
cust_item is a materialized view which is refreshed in the procedure.
The index is dropped before inserting data into the cust_item_tbl table and recreated after the insert.
There are almost 6 lakh (600,000) records in the MV to be inserted into the table.
In 9i the insert statement below takes 1 hr; in 10g it takes 2 hrs 30 min.
EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL QUERY';
EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';
INSERT INTO /*+ APPEND PARALLEL */ cust_item_tbl NOLOGGING
(SELECT /*+ PARALLEL */
ctry_code, co_code, srce_loc_nbr, srce_loc_type_code,
cust_nbr, item_nbr, lu_eff_dt,
0, 0, 0, lu_end_dt,
bus_seg_code, 0, rt_nbr, 0, '', 0, '', SYSDATE, '', SYSDATE,
'', 0, ' ',
case
when cust_nbr in (select distinct cust_nbr from aml.log_t where CTRY_CODE = p_country_code and co_code = p_company_code)
THEN
case
when trunc(sysdate) NOT BETWEEN trunc(lu_eff_dt) AND trunc(lu_end_dt)
then NVL((select cases_per_pallet from cust_item c where c.ctry_code = a.ctry_code and c.co_code = a.co_code
and c.cust_nbr = a.cust_nbr and c.GTIN_CO_PREFX = a.GTIN_CO_PREFX and c.GTIN_ITEM_REF_NBR = a.GTIN_ITEM_REF_NBR
and c.GTIN_CK_DIGIT = a.GTIN_CK_DIGIT and trunc(sysdate) BETWEEN trunc(c.lu_eff_dt) AND trunc(c.lu_end_dt) and rownum = 1),
a.cases_per_pallet)
else cases_per_pallet
end
else cases_per_pallet
END cases_per_pallet,
cases_per_layer
FROM cust_item a
WHERE a.ctry_code = p_country_code ----varible passing by procedure
AND a.co_code = p_company_code ----varible passing by procedure
AND a.ROWID =
(SELECT MAX (b.ROWID)
FROM cust_item b
WHERE b.ctry_code = a.ctry_code
AND b.co_code = a.co_code
AND b.ctry_code = p_country_code ----varible passing by procedure
AND b.co_code = p_company_code ----varible passing by procedure
AND b.srce_loc_nbr = a.srce_loc_nbr
AND b.srce_loc_type_code = a.srce_loc_type_code
AND b.cust_nbr = a.cust_nbr
AND b.item_nbr = a.item_nbr
AND b.lu_eff_dt = a.lu_eff_dt));
explain plan of oracle 10g
Plan
INSERT STATEMENT CHOOSECost: 133,310 Bytes: 248 Cardinality: 1
5 FILTER
4 HASH GROUP BY Cost: 133,310 Bytes: 248 Cardinality: 1
3 HASH JOIN Cost: 132,424 Bytes: 1,273,090,640 Cardinality: 5,133,430
1 INDEX FAST FULL SCAN INDEX MFIPROCESS.INDX_TEMP_CUST_AUTH_PERF_MV Cost: 10,026 Bytes: 554,410,440 Cardinality: 5,133,430
2 MAT_VIEW ACCESS FULL MAT_VIEW MFIPROCESS.TEMP_CUST_AUTH_PERF_MV Cost: 24,570 Bytes: 718,680,200 Cardinality: 5,133,430
Can you please look into the issue?
Thanks.
According to the execution plan you posted, parallelism is not taking place - no parallel operations are listed.
Check the hint syntax. In particular, "PARALLEL" does not look right.
Running queries in parallel can help performance, hurt it, or do nothing for it. In your case a parallel index scan on MFIPROCESS.INDX_TEMP_CUST_AUTH_PERF_MV using the PARALLEL_INDEX hint, together with a PARALLEL hint naming the table for the MAT_VIEW MFIPROCESS.TEMP_CUST_AUTH_PERF_MV, might help; something like (untested)
select /*+ PARALLEL_INDEX(INDX_TEMP_CST_AUTH_PERF_MV) PARALLEL(TEMP_CUST_AUTHPERF_MV) */
Is query rewrite causing the MVs to be read? If so, hinting the query will be tricky -
Hi
My company needs to set up real-time database backup for Oracle 10g Release 2 (Standard Edition).
I see in the Oracle Database 10g Product Family paper (http://www.oracle.com/technology/products/database/oracle10g/pdf/twp_general_10gdb_product_family.pdf) that SE does not support Data Guard.
Can I do it in Oracle 10g SE?
Thanks in advance
hi,
trouble is the cost of EE can be quite expensive
another way of creating a pseudo dataguard would be to script a method of shipping archive files to a DR server.
Live System
=========
BEGIN
IF ORACLE ONLINE THEN
SWITCH LOG;
END IF;
IF ANOTHER INSTANCE OF SCRIPT RUNNING (LOCAL OR REMOTE) THEN
ERROR;
EXIT;
END IF;
IF (STANDBY NOT ONLINE) THEN
ERROR;
SEND EMAIL;
EXIT;
END IF;
GET LAST SHIPPED LOG FILE TIMESTAMP FROM THE REMOTE SITE OR FROM LOCAL SERVER;
WHILE LOG > SHIPPED LOG FILE TIMESTAMP DO
TRANSFER LOG FILE TO REMOTE SITE;
DO CHECKSUM ON LOG FILES;
IF NOT MATCH THEN
RETRY TRANSFER;
IF FAILURE 3 TIME THEN
ERROR;
SEND EMAIL;
END IF;
END IF;
' GOT THIS FAR SO ALL IS OK
MARK THE LOG FILE AS TRANSFERRED;
WEND;
DR System
========
BEGIN
CREATE A LOCK SEMAPHORE; - Only one instance allowed
CHECK LAST APPLIED LOG FILE;
IF CONNECTION TO LIVE SERVER OK THEN
CHECKSUM NEXT LOG FILE WITH LIVE SYSTEM;
IF CHECKSUM NOT SAME THEN
IF LOG FILE ON ITS WAY FROM LIVE THEN
EXIT;
END IF;
WHILE LOGFILE EXISTS DO
GET NEXT LOG FILE FROM LIVE;
WEND;
END IF;
ELSE
' Extra work
APPLY NEXT LOG FILE
END IF;
APPLY LOG FILES;
WORKOUT NO OF LOG FILE APPLIED;
KEEP SPECIFIED PERCENTAGE LOG FILES;
DELETE EXTRA LOG FILES;
TRANSFER LAST APPLIED LOG FILE ATTRIBUTES TO LIVE SERVER;
EXIT;
END;
I cannot give you the actual code that I have, as there are some more bits that I have added for our client, but the above gives you an overview of the process involved in creating a pseudo DG environment.
regards
Alan