Performance issue (Oracle 10.2.0.3.0)
Hi All,
I have written the following procedure, but it is taking more than 5 hours to execute.
To speed up the lookup of the first code with a NULL sdn, I created the following index:
CREATE INDEX TMP.IDX_RED_COAT_2 ON TMP.RED_COAT
(CODE_YEAR, SUBSTR("CODE",1,2), NVL("SDN",'0'))
NOLOGGING
TABLESPACE TPOCLIENT_INDEX_5M_01
PCTFREE 0
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 237M
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
NOPARALLEL;

and the procedure is:
CREATE OR REPLACE PROCEDURE TMP.pr_delight
AS
v_promo red_coat.code%TYPE;
v_date DATE;
CURSOR c1 (v_date DATE)
IS
SELECT DISTINCT ref_v.sdn sdn,
ref_v.recharge_type recharge_type,
ref_v.REFERENCE REFERENCE,
icat.langcode lang_code
FROM tmp.DEF_view ref_v,
inf_tmp icat
WHERE ref_v.sdn = icat.cardnum
AND ref_v.currency IN (
SELECT emp_id_d
FROM tmp.emp
WHERE emp_id_h = 'CURRENCY'
AND emp_txt = 'EURO')
AND ref_v.recharge_amount >= 100
AND ref_v.date_exec > v_date
AND ref_v.sdn NOT IN (
SELECT gprs_sdn
FROM stage.ppa_gprs
WHERE ppa_gprs.prom_idct =
(SELECT emp_txt
FROM tmp.emp
WHERE emp_id_h = 'LOT'
AND emp_id= 'LOT_ID'))
AND icat.profiled NOT IN ('KADOR', 'DINHG');
rec c1%ROWTYPE;
BEGIN
v_promo := NULL;
v_date := NULL;
SELECT TO_DATE (emp_txt, 'DD-MON-YYYY')
INTO v_date
FROM tmp.emp
WHERE emp_id_h = 'LOT'
AND emp_id = 'LAST_DATE'
AND emp_id_t = 'D';
OPEN c1 (v_date);
LOOP
FETCH c1 INTO rec;
EXIT WHEN c1%NOTFOUND;
SELECT code
INTO v_promo
FROM tmp.red_coat
WHERE SUBSTR (code, 1, 2) = TO_CHAR (SYSDATE, 'MM')
AND code_year = TO_CHAR (SYSDATE, 'YYYY')
AND nvl(sdn,0) =0
AND ROWNUM = 1;
UPDATE red_coat
SET sdn = SUBSTR (rec.sdn, 3),
REFERENCE = rec.REFERENCE,
recharge_type = rec.recharge_type,
assign_date = TRUNC (SYSDATE),
lang_code = rec.lang_code
WHERE code = v_promo;
COMMIT;
END LOOP;
UPDATE tmp.emp
SET emp_txt = TO_CHAR (SYSDATE, 'DD-MON-YYYY')
WHERE emp_id_h = 'LOT'
AND emp_id = 'LAST_DATE'
AND emp_id_t = 'D';
COMMIT;
CLOSE c1;
EXCEPTION
WHEN OTHERS
THEN
UPDATE tmp.emp
SET emp_txt = TO_CHAR (SYSDATE, 'DD-MON-YYYY')
WHERE emp_id_h = 'LOT'
AND emp_id = 'LAST_DATE'
AND emp_id_t = 'D';
COMMIT;
CLOSE c1;
END pr_delight;

Can anyone please look into this, correct the code, and suggest ways to improve the performance of this procedure?
Thank you,
I remember looking at this procedure's performance problem a couple of weeks ago.
I also remember suggesting that you do away with the cursor and use a single UPDATE statement with joins.
There are many places where you can modify the code. Here are a few, apart from the single-UPDATE suggestion.
SELECT DISTINCT ref_v.sdn sdn,
ref_v.recharge_type recharge_type,
ref_v.REFERENCE REFERENCE,
icat.langcode lang_code
FROM tmp.DEF_view ref_v,
inf_tmp icat
WHERE ref_v.sdn = icat.cardnum
/* Use EXISTS in place of IN and with a correlated sub-query */
AND ref_v.currency IN (
SELECT emp_id_d
FROM tmp.emp
WHERE emp_id_h = 'CURRENCY'
AND emp_txt = 'EURO')
AND ref_v.recharge_amount >= 100
AND ref_v.date_exec > v_date
/* Use NOT EXISTS in place of NOT IN and with a correlated sub-query */
AND ref_v.sdn NOT IN (
SELECT gprs_sdn
FROM stage.ppa_gprs
WHERE ppa_gprs.prom_idct =
/* Do a JOIN with stage.ppa_gprs and tmp.emp instead of sub-query */
(SELECT emp_txt
FROM tmp.emp
WHERE emp_id_h = 'LOT'
AND emp_id= 'LOT_ID'))
AND icat.profiled NOT IN ('KADOR', 'DINHG');

You can make this SELECT part of the cursor SELECT, or even get v_date from the first sub-query in the cursor SELECT.
SELECT TO_DATE (emp_txt, 'DD-MON-YYYY')
INTO v_date
FROM tmp.emp
WHERE emp_id_h = 'LOT'
AND emp_id = 'LAST_DATE'
AND emp_id_t = 'D';

Why is this SELECT inside the LOOP? Bring it out of the loop or make it part of the UPDATE statement.
SELECT code
INTO v_promo
FROM tmp.red_coat
WHERE SUBSTR (code, 1, 2) = TO_CHAR (SYSDATE, 'MM')
AND code_year = TO_CHAR (SYSDATE, 'YYYY')
AND nvl(sdn,0) =0
AND ROWNUM = 1;

Why are you committing inside the loop? How is your cursor valid after the commit?
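As a hedged sketch of the single-statement approach suggested above: number the free promo codes and the eligible rows, pair them by position, and apply one MERGE with a single commit at the end. The column names come from the original procedure; the EXISTS/NOT EXISTS rewrites and the pairing-by-ROW_NUMBER logic are my assumptions and must be validated against the real schema and data (NOT EXISTS and NOT IN also differ when gprs_sdn can be NULL). I have used NVL (sdn, '0') = '0' to match the index expression rather than the procedure's NVL (sdn, 0) = 0.

```sql
MERGE INTO tmp.red_coat rc
USING (
  SELECT c.code, e.sdn, e.recharge_type, e.reference, e.lang_code
    FROM (-- free promo codes for the current month/year, numbered
          SELECT code, ROW_NUMBER () OVER (ORDER BY code) rn
            FROM tmp.red_coat
           WHERE SUBSTR (code, 1, 2) = TO_CHAR (SYSDATE, 'MM')
             AND code_year           = TO_CHAR (SYSDATE, 'YYYY')
             AND NVL (sdn, '0')      = '0') c,
         (-- the cursor query, rewritten with EXISTS / NOT EXISTS, numbered
          SELECT q.*, ROW_NUMBER () OVER (ORDER BY sdn) rn
            FROM (SELECT DISTINCT ref_v.sdn, ref_v.recharge_type,
                         ref_v.reference, icat.langcode lang_code
                    FROM tmp.def_view ref_v, inf_tmp icat
                   WHERE ref_v.sdn = icat.cardnum
                     AND EXISTS (SELECT 1
                                   FROM tmp.emp em
                                  WHERE em.emp_id_h = 'CURRENCY'
                                    AND em.emp_txt  = 'EURO'
                                    AND em.emp_id_d = ref_v.currency)
                     AND ref_v.recharge_amount >= 100
                     AND ref_v.date_exec > v_date
                     AND NOT EXISTS (SELECT 1
                                       FROM stage.ppa_gprs g, tmp.emp em
                                      WHERE g.prom_idct = em.emp_txt
                                        AND em.emp_id_h = 'LOT'
                                        AND em.emp_id   = 'LOT_ID'
                                        AND g.gprs_sdn  = ref_v.sdn)
                     AND icat.profiled NOT IN ('KADOR', 'DINHG')) q) e
   WHERE c.rn = e.rn
) src
ON (rc.code = src.code)
WHEN MATCHED THEN UPDATE
   SET rc.sdn           = SUBSTR (src.sdn, 3),
       rc.reference     = src.reference,
       rc.recharge_type = src.recharge_type,
       rc.assign_date   = TRUNC (SYSDATE),
       rc.lang_code     = src.lang_code;
COMMIT;  -- one commit at the end, never inside a loop over an open cursor
```

Even if a single statement turns out not to be feasible, at least move the COMMIT outside the loop: committing while fetching across the same cursor risks ORA-01555 (snapshot too old).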
Cheers
Sarma.
Similar Messages
-
Performance issues (Oracle 9i Solaris 9)
Hi Guys,
How do I tell if my database is performing at its optimum level? We seem to be having performance issues with one of our applications. People are saying it's the database, the network, etc.
Thank you.

Hi,
In order to determine whether or not your database is having performance issues, you will need to install and execute Statspack, a utility which reports on the performance parameters of an Oracle database.
If you are already using a Statspack report for performance analysis, post a snapshot of the report.
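For reference, the standard Statspack workflow from SQL*Plus looks like this (the script paths are the stock `?/rdbms/admin` scripts shipped with the database):

```sql
-- One-time install, run as SYSDBA (creates the PERFSTAT schema):
-- @?/rdbms/admin/spcreate.sql

CONNECT perfstat
EXEC statspack.snap;            -- snapshot before the problem workload
-- ... let the workload run for a representative interval ...
EXEC statspack.snap;            -- snapshot after it
@?/rdbms/admin/spreport.sql     -- prompts for begin/end snapshot IDs, writes the report
```

The report between the two snapshots shows top wait events and top SQL, which is usually enough to decide whether the database, the application, or the network is at fault.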
Regards,
Prosenjit Mukherjee. -
Performance issues with Oracle EE 9.2.0.4 and RedHat 2.1
Hello,
I am having some serious performance issues with Oracle Enterprise Edition 9.2.0.4 on RedHat Linux 2.1. The processor goes berserk at 100% for long periods (around 5 minutes), and all the RAM gets used.
Some environment characteristics:
Machine: Intel Pentium IV 2.0GHz with 1GB of RAM.
OS: RedHat Linux 2.1 Enterprise.
Oracle: Oracle Enterprise Edition 9.2.0.4
Application: We have a small web-application with 10 users (for now) and very basic queries (all in stored procedures). Also we use the latest version of ODP.NET with default connection settings (some low pooling, etc).
Does anyone know what could be going on?
Is anybody else having this similar behavior?
We changed from SQL Server, so we are not the world's experts on the matter, but we want a reliable system nonetheless.
Please help us out; give us some tips, tricks, or guides.
Thanks to all,
Frank

Thank you very much, and sorry I couldn't write sooner. It seems that the administrator doesn't see much kswap activity, so I don't really know what is going on.
We are looking at some queries and some indexing, but this is nuts; if I had some poor queries, which we don't really, the server would show a spike, right?
But it goes crazy, with two Oracle processes taking all the resources. There seems to be little swapping going on.
So now what? They are already talking about MS-SQL. Please help me out here, this is crazy!
We have maybe the most powerful combination here. What is Oracle doing?
We even killed the IIS worker process and had no one touch the database, and still those two processes kept going.
Can someone help me?
Thanks,
Frank -
Performance issue in Oracle 11.1.0.7
Hi ,
In our production environment we have some cron jobs scheduled; they run every Saturday. One of the cron jobs is taking more and more time to finish.
The previous Oracle version was 10.2.0.4, and at that time the job took 36 hours to complete. After upgrading to 11gR1, it now takes 47 hours, sometimes 50, to finish.
I asked my production DBA to take an AWR report after the cron job finished.
Now he has sent the AWR report, but I don't know how to read it. Can you please help me read AWR reports? I need to give some recommendations to reduce the overall running time.
I don't know how to attach the AWR report here.
Please help me on this.
Thanks
Shravan Kumar

Hi,
"Now he sent the AWR report, but I don't know how to read it. Can you please help me to read the AWR reports, and I need to give some recommendations to reduce the overall running time."
Aren't you a DBA? You should probably seek the help of your DBA to read the AWR, and meanwhile you should also have an AWR from 10g, where this job ran previously, so that you can compare the two.
Did you do any testing before the upgrade? You SHOULD have done thorough testing of your applications/reports before the upgrade and resolved the performance issues before the production upgrade.
While you investigate, you can set optimizer_features_enable='10.2.0.4' for the cron-job session only, to check whether the job returns to its earlier 36-hour time:
alter session set optimizer_features_enable='10.2.0.4';

Salman
Database migrated from Oracle 10g to 11g: Discoverer report performance issue
Hi All,
We are now facing a Discoverer report performance issue: the report keeps running ever since the database was upgraded from 10g to 11g.
Against database 10g the report works fine, but the same report does not work in 11g.
I changed the query: I passed the date format with TO_CHAR (…, 'DD-MON-YYYY') and removed the NVL and TRUNC functions from the existing query.
The report now works fine directly against the 11g database, but when I use the same query in Discoverer it does not work and the report keeps running.
Please advise.
Regards,

Please post exact OS, database, and Discoverer versions. After the upgrade, have statistics been updated? Have you traced the Discoverer query to determine where the performance issue is?
How To Find Oracle Discoverer Diagnostic and Tracing Guides [ID 290658.1]
How To Enable SQL Tracing For Discoverer Sessions [ID 133055.1]
Discoverer 11g: Performance degradation after Upgrade to Database 11g [ID 1514929.1]
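For reference, the server-side mechanics behind those notes boil down to enabling extended SQL trace on the session running the Discoverer query, then formatting the trace with tkprof. A minimal sketch (the tracefile identifier is just an illustrative tag):

```sql
ALTER SESSION SET tracefile_identifier = 'disco_slow';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- ... run the slow report's query ...
ALTER SESSION SET EVENTS '10046 trace name context off';
-- then on the server:
--   tkprof <udump_dir>/<instance>_ora_<pid>_disco_slow.trc out.txt sort=elapsed
```

Level 12 includes bind values and wait events, which is what you need to see where the time actually goes after the upgrade.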
HTH
Srini -
Performance Issue in Oracle EBS
Hi Group,
I am working on a performance issue at a customer site; let me explain the behaviour.
There is one node for the database and another for the application.
The application server is running all the services.
The EBS version is 12.1.3 and the database version is 11.1.0.7, with AIX on both servers.
The customer has added memory to both servers (database and application); initially they had 32 GB, now they have 128 GB.
Today I increased the memory parameters for the database and also increased the number of JVM processes from 1 to 2 for Forms and OACore; both JVMs are 1024M.
The behaviour is that when users navigate inside a form and press the down button quickly, the form gets stuck thinking (reloading and waiting 1 or 2 minutes to respond). It is not particular to a specific form; it happens in several forms.
A gather-statistics job is scheduled every weekend. I am not sure what the problem could be; I have collected a trace of the form and uploaded it to Oracle Support with no success or advice.
I have also run a ping, and the response time between the servers is below 5 ms.
I have several activities in mind like:
- OATM conversion.
- ASM implementation.
- Upgrade to 11.2.0.4.
Has anybody seen this behaviour? Any advice about this problem will be really appreciated.
Thanks in advance.
Kind regards,
Francisco Mtz.

Hi Bashar, thank you very much for your quick response.
If both servers are on the same network then the ping should not exceed 2 ms.
If I remember correctly, I did a ping last Wednesday, and there were some peaks over 5 ms.
Have you checked the network performance between the clients and the application server?
Also, I did a ping from the PC to the application and database, and it was responding in less than 1 ms.
What is the status of the CPU usage on both servers?
There is no overhead on the CPU side; I tested it (the scrolling getting frozen) with no users in the application.
Did this happen after you performed the hardware upgrade?
Yes, it happened after changing some memory parameters in the JVM and the database.
Oracle has suggested applying the latest Forms patches according to Note Doc ID 437878.1.
Thanks in advance.
Kind regards,
Francisco Mtz. -
Regarding an Oracle performance issue.
Hardware Configuration:
Configuration 1
================
SunV880 - Sunfire
32 GB RAM
14 numbers of 36GB hard disk
8 CPUs
CPU speed: 750 MHz.
Software Configuration:
Oracle 8i
OS version - Solaris 8
Customized our own application - Namex
Configuration 2
================
Intel PIII - 750 MHz
2 GB RAM
2 CPUS
Software configuration
Oracle 8i
OS version linux 6.2
Customized our own application - Namex (multi threaded application)
We installed the Oracle application across all the hard disks; the tables are split onto separate hard disks:
The OS is installed on 1 hard disk.
The Namex application is installed on 1 hard disk.
Oracle is installed on 1 hard disk.
All tables are split across the other hard disks.
We are trying to insert user records into an Oracle table. We achieved up to 150 records/second on the Sun server, but on the lower configuration (configuration 2) our application inserts up to 100 records/second.
We want to improve our insert rate on the Sun server.
How do we tune our Oracle parameter values in the init.ora file? Our application tries to insert up to 500 records per second, but I am not able to achieve that rate.
init.ora file
=============
db_name = "namex"
instance_name = namex64
service_names = namex64
control_files = ("/disk1/oracle64/OraHome1/oradata/Namex64/control01.ctl", "/disk1/oracle64/OraHome1/oradata/namex64/control02.ctl", "/disk1/oracle64/OraHome1/oradata/namex64/control03.ctl")
open_cursors = 300
max_enabled_roles = 145
#db_block_buffers = 20480
db_block_buffers = 604800
#shared_pool_size = 419430400
shared_pool_size = 8000000000
#log_buffer = 163840000
log_buffer = 2147467264
#large_pool_size = 614400
java_pool_size = 0
log_checkpoint_interval = 10000
log_checkpoint_timeout = 1800
processes = 1014
# audit_trail = false # if you want auditing
# timed_statistics = false # if you want timed statistics
timed_statistics = true # if you want timed statistics
# max_dump_file_size = 10000 # limit trace file size to 5M each
# Uncommenting the lines below will cause automatic archiving if archiving has
# been enabled using ALTER DATABASE ARCHIVELOG.
# log_archive_start = true
# log_archive_dest_1 = "location=/disk1/oracle64/OraHome1/admin/namex64/arch"
# log_archive_format = arch_%t_%s.arc
#DBCA uses the default database value (30) for max_rollback_segments
#100 rollback segments (or more) may be required in the future
#Uncomment the following entry when additional rollback segments are created and made online
#max_rollback_segments = 500
# If using private rollback segments, place lines of the following
# form in each of your instance-specific init.ora files:
#rollback_segments = ( RBS0, RBS1, RBS2, RBS3, RBS4, RBS5, RBS6, RBS7, RBS8, RBS9, RBS10, RBS11, RBS12, RBS13, RBS14, RBS15, RBS16, RBS17, RBS18, RBS19, RBS20, RBS21, RBS22, RBS23, RBS24, RBS25, RBS26, RBS27, RBS28 )
# Global Naming -- enforce that a dblink has same name as the db it connects to
# global_names = false
# Uncomment the following line if you wish to enable the Oracle Trace product
# to trace server activity. This enables scheduling of server collections
# from the Oracle Enterprise Manager Console.
# Also, if the oracle_trace_collection_name parameter is non-null,
# every session will write to the named collection, as well as enabling you
# to schedule future collections from the console.
# oracle_trace_enable = true
# define directories to store trace and alert files
background_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/bdump
core_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/cdump
#Uncomment this parameter to enable resource management for your database.
#The SYSTEM_PLAN is provided by default with the database.
#Change the plan name if you have created your own resource plan.# resource_manager_plan = system_plan
user_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/udump
db_block_size = 16384
remote_login_passwordfile = exclusive
os_authent_prefix = ""
compatible = "8.0.5"
#sort_area_size = 65536
sort_area_size = 1024000000
sort_area_retained_size = 65536
DB_WRITER_PROCESSES=4
How do I improve performance on the Oracle server?
Please guide me regarding this issue.
If anyone wants more info, please let me know.
Best regards,
Senthilkumar

Are you sure that it is not an application constraint, i.e. that the application can't handle that much data per second (application locks, threads)?
Have you tried writing a simple test program which inserts predefined data (the same data your application inserts, only changing the keys)?
Then compare the values from the 1st and the 2nd configuration.
Did you check the way your application communicates with Oracle? If it is TCP/IP (even on the local machine), then this is your main problem.
And one more thing: do you know if your application is able to run the load (inserts) on different threads, i.e. in parallel? If it is not, you won't be able to push the speed higher, because your constraint is the speed of a single CPU. Consider running several processes to load the data.
We had the same problem on AIX machines with 4 CPUs. Monitoring the machine, we found that only 25% (1 CPU) was in use. We had to run 4 processes to push the speed up. Check your system's overall load while running the inserts.
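On the database side, the multi-process idea can also be expressed as parallel direct-path DML, assuming the load can be phrased as INSERT ... SELECT and the parallel option is available on this install. The table names and the degree of 4 here are illustrative, not from the original post:

```sql
ALTER SESSION ENABLE PARALLEL DML;

-- namex_records and namex_staging are hypothetical table names
INSERT /*+ APPEND PARALLEL (t, 4) */ INTO namex_records t
SELECT * FROM namex_staging;

COMMIT;   -- a direct-path insert must be committed before the table is read again
```

Whether this helps depends on the same constraint named above: the load must be parallelizable rather than serialized through one session.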
log_checkpoint_interval = 10000
Check if this value is appropriate. Maybe you should set it to 0 (infinite). That disables checkpoints on a redo-block-count basis; checkpoints will then occur only on log switch.
How many redo files per redo group do you have? What is their size? Are they on different disks? How much redo data is generated by a single inserted record?
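Those redo questions can be answered directly from the v$ views; as a sketch:

```sql
SELECT group#, members, bytes / 1024 / 1024 AS size_mb, status
  FROM v$log;                       -- how many groups, members per group, size

SELECT group#, member
  FROM v$logfile;                   -- which disk each member lives on

SELECT value AS redo_bytes
  FROM v$sysstat
 WHERE name = 'redo size';          -- cumulative redo generated since startup
```

Sampling `redo size` before and after a batch of inserts gives the redo generated per record.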
Hope I helped at least a little. -
Oracle BPM 11.1.1.5 Performance issues
Hi,
I have a 2-node cluster of Oracle SOA 11.1.1.5 installed, with separate clusters for SOA/BPM, BAM, OSB, and WSM. Here are the OS and Java versions:
Java Vendor: HP
Java Version: 1.6.0.14
OS: HP-UX
OS Version: B.11.31
I am running into an issue where Oracle BPM performs very slowly: the task forms take forever to come up, performing any action takes too long, and sometimes it times out. I have over 100 SOA processes running plus some BPM processes, but the BPM Workspace application runs too slowly, as we have customized task forms. The same setup works a bit better in another, non-clustered instance. Any idea what can be done at the server level to get around the performance issues? So far I have modified the audit levels, reduced the soa-infra BPM log, and set the following in setDomainEnv.sh: USER_MEM_ARGS=-Xms2000m -Xmx6000m -XX:PermSize=1000m
But that did not help. Any idea what else to look into to get around these BPM performance issues? Here is what I have in setSOADomainEnv.sh:
# 8395254: add -da:org.apache.xmlbeans... in EXTRA_JAVA_PROPERTIES
EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -da:org.apache.xmlbeans..."
XENGINE_DIR="${SOA_ORACLE_HOME}/soa/thirdparty/edifecs/XEngine"
DEFAULT_MEM_ARGS="-Xms512m -Xmx1024m"
PORT_MEM_ARGS="-Xms768m -Xmx1536m"
if [ "${JAVA_VENDOR}" != "Oracle" ] ; then
DEFAULT_MEM_ARGS="${DEFAULT_MEM_ARGS} -XX:PermSize=128m -XX:MaxPermSize=512m"
PORT_MEM_ARGS="${PORT_MEM_ARGS} -XX:PermSize=256m -XX:MaxPermSize=512m"
fi
#========================================================
# setup LD_LIBRARY_PATH if directory is present...
#========================================================
if [ -d ${XENGINE_DIR}/bin ]; then
LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${XENGINE_DIR}/bin"
export LD_LIBRARY_PATH
fi
#========================================================
# setup platform specific environment variables
#========================================================
case ${PLATFORM_TYPE} in
# AIX
AIX)
if [ -d ${XENGINE_DIR}/bin ]; then
LIBPATH="${LIBPATH}:${XENGINE_DIR}/bin"
export LIBPATH
fi
USER_MEM_ARGS=${PORT_MEM_ARGS}
export USER_MEM_ARGS
# Fix for 7828060
POST_CLASSPATH=${POST_CLASSPATH}:${SOA_ORACLE_HOME}/soa/modules/soa-ibm-addon.jar
# Fix for 7520915 and 8264518 and 8305217
EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Djavax.xml.datatype.DatatypeFactory=org.apache.xerces.jaxp.datatype.DatatypeFactoryImpl -Djava.endorsed.dirs=${SOA_ORACLE_HOME}/bam/modules/org.apache.xalan_2.7.1"
export EXTRA_JAVA_PROPERTIES
;;
# HPUX
HP-UX)
if [ -d ${XENGINE_DIR}/bin ]; then
SHLIB_PATH="${SHLIB_PATH}:${XENGINE_DIR}/bin"
export SHLIB_PATH
fi
LD_LIBRARY_PATH="${XENGINE_DIR}/bin:${LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
USER_MEM_ARGS="-d64 ${PORT_MEM_ARGS}"
export USER_MEM_ARGS
;;
esac

And here is what I have in setDomainEnv.sh:
XMS_SUN_64BIT="256"
export XMS_SUN_64BIT
XMS_SUN_32BIT="256"
export XMS_SUN_32BIT
XMX_SUN_64BIT="512"
export XMX_SUN_64BIT
XMX_SUN_32BIT="512"
export XMX_SUN_32BIT
XMS_JROCKIT_64BIT="256"
export XMS_JROCKIT_64BIT
XMS_JROCKIT_32BIT="256"
export XMS_JROCKIT_32BIT
XMX_JROCKIT_64BIT="512"
export XMX_JROCKIT_64BIT
XMX_JROCKIT_32BIT="512"
export XMX_JROCKIT_32BIT
if [ "${JAVA_VENDOR}" = "Sun" ] ; then
WLS_MEM_ARGS_64BIT="-Xms256m -Xmx512m"
export WLS_MEM_ARGS_64BIT
WLS_MEM_ARGS_32BIT="-Xms256m -Xmx512m"
export WLS_MEM_ARGS_32BIT
else
WLS_MEM_ARGS_64BIT="-Xms512m -Xmx512m"
export WLS_MEM_ARGS_64BIT
WLS_MEM_ARGS_32BIT="-Xms512m -Xmx512m"
export WLS_MEM_ARGS_32BIT
fi
if [ "${JAVA_VENDOR}" = "Oracle" ] ; then
CUSTOM_MEM_ARGS_64BIT="-Xms${XMS_JROCKIT_64BIT}m -Xmx${XMX_JROCKIT_64BIT}m"
export CUSTOM_MEM_ARGS_64BIT
CUSTOM_MEM_ARGS_32BIT="-Xms${XMS_JROCKIT_32BIT}m -Xmx${XMX_JROCKIT_32BIT}m"
export CUSTOM_MEM_ARGS_32BIT
else
CUSTOM_MEM_ARGS_64BIT="-Xms${XMS_SUN_64BIT}m -Xmx${XMX_SUN_64BIT}m"
export CUSTOM_MEM_ARGS_64BIT
CUSTOM_MEM_ARGS_32BIT="-Xms${XMS_SUN_32BIT}m -Xmx${XMX_SUN_32BIT}m"
export CUSTOM_MEM_ARGS_32BIT
fi
MEM_ARGS_64BIT="${CUSTOM_MEM_ARGS_64BIT}"
export MEM_ARGS_64BIT
MEM_ARGS_32BIT="${CUSTOM_MEM_ARGS_32BIT}"
export MEM_ARGS_32BIT
if [ "${JAVA_USE_64BIT}" = "true" ] ; then
MEM_ARGS="${MEM_ARGS_64BIT}"
export MEM_ARGS
else
MEM_ARGS="${MEM_ARGS_32BIT}"
export MEM_ARGS
fi
MEM_PERM_SIZE_64BIT="-XX:PermSize=128m"
export MEM_PERM_SIZE_64BIT
MEM_PERM_SIZE_32BIT="-XX:PermSize=128m"
export MEM_PERM_SIZE_32BIT
if [ "${JAVA_USE_64BIT}" = "true" ] ; then
MEM_PERM_SIZE="${MEM_PERM_SIZE_64BIT}"
export MEM_PERM_SIZE
else
MEM_PERM_SIZE="${MEM_PERM_SIZE_32BIT}"
export MEM_PERM_SIZE
fi
MEM_MAX_PERM_SIZE_64BIT="-XX:MaxPermSize=512m"
export MEM_MAX_PERM_SIZE_64BIT
MEM_MAX_PERM_SIZE_32BIT="-XX:MaxPermSize=512m"
export MEM_MAX_PERM_SIZE_32BIT
if [ "${JAVA_USE_64BIT}" = "true" ] ; then
MEM_MAX_PERM_SIZE="${MEM_MAX_PERM_SIZE_64BIT}"
export MEM_MAX_PERM_SIZE
else
MEM_MAX_PERM_SIZE="${MEM_MAX_PERM_SIZE_32BIT}"
export MEM_MAX_PERM_SIZE
fi
if [ "${JAVA_VENDOR}" = "Sun" ] ; then
if [ "${PRODUCTION_MODE}" = "" ] ; then
MEM_DEV_ARGS="-XX:CompileThreshold=8000 ${MEM_PERM_SIZE} "
export MEM_DEV_ARGS
fi
fi
# Had to have a separate test here BECAUSE of immediate variable expansion on windows
if [ "${JAVA_VENDOR}" = "Sun" ] ; then
MEM_ARGS="${MEM_ARGS} ${MEM_DEV_ARGS} ${MEM_MAX_PERM_SIZE}"
export MEM_ARGS
fi
if [ "${JAVA_VENDOR}" = "HP" ] ; then
MEM_ARGS="${MEM_ARGS} ${MEM_MAX_PERM_SIZE}"
export MEM_ARGS
fi
if [ "${JAVA_VENDOR}" = "Apple" ] ; then
MEM_ARGS="${MEM_ARGS} ${MEM_MAX_PERM_SIZE}"
export MEM_ARGS
fi
if [ "${debugFlag}" = "true" ] ; then
JAVA_OPTIONS="${JAVA_OPTIONS} -da:org.apache.xmlbeans... "
export JAVA_OPTIONS
fi
export USER_MEM_ARGS="-Xms4g -Xmx6g -XX:PermSize=2g -XX:+UseParallelGC -XX:+UseParallelOldGC"

Here is the output of the top command:
Load averages: 0.08, 0.06, 0.06
315 processes: 205 sleeping, 110 running
Cpu states:
CPU LOAD USER NICE SYS IDLE BLOCK SWAIT INTR SSYS
0 0.06 4.0% 0.2% 1.0% 94.8% 0.0% 0.0% 0.0% 0.0%
2 0.07 3.6% 5.0% 0.2% 91.2% 0.0% 0.0% 0.0% 0.0%
4 0.07 2.0% 0.2% 0.0% 97.8% 0.0% 0.0% 0.0% 0.0%
6 0.07 2.4% 0.2% 0.8% 96.6% 0.0% 0.0% 0.0% 0.0%
8 0.09 1.2% 0.2% 10.9% 87.7% 0.0% 0.0% 0.0% 0.0%
10 0.12 3.0% 0.0% 11.1% 85.9% 0.0% 0.0% 0.0% 0.0%
12 0.08 3.0% 0.2% 6.6% 90.3% 0.0% 0.0% 0.0% 0.0%
14 0.09 4.2% 1.2% 0.8% 93.8% 0.0% 0.0% 0.0% 0.0%
avg 0.08 3.0% 1.0% 3.8% 92.2% 0.0% 0.0% 0.0% 0.0%
System Page Size: 4Kbytes
Memory: 38663044K (38379156K) real, 149420048K (148978096K) virtual, 26349848K free
CPU TTY PID USERNAME PRI NI SIZE RES STATE TIME %WCPU %CPU COMMAND
4 ? 3926 root 152 20 213M 70040K run 694:47 10.05 10.04 cimprovagt
4 ? 6855 user1 152 20 7704M 1125M run 4:36 9.31 9.29 java
0 ? 6126 user2 152 20 2790M 1863M run 22:57 4.16 4.15 java

Here is the memory on the box:
Memory: 98132 MB (95.83 GB)
Thanks

After changing the JVM settings for the SOA cluster it is a bit better, but still slow, so I am wondering what other tweaks can be done on the JVM side. Here is what is set for the SOA cluster:
USER_MEM_ARGS="-server -Xms12928m -Xmx12928m -XX:PermSize=3072m -Xmn3232m -XX:+SXTElimination -XX:+UseParallelGC -XX:+UseParNewGC -XX:+ExplicitGCInvokesConcurrent -XX:-TraceClassLoading -XX:-TraceClassUnloading"
We are running multiple instances on the same boxes; the total RAM is 95 GB on each box, shared across 4 cluster environments. For now, the other 3 cluster environments have their SOA cluster JVM settings as:
USER_MEM_ARGS="-server -Xms4096m -Xmx4096m -XX:PermSize=1024m -Xmn1152m -XX:+SXTElimination -XX:+UseParallelGC -XX:+UseParNewGC -XX:+ExplicitGCInvokesConcurrent -XX:-TraceClassLoading -XX:-TraceClassUnloading"
Any help on what else I can tweak or set in the JVM to get better performance would be appreciated.
Thanks -
Performance issue related to OWM? Oracle version is 10.2.0.4
The optimizer picks a hash join instead of a nested loop for queries against OWM tables, which causes full table scans everywhere. I wonder if this happens in your databases as well, or just ours. If you have seen this and know how to solve it, that would be greatly appreciated! I did log an SR with Oracle, but it usually takes months to reach a solution.
Thanks for any possible answers!

Ha, sounded like you knew what I was talking about :)
I thought the issue must have had something to do with OWM, because some complicated queries have no performance issue when they run against regular tables. There is a batch job which used to take an hour; it now takes 4.5 hours. I rewrote the job to move the queries from OWM to regular tables, and it takes 20 minutes. However, today when I tried to get explain plans for some queries involving regular tables with a large amount of data, I got the same full-table-scan problem with a hash join. So I'm now convinced that it probably is not OWM. But the patch removing the bug fix didn't help the situation here either.
I was hoping that other companies might have this problem and have found a way to work around it. If it's not OWM, I'm surprised that this only happens in our system.
Thanks for the reply anyway! -
Oracle 11g Migration performance issue
Hello,
There is a performance issue after migrating from Oracle 10g (10.2.0.5) to Oracle 11g (11.2.0.2).
A very simple statement hangs for more than a day; we later found that the query plan is very, very bad. An example of the query is given below:
INSERT INTO TABLE_XYZ
SELECT F1,F2,F3
FROM TABLE_AB, TABLE_BC
WHERE F1=F4;
While looking at the cost in the explain plan:
on 10g --> 62567
on 11g --> 9879652356776
The strange thing is that:
Scenario 1: if I issue just the query, as shown below, it displays rows immediately:
SELECT F1,F2,F3
FROM TABLE_AB, TABLE_BC
WHERE F1=F4;
Scenario 2: if I create a table, as shown below, it works correctly.
CREATE TABLE TABLE_XYZ AS
SELECT F1,F2,F3
FROM TABLE_AB, TABLE_BC
WHERE F1=F4;
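One way to confirm that the 11g optimizer path, rather than the statement itself, is at fault is to pin the optimizer to the source release's feature set for the session and compare the INSERT's plan (the value below matches the 10.2.0.5 source version named above):

```sql
ALTER SESSION SET optimizer_features_enable = '10.2.0.5';

EXPLAIN PLAN FOR
  INSERT INTO TABLE_XYZ
  SELECT F1, F2, F3
    FROM TABLE_AB, TABLE_BC
   WHERE F1 = F4;

SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY);
```

If the pinned plan reverts to the cheap 10g shape, the regression can be held at bay with the parameter (or a SQL plan baseline) while the root cause is investigated.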
What could be the issue here with INSERT INTO <TAB> SELECT <COL> FROM <TAB1>?

Table:
CREATE TABLE AVN_WRK_F_RENEWAL_TRANS_T (
"PKSRCSYSTEMID" NUMBER(4,0) NOT NULL ENABLE,
"PKCOMPANYCODE" VARCHAR2(8 CHAR) NOT NULL ENABLE,
"PKBRANCHCODE" VARCHAR2(8 CHAR) NOT NULL ENABLE,
"PKLINEOFBUSINESS" NUMBER(4,0) NOT NULL ENABLE,
"PKPRODUCINGOFFICELIST" VARCHAR2(2 CHAR) NOT NULL ENABLE,
"PKPRODUCINGOFFICE" VARCHAR2(8 CHAR) NOT NULL ENABLE,
"PKEXPIRYYR" NUMBER(4,0) NOT NULL ENABLE,
"PKEXPIRYMTH" NUMBER(2,0) NOT NULL ENABLE,
"CURRENTEXPIRYCOUNT" NUMBER,
"CURRENTRENEWEDCOUNT" NUMBER,
"PREVIOUSEXPIRYCOUNT" NUMBER,
"PREVIOUSRENEWEDCOUNT" NUMBER
)
SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE (INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "XYZ";
Explain Plan (with the INSERT statement and the query):
INSERT STATEMENT, GOAL = ALL_ROWS Cost=9110025395866 Cardinality=78120 Bytes=11952360
LOAD TABLE CONVENTIONAL Object owner=ODS Object name=AVN_WRK_F_RENEWAL_TRANS
NESTED LOOPS OUTER Cost=9110025395866 Cardinality=78120 Bytes=11952360
TABLE ACCESS FULL Object owner=ODS Object name=AVN_WRK_F_RENEWAL_TRANS_1ST Cost=115 Cardinality=78120 Bytes=2499840
VIEW PUSHED PREDICATE Object owner=ODS Cost=116615788 Cardinality=1 Bytes=121
SORT GROUP BY Cost=116615788 Cardinality=3594 Bytes=406122
VIEW Object owner=SYS Object name=VW_DAG_1 Cost=116615787 Cardinality=20168 Bytes=2278984
SORT GROUP BY Cost=116615787 Cardinality=20168 Bytes=4073936
NESTED LOOPS OUTER Cost=116614896 Cardinality=20168 Bytes=4073936
VIEW Object owner=SYS Cost=5722 Cardinality=20168 Bytes=2157976
NESTED LOOPS Cost=5722 Cardinality=20168 Bytes=2097472
HASH JOIN Cost=924 Cardinality=1199 Bytes=100716
NESTED LOOPS
NESTED LOOPS Cost=181 Cardinality=1199 Bytes=80333
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=159 Cardinality=1199 Bytes=39567
INDEX RANGE SCAN Object owner=ODS Object name=IX_INWPOLDTLS_SYSCOMPANYBRANCH Cost=7 Cardinality=1199
INDEX UNIQUE SCAN Object owner=ODS Object name=PK_AVN_D_MASTERPOLICYDETAILS Cost=0 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=1 Cardinality=1 Bytes=34
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=288498 Bytes=4904466
VIEW PUSHED PREDICATE Object owner=ODS Cost=4 Cardinality=1 Bytes=20
FILTER
SORT AGGREGATE Cardinality=1 Bytes=21
TABLE ACCESS BY GLOBAL INDEX ROWID Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=4 Cardinality=1 Bytes=21
INDEX RANGE SCAN Object owner=ODS Object name=PK_AVN_F_TRANSACTIONS Cost=3 Cardinality=1
VIEW PUSHED PREDICATE Object owner=ODS Cost=5782 Cardinality=1 Bytes=95
SORT GROUP BY Cost=5782 Cardinality=2485 Bytes=216195
VIEW Object owner=SYS Object name=VW_DAG_0 Cost=5781 Cardinality=2485 Bytes=216195
SORT GROUP BY Cost=5781 Cardinality=2485 Bytes=278320
HASH JOIN Cost=5780 Cardinality=2485 Bytes=278320
VIEW Object owner=SYS Object name=VW_GBC_15 Cost=925 Cardinality=1199 Bytes=73139
SORT GROUP BY Cost=925 Cardinality=1199 Bytes=100716
HASH JOIN Cost=924 Cardinality=1199 Bytes=100716
NESTED LOOPS
NESTED LOOPS Cost=181 Cardinality=1199 Bytes=80333
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=159 Cardinality=1199 Bytes=39567
INDEX RANGE SCAN Object owner=ODS Object name=IX_INWPOLDTLS_SYSCOMPANYBRANCH Cost=7 Cardinality=1199
INDEX UNIQUE SCAN Object owner=ODS Object name=PK_AVN_D_MASTERPOLICYDETAILS Cost=0 Cardinality=1
TABLE ACCESS BY INDEX ROWID Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=1 Cardinality=1 Bytes=34
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=288498 Bytes=4904466
VIEW Object owner=SYS Object name=VW_GBF_16 Cost=4854 Cardinality=75507 Bytes=3850857
SORT GROUP BY Cost=4854 Cardinality=75507 Bytes=2340717
VIEW Object owner=ODS Cost=4207 Cardinality=75507 Bytes=2340717
SORT GROUP BY Cost=4207 Cardinality=75507 Bytes=1585647
PARTITION HASH ALL Cost=3713 Cardinality=75936 Bytes=1594656
TABLE ACCESS FULL Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=3713 Cardinality=75936 Bytes=1594656
Explain Plan (query only):
SELECT STATEMENT, GOAL = ALL_ROWS Cost=62783 Cardinality=89964 Bytes=17632944
HASH JOIN OUTER Cost=62783 Cardinality=89964 Bytes=17632944
TABLE ACCESS FULL Object owner=ODS Object name=AVN_WRK_F_RENEWAL_TRANS_1ST Cost=138 Cardinality=89964 Bytes=2878848
VIEW Object owner=ODS Cost=60556 Cardinality=227882 Bytes=37372648
HASH GROUP BY Cost=60556 Cardinality=227882 Bytes=26434312
VIEW Object owner=SYS Object name=VW_DAG_1 Cost=54600 Cardinality=227882 Bytes=26434312
HASH GROUP BY Cost=54600 Cardinality=227882 Bytes=36005356
HASH JOIN OUTER Cost=46664 Cardinality=227882 Bytes=36005356
VIEW Object owner=SYS Cost=18270 Cardinality=227882 Bytes=16635386
HASH JOIN Cost=18270 Cardinality=227882 Bytes=32587126
HASH JOIN Cost=12147 Cardinality=34667 Bytes=2912028
HASH JOIN Cost=10076 Cardinality=34667 Bytes=2322689
TABLE ACCESS FULL Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=137 Cardinality=34667 Bytes=1178678
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=9934 Cardinality=820724 Bytes=27083892
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=866377 Bytes=14728409
VIEW Object owner=ODS Cost=5195 Cardinality=227882 Bytes=13445038
HASH GROUP BY Cost=5195 Cardinality=227882 Bytes=4785522
PARTITION HASH ALL Cost=3717 Cardinality=227882 Bytes=4785522
TABLE ACCESS FULL Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=3717 Cardinality=227882 Bytes=4785522
VIEW Object owner=ODS Cost=26427 Cardinality=227882 Bytes=19369970
HASH GROUP BY Cost=26427 Cardinality=227882 Bytes=18686324
VIEW Object owner=SYS Object name=VW_DAG_0 Cost=26427 Cardinality=227882 Bytes=18686324
HASH GROUP BY Cost=26427 Cardinality=227882 Bytes=25294902
HASH JOIN Cost=20687 Cardinality=227882 Bytes=25294902
VIEW Object owner=SYS Object name=VW_GBC_15 Cost=12826 Cardinality=34667 Bytes=2080020
HASH GROUP BY Cost=12826 Cardinality=34667 Bytes=2912028
HASH JOIN Cost=12147 Cardinality=34667 Bytes=2912028
HASH JOIN Cost=10076 Cardinality=34667 Bytes=2322689
TABLE ACCESS FULL Object owner=ODS Object name=AVN_D_MASTERPOLICYDETAILS Cost=137 Cardinality=34667 Bytes=1178678
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYDETAILS Cost=9934 Cardinality=820724 Bytes=27083892
TABLE ACCESS FULL Object owner=ODS Object name=INWARDSPOLICYLOBMAPPING Cost=741 Cardinality=866377 Bytes=14728409
VIEW Object owner=SYS Object name=VW_GBF_16 Cost=7059 Cardinality=227882 Bytes=11621982
HASH GROUP BY Cost=7059 Cardinality=227882 Bytes=6836460
VIEW Object owner=ODS Cost=5195 Cardinality=227882 Bytes=6836460
HASH GROUP BY Cost=5195 Cardinality=227882 Bytes=4785522
PARTITION HASH ALL Cost=3717 Cardinality=227882 Bytes=4785522
TABLE ACCESS FULL Object owner=ODS Object name=AVN_F_TRANSACTIONS Cost=3717 Cardinality=227882 Bytes=4785522 -
Hi,
We were using Oracle 9i on Solaris 5.8 and it was working fine apart from some minor performance issues. We rebuilt the Solaris server with the new Solaris 5.10 and installed Oracle 10g.
Now we are experiencing performance issues in Oracle 10g. The issue arises when the database is accessed through WebSphere 5.1.
We have analyzed the schema, the indexes were rebuilt, the SGA is 4.5 GB, the PGA is 2.0 GB, and the Solaris server has 16 GB of RAM. We also have some materialized views that need refreshing (possibly a cause of the performance issues, but not sure).
I have also changed some parameters in the init.ora file, such as query_rewrite = STALE_TOLERATED, open_cursors = 1500, etc.
Is it something to do with the driver through which the data is accessed? I suspect it is not utilizing the indexes on the tables.
Can anyone please suggest what the issue could be?<p>There are a lot of changes to the optimizer in the upgrade from 9i to 10g, and you need to be aware of them. There are also a number of changes to the default stats collection mechanism, so after your upgrade your statistics (and hence execution paths) could change dramatically.
</p>
<p>
Greg Rahn has a useful entry on his blog about stats collection, and the blog also points to an Oracle white paper which will give you a lot of ideas about where the optimizer changed - which may help you spot your critical issues.
</p>
<p>Otherwise, follow triggb's advice about using Statspack to find the SQL that is the most expensive - it's reasonably likely to be this SQL that has changed execution plans in the upgrade.
</p>
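As a concrete illustration of the stats difference (a sketch only: the schema name is a placeholder, and the right parameters depend on your data): the 10g automatic stats job gathers histograms by default (METHOD_OPT of 'FOR ALL COLUMNS SIZE AUTO'), whereas the 9i default was SIZE 1. Regathering one schema with the old default shows quickly whether the new histograms are behind a plan change:

```sql
-- Sketch: regather schema stats with the 9i-style default (no histograms).
-- 'APPOWNER' is a placeholder schema name.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'APPOWNER',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE 1',  -- 9i default behaviour
    cascade          => TRUE);
END;
/
```

If the old plans come back, you know where to focus; you can then reintroduce histograms selectively on the columns that need them.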
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk -
Oracle BPEL 11G performance issue
Hi
We are facing performance issues in executing our composite process in Oracle SOA 11g.
We have installed an admin server and 2 managed servers in a cluster on the same box. Machine utilization reached almost 95% when I started the admin server and the 2 managed servers (min and max heap size of 1 GB each at startup). So I shut down one managed server and increased the JVM heap of the other to 2 GB, and found that the heap reaches 1.5 GB on startup (observed using JConsole).
The machine is a Windows server with 4 GB of RAM.
Our process requires multiple records to be processed, which are retrieved using a database query.
We have created 2 composites
The first composite has 2 BPEL processes. BPEL 1 executes the DB query and retrieves the result, and based on the result we invoke BPEL 2,
which makes around 4 DB calls and passes the result to the next composite. The final BPEL process (BPEL 3) has multiple select and update queries and is DB-intensive.
When we retrieve 500 records from BPEL 1 and process them, half way through we face an out-of-memory exception. So we are using throttling, but even then, while executing BPEL 3, we face an out-of-memory exception.
Can you let me know how to find the heap memory taken by each BPEL process during its execution? Where in the console can I get the memory-usage details, so that I can find which BPEL is consuming more memory and work on optimising it?
We are actually expecting around 100,000 (1 lakh) messages and above per day, and need to check how this process can handle that, and also how to determine or increase the capacity of the Windows box.
any immediate help is highly appreciated
thanks
Always raise a case with Oracle Support for such issues.
Regards,
Anuj -
Oracle Apps Database severe Performance Issue
Hi Gurus,
This is regarding a severe performance issue running in our Production E-Business Suite Instance.
It's an R12.1.3 setup installed with an 11.2.0.1 database. All the servers are Solaris SPARC 64-bit (Solaris 10).
Let me brief you about the instance first:
2 Node Application
- Main Application Server hosting web/forms/concurrent/admin servers
- iSupplier server hosting web services (placed in DMZ, used by external suppliers via Internet)
1 Node Database Server
Database Server Specs
Memory: 144G phys mem 20G total swap
- CPUs (8Px4cores, 2Px2cores)
- I/O - fiber channel hard disk (hitachi SAN Storage) - 7 DATA_TOPs (7 drives with RAID 5) - current DB size 1.6 TB
- at peak load, around 1000 concurrent forms session and 2000 web sessions.
We have been facing some serious performance issues and we raised an SR with Oracle Support.
Support analyzed a bunch of AWR reports we provided them and asked us to increase the DB_CACHE from its current usage of 27G to 40G.
So we changed SGA_TARGET from 35G to 50G, and the PGA was increased from 35G to 40G, as v$pgastat was also suggesting a lack of memory.
We made these changes last night.
Today morning we observed the following:
1. After the start of office hours, the EM DB Console home page showed that ADDM was reporting a reduced impact from lack of SGA memory, which seemed to be a good sign. Earlier it was around 25%; it was now at 12%.
However, negative aspects were:
1. lot of swapping was reported by the System Administrators on the DB Server
2. High CPU Usage
3. The EM DB Console showed a lot of "Concurrency" wait-class events; throughout the day many blocking sessions were reported, which were making other sessions wait.
In the AWR report, the following top foreground events were listed:
Top 5 Timed Foreground Events

Event                      Waits       Time(s)   Avg wait (ms)   % DB time   Wait Class
DB CPU                                 132,577                   61.46
library cache lock         3,539       40,683    11496           18.86       Concurrency
library cache: mutex X     4,014,083   21,011    5               9.74        Concurrency
db file sequential read    4,138,014   20,767    5               9.63        User I/O
latch free                 381,916     5,897     15              2.73        Other
This is showing "library cache lock" events as the main culprit apart from the usual suspect, the CPU.
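To drill into those library cache lock waits, a query along these lines (a sketch against the standard v$ views; the BLOCKING_SESSION column is populated in 10g and later) shows who is waiting and which session is blocking them:

```sql
-- Sketch: sessions currently waiting on library cache events,
-- together with the session blocking each of them.
SELECT sid, serial#, username, event,
       blocking_session, seconds_in_wait
FROM   v$session
WHERE  event LIKE 'library cache%'
ORDER  BY seconds_in_wait DESC;
```

Once you have the blocking SID, its SQL and module usually point at the DDL or invalidation activity behind the locks.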
I am attaching the AWR Report. Please let me know if i should revert back the memory changes or is there anything else i could do.
Please help us resolving it because the performance is going worst.
Regards,
Muneer.
Please do not post duplicates - Oracle Apps Database severe Performance Issue
For all critical production issues, please work with Support through SRs - using the forums to troubleshoot production issues is not wise -
Oracle 9i reading BLOB performance issues
Windows XP Pro SP2
JDK 1.5.0_05
Oracle 9i
Oracle Thin Driver for JDK 1.4 v.10.2.0.1.0
DBCP v.1.2.1
Spring v1.2.7 (I am using the JDBC template for convenience)
I have run into serious performance issues reading BLOBs from Oracle using oracle's JDBC thin driver. I am not sure if it a constraint/mis-configuration with oracle or a JDBC problem.
I am hoping that someone has some experience accessing multi-MB BLOBs under heavy volume.
We are considering using Oracle 8 or 9 as a document repository. It will end up storing hundreds of thousands of PDFs that can be as large as 30 MBs. We don't have access to Oracle 10.
TESTS
I am running tests against Oracle 8 and 9 to simulate single- and multi-threaded document access. Our goal is to get a sense of KBps throughput and BLOB data-access contention.
DATA
There is a single test table with 100 rows. Each row has a PK id and a BLOB field. The blobs range in size from a few dozen KB to 12MB. They represent a valid sample of production data. The total data size is approx. 121 MBs.
Single Threaded Test
The test selects a single blob object at a time and then reads the contents of the blob's binary input stream in 2 KB chunks. At the end of the test, it will have accessed all 100 blobs and streamed all 121 MBs. The test harness is JUnit.
8i Results: On 8i it starts and terminates successfully on a steady and reliable basis. The throughput hovers around 4.8 MBps.
9i Results: Similar reliability to 8i. The throughput is about 30% better.
Multi-Threaded Test
The multi-threaded test uses the same "blob reader" functionality used in the single threaded test. However, it spawns 8 threads each running a separate "blob reader".
8i Results: The tests successfully complete on a reliable basis. The aggregate throughput of all 8 threads is a bit more than 4.8 MBps.
9i Results: Erratic. The tests were highly erratic on 9i. Threads would intermittently lock when accessing a BLOB's stream. Sometimes they lock accessing data from the same row, other times it is distinct rows. The number and the timing of the thread "locks" is indeterminate. When the test completed successfully, the aggregate throughput of the 8 threads was approx. 5.4 MBps.
I would be more than happy to post code or the data model if that would help.
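One server-side option we are also considering (a sketch; the table and column names below are placeholders for our real schema) is enabling buffer-cache caching for LOB reads, since LOB segments default to NOCACHE and every read then goes to disk:

```sql
-- Sketch: cache LOB reads in the buffer cache (placeholder names).
ALTER TABLE doc_store MODIFY LOB (pdf_content) (CACHE READS);
```

Whether this helps under 8 concurrent readers is exactly what the multi-threaded test should tell us.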
Carlos
Hi Murphy16,
Try to investigate where the principal issues are in your RAC system.
Check:
* Expensive SQL's;
* Sorts in disks;
* Wait Events;
* Interconnect hardware issues;
* Applications doing unnecessary manual LOCKs (SQL);
* If the SGA is adequately sized (take care not to spill into swap space on disk);
* Backups and unnecessary jobs running during business hours (relocate these jobs and backups to a night window or a less work-intensive hour for the database);
* Rebuild indexes and identify tables that must be reorganized (fragmentation);
* Check for other software consuming resources on your server.
Please give us more info about your environment. The steps above are general, but you can use them to guide you through basic performance issues.
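The first item on the checklist above, finding expensive SQL, can be sketched as a simple query against v$sqlarea (the top-10 cutoff is arbitrary):

```sql
-- Sketch: top 10 statements by total elapsed time since startup.
-- ELAPSED_TIME is reported in microseconds.
SELECT *
FROM  (SELECT sql_id, executions, buffer_gets, disk_reads,
              ROUND(elapsed_time / 1e6, 1) AS elapsed_secs
       FROM   v$sqlarea
       ORDER  BY elapsed_time DESC)
WHERE  ROWNUM <= 10;
```

Statspack or AWR will give you the same information per interval, which is usually more useful than cumulative-since-startup figures.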
Regards,
Rodrigo Mufalani
http://mufalani.blogspot.com -
Oracle 10.2.0.3 performance issue
Hi all,
I have a performance issue in the database I currently maintain.
Here's the specifications:
- Windows 2003 Server 64bit
- Oracle 10g 10.2.0.3 patch 31
- Application Server 10gR2 OC4J (Forms and Report Services)
The server was re-installed about 3 weeks ago after it got viruses.
I believe my memory parameter setup was fine, because the system ran fine for two weeks, even as the end-of-day process gradually took longer.
However, starting 2 days ago, out of nowhere, the end-of-day process got really slow (from 11 minutes to 1 hour).
The IT standby at my client will normally run an analyze schema, and after that everything goes back to normal again.
I turned off the GATHER_STATS_JOB 2 weeks ago and replaced it with DBMS_STATS.GATHER_DATABASE_STATS which I scheduled to execute every Friday.
This problem occurred even before the server was re-installed and as usual, analyze schema (GATHER_SCHEMA_STATS) will fix it.
I don't have many options currently, and analyzing the schema seems to be the only solution, which we normally never do for other clients (at least not analyzing the schema every day).
I hope any of you could help me out with any solution on how to trace the exact problem on this.
Thank you,
Adhika
Hi Satish,
This end-of-day process basically inserts some values into certain tables and then generates reports.
I managed to get the AWR report within the time of the end of day process.
SQL execute elapsed time is at the top of the Time Model Statistics.
What I don't understand here is why this issue happened only after 2 weeks.
I suspect that Oracle picked the wrong execution plan, and normally a wrong execution plan is caused by outdated statistics on the indexes and tables; but what I cannot understand is why this is happening on Monday morning, when GATHER_DATABASE_STATS ran successfully on Friday night.
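One way to test this plan-flip theory (a sketch using the AWR history views, which require the Diagnostics Pack; '&sql_id' is a placeholder for the suspect statement from the AWR report) is to check whether more than one plan_hash_value has been recorded for it:

```sql
-- Sketch: per-snapshot plan and average elapsed time for one statement.
SELECT snap_id, plan_hash_value, executions_delta,
       ROUND(elapsed_time_delta / NULLIF(executions_delta, 0) / 1e6, 2)
         AS secs_per_exec
FROM   dba_hist_sqlstat
WHERE  sql_id = '&sql_id'
ORDER  BY snap_id;
```

If the plan_hash_value changes at the snapshot where Monday's slowdown begins, the Friday stats run is the likely trigger.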
Yesterday when the analyze schema was executed again, this morning I got another email saying that the performance issue occurred again.
The top most wait events are: db file sequential read, db file scattered read, log file parallel write, LNS Wait on SENDREQ, and log file sequential read.
Thank you,
Adhika