Performance issues with Oracle EE 9.2.0.4 and RedHat 2.1
Hello,
I am having some serious performance issues with Oracle Enterprise Edition 9.2.0.4 on Red Hat Linux 2.1. The CPU goes berserk at 100% for long periods (some 5 minutes), and all of the RAM gets used.
Some environment characteristics:
Machine: Intel Pentium IV 2.0GHz with 1GB of RAM.
OS: Red Hat Enterprise Linux 2.1.
Oracle: Oracle Enterprise Edition 9.2.0.4
Application: We have a small web application with 10 users (for now) and very basic queries (all in stored procedures). We also use the latest version of ODP.NET with default connection settings (some low pooling, etc.).
Does anyone know what could be going on?
Is anybody else seeing this same behavior?
We switched from SQL Server, so we are not the world's experts on the matter. But we want a reliable system nonetheless.
Please help us out; give us some tips, tricks, or guides.
Thanks to all,
Frank
Thank you very much, and sorry I couldn't write sooner. It seems that the administrator doesn't see much kswapd activity, so I don't really know what is going on.
We are looking at some queries and some indexing, but this is nuts. If I had some poor queries, which we don't really, the server would show a peak, right?
But it goes crazy, with two Oracle processes taking all the resources. There seems to be little swapping going on.
So now what? They are already talking about MS SQL. Please help me out here, this is crazy!
We have maybe the most powerful combination here. What is Oracle doing?
We even killed the IIS worker process so that no one was doing anything with the database, and still those two processes keep going.
Can someone help me?
Thanks,
Frank
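A hedged first diagnostic step (an editor's sketch, not part of the original thread): when an Oracle shadow process pins the CPU, the busy OS PID shown by `top` can be mapped back to a database session and the SQL it is running. This assumes SELECT privilege on the V$ views (names as in Oracle 9i); `&os_pid` is a SQL*Plus substitution variable you supply.

```sql
-- Map a CPU-bound OS process back to its Oracle session (run in SQL*Plus).
-- &os_pid is the PID reported by `top` for the busy oracle process.
SELECT s.sid,
       s.serial#,
       s.username,
       s.status,
       q.sql_text
FROM   v$session s
       JOIN v$process p ON p.addr = s.paddr
       LEFT JOIN v$sqltext q ON q.address = s.sql_address
WHERE  p.spid = '&os_pid'
ORDER  BY q.piece;
```

If the session turns out to be a background process rather than a user session, the username column will be NULL and the problem lies elsewhere (e.g. an over-aggressive checkpoint or archiver).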
Similar Messages
-
Performance issue with Oracle data source
Hi all,
I've a rather strange problem that I'm stuck on and need some assistance with.
I have a rules file which pulls data in via a SQL data source that is an Oracle server. If I cut and paste the three sections of SELECT, FROM, and WHERE into SQL Developer and run the query, it takes less than 1 second to complete. When I run the "load data" with this rules file, or even use "Retrieve" from the rules file editor, it takes up to an hour to complete/retrieve the data.
The table in question has millions of rows, and I'm using one of the indexed fields to retrieve the data. It's as if the Essbase rules file is ignoring the index, or I have a configuration issue with the ODBC settings on the server that is causing the problem.
ODBC.INI file entry for the Oracle server as follows (changed any sensitive info to xxx or 999).
[XXX]
Driver=/opt/data01/hyperion/common/ODBC-64/Merant/5.2/lib/ARora22.so
Description=DataDirect 5.2 Oracle Wire Protocol
AlternateServers=
ApplicationUsingThreads=1
ArraySize=60000
CachedCursorLimit=32
CachedDescLimit=0
CatalogIncludesSynonyms=1
CatalogOptions=0
ConnectionRetryCount=0
ConnectionRetryDelay=3
DefaultLongDataBuffLen=1024
DescribeAtPrepare=0
EnableDescribeParam=0
EnableNcharSupport=0
EnableScrollableCursors=1
EnableStaticCursorsForLongData=0
EnableTimestampWithTimeZone=0
HostName=999.999.999.999
LoadBalancing=0
LocalTimeZoneOffset=
LockTimeOut=-1
LogonID=xxx
Password=xxx
PortNumber=1521
ProcedureRetResults=0
ReportCodePageConversionErrors=0
ServiceType=0
ServiceName=xxx
SID=
TimeEscapeMapping=0
UseCurrentSchema=1
Can anyone please advise on this lack of performance?
Thanks in advance
Bagpuss
One other thing that I've seen is that if your Oracle data source and Essbase server are in different geographic locations, you can get some delay when it retrieves data over the WAN. I guess there is some handshaking going on when passing the data from Oracle to Essbase (either per record or per group of records) that is slowed WAY down over the WAN.
Our solution was to take the query out of the load rule, run it via SQL*Plus on a command line at the geographic location where the Oracle database is, then FTP the resulting file to where the Essbase server is.
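The command-line extract described above can be sketched as a SQL*Plus spool script. This is a sketch only: the spool path, credentials, and column/table names are placeholders, and the SELECT should be replaced by the one from the load rule.

```sql
-- extract.sql: run as `sqlplus user/password@db @extract.sql`
SET HEADING OFF FEEDBACK OFF PAGESIZE 0 LINESIZE 2000 TRIMSPOOL ON
SPOOL /tmp/essbase_load.dat

-- The load rule's query goes here; concatenate columns into the
-- delimited layout the Essbase rules file expects.
SELECT col_a || ',' || col_b || ',' || col_c
FROM   source_table
WHERE  load_date = TRUNC(SYSDATE);

SPOOL OFF
EXIT
```

The resulting flat file can then be transferred (FTP, scp) to the Essbase host and loaded locally, avoiding the per-record WAN round trips.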
With upwards of 6 million records being retrieved, it took around 4 hours in the load rule, but running the query via command line took 10 minutes, then the ftp took less than 5. -
Performance issue with Oracle Text index
Hi Experts,
We are on Oracle 11.2.0.3 on Solaris 10. I have implemented Oracle Text in our environment, and I am facing a strange performance issue.
One SQL statement having a CONTAINS clause is taking forever (more than 20 minutes) and still does not complete. This SQL has a CONTAINS clause, an EXISTS clause, and a NOT EXISTS clause.
Now if I remove the EXISTS and NOT EXISTS clauses, it completes fast, but with those two clauses it just takes forever. It is late at night, so I am not able to post the table and SQL query details and will do so tomorrow, but based on this general description, are there any pointers for me to review?
--sql query that runs fine:
SELECT
U.CLNT_OID, U.USR_OID, S.MAILADDR
FROM
access_usr U
INNER JOIN access_sia S
ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
WHERE U.CLNT_OID = 'ABCX32S'
AND CONTAINS(LAST_NAME , 'TO%' ) >0
--sql query that hangs forever:
SELECT
U.CLNT_OID, U.USR_OID, S.MAILADDR
FROM
access_usr U
INNER JOIN access_sia S
ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
WHERE U.CLNT_OID = 'ABCX32S'
AND CONTAINS(LAST_NAME , 'TO%' ) >0
and exists (--one clause here with a few table joins)
and not exists (--one clause here with a few table joins);
--Now another strange thing I found: if instead of 'TO%' in this SQL I use 'ZZ%' or 'L1%', it works fast, but with 'TO%' it goes slow with those two EXISTS/NOT EXISTS clauses!
I will be most thankful for the inputs.
OrauserN
Hi Barbara,
First of all, thanks a lot for reviewing the issue.
Unluckily, making the change to empty_stoplist did not work out. Today I am copying here the entire SQL that has this issue, and I will be most thankful for more insights/pointers on what can be done.
Here is the entire sql:
SELECT U.CLNT_OID,
U.USR_OID,
S.EMAILADDRESS,
U.FIRST_NAME,
U.LAST_NAME,
S.JOBCODE,
S.LOCATION,
S.DEPARTMENT,
S.ASSOCIATEID,
S.ENTERPRISECOMPANYCODE,
S.EMPLOYEEID,
S.PAYGROUP,
S.PRODUCTLOCALE
FROM ACCESS_USR U
INNER JOIN
ACCESS_SIA S
ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
WHERE U.CLNT_OID = 'G39NY3D25942TXDA'
AND EXISTS
(SELECT 1
FROM ACCESS_USR_GROUP_XREF UGX
INNER JOIN ACCESS_GROUP RELG
ON RELG.CLNT_OID = UGX.CLNT_OID
AND RELG.GROUP_OID = UGX.GROUP_OID
INNER JOIN ACCESS_GROUP G
ON G.CLNT_OID = RELG.CLNT_OID
AND G.GROUP_TYPE_OID = RELG.GROUP_TYPE_OID
WHERE UGX.CLNT_OID = U.CLNT_OID
AND UGX.USR_OID = U.USR_OID
AND G.GROUP_OID = 920512943
AND UGX.INCLUDED = 1)
AND NOT EXISTS
(SELECT 1
FROM ACCESS_USR_GROUP_XREF UGX
INNER JOIN
ACCESS_GROUP G
ON G.CLNT_OID = UGX.CLNT_OID
AND G.GROUP_OID = UGX.GROUP_OID
WHERE UGX.CLNT_OID = U.CLNT_OID
AND UGX.USR_OID = U.USR_OID
AND G.GROUP_OID = 920512943
AND UGX.INCLUDED = 1)
AND CONTAINS (U.LAST_NAME, 'Bon%') > 0;
Like I said before, if the EXISTS and NOT EXISTS clauses are removed, it works in under a second. But with those EXISTS and NOT EXISTS clauses it takes anywhere from 25 minutes to more than one hour.
Note also that it was not TO% but Bon% in the CONTAINS clause that is giving the issue; sorry, that was wrong on my part.
Also please see below the Oracle Text index defined on the table ACCESS_USR:
--definition of preferences used in the index:
SET SERVEROUTPUT ON size unlimited
WHENEVER SQLERROR EXIT SQL.SQLCODE
DECLARE
v_err VARCHAR2 (1000);
v_sqlcode NUMBER;
v_count NUMBER;
BEGIN
ctxsys.ctx_ddl.create_preference ('cust_lexer', 'BASIC_LEXER');
ctxsys.ctx_ddl.set_attribute ('cust_lexer', 'base_letter', 'YES'); -- removes diacritics
EXCEPTION
WHEN OTHERS
THEN
v_err := SQLERRM;
v_sqlcode := SQLCODE;
v_count := INSTR (v_err, 'DRG-10701');
IF v_count > 0
THEN
DBMS_OUTPUT.put_line (
'The required preference named CUST_LEXER with BASIC LEXER is already set up');
ELSE
RAISE;
END IF;
END;
/
DECLARE
v_err VARCHAR2 (1000);
v_sqlcode NUMBER;
v_count NUMBER;
BEGIN
ctxsys.ctx_ddl.create_preference ('cust_wl', 'BASIC_WORDLIST');
ctxsys.ctx_ddl.set_attribute ('cust_wl', 'SUBSTRING_INDEX', 'true'); -- to improve performance
EXCEPTION
WHEN OTHERS
THEN
v_err := SQLERRM;
v_sqlcode := SQLCODE;
v_count := INSTR (v_err, 'DRG-10701');
IF v_count > 0
THEN
DBMS_OUTPUT.put_line (
'The required preference named CUST_WL with BASIC WORDLIST is already set up');
ELSE
RAISE;
END IF;
END;
/
--now below is the code of the index:
CREATE INDEX ACCESS_USR_IDX3 ON ACCESS_USR
(FIRST_NAME)
INDEXTYPE IS CTXSYS.CONTEXT
PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
CREATE INDEX ACCESS_USR_IDX4 ON ACCESS_USR
(LAST_NAME)
INDEXTYPE IS CTXSYS.CONTEXT
PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
The strange thing is that, like I said, if I remove the EXISTS clause the query returns very fast. Also, if I modify the query to use only the NOT EXISTS clause and remove the EXISTS clause, it returns in less than one second. Likewise, removing the EXISTS clause and keeping only the NOT EXISTS clause returns in less than 4 seconds. But with both clauses it runs forever!
When I tried to use dbms_xplan.display_cursor to get the query plan (for the case with both the EXISTS and NOT EXISTS clauses in the query), it said the previous statement's SQL ID was 0 or something like that, so I was not able to see the plan. I will keep trying to get this plan (it takes 25 minutes to one hour each time, but I will get this info soon). Again, any pointers are most helpful.
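One hedged way around the "previous statement's SQL ID was 0" problem (an editor's sketch; the LIKE pattern below is illustrative) is to look the slow statement up in V$SQL by its text and pass the found SQL_ID to DBMS_XPLAN.DISPLAY_CURSOR explicitly, which is available on 11.2:

```sql
-- Find the cursor for the slow statement.
SELECT sql_id, child_number, elapsed_time
FROM   v$sql
WHERE  sql_text LIKE 'SELECT U.CLNT_OID,%';

-- Substitute the sql_id found above to display its plan.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL));
```

This works even after the session has run other statements, since the plan is fetched from the shared pool by SQL_ID rather than as "the previous statement".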
Regards
OrauserN -
View objects performance issue with oracle seeded tables
When I write a view object on Oracle seeded tables like MTL_PARAMETERS, it takes a long time to show in the OAF page. I am trying to display all of these view object columns in the detail disclosure of an advanced table. My application takes more than two minutes to display the columns of a query which returns just 200 rows. Please help me improve performance when my query uses seeded tables.
This issue happens only with R12 view objects and advanced tables.
Edited by: vlsn on Jun 24, 2012 11:36 PM
Performance issue with joins on table VBAK, VBEP, VBKD and VBAP
hi all,
i have a report where there is a join on all 4 tables VBAK, VBEP, VBKD and VBAP.
the report is giving performance issues because of this join.
all the key fields are used for joining the tables, but some non-key fields like vbap-vstel, vbap-abgru and vbep-wadat are also part of the select query and are getting filled.
Because of these, there is a performance issue.
is there any way i can improve the performance of the join select query?
i am trying "for all entries" clause...
kindly provide any alternative if possible.
thanks.
Hi,
Pls perform some of the below steps as applicable for the performance improvement:
a) Remove join on all the tables and put joins only on header and item (VBAK & VBAP).
b) code should have separate select for VBEP and VBKD.
c) remove the non-key fields from the where clause. Once you retrieve the data from the database into the internal table, sort the table and delete the entries that do not match on the non-key fields vstel, abgru and wadat.
d) the last option is to create an index on the VBAP & VBEP tables for the fields vstel, abgru & wadat (not advisable)
e) buffering option on database tables also possible.
f) select only the fields into the internal table that are needed for the processing logic, and the select query should contain the field names in the same order as in the database table.
Hope this helps.
Regards
JLN -
Performance issue with Oracle Global Temporary table
Hi
Oracle version : 10.2.0.3.0 - Production
We have an application in Java / Oracle. User requests come in XML; an Oracle parser parses each request and inserts it into global temporary tables, and then a business stored procedure picks the data from these GTTs and does the required processing.
At the end, the required response data is inserted into response GTTs, from which the response XML is generated.
Question: Does the use of global temporary tables in Oracle degrade performance? We have a large number of GTTs in our application, approx. 5-600 such tables.
Regards,
Vikas Kumar
Hi All,
Here is the architecture of my application:
The Java application creates XML from the screen values and then inserts that XML into a framework (separate DB schema) table. Then Java calls a stored procedure in the same framework DB, and in the SP we have the following steps.
1. It fetches the XML from the XML type table and inserts it into a screen-specific XMLTYPE table in the framework DB schema. This table has a trigger which parses the XML and then inserts the XML values into GTTs which are created in separate product schemas.
2. It calls the product SP, and in the product SP we have the business logic. The product SP does the execution and then inserts the response into a response GTT.
3. The response XML is created using an XML generation function and the response GTT.
I hope you will understand my architecture this time; now let me know whether GTTs are good in this scenario or not. Also please note that I need the data in the GTTs only during execution and not after that; I don't want to do an explicit delete, which I would have to do if I were using normal tables.
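On the "I don't want to do an explicit delete" point: a GTT declared with ON COMMIT DELETE ROWS empties itself automatically at commit (rows are private to the transaction), so no cleanup DELETE is needed. A minimal sketch, with illustrative table and column names:

```sql
-- Rows inserted into this GTT are visible only to the inserting
-- transaction and vanish automatically at COMMIT.
CREATE GLOBAL TEMPORARY TABLE request_gtt (
  request_id  NUMBER,
  payload     XMLTYPE
) ON COMMIT DELETE ROWS;

-- With ON COMMIT PRESERVE ROWS instead, rows survive until the
-- session ends, which suits multi-transaction processing.
```

Since the original poster needs the data only during execution of one stored-procedure call, ON COMMIT DELETE ROWS matches that lifecycle exactly.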
Regards,
Vikas Kumar -
Hi,
Installed
Oracle Content Services 10.1.2.3 and Oracle AS (SOA Suite) 10.1.3.3.
Goal
Deploying custom webservices on the Oracle AS which invoke the Content Services webservices API. These custom webservices are invoked from ESB processes to upload content.
What does work
The pre-installed oc4j home of Oracle AS 10.1.3.3 has no pre-defined shared library for Oracle Content Services. So if we create a shared library containing the Oracle Content Services 10.1.2.3 jars (axis.jar, commons-discovery-0.2.jar, commons-logging-1.0.3.jar, content-ws-client.jar, http_client.jar and wsdl4j-1.5.1.jar) and include this shared library during deployment of our custom webservices, content is nicely uploaded without problems.
Problem
However, we created another oc4j instance (oc4j_services) on which these webservices should be deployed and run instead of the oc4j home instance. This time Oracle AS already creates a pre-defined shared library for Content Services containing a 10.1.3 version. This library is not compatible with Content Services 10.1.2.3.
We tried to create another shared library with the same name (oracle.ifs.client) but another version number (10.1.2.3) and import it during deployment of our webservices (excluding the 10.1.3 version); we also placed the Content Services 10.1.2.3 jars into the oc4j home applib and oc4j_services applib directories. Nonetheless, we either get a ClassNotFound or ExceptionInInitializer error.
Anyone has a solution to this?
Regards, Ronald
The log file shows:
ERROR [AJPRequestHandler-RMICallHandler-6]: oracle.classloader.util.AnnotatedNoClassDefFoundError:
Missing class: oracle.ifs.fdk.RemoteLoginManagerServiceLocator
Dependent class: oracle.ifs.examples.ws.WsConnection
Loader: FAC_WS_Content_Services.root:0.0.0
Code-Source: /oracle/as_soa/j2ee/oc4j_services/applications/FAC_WS_Content_Services/FAC_WS_Content_Services.jar
Configuration: <ejb> in /oracle/as_soa/j2ee/oc4j_services/applications/FAC_WS_Content_Services
The missing class is available from the following locations:
1. Code-Source: /oracle/as_soa/j2ee/oc4j_services/shared-lib/oracle.ifs.client/10.1.2.3.0/content-ws-client.jar (from <code-source> in /oracle/as_soa/j2ee/oc4j_services/config/server.xml)
This code-source is available in loader oracle.ifs.client:10.1.2.3.0. This shared-library can be imported by the "FAC_WS_Content_Services" application.
But, if I select the shared library in Oracle AS EM, it states that my application (FAC_WS_Content_Services) does import this shared library.
Regards, Ronald -
Performance issues with Logic Pro 9
Hi,
I'm experiencing quite big performance issues with Logic Pro 9:
bad midi timing and sync, user interface is kinda slowing and less reactive, ...
Mac Pro 2x2.26 Ghz Quad, 4Gb ram, early 2009, shipped with 10.5.6 now upgraded to 10.5.8
Should I upgrade to 10.6.2?
Thanx
Hi,
I'm using the same machine ...
I never get any performance issues.
I bought my Mac in March 2009 and I was ready to work in less than 3 days without issue...
so more details about your setup are required to answer!
4GB RAM???
I suggest you check your RAM!!!
The RAM must be compatible with the Xeon Nehalem specification!
If you added 1GB of RAM from an unknown manufacturer, I suggest disconnecting that additional 1GB of RAM.
G -
Performance issue with Crystal when upgrading Oracle to 11g
Dear,
I am facing a performance issue with Crystal Reports and Oracle 11g, as follows:
On the report server, I have created an ODBC data source to connect to another Oracle 11g server. Also on the report server, I have created and published a folder containing all of my Crystal reports. These reports connect to the Oracle 11g server via ODBC.
And I have a Tomcat server to run my application; in my application I refer to the report folder on the report server.
This setup works with SQL Server and Oracle 9i or 10g, but it has a performance issue with Oracle 11g.
Please let me know the root cause.
Notes: the report server and Tomcat server are 32-bit Windows, but Oracle is on 64-bit Windows, and I have upgraded to DataDirect Connect ODBC version 6.1 but the issue is not resolved.
Please help me to solve it.
Thanks so much,
Anh
Hi Anh,
Use a third party ODBC test tool now. SQL Plus will be using the Native Oracle client so you can't compare performance.
Download our old tool called SQLCON: https://smpdl.sap-ag.de/~sapidp/012002523100006252882008E/sqlcon32.zip
Connect and then click on the SQL tab and paste in the SQL from the report and time that test.
I believe the issue is because the Oracle client is 64 bit, you should install the 32 bit Oracle Client. If using the 64 bit client then the client must thunk ( convert 64 bit data to 32 bit data format ) which is going to take more time.
If you can use OLE DB or using the Oracle Server driver ( native driver ) should be faster. ODBC puts another layer on top of the Oracle client so it too takes time to communicate between the layers.
Thank you
Don -
Performance issue with view selection after migration from oracle to MaxDb
Hello,
After the migration from Oracle to MaxDB, we have serious performance issues with a lot of our table view selections.
Does anybody know about this problem and how to solve it ??
Best regards !!!
Gert-Jan
Hello Gert-Jan,
most probably you need additional indexes to get better performance.
Using the command monitor you can identify the long running SQL statements and check the optimizer access strategy. Then you can decide which indexes might help.
If this is about an SAP system, you can find additional information about performance analysis in SAP notes 725489 and 819641.
SAP Hosting provides the so-called service 'MaxDB Migration Support' to help you in such cases. The service description can be found here:
http://www.saphosting.de/mediacenter/pdfs/solutionbriefs/MaxDB_de.pdf
http://www.saphosting.com/mediacenter/pdfs/solutionbriefs/maxDB-migration-support_en.pdf.
Best regards,
Melanie Handreck -
Performance issues with pipelined table functions
I am testing pipelined table functions to be able to re-use the base_query function. Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something? The processor function is from [url http://www.oracle-developer.net/display.php?id=429]improving performance with pipelined table functions.
Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
Many thanks in advance.
CREATE OR REPLACE PACKAGE pipeline_example
IS
TYPE resultset_typ IS REF CURSOR;
TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
TYPE table_typ IS TABLE OF row_typ;
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ;
c_default_limit CONSTANT PLS_INTEGER := 100;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
END pipeline_example;
CREATE OR REPLACE PACKAGE BODY pipeline_example
IS
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ
IS
o_resultset resultset_typ;
BEGIN
OPEN o_resultset FOR
SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB;
RETURN o_resultset;
END base_query;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
IS
aa_source_data table_typ;-- := table_typ ();
BEGIN
LOOP
FETCH p_source_data
BULK COLLECT INTO aa_source_data
LIMIT p_limit_size;
EXIT WHEN aa_source_data.COUNT = 0;
/* Process the batch of (p_limit_size) records... */
FOR i IN 1 .. aa_source_data.COUNT
LOOP
PIPE ROW (aa_source_data (i));
END LOOP;
END LOOP;
CLOSE p_source_data;
RETURN;
END processor;
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT /*+ PARALLEL(t, 5) */ colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM TABLE (processor (base_query (argA, argB),100)) t
GROUP BY colC
ORDER BY colC;
END with_pipeline;
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM (SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB)
GROUP BY colC
ORDER BY colC;
END no_pipeline;
END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;
Edited by: Earthlink on Nov 14, 2010 9:47 AM
Edited by: Earthlink on Nov 14, 2010 11:31 AM
Edited by: Earthlink on Nov 14, 2010 11:32 AM
Edited by: Earthlink on Nov 20, 2010 12:04 PM
Edited by: Earthlink on Nov 20, 2010 12:54 PM
Earthlink wrote:
Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
Performance issues with the Vouchers index build in SES
Hi All,
We are currently performing an upgrade for: PS FSCM 9.1 to PS FSCM 9.2.
As a part of the upgrade, Client wants Oracle SES to be deployed for some modules including, Purchasing, Payables (Vouchers)
We are facing severe performance issues with the Vouchers index build. (Volume of data = approx. 8.5 million rows of data)
The index creation process runs for over 5 days.
Can you please share any information or issues that you may have faced on your project and how they were addressed?
Check the following logs for errors:
1. The message log from the process scheduler
2. search_server1-diagnostic.log in /search_server1/logs directory
If the build is getting stuck while crawling, then we typically have to increase the Java heap size for the WebLogic instance for SES.
Performance issues with the Tuxedo MQ Adapter
We are experiencing some performance issues with the MQ Adapter. For example, we are seeing that the MQ Adapter takes from 10 to 100 ms to read a single message from the queue and send it to the Tuxedo service. The Tuxedo service takes 80 ms in its execution, so there is a considerable amount of time spent in the MQ adapter that we cannot explain.
Also, we have seen a lot of rolled-back transactions on the MQ adapter; for example, we got 980 rollbacks for 15736 transactions sent, and only the MQ adapter is involved in the rollback. However, the operations are executed properly. The error we got is:
135027.122.hqtux101!MQI_QMTESX01.7636.1.0: gtrid x0 x4ec1491f x25b59: LIBTUX_CAT:376: ERROR: tpabort: xa_rollback returned XA_RBROLLBACK.
I have looked for information on the Oracle site, but I have not found anything. Could you or someone from your team help me?
Hi Todd,
We have 6 MQI adapters reading from 5 different queues, but in this case we are writing in only one queue.
Someone from Oracle told us that the XA_RBROLLBACK occurs because we have 6 MQ adapters reading from the same queues, and when one adapter finds a message and tries to get it, it can happen that another MQ adapter gets it first. In that case, the MQ adapter rolls back the transaction. Even when we get some XA_RBROLLBACK errors, we don't lose messages. Also, I read that when XA sends an xa_end call to the MQ adapter, it actually does the rollback then, so when the MQ adapter receives the xa_rollback call, it answers with XA_RBROLLBACK. Is that true?
However, I am more worried about the performance. We are putting a request message on an MQ queue and waiting for the reply. In some cases it takes 150 ms, and in other cases it takes much longer (more than 400 ms). The average is 300 ms. The MQ adapter calls a service (txgralms0) which takes 110 ms on average.
This is our configuration:
"MQI_QMTESX01" SRVGRP="g03000" SRVID=3000
CLOPT="-- -C /tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg"
RQPERM=0600 REPLYQ=N RPPERM=0600 MIN=6 MAX=6 CONV=N
SYSTEM_ACCESS=FASTPATH
MAXGEN=1 GRACE=86400 RESTART=N
MINDISPATCHTHREADS=0 MAXDISPATCHTHREADS=1 THREADSTACKSIZE=0
SICACHEENTRIESMAX="500"
/tuxedo/qt/txqgral00/control/src/MQI_QMTESX01.cfg:
*SERVER
MINMSGLEVEL=0
MAXMSGLEVEL=0
DEFMAXMSGLEN=4096
TPESVCFAILDATA=Y
*QUEUE_MANAGER
LQMID=QMTESX01
NAME=QMTESX01
*SERVICE
NAME=txgralms0
FORMAT=MQSTR
TRAN=N
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KGCRQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KGCPQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KPSAQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KPINQ01
*QUEUE
LQMID=QMTESX01
MQNAME=QAT.Q.NACAR.TO.TUX.KDECQ01
Thanks in advance,
Marling -
Performance issues with data warehouse loads
We have performance issues with our data warehouse ETL load process. I have run ANALYZE and DBMS_STATS and checked the database environment. What other things can I do to optimize performance? I cannot use Statspack since we are running Oracle 8i. Thanks
Scott
Hi,
you should analyze the DB after you have loaded the tables.
Do you use sequences to generate PKs? Do you have a lot of indexes and/or triggers on the tables?
If yes:
make sure your sequences cache (ALTER SEQUENCE s CACHE 10000).
Drop all unneeded indexes while loading and disable triggers if possible.
How big is your redo log buffer? When loading a large amount of data, it may be an option to enlarge this buffer.
Do you have more than one DBWR process? Writing in parallel can speed things up when a checkpoint is needed.
Is it possible to use a direct load? Or do you already direct load?
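The suggestions above can be sketched as follows (a hedged sketch only; the object names are illustrative, and disabling a constraint or index is safe only if the load data is known to be clean):

```sql
-- Larger sequence cache: fewer dictionary updates per batch of PKs.
ALTER SEQUENCE sales_seq CACHE 10000;

-- Disable costly constraints/indexes before the bulk load, if safe.
ALTER TABLE sales_fact DISABLE CONSTRAINT sales_fact_fk1;

-- Direct-path insert: writes above the high-water mark, bypassing
-- the buffer cache and most redo for the table data.
INSERT /*+ APPEND */ INTO sales_fact
SELECT * FROM staging_sales;
COMMIT;   -- required before the table can be read again in this session

ALTER TABLE sales_fact ENABLE CONSTRAINT sales_fact_fk1;
```

Re-gather statistics (ANALYZE or DBMS_STATS, as the poster already does) after the load completes, not before.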
Dim -
Performance issue with Jdeveloper
Hi Guys,
I am experiencing a strange performance issue with JDeveloper 10.1.3.3.0.4157. There are many other threads regarding performance issues in this forum, but my problem is a little different.
I have two computers: one is an Athlon 3200+ with Vista and the other is a P4 dual-core 6400 with XP (Service Pack 2). Both have 2 GB of memory.
I am running the same simple project on both computers, but only the one with Vista has the problem. The problem is very similar to the one mentioned in this thread:
Re: IDE has become extremely slow?
But it's much worse. It only happens on JSF pages. Basically, any operation on a JSF page is very slow. Loading the page, changing the attributes of a button in the source editor, or even clicking items in the design view takes forever.
The first weird thing is that it may use 100% CPU but never recover, meaning the 100% CPU usage never stops, or when it stops, JDeveloper stops responding.
The second weird thing is that the project is not big. Actually, it's very small. The problem started last week, and there were no big changes during that period. The only thing I can say is that we created two more JSF pages.
The third weird thing is that the problem never happened on the P4+XP box. When I open the project on the P4+XP box, it's always fast with no CPU spikes.
Any advice is welcome!
Thanks,
Steven
Hi Guys,
I re-made a simple test project for this problem, and now I can always reproduce the problem in JDeveloper on both systems (XP & Vista). Every time I open this jspx file in the source editor and try to scroll up/down the source file, or manually delete an attribute, JDeveloper hangs and the CPU usage is 0%.
Here is the content of the test file:
<?xml version='1.0' encoding='windows-1252'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.0"
xmlns:h="http://java.sun.com/jsf/html"
xmlns:f="http://java.sun.com/jsf/core"
xmlns:af="http://xmlns.oracle.com/adf/faces"
xmlns:afh="http://xmlns.oracle.com/adf/faces/html">
<jsp:output omit-xml-declaration="true" doctype-root-element="HTML"
doctype-system="http://www.w3.org/TR/html4/loose.dtd"
doctype-public="-//W3C//DTD HTML 4.01 Transitional//EN"/>
<jsp:directive.page contentType="text/html;charset=windows-1252"/>
<f:view>
<afh:html binding="#{backing_streettypedetail.html1}" id="html1">
<afh:head title="streettypedetail"
binding="#{backing_streettypedetail.head1}" id="head1">
<meta http-equiv="Content-Type"
content="text/html; charset=windows-1252"/>
</afh:head>
<afh:body binding="#{backing_streettypedetail.body1}" id="body1">
<af:messages binding="#{backing_streettypedetail.messages1}"
id="messages1"/>
<h:form binding="#{backing_streettypedetail.form1}" id="form1">
<af:panelForm binding="#{backing_streettypedetail.panelForm1}"
id="panelForm1">
<af:inputText value="#{bindings.streetTypeID.inputValue}"
label="#{bindings.streetTypeID.label}"
required="#{bindings.streetTypeID.mandatory}"
columns="#{bindings.streetTypeID.displayWidth}"
binding="#{backing_streettypedetail.inputText1}"
id="inputText1">
<af:validator binding="#{bindings.streetTypeID.validator}"/>
</af:inputText>
<af:inputText value="#{bindings.description.inputValue}"
label="#{bindings.description.label}"
required="#{bindings.description.mandatory}"
columns="#{bindings.description.displayWidth}"
binding="#{backing_streettypedetail.inputText2}"
id="inputText2">
<af:validator binding="#{bindings.description.validator}"/>
</af:inputText>
<af:inputText value="#{bindings.abbr.inputValue}"
label="#{bindings.abbr.label}"
required="#{bindings.abbr.mandatory}"
columns="#{bindings.abbr.displayWidth}"
binding="#{backing_streettypedetail.inputText3}"
id="inputText3">
<af:validator binding="#{bindings.abbr.validator}"/>
</af:inputText>
<f:facet name="footer">
<h:panelGroup binding="#{backing_streettypedetail.panelGroup1}"
id="panelGroup1">
<af:commandButton text="Save"
binding="#{backing_streettypedetail.saveButton}"
id="saveButton"
actionListener="#{bindings.mergeEntity.execute}"
action="#{userState.retrieveReturnNavigationRule}"
disabled="#{!bindings.mergeEntity.enabled}"
partialSubmit="false">
<af:setActionListener from="#{true}"
to="#{userState.refresh}"/>
</af:commandButton>
<af:commandButton text="Cancel"
binding="#{backing_streettypedetail.cancelButton}"
action="#{userState.retrieveReturnNavigationRule}"
id="cancelButton">
<af:setActionListener from="#{false}"
to="#{userState.refresh}"/>
</af:commandButton>
</h:panelGroup>
</f:facet>
</af:panelForm>
</h:form>
</afh:body>
</afh:html>
</f:view>
<!--oracle-jdev-comment:auto-binding-backing-bean-name:backing_streettypedetail-->
</jsp:root>
Can anybody take a look at the file and let me know what's wrong with it?
Thanks in advance.
Steven