Query Runtime is very high
Dear All,
I have a cube in which I derive fiscal period and fiscal year from the billing date. I have two queries on that cube: one based on fiscal period and the other based on billing date. Both queries used to run fine, but for the last few days the billing-date query hangs when I try to run it, while the other one still works. Oddly, when I drill down to billing date from fiscal period, or browse the data directly via Cube -> Manage -> Content, I get results quickly. What could be the reasons? Please guide.
regards:
jitendra
Edited by: Jitendra Gupta on Oct 11, 2010 9:59 AM
Thanks for your response.
I had already regenerated the query in RSRT and tried it without success, but this time it is working fine. God knows what happened.
Anyway, thanks.
Regards:
Jitendra
Similar Messages
-
Issue with Query OLAP time very high
Hello guys,
I ran my query in RSRT and noticed that the OLAP time (QOLAPTIME) was almost 432 seconds (the query had crossed the 65,535-record limit by then). The DB time was 70 seconds.
1. Are the above times in seconds?
2. What performance techniques can I implement to improve the OLAP time? I think aggregates, indexing, and partitioning only improve the DB time.
3. I already have the cache active for this query.
Any suggestions?
Please don't post the same question across the different forums.
Edited by: Moderator on Jul 8, 2009 11:46 AM
Hello,
One more thing: do any of the standard techniques of indexing, partitioning, or aggregate creation help in decreasing the OLAP time?
Those techniques help DB time but are of no use for OLAP time. Restricted key figures (RKFs) don't add OLAP time, but calculated key figures (CKFs) and cell calculations do.
In your post you said there are more than 65,535 rows. In my experience that is the main cause of high OLAP time. Why do users want so many rows? The result is almost impossible to read, and you can imagine how long it takes to transfer that much data from the BW server to the user (which shows up as high OLAP time).
Please reduce the number of rows with a filter or something similar. If you can't reduce the row count, I don't think the OLAP time will come down.
Regards,
Frank -
XML select query causing very high CPU usage.
Hi All,
In our Oracle 10.2.0.4 two-node RAC we are seeing very high CPU usage, and all of the top CPU-consuming processes are executing the SQL below. These statements are also waiting on some gc wait events, as shown below.
SELECT B.PACKET_ID FROM CM_PACKET_ALT_KEY B, CM_ALT_KEY_TYPE C, TABLE(XMLSEQUENCE ( EXTRACT (:B1 , '/AlternateKeys/AlternateKey') )) T
WHERE B.ALT_KEY_TYPE_ID = C.ALT_KEY_TYPE_ID AND C.ALT_KEY_TYPE_NAME = EXTRACTVALUE (VALUE (T), '/AlternateKey/@keyType')
AND B.ALT_KEY_VALUE = EXTRACTVALUE (VALUE (T), '/AlternateKey')
AND NVL (B.CHILD_BROKER_CODE, '6209870F57C254D6E04400306E4A78B0') =
NVL (EXTRACTVALUE (VALUE (T), '/AlternateKey/@broker'), '6209870F57C254D6E04400306E4A78B0')
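One avenue worth examining for this statement (an untested sketch, not a verified fix): on 10gR2 the TABLE(XMLSEQUENCE(EXTRACT(...))) pattern with repeated EXTRACTVALUE calls can often be rewritten with XMLTABLE, which shreds the document and projects the attributes once per row. The column names and VARCHAR2 sizes below are assumptions:

```sql
-- Hypothetical XMLTABLE rewrite of the query above; column sizes are guesses.
SELECT b.packet_id
  FROM cm_packet_alt_key b,
       cm_alt_key_type c,
       XMLTABLE('/AlternateKeys/AlternateKey' PASSING :b1
                COLUMNS key_type  VARCHAR2(100) PATH '@keyType',
                        broker    VARCHAR2(100) PATH '@broker',
                        key_value VARCHAR2(400) PATH '.') t
 WHERE b.alt_key_type_id   = c.alt_key_type_id
   AND c.alt_key_type_name = t.key_type
   AND b.alt_key_value     = t.key_value
   AND NVL(b.child_broker_code, '6209870F57C254D6E04400306E4A78B0') =
       NVL(t.broker,            '6209870F57C254D6E04400306E4A78B0');
```

Whether this actually reduces CPU would need to be confirmed with a plan comparison on your system.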
SQL> select sid,event,state from gv$session where state='WAITING' and event not like '%SQL*Net%';
SID EVENT STATE
66 jobq slave wait WAITING
124 gc buffer busy WAITING
143 gc buffer busy WAITING
147 db file sequential read WAITING
222 Streams AQ: qmn slave idle wait WAITING
266 gc buffer busy WAITING
280 gc buffer busy WAITING
314 gc cr request WAITING
317 gc buffer busy WAITING
392 gc buffer busy WAITING
428 gc buffer busy WAITING
471 gc buffer busy WAITING
518 Streams AQ: waiting for time management or cleanup tasks WAITING
524 Streams AQ: qmn coordinator idle wait WAITING
527 rdbms ipc message WAITING
528 rdbms ipc message WAITING
532 rdbms ipc message WAITING
537 rdbms ipc message WAITING
538 rdbms ipc message WAITING
539 rdbms ipc message WAITING
540 rdbms ipc message WAITING
541 smon timer WAITING
542 rdbms ipc message WAITING
543 rdbms ipc message WAITING
544 rdbms ipc message WAITING
545 rdbms ipc message WAITING
546 rdbms ipc message WAITING
547 gcs remote message WAITING
548 gcs remote message WAITING
549 gcs remote message WAITING
550 gcs remote message WAITING
551 ges remote message WAITING
552 rdbms ipc message WAITING
553 rdbms ipc message WAITING
554 DIAG idle wait WAITING
555 pmon timer WAITING
79 jobq slave wait WAITING
117 gc buffer busy WAITING
163 PX Deq: Execute Reply WAITING
205 db file parallel read WAITING
247 gc current request WAITING
279 jobq slave wait WAITING
319 LNS ASYNC end of log WAITING
343 jobq slave wait WAITING
348 direct path read WAITING
372 db file scattered read WAITING
475 jobq slave wait WAITING
494 gc cr request WAITING
516 Streams AQ: qmn slave idle wait WAITING
518 Streams AQ: waiting for time management or cleanup tasks WAITING
523 Streams AQ: qmn coordinator idle wait WAITING
528 rdbms ipc message WAITING
529 rdbms ipc message WAITING
530 Streams AQ: waiting for messages in the queue WAITING
532 rdbms ipc message WAITING
537 rdbms ipc message WAITING
538 rdbms ipc message WAITING
539 rdbms ipc message WAITING
540 rdbms ipc message WAITING
541 smon timer WAITING
542 rdbms ipc message WAITING
543 rdbms ipc message WAITING
544 rdbms ipc message WAITING
545 rdbms ipc message WAITING
546 rdbms ipc message WAITING
547 gcs remote message WAITING
548 gcs remote message WAITING
549 gcs remote message WAITING
550 gcs remote message WAITING
551 ges remote message WAITING
552 rdbms ipc message WAITING
553 rdbms ipc message WAITING
554 DIAG idle wait WAITING
555 pmon timer WAITING
I am not at all able to understand what this SQL is. I think it's related to some XML datatype.
I am also not able to generate an execution plan for this SQL using EXPLAIN PLAN; I get an error (ORA-00932: inconsistent datatypes: expected - got -).
Please help me with this issue.
How can I generate an execution plan?
Can this type of XML-based query cause high gc wait events and buffer busy waits?
How can I tune this query?
How can I confirm that this is the only query causing the high CPU usage?
Our servers have 64 GB RAM and 16 CPUs.
The OS is Solaris 5.10, with UDP as the interconnect protocol.
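On the EXPLAIN PLAN error: ORA-00932 here is typical when a bind variable is an XMLType, since EXPLAIN PLAN cannot describe that bind. A workaround (a sketch; the sql_id value is a placeholder you would first look up in V$SQL) is to pull the runtime plan of the already-executed statement from the cursor cache with DBMS_XPLAN.DISPLAY_CURSOR:

```sql
-- Locate the statement in the cursor cache (the text filter is illustrative).
SELECT sql_id, child_number, executions
  FROM v$sql
 WHERE sql_text LIKE 'SELECT B.PACKET_ID FROM CM_PACKET_ALT_KEY%';

-- Display the actual runtime plan ('abcd1234wxyz' is a placeholder sql_id).
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('abcd1234wxyz', NULL, 'TYPICAL'));
```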
-Yasser
I found some more XML queries, shown below.
SELECT XMLELEMENT("Resource", XMLATTRIBUTES(RAWTOHEX(RMR.RESOURCE_ID) AS "resourceID", RMO.OWNER_CODE AS "ownerCode", RMR.MIME_TYPE AS "mimeType",RMR.FILE_SIZE AS "fileSize", RMR.RESOURCE_STATUS AS "status"), (SELECT XMLAGG(XMLELEMENT("ResourceLocation", XMLATTRIBUTES(RAWTOHEX(RMRP.REPOSITORY_ID) AS "repositoryID", RAWTOHEX(DIRECTORY_ID) AS "directoryID", RESOURCE_STATE AS "state", RMRO.RETRIEVAL_SEQ AS "sequence"), XMLFOREST(FULL_PATH AS "RemotePath"))ORDER BY RMRO.RETRIEVAL_SEQ) FROM RM_RESOURCE_PATH RMRP, RM_RETRIEVAL_ORDER RMRO, RM_LOCATION RML WHERE RMRP.RESOURCE_ID = RMR.RESOURCE_ID AND RMRP.REPOSITORY_ID = RMRO.REPOSITORY_ID AND RMRO.LOCATION_ID = RML.LOCATION_ID AND RML.LOCATION_CODE = :B2 ) AS "Locations") FROM RM_RESOURCE RMR, RM_OWNER RMO WHERE RMR.OWNER_ID = RMO.OWNER_ID AND RMR.RESOURCE_ID = HEXTORAW(:B1 )
SELECT XMLELEMENT ( "Resources", XMLAGG(XMLELEMENT ( "Resource", XMLATTRIBUTES (B.RESOURCE_ID AS "id"), XMLELEMENT ("ContentType", C.CONTENT_TYPE_CODE), XMLELEMENT ("TextExtractStatus", B.TEXT_EXTRACTED_STATUS), XMLELEMENT ("MimeType", B.MIME_TYPE), XMLELEMENT ("NumberPages", TO_CHAR (B.NUM_PAGES)), XMLELEMENT ("FileSize", TO_CHAR (B.FILE_SIZE)), XMLELEMENT ("Status", B.STATUS), XMLELEMENT ("ContentFormat", D.CONTENT_FORMAT_CODE), G.ALTKEY )) ) FROM CM_PACKET A, CM_RESOURCE B, CM_REF_CONTENT_TYPE C, CM_REF_CONTENT_FORMAT D, ( SELECT XMLELEMENT ( "AlternateKeys", XMLAGG(XMLELEMENT ( "AlternateKey", XMLATTRIBUTES ( H.ALT_KEY_TYPE_NAME AS "keyType", E.CHILD_BROKER_CODE AS "broker", E.VERSION AS "version" ), E.ALT_KEY_VALUE )) ) ALTKEY, E.RESOURCE_ID RES_ID FROM CM_RESOURCE_ALT_KEY E, CM_RESOURCE F, CM_ALT_KEY_TYPE H WHERE E.RESOURCE_ID = F.RESOURCE_ID(+) AND F.PACKET_ID = HEXTORAW (:B1 ) AND E.ALT_KEY_TYPE_ID = H.ALT_KEY_TYPE_ID GROUP BY E.RESOURCE_ID) G WHERE A.PACKET_ID = HEXTORAW (:B1
SELECT XMLELEMENT ("Tagging", XMLAGG (GROUPEDCAT)) FROM ( SELECT XMLELEMENT ( "TaggingCategory", XMLATTRIBUTES (CATEGORY1 AS "categoryType"), XMLAGG (LISTVALUES) ) GROUPEDCAT FROM (SELECT EXTRACTVALUE ( VALUE (T), '/TaggingCategory/@categoryType' ) CATEGORY1, XMLCONCAT(EXTRACT ( VALUE (T), '/TaggingCategory/TaggingValue' )) LISTVALUES FROM TABLE(XMLSEQUENCE(EXTRACT ( :B1 , '/Tagging/TaggingCategory' ))) T) GROUP BY CATEGORY1)
SELECT XMLCONCAT ( :B2 , DI_CONTENT_PKG.GET_ENUM_TAGGING_FN (:B1 ) ) FROM DUAL
SELECT XMLCONCAT (:B2 , :B1 ) FROM DUAL
SELECT * FROM EQ_RAW_TAG_ERROR A WHERE TAG_LIST_ID = :B2 AND EXTRACTVALUE (A.RAW_TAG_XML, '/TaggingValues/TaggingValue/Value' ) = :B1 AND A.STATUS = 'NR'
SELECT RAWTOHEX (S.PACKET_ID) AS PACKET_ID, PS.PACKET_STATUS_DESC, S.LAST_UPDATE AS LAST_UPDATE, S.USER_ID, S.USER_COMMENT, MAX (T.ALT_KEY_VALUE) AS ALTKEY, 'Y' AS IS_PACKET FROM EQ_PACKET S, CM_PACKET_ALT_KEY T, CM_REF_PACKET_STATUS PS WHERE S.STATUS_ID = PS.PACKET_STATUS_ID AND S.PACKET_ID = T.PACKET_ID AND NOT EXISTS (SELECT 1 FROM CM_RESOURCE RES WHERE RES.PACKET_ID = S.PACKET_ID AND EXISTS (SELECT 1 FROM CM_REF_CONTENT_FORMAT CF WHERE CF.CONTENT_FORMAT_ID = RES.CONTENT_FORMAT AND CF.CONTENT_FORMAT_CODE = 'I_FILE')) GROUP BY RAWTOHEX (S.PACKET_ID), PS.PACKET_STATUS_DESC, S.LAST_UPDATE, S.USER_ID, S.USER_COMMENT UNION SELECT RAWTOHEX (A.FATAL_ERROR_ID) AS PACKET_ID, C.PACKET_STATUS_DESC, A.OCCURRENCE_DATE AS LAST_UPDATE, '' AS USER_ID, '' AS USER_COMMENT, RAWTOHEX (A.FATAL_ERROR_ID) AS ALTKEY, 'N' AS IS_PACKET FROM EQ_FATAL_ERROR A, EQ_ERROR_MSG B, CM_REF_PACKET_STATUS C, EQ_SEVERITYD WHERE A.PACKET_ID IS NULL AND A.STATUS = 'NR' AND A.ERROR_MSG_ID = B.ERROR_MSG_ID AND B.SEVERITY_I
SELECT /*+ INDEX(e) INDEX(a) INDEX(c)*/ XMLAGG(XMLELEMENT ( "TaggingCategory", XMLATTRIBUTES ( G.TAG_CATEGORY_CODE AS "categoryType" ), XMLELEMENT ("TaggingValue", XMLATTRIBUTES (C.IS_PRIMARY AS "primary", H.ORIGIN_CODE AS "origin"), XMLAGG(XMLELEMENT ( "Value", XMLATTRIBUTES ( F.TAG_LIST_CODE AS "listType" ), E.TAG_VALUE )) ) )) FROM TABLE (CAST (:B1 AS T_TAG_MAP_HIERARCHY_TAB)) A, TABLE (CAST (:B2 AS T_ENUM_TAG_TAB)) C, REM_TAG_VALUE E, REM_TAG_LIST F, REM_TAG_CATEGORY G, CM_ORIGIN H WHERE E.TAG_VALUE_ID = C.TAG_VALUE_ID AND F.TAG_LIST_ID = E.TAG_LIST_ID AND G.TAGGING_CATEGORY_ID = F.TAGGING_CATEGORY_ID AND H.ORIGIN_ID = C.ORIGIN_ID AND C.ENUM_TAG_ID = A.MAPPED_ENUM_TAG_ID GROUP BY C.IS_PRIMARY, H.ORIGIN_CODE, G.TAG_CATEGORY_CODE START WITH A.MAPPED_ENUM_TAG_ID = HEXTORAW (:B3 ) CONNECT BY PRIOR A.MAPPED_ENUM_TAG_ID = A.ENUM_TAG_ID
SELECT /*+ INDEX(e) */ XMLAGG(XMLELEMENT ( "TaggingCategory", XMLATTRIBUTES ( G.TAG_CATEGORY_CODE AS "categoryType" ), XMLELEMENT ( "TaggingValue", XMLATTRIBUTES (C.IS_PRIMARY AS "primary", H.ORIGIN_CODE AS "origin"), XMLAGG(XMLCONCAT ( XMLELEMENT ( "Value", XMLATTRIBUTES ( F.TAG_LIST_CODE AS "listType" ), E.TAG_VALUE ), CASE WHEN LEVEL = 1 THEN :B4 ELSE NULL END )) ) )) FROM TABLE (CAST (:B1 AS T_TAG_MAP_HIERARCHY_TAB)) A, TABLE (CAST (:B2 AS T_ENUM_TAG_TAB)) C, REM_TAG_VALUE E, REM_TAG_LIST F, REM_TAG_CATEGORY G, CM_ORIGIN H WHERE E.TAG_VALUE_ID = C.TAG_VALUE_ID AND F.TAG_LIST_ID = E.TAG_LIST_ID AND G.TAGGING_CATEGORY_ID = F.TAGGING_CATEGORY_ID AND H.ORIGIN_ID = C.ORIGIN_ID AND C.ENUM_TAG_ID = A.MAPPED_ENUM_TAG_ID GROUP BY G.TAG_CATEGORY_CODE, C.IS_PRIMARY, H.ORIGIN_CODE START WITH A.MAPPED_ENUM_TAG_ID = HEXTORAW (:B3 ) CONNECT BY PRIOR A.MAPPED_ENUM_TAG_ID = A.ENUM_TAG_ID
Observing the above SQL queries, I found some hints forcing index usage.
I think the XML schema is already created... and it's progressing as you stated above. Please correct me if I am wrong.
I found all these SQL statements in an AWR report, and all of them are very high resource-consuming queries.
And I am really sorry if I am irritating you by asking all these basic questions about XML.
-Yasser
Edited by: YasserRACDBA on Nov 17, 2009 3:39 PM
Did syntax alignment. -
RSRT statistics data - high query runtime
Hi
I have a query on a MultiProvider that has a few fields with exception aggregation up to two levels: document number and item. I executed the query in RSRT with the "Display Statistics Data" and "Do Not Use Cache" settings from the debug options. When I go to the Aggregation Layer tab of the statistics screen, I see that the basic InfoProvider and the aggregates are listed twice or three times. The InfoProviders hold a huge volume of data, and the number of records read and transported for the duplicate entries shown is the same.
Initially I thought it was because of the two levels of exception aggregation, but then I ran another query on the same MultiProvider with similar exception aggregation and found that there are no duplicate entries; the basic InfoProviders are read only once.
The query is running very slowly and I need to tune its performance.
I would like to know why the InfoProvider is being read twice for this query alone.
Regards,
Sujai
OK! Then it could be because of some other reason that the InfoProvider is read twice. This won't be happening because of nested exception aggregation.
Try changing the query one step at a time and check what makes it read the InfoProvider twice. -
Long Query Runtime/Web-template Loading time
Hi,
We are having a very critical performance issue: a long query runtime, which is certainly not acceptable to the client.
<b>Background Information</b>
We are using web application designer (WAD) 2004s release to design front end of our reports built in BI 7.0 system.
<b>Problem Area</b>
Loading of web template on browser
<b>Problem Analysis</b>
The query takes a long time to run whenever we load it through the portal or even directly through Web Application Designer. The current runtime is more than a minute, and I have noticed that 95% of the runtime goes to loading the variable screen. FYI, if I run the query through Query Designer or BEx Analyzer, it takes 3-5 seconds to execute.
We have collected all the statistics, and everything shows that the query itself takes no time to execute; it is the loading time that creates the bottleneck.
<b>Possible Cause</b>
The web template holds 11 data providers, 5 of which are based on queries and the rest on query views. These data providers load into memory in parallel, which could cause the delay.
These data providers expose detailed variable screens. Of the 21 input fields exposed by the web template, 8 are based on hierarchy-node variables and 1 on a hierarchy variable. To my knowledge, a hierarchy/hierarchy-node variable loads the complete hierarchy into memory each time it is called (in other words, using hierarchies is not performance-efficient).
I request you to treat this as a high-priority matter and suggest ways to remove the bottlenecks and make the application perform efficiently. Please let me know if you need any further information.
Thanks.
Shabbar
I would recommend you check how long the query execution actually takes without running it from the web template. If the individual query itself takes a long time, you need to do some performance tuning on the back-end side (aggregates, indexing, and so on).
But if the performance issue occurs only with web templates, then you need to look for SAP Notes on it; I remember we had to apply some notes relating to the browser taking too long to load the selection screen in web reports.
After exhausting all those options, I would implement precalculation of the query result using the Broadcaster.
thanks.
Wond -
Shared Pool: KGH No ACCESS Is Very High
If you run the following query, it shows a very high value for KGH: NO ACCESS (around 5 GB):
select * from v$sgastat where pool = 'shared pool' and name in ('free memory', 'sql area', 'library cache', 'miscellaneous', 'row cache', 'KGH: NO ACCESS')
What does that KGH mean?
Hi,
As you have sga_target set, ASMM is enabled, and that could be the cause of the high KGH: NO ACCESS value. Have a look at:
Common Cause for ORA-4031 in 10gR2, Excess "KGH: NO ACCESS" Memory Allocation [Video] (Doc ID 801787.1)
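To check whether ASMM is behind it, one option (a sketch, not taken from the note above) is to look at the resize history: KGH: NO ACCESS marks buffer-cache granules loaned into the shared pool, so frequent grow/shrink cycles between those two components are the usual suspect:

```sql
-- Recent ASMM grow/shrink operations on the shared pool and buffer cache.
SELECT component, oper_type, initial_size, final_size, end_time
  FROM v$sga_resize_ops
 WHERE component IN ('shared pool', 'DEFAULT buffer cache')
 ORDER BY end_time;
```

Many rows here in a short window would support the ASMM explanation; setting minimum values for shared_pool_size and db_cache_size is the commonly suggested mitigation.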
Anand -
Regarding Variable entry at the query runtime
Hi
In the Query Designer many variables are defined as user-entry, but when I run the query I get only a few fields where I can enter values; the other variables are not displayed for entry and take some value as default, even though they have no default value. When I check the Query Properties, all variables are displayed, but at query runtime very few appear.
Please suggest how I can get all the variable fields at query runtime to enter values. It is very urgent; please send your suggestions soon.
Points will be assigned.
Regards
Balaji
Balaji,
I suspect some of your variables have been personalized. Run the query; once it has run, select Change Query -> Change Variable Values from the BEx toolbar. Do any variables have little yellow smiley faces on the right-hand side of the input area? If so, you may want to remove the personalization by selecting each face and choosing Reset Personalization.
Regards
Gill -
Logical reads are very high when run as a sproc and very low when run as a script
Hello
Have a question,
When I execute a sproc, I get a very high logical-read count, but when I run the same sproc converted into a script it has very low logical reads. What does this mean?
I would like you to compare the query plan during the ad-hoc run versus the stored procedure execution. As others pointed out, it could be due to parameter sniffing.
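One quick way to test the parameter-sniffing theory (the procedure and parameter names below are hypothetical) is to compare logical reads for the cached plan against a plan compiled for the current values:

```sql
-- Report logical reads per statement in the Messages tab.
SET STATISTICS IO ON;

-- Reuses the cached plan, which may have been compiled ("sniffed")
-- for very different parameter values.
EXEC dbo.usp_GetOrders @CustomerId = 42;

-- Forces a fresh compile for these exact values; if logical reads
-- drop dramatically, parameter sniffing is the likely culprit.
EXEC dbo.usp_GetOrders @CustomerId = 42 WITH RECOMPILE;
```

If the recompiled run is much cheaper, common fixes include OPTION (RECOMPILE) on the offending statement or OPTIMIZE FOR hints.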
Balmukund Lakhani
Please mark solved if I've answered your question, vote for it as helpful to help other users find a solution quicker
This posting is provided "AS IS" with no warranties, and confers no rights.
My Blog |
Team Blog | @Twitter
| Facebook
Author: SQL Server 2012 AlwaysOn -
Paperback, Kindle -
Very high cpu utilization with mq broker
Hi all,
I see very high CPU utilization (400% on an 8-CPU server) when I connect consumers to OpenMQ. It increases by close to 100% for every consumer I add. Slowly the consumers come to a halt, as the producers are sending messages at a good rate too.
Environment Setup
Glassfish version 2.1
com.sun.messaging.jmq Version Information Product Compatibility Version: 4.3 Protocol Version: 4.3 Target JMS API Version: 1.1
Cluster set up using persistent storage; a snippet from the broker log:
Java Runtime: 1.6.0_14 Sun Microsystems Inc. /home/user/foundation/jdk-1.6/jre [06/Apr/2011:12:48:44 EDT] IMQ_HOME=/home/user/foundation/sges/imq [06/Apr/2011:12:48:44 EDT] IMQ_VARHOME=/home/user/foundation/installation/node-agent-server1/server1/imq [06/Apr/2011:12:48:44 EDT] Linux 2.6.18-164.10.1.el5xen i386 server1 (8 cpu) user [06/Apr/2011:12:48:44 EDT] Java Heap Size: max=394432k, current=193920k [06/Apr/2011:12:48:44 EDT] Arguments: -javahome /home/user/foundation/jdk-1.6 -Dimq.autocreate.queue=false -Dimq.autocreate.topic=false -Dimq.cluster.masterbroker=mq://server1:37676/ -Dimq.cluster.brokerlist=mq://server1:37676/,mq://server2:37676/ -Dimq.cluster.nowaitForMasterBroker=true -varhome /home/user/foundation/installation/node-agent-server1/server1/imq -startRmiRegistry -rmiRegistryPort 37776 -Dimq.imqcmd.user=admin -passfile /tmp/asmq5711749746025968663.tmp -save -name clusterservercom -port 37676 -bgnd -silent [06/Apr/2011:12:48:44 EDT] [B1004]: Starting the portmapper service using tcp [ 37676, 50, * ] with min threads 1 and max threads of 1 [06/Apr/2011:12:48:45 EDT] [B1060]: Loading persistent data...
I followed the steps in http://middlewaremagic.com/weblogic/?p=4884 to narrow it down to the threads that were causing the high CPU. Both were around 94%.
Following is the stack for those threads.
"Thread-jms[224]" prio=10 tid=0xd635f400 nid=0x5665 runnable [0xd18fe000] java.lang.Thread.State: RUNNABLE at com.sun.messaging.jmq.jmsserver.data.TransactionList.isConsumedInTransaction(TransactionList.java:697) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:918) - locked <0xf3d35730> (a java.util.Collections$SynchronizedMap) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:810) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.destroyConsumer(ConsumerHandler.java:577) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.handle(ConsumerHandler.java:422) at com.sun.messaging.jmq.jmsserver.data.PacketRouter.handleMessage(PacketRouter.java:181) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.readData(IMQIPConnection.java:1489) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.process(IMQIPConnection.java:644) at com.sun.messaging.jmq.jmsserver.service.imq.OperationRunnable.process(OperationRunnable.java:170) at com.sun.messaging.jmq.jmsserver.util.pool.BasicRunnable.run(BasicRunnable.java:493) at java.lang.Thread.run(Thread.java:619) Locked ownable synchronizers: - None
"Thread-jms[214]" prio=10 tid=0xd56c8000 nid=0x566c waiting for monitor entry [0xd2838000] java.lang.Thread.State: BLOCKED (on object monitor) at com.sun.messaging.jmq.jmsserver.data.TransactionInformation.isConsumedMessage(TransactionList.java:2544) - locked <0xdbeeb538> (a com.sun.messaging.jmq.jmsserver.data.TransactionInformation) at com.sun.messaging.jmq.jmsserver.data.TransactionList.isConsumedInTransaction(TransactionList.java:697) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:918) - locked <0xe4c9abf0> (a java.util.Collections$SynchronizedMap) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:810) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.destroyConsumer(ConsumerHandler.java:577) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.handle(ConsumerHandler.java:422) at com.sun.messaging.jmq.jmsserver.data.PacketRouter.handleMessage(PacketRouter.java:181) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.readData(IMQIPConnection.java:1489) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.process(IMQIPConnection.java:644) at com.sun.messaging.jmq.jmsserver.service.imq.OperationRunnable.process(OperationRunnable.java:170) at com.sun.messaging.jmq.jmsserver.util.pool.BasicRunnable.run(BasicRunnable.java:493) at java.lang.Thread.run(Thread.java:619) Locked ownable synchronizers: - None
"Thread-jms[213]" prio=10 tid=0xd65be800 nid=0x5670 runnable [0xd1a28000] java.lang.Thread.State: RUNNABLE at com.sun.messaging.jmq.jmsserver.data.TransactionList.isConsumedInTransaction(TransactionList.java:697) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:918) - locked <0xe4c4bad8> (a java.util.Collections$SynchronizedMap) at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:810) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.destroyConsumer(ConsumerHandler.java:577) at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.handle(ConsumerHandler.java:422) at com.sun.messaging.jmq.jmsserver.data.PacketRouter.handleMessage(PacketRouter.java:181) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.readData(IMQIPConnection.java:1489) at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.process(IMQIPConnection.java:644) at com.sun.messaging.jmq.jmsserver.service.imq.OperationRunnable.process(OperationRunnable.java:170) at com.sun.messaging.jmq.jmsserver.util.pool.BasicRunnable.run(BasicRunnable.java:493) at java.lang.Thread.run(Thread.java:619) Locked ownable synchronizers: - None
Any ideas will be appreciated.
--
Thanks, ak, for the response.
Yes, the messages are consumed in transactions. I set imq.txn.reapLimit=200 in the Start Arguments of the JVM configuration.
I verified that it is being set in the log.txt file for the broker:
-Dimq.autocreate.queue=false -Dimq.autocreate.topic=false -Dimq.txn.reapLimit=250
It did not make any difference. Do I need to set this property somewhere else?
As far as upgrading MQ is concerned, I am using GlassFish 2.1, and I think MQ 4.3 is packaged with it. Can you suggest a safe way to upgrade to OpenMQ 4.5 in a running environment? I can bring down the cluster temporarily. Can I just change a jar file somewhere to use MQ 4.5?
Here is a snippet of the consumer code:
I create the Connection in @PostConstruct and close it in @PreDestroy, so that I don't have to do it every time.
private ResultMessage[] doRetrieve(String username, String password, String jndiDestination, String filter, int maxMessages, long timeout, RetrieveType type)
throws InvalidCredentialsException, InvalidFilterException, ConsumerException {
// Resources
Session session = null;
try {
if (log.isTraceEnabled()) log.trace("Creating transacted session with JMS broker.");
session = connection.createSession(true, Session.SESSION_TRANSACTED);
// Locate bound destination and create consumer
if (log.isTraceEnabled()) log.trace("Searching for named destination: " + jndiDestination);
Destination destination = (Destination) ic.lookup(jndiDestination);
if (log.isTraceEnabled()) log.trace("Creating consumer for named destination " + jndiDestination);
MessageConsumer consumer = (filter == null || filter.trim().length() == 0) ? session.createConsumer(destination) : session.createConsumer(destination, filter);
if (log.isTraceEnabled()) log.trace("Starting JMS connection.");
connection.start();
// Consume messages
if (log.isDebugEnabled()) log.trace("Creating retrieval containers.");
List<ResultMessage> processedMessages = new ArrayList<ResultMessage>(maxMessages);
BytesMessage jmsMessage = null;
for (int i = 0 ; i < maxMessages ; i++) {
// Attempt message retrieve
if (log.isTraceEnabled()) log.trace("Attempting retrieval: " + i);
switch (type) {
case BLOCKING :
jmsMessage = (BytesMessage) consumer.receive();
break;
case IMMEDIATE :
jmsMessage = (BytesMessage) consumer.receiveNoWait();
break;
case TIMED :
jmsMessage = (BytesMessage) consumer.receive(timeout);
break;
}
// Process retrieved message
if (jmsMessage != null) {
if (log.isTraceEnabled()) log.trace("Message retrieved\n" + jmsMessage);
// Extract message
if (log.isTraceEnabled()) log.trace("Extracting result message container from JMS message.");
byte[] extracted = new byte[(int) jmsMessage.getBodyLength()];
jmsMessage.readBytes(extracted);
// Decompress message
if (jmsMessage.propertyExists(COMPRESSED_HEADER) && jmsMessage.getBooleanProperty(COMPRESSED_HEADER)) {
if (log.isTraceEnabled()) log.trace("Decompressing message.");
extracted = decompress(extracted);
}
// Done processing message
if (log.isTraceEnabled()) log.trace("Message added to retrieval container.");
String signature = jmsMessage.getStringProperty(DIGITAL_SIGNATURE);
processedMessages.add(new ResultMessage(extracted, signature));
} else
if (log.isTraceEnabled()) log.trace("No message was available.");
}
// Package return container
if (log.isTraceEnabled()) log.trace("Packing retrieved messages to return.");
ResultMessage[] collectorMessages = new ResultMessage[processedMessages.size()];
for (int i = 0 ; i < collectorMessages.length ; i++)
collectorMessages[i] = processedMessages.get(i);
if (log.isTraceEnabled()) log.trace("Returning " + collectorMessages.length + " messages.");
return collectorMessages;
} catch (NamingException ex) {
sessionContext.setRollbackOnly();
log.error("Unable to locate named queue: " + jndiDestination, ex);
throw new ConsumerException("Unable to locate named queue: " + jndiDestination, ex);
} catch (InvalidSelectorException ex) {
sessionContext.setRollbackOnly();
log.error("Invalid filter: " + filter, ex);
throw new InvalidFilterException("Invalid filter: " + filter, ex);
} catch (IOException ex) {
sessionContext.setRollbackOnly();
log.error("Message decompression failed.", ex);
throw new ConsumerException("Message decompression failed.", ex);
} catch (GeneralSecurityException ex) {
sessionContext.setRollbackOnly();
log.error("Message decryption failed.", ex);
throw new ConsumerException("Message decryption failed.", ex);
} catch (JMSException ex) {
sessionContext.setRollbackOnly();
log.error("Unable to consumer messages.", ex);
throw new ConsumerException("Unable to consume messages.", ex);
} catch (Throwable ex) {
sessionContext.setRollbackOnly();
log.error("Unexpected error.", ex);
throw new ConsumerException("Unexpected error.", ex);
} finally {
try {
if (session != null) session.close();
} catch (JMSException ex) {
log.error("Unexpected error.", ex);
}
}
}
Thanks for your help.
Edited by: vineet on Apr 7, 2011 10:06 AM -
I have a query based on an ODS. The query takes a long time to run. What parameters should I check, and how can I make sure the query runs quickly? Does it have something to do with the indexes on the ODS?
I saw this information, which might be useful:
As stated by several people, primary indexes are supplied with ODS objects. With regard to query performance on ODS objects, the best approach is to trace (ST05) the execution of a poorly performing access. When the trace is read, the system will normally spend a long time on one access (it can vary, but this is most common). A secondary index can then be created on the selective fields in the long-running WHERE clause. For example, I recently improved a query that was hanging for 32 seconds down to 2 seconds. The SQL looked similar to this:
SELECT
FROM
"/BI0/APU_O3200" T_00 , "/BI0/SCURRENCY" T_01 , "/BI0/SCMMT_ITEM" T_02 , "/BI0/SFUNDS_CTR" T_03 ,
"/BI0/SPU_MEASURE" T_04 , "/BI0/SAC_DOC_TYP" T_05 , "/BI0/SDATE" T_06 , "/BI0/SAC_DOC_NO" T_07 ,
"/BI0/SPSTNG_SEQ" T_08 , "/BIC/SZTXTLG" T_09
WHERE
( T_00 . "FM_CURR" = T_01 . "CURRENCY" ) AND ( T_00 . "CMMT_ITEM" = T_02 . "CMMT_ITEM" AND T_00 .
"FM_AREA" = T_02 . "FM_AREA" ) AND ( T_00 . "FUNDS_CTR" = T_03 . "FUNDS_CTR" AND T_00 . "FM_AREA"
= T_03 . "FM_AREA" ) AND ( T_00 . "PU_MEASURE" = T_04 . "PU_MEASURE" AND T_00 . "FM_AREA" = T_04
. "FM_AREA" ) AND ( T_00 . "AC_DOC_TYP" = T_05 . "AC_DOC_TYP" ) AND ( T_00 . "PSTNG_DATE" = T_06
. "DATE0" ) AND ( T_00 . "ACDOC_NO_F" = T_07 . "AC_DOC_NO" ) AND ( T_00 . "PSTNG_SEQ" = T_08 .
"PSTNG_SEQ" ) AND ( T_00 . "/BIC/ZTXTLG" = T_09 . "/BIC/ZTXTLG" ) AND T_00 . "FISCPER3" = '004'
AND T_00 . "FISCYEAR" = '2005' AND ( T_00 . "FM_ACTDETL" = '000' OR T_00 . "FM_ACTDETL" <> '000' )
AND NOT ( T_00 . "FM_ACTDETL" = '010' ) AND T_00 . "FM_AREA" = 'F1' AND T_03 . "SID" = 2684 AND
T_04 . "SID" = 1281
GROUP BY
T_01 . "SID" , T_02 . "SID" , T_03 . "SID" , T_04 . "SID" , T_05 . "SID" , T_06 . "SID" ,
T_07 . "SID" , T_08 . "SID" , T_09 . "SID" , T_00 . "FISCPER3" , T_00 . "FM_VTYPE"
6 TABLE ACCESS FULL /BI0/APU_O3200
( Estim. Costs = 20,821 , Estim. #Rows = 1 )
Don't be put off; the only important parts are the ODS table name in the FROM clause ("/BI0/APU_O3200" in this case) and the end of the WHERE clause. The start of the WHERE clause specifies the join conditions (which we are NOT interested in), and the end gives us the (hopefully) selective fields which access the ODS object ("/BI0/APU_O3200"). Here the selective fields are:
FISCYEAR
FM_ACTDETL
FM_AREA
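In SQL terms, the secondary index on those fields would look roughly like this (a sketch; the index name is illustrative, and in BW you would normally create it through the ODS object maintenance so it survives activation and transports rather than issuing raw DDL):

```sql
-- Secondary index on the selective fields of the ODS active table,
-- matching the long-running WHERE clause above.
CREATE INDEX "/BI0/APU_O3200~Z01"
    ON "/BI0/APU_O3200" ("FISCYEAR", "FM_ACTDETL", "FM_AREA");
```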
A secondary index should be created on these fields to improve performance and avoid the full table scan seen above. If the query runtime is still unacceptably high, more selective fields will have to be added, which of course is also a functional issue. -
Very high ASYNC_NETWORK_IO
Hi there; I'm an 'Accidental DBA' with a problem (is there any other kind?).
We seem to be getting very high ASYNC_NETWORK_IO, with a wait count of around 20 million per 24-hour period (total wait time around 9 hours in that same period).
I've spent a lot of time researching this wait, and as I understand it, ASYNC_NETWORK_IO is most often caused by the application not consuming data fast enough; so we had a programmer work through the code and resolve all locations where we iterate an IQueryable in a for loop, converting the results immediately into arrays (LINQ to SQL).
This seems to have made no difference.
I would like to set up an Extended Events trace to find which queries generate this particular wait, but at an average of over 200 hits per second I'm concerned about the performance impact such a trace might have.
I’m looking for suggestions as to how to move forward in resolving this issue.
My first question: am I correct in thinking that the number of these waits we are getting is excessively high?
Assuming that is the case, how can I trace the offending queries without hammering the system?
If (as I suspect) it is not individual queries that are causing the issue, what else should I be looking for?
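For a rough sense of scale, the figures quoted above (about 20 million waits totalling roughly 9 hours per 24-hour period) work out as follows; this is just arithmetic on the numbers in the post, not a diagnosis:

```python
# Wait statistics quoted above: ~20 million ASYNC_NETWORK_IO waits
# totalling ~9 hours over a 24-hour observation window.
wait_count = 20_000_000
total_wait_ms = 9 * 3600 * 1000      # 9 hours in milliseconds
window_ms = 24 * 3600 * 1000         # 24-hour window in milliseconds

avg_wait_ms = total_wait_ms / wait_count
pct_of_window = 100 * total_wait_ms / window_ms

print(f"average wait: {avg_wait_ms:.2f} ms")     # 1.62 ms
print(f"share of window: {pct_of_window:.1f}%")  # 37.5%
```

An average under 2 ms per wait suggests a very chatty workload (many short waits) rather than a few long stalls, which is one reason tracing the offending queries is the right next step.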
Thanks in advance
Paul.
Hi chaps,
Thanks for the help.
We have about 100 concurrent users and they are always complaining about the application being slow, so the application is definitely not performing well.
Using activity monitor the cpu is normally at 20-60% and there are usually no more than 0-4 waiting tasks and a few hundred batch req/sec.
The only thing that stands out is the very high ASYNC_NETWORK_IO, in terms of both wait time & wait count.
Raju, thank you for the link; I have read that blog post before in my research. The server is using only about 2.5% of the available bandwidth (1 Gb). Our network admin assures me that the network is set up correctly. While it is possible that there are a few badly written queries, most of them are quite lean. Almost all of our processes use LINQ to SQL; there are no bulk data loads.
You also said:
>I’ve spent a lot of time researching this wait and as I understand it ASYNC_NETWORK_IO
>is most often caused by the application not consuming data fast enough; so we had a
>programmer work through the code and resolve all locations where we are using a IQueryable
>in a for next loop and converting them immediately into arrays (Linq To SQL).
I'm not familiar with what kind of SQL that generates.
I asked about a design issue of too much data before. If you are using a low-level interface, then the opposite is a question as well: even for modest amounts of data, if you somehow use server-side cursors, or otherwise end up sending SQL commands for just one row at a time, you can get slow response and high async waits. I'm also not sure what you mean by having the programmer "resolve" these locations.
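The row-at-a-time anti-pattern can be sketched in a few lines. This is an illustration only: it uses Python and in-process SQLite (so there is no real network wait to measure), but the shape of the problem — keeping the result set open while doing slow per-row work — is the same one that drives ASYNC_NETWORK_IO on SQL Server. The table and names are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany("INSERT INTO t (payload) VALUES (?)", [("x" * 100,)] * 1000)

def slow_consumer_row_at_a_time():
    # Anti-pattern: the cursor (and, on a real client/server database,
    # the network result set) stays open while each row is processed.
    cur = con.execute("SELECT id, payload FROM t ORDER BY id")
    out = []
    for row in cur:          # server waits for the client between rows
        out.append(row[0])   # imagine slow per-row work here
    return out

def materialize_then_process():
    # Preferred: drain the result set immediately (fetchall / .ToArray()),
    # then do the slow work against the in-memory copy.
    rows = con.execute("SELECT id, payload FROM t ORDER BY id").fetchall()
    return [r[0] for r in rows]

assert slow_consumer_row_at_a_time() == materialize_then_process()
```

This is also why converting an IQueryable to an array before the loop can help: the conversion drains the result set in one go instead of leaving it open across the loop body.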
David also asks a good question: whether there are just a couple of waits that throw off the totals. A similar but more design-oriented question is whether your app has some little widget that is always tickling SQL Server for an update; 100 users issuing a one-line query once a second, to fill some tiny counter on the user screen, can have this same kind of effect.
Finally, on the perceived app slowness, *again* I would ask about design issues. I've seen apps that were very cleverly doing async, background data loads on ten hidden panels while the user gazed at their data. This was very heavily loading the system for basically no good reason, but it wasn't SQL Server's fault.
Josh -
Semaphore statistics very high please help
Dear All,
The semaphore statistics are very high. Please guide me on how to solve it.
We are using Windows 2003 and ECC5.
Semaphore statistics Date 05.09.2008 Time 13.27.22 Monitoring Options:
Wait time entering the critical path
Residence period in the critical path
Key No. <100us <1ms <10ms <100ms <1s <10s >10s Time/ms Protected area
2 329 327 2 0 0 0 0 0 0 Message Queue (Writing Dialog Requests)
2 329 329 0 0 0 0 0 0 0
3 2,040 2,040 0 0 0 0 0 0 0 Message Queue (Read)
3 2,040 2,040 0 0 0 0 0 0 0
4 11 11 0 0 0 0 0 0 0 Terminal communication
4 11 11 0 0 0 0 0 0 0
5 15 15 0 0 0 0 0 0 0 Work process communication
5 15 15 0 0 0 0 0 0 0
6 750 750 0 0 0 0 0 0 0 Roll administration
6 750 750 0 0 0 0 0 0 0
7 1,660 1,659 1 0 0 0 0 0 0 Paging administration
7 1,660 1,189 471 0 0 0 0 0 0
12 180 180 0 0 0 0 0 0 0 Character code conversion
12 180 179 1 0 0 0 0 0 0
13 200 200 0 0 0 0 0 0 0 Update synchronization
13 200 200 0 0 0 0 0 0 0
14 2,161 2,125 1 11 18 6 0 0 1098 Presentation buffer
14 2,161 1,290 829 9 29 4 0 0 2061
16 1,028 1,026 2 0 0 0 0 0 0 Table buffer key
16 1,028 983 34 11 0 0 0 0 0
17 1 1 0 0 0 0 0 0 0 Buffer synchronization
17 1 0 0 1 0 0 0 0 0
23 1,578 1,578 0 0 0 0 0 0 0 Message queue (Writing All Other Requests)
23 1,578 1,576 2 0 0 0 0 0 0
24 11 11 0 0 0 0 0 0 0 LRU table buffer
24 11 11 0 0 0 0 0 0 0
26 1,042 1,042 0 0 0 0 0 0 0 Enqueue table
26 1,042 1,034 6 2 0 0 0 0 0
31 255 255 0 0 0 0 0 0 0 Spool administration
31 255 254 1 0 0 0 0 0 0
33 16 16 0 0 0 0 0 0 0 Extended segments management
33 16 16 0 0 0 0 0 0 0
34 2 2 0 0 0 0 0 0 0 Message buffer
34 2 2 0 0 0 0 0 0 0
38 1,809 1,809 0 0 0 0 0 0 0 CCMS Monitoring
38 1,809 1,724 85 0 0 0 0 0 0
39 1,736 1,736 0 0 0 0 0 0 0 Extended Global Memory
39 1,736 1,736 0 0 0 0 0 0 0
42 430 430 0 0 0 0 0 0 0 Statistics Buffer
42 430 430 0 0 0 0 0 0 0
43 141 141 0 0 0 0 0 0 0 Spool Cache
43 141 138 2 1 0 0 0 0 0
46 1 1 0 0 0 0 0 0 0 Profiles
46 1 1 0 0 0 0 0 0 0
48 417 417 0 0 0 0 0 0 0 EnqID
48 417 417 0 0 0 0 0 0 0
57 1,102 1,102 0 0 0 0 0 0 0 Runtime Monitor
57 1,102 1,101 1 0 0 0 0 0 0
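One way to read a histogram like the one above: counts in the <100ms and slower buckets are the interesting ones, since waits under 10 ms are usually harmless. A small sketch, using bucket counts copied from three of the wait-time rows above (the threshold and the choice of rows are illustrative, not an official evaluation rule):

```python
# Bucket counts copied from three wait-time rows of the ST02 output above:
# (<100us, <1ms, <10ms, <100ms, <1s, <10s, >10s)
SEMAPHORES = {
    14: ("Presentation buffer",   (2125, 1, 11, 18, 6, 0, 0)),
    38: ("CCMS Monitoring",       (1809, 0, 0, 0, 0, 0, 0)),
    7:  ("Paging administration", (1659, 1, 0, 0, 0, 0, 0)),
}

SLOW_BUCKETS = slice(3, 7)  # <100ms, <1s, <10s, >10s

def slow_wait_count(buckets):
    # Total number of waits that took 10 ms or longer
    return sum(buckets[SLOW_BUCKETS])

suspects = {key: name for key, (name, buckets) in SEMAPHORES.items()
            if slow_wait_count(buckets) > 0}
print(suspects)  # {14: 'Presentation buffer'}
```

Of these three, key 14 (presentation buffer) is the only one with waits in the slow buckets, which is consistent with it having the only non-zero Time/ms column.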
Thanks,
Kumar
Edited by: Kumar on Sep 5, 2008 11:17 AM
Hello Bhaskar,
The above log shows the semaphore details in ST02.
We are using Windows 2003, ECC5 and Oracle 9i.
I am getting SEM in SM50, and by observation I am seeing numbers 14, 38 and 7 many times, from note 33873:
7: PAGING-Semaphore (paging administration)
14: PRES_BUF-Semaphore (presentation buffer)
38: SEM_CCMS_AS_MONI_KEY (CCMS monitoring for appl.s)
But I need suggestions on how to solve the problem.
The other logs are from ST06 and ST02.
I don't have much free space.
The problem is that the system is working very slowly and hanging.
In ST06 the LAN load is high.
Lan
(sum)
Packets in/s 4,444 Errors in/s 0
Packets out/s 4,170 Errors out/s 3
Collisions 0
Memory: physical memory free is 1,788,888 KB in the system, so I can't increase it.
In ST02 the Export/import swap is higher than 10,000.
Export/import 89.72 40,000 7,528 25.77 30,000 18,855 62.85 95,972 Exp./Imp. SHM 97.89 4,096 3,238 95.94 2,000 1,999 99.95 0 0
Thanks,
kumar -
Very high memory usage with Yahoo Mail
After using Yahoo Mail for an hour or so my memory usage increases to a very high level.
Just now, after reading and deleting about 50 e-mails (newsletters etc.) I noticed Firefox 17 running slowly and checked the memory usage in Windows Task Manager (I am using XP) and it was 1.2 Gb. My older laptop only has 2 Gb of RAM. Yahoo Mail was the only thing open at the time.
I never notice this problem with Gmail which I mainly use. However I use Yahoo Mail for quite a few newsletters etc. that are less important and which I only check once a week or so.
I found the following bug report about 3 years old which almost exactly describes my problem.
https://bugzilla.mozilla.org/show_bug.cgi?id=506771
But this report involves a much earlier Firefox version, and at the end it seems to say that the problem was fixed. However it well describes my current issue with Firefox 17, especially the continual increase in memory while using the up/down arrow keys to scroll through Yahoo e-mails.
Is it normal to have to shut down and reopen Firefox every hour or so to clean out the memory? For some reason I only notice this when using Yahoo Mail. After using many other sites and having multiple tabs open for several hours I rarely reach that kind of memory usage. About the highest I've seen with other sites after a couple of hours is 600 MB, which is roughly when I start to notice slower response times.
See also:
*https://support.mozilla.org/kb/firefox-uses-too-much-memory-ram
Start Firefox in <u>[[Safe Mode|Safe Mode]]</u> to check if one of the extensions (Firefox/Tools > Add-ons > Extensions) or if hardware acceleration is causing the problem (switch to the DEFAULT theme: Firefox/Tools > Add-ons > Appearance).
*Do not click the Reset button on the Safe mode start window or otherwise make changes.
*https://support.mozilla.org/kb/Safe+Mode
*https://support.mozilla.org/kb/Troubleshooting+extensions+and+themes -
Very High messages not forwarded to SAP out of the Business Hours
Dear Gurus,
I work for a SAP partner and we have the Solution Manager 7.0 + EHP1, SP stack 24, running under a Windows Server Machine with MS Sqlserver Data Base and we're facing the following problem:
During business hours we are able to forward messages to SAP manually; however, after business hours, messages set to 'very high' priority aren't being forwarded to SAP automatically.
We have already triple-checked the configuration listed in SAP Note 1084744, and also ran the Z-report included in Note 1225682, but with no success at all.
Would you kindly please shed a light on this matter?
Thank you in advance.
Everton.
Hello Everton,
If you are a VAR customer using transaction type ZLFN and action profile ZSLFN0001_ADVANCED, the action profile should be AI_SDK_STANDARD.
I don't know if that is the case.
If your issue persists, please trigger a message in SV-SMG-SUP.
Best regards,
Guilherme -
Over the last few months my Mac has developed a worrying habit.
Within the first few minutes of starting it up (perhaps on average about 50% of the time) it will completely lock up. This may happen at the Log In screen (if I start up the Mac & then leave it for a while) or during normal use.
Often the first symptom is a very high pitched (but quiet) whining noise that seems to come from the loud speaker on the front of the Mac. The pointer may freeze at this point, or it may still be moveable for 5 or 10 seconds before it freezes. It sometimes turns into the spinning beach ball during this. Once locked up the only way I can restart the Mac is to hold down the power button on the front for a few seconds to completely reboot the machine.
Once the Mac has restarted, it usually behaves normally, almost always for the rest of the day. The initial lock up & resulting restart only normally seem to happen the first time I use the Mac that day.
The only peripherals attached to the Mac (apart from the display, keyboard & mouse) are an ADSL modem, a USB printer and a pair of Apple Pro speakers, and this setup hasn't changed since long before the problems started, so I'm confident that I can discount the peripherals causing problems. I doubt that unplugging the speakers, for example, would have any effect.
I've run Disc Utility, OnyX and DiscWarrior without anything major cropping up. My instincts (I've been troubleshooting Mac problems for 16 years) tell me that I have a fundamental hardware problem, possibly with one of the 4 RAM DIMMs installed.
The RAM configuration is shown in the attached screen grab.
I'm considering removing one DIMM, running with 1.5GB of RAM rather than 2GB for a while, and repeating with a different DIMM removed each time until I can hopefully isolate the dodgy DIMM.
Do people feel this is a sensible approach, or should I try something else first?
Many thanks.
Sometimes visual inspection will show bulging tops/sides; my guess is that if it is a hardware fault, it's most likely in the PSU.
Possible cheap fix, You can convert an ATX PSU for use on a G4...
http://atxg4.com/mdd.html
http://atxg4.com/