Arch wait on sendreq is always very high
Hi all,
we have a Data Guard system:
a primary and one standby, both 10.2.0.3. The "ARCH wait on SENDREQ" wait is always very high,
and I can't see any recommendation from Grid Control.
Should I worry about that?
Thanks
Hi,
there is a doc: http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/log_transport.htm#i1227137
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_DataGuardNetworkBestPractices.pdf
Regards,
Tom
http://oracledba.cz
Similar Messages
-
"ARCH wait on SENDREQ" is top in top 5 timed events
My database is a two-node RAC (10.2.0.4) database, and its performance is very slow.
Please suggest how to reduce the "ARCH wait on SENDREQ".
Thanks in advance,
karthi

This wait event will not impact user sessions. No end user has ever telephoned the help desk to complain that "the wait event such-and-such is too high". What are they complaining about? For example, is it a query that takes too long? Or a batch job that doesn't finish in time? You need to focus on the problem before trying to find a solution.
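One way to act on that advice (a sketch; V$SYSTEM_EVENT is the standard 10g view) is to check which non-idle waits actually dominate instance-wide before assuming SENDREQ is the user-visible problem:

```sql
-- Instance-wide non-idle waits since startup, worst first; compare
-- "ARCH wait on SENDREQ" against foreground waits such as
-- "db file sequential read" before spending effort tuning it
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  wait_class <> 'Idle'
ORDER  BY time_waited DESC;
```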
-
Dear Experts,
I found following wait event in my ADDM report(TOP most)
The wait event "ARCH wait on SENDREQ" in wait class Network was consuming a significant amount of time. We have a standby database with the following setting:
log_archive_dest_2=service=aaaaa reopen=60 ARCH NOAFFIRM
How do I reduce this wait event?
Thanks & Regards,
Sunil Kumar
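One commonly cited way to reduce this wait on 10.2 (a hedged sketch, reusing the service name aaaaa from the setting above; verify against your own environment before applying) is to move redo shipping off the ARCn processes by switching the destination to LGWR ASYNC:

```sql
-- LNS then ships redo asynchronously as it is generated, so the ARCn
-- processes no longer block on the network send during log switches
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=aaaaa LGWR ASYNC NOAFFIRM REOPEN=60' SCOPE=BOTH;
```

With this transport, time formerly reported as "ARCH wait on SENDREQ" typically shows up as the cheaper "LNS wait on SENDREQ" instead.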
Hi,
see the earlier thread "Re: LNS wait on SENDREQ". -
Deadlocks and very high wait times
We are seeing a very high number of deadlocks in the system. The deadlock traces all show 'enq: TX - row lock contention' with wait times of around 2929700+ seconds, e.g.:
last wait for 'enq: TX - row lock contention' blocking sess=0x70000006d85e1b8 seq=55793 wait_time=2929704 seconds since wait started=4
name|mode=54580006, usn<<16 | slot=1d0010, sequence=705f
Dumping Session Wait History
for 'enq: TX - row lock contention' count=1 wait_time=2929704
name|mode=54580006, usn<<16 | slot=1d0010, sequence=705f
for 'latch: enqueue hash chains' count=1 wait_time=1649
address=70000006dbb4a20, number=13, tries=0
for 'enq: TX - row lock contention' count=1 wait_time=2929708
name|mode=54580006, usn<<16 | slot=1d0010, sequence=705f
for 'SQL*Net message from client' count=1 wait_time=101740
driver id=54435000, #bytes=1, =0
for 'SQL*Net message to client' count=1 wait_time=1
driver id=54435000, #bytes=1, =0
for 'direct path write temp' count=1 wait_time=921
file number=fb, first dba=6521b, block cnt=2
for 'SQL*Net more data from client' count=1 wait_time=3
driver id=54435000, #bytes=10, =0
for 'SQL*Net more data from client' count=1 wait_time=5
driver id=54435000, #bytes=1e, =0
for 'SQL*Net more data from client' count=1 wait_time=10
driver id=54435000, #bytes=2c, =0
for 'SQL*Net more data from client' count=1 wait_time=5
driver id=54435000, #bytes=3a, =0
Any ideas on how to resolve this?
Thanks,
Surya

Sorry for the typo; it's the ORA-00060 error we are seeing. Here is the deadlock graph:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining Scoring Engine
and Real Application Testing options
ORACLE_HOME = /orasw/product/10.2.0.4.0
System name: AIX
Node name: spda5001
Release: 3
Version: 5
Machine: 00074D5AD400
Instance name: IAMS01P1
Redo thread mounted by this instance: 1
Oracle process number: 21
Unix process pid: 2306444, image: oracle@spda5001
*** 2011-12-24 05:05:39.885
*** SERVICE NAME:(IAMS01P) 2011-12-24 05:05:39.884
*** SESSION ID:(443.2130) 2011-12-24 05:05:39.884
DEADLOCK DETECTED ( ORA-00060 )
[Transaction Deadlock]
The following deadlock is not an ORACLE error. It is a
deadlock due to user error in the design of an application
or from issuing incorrect ad-hoc SQL. The following
information may aid in determining the deadlock:
Deadlock graph:
---------Blocker(s)-------- ---------Waiter(s)---------
Resource Name process session holds waits process session holds waits
TX-00080020-000c3957 21 443 X 58 391 X
TX-001d0010-0000705f 58 391 X 21 443 X
session 443: DID 0001-0015-0000002E session 391: DID 0001-003A-00000081
session 391: DID 0001-003A-00000081 session 443: DID 0001-0015-0000002E
Rows waited on:
Session 391: obj - rowid = 0001098B - AAATtpAAGAAADROAAD
(dictionary objn - 67979, file - 6, block - 13390, slot - 3)
Session 443: obj - rowid = 00010B25 - AAARRwAAGAAAAdgAAN
(dictionary objn - 68389, file - 6, block - 1888, slot - 13)
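The graph shows a classic lock cycle: each session holds in exclusive (X) mode the TX enqueue the other is waiting for. A minimal sketch of how such an ORA-00060 arises (hypothetical table t; statements interleaved across two sessions):

```sql
-- Session A:
UPDATE t SET c = 1 WHERE id = 1;
-- Session B:
UPDATE t SET c = 2 WHERE id = 2;
-- Session A now blocks, waiting for B's TX lock on id = 2:
UPDATE t SET c = 1 WHERE id = 2;
-- Session B closes the cycle; Oracle raises ORA-00060 in one session:
UPDATE t SET c = 2 WHERE id = 1;
```

The usual fix is to make the application acquire row locks in a consistent order (or commit sooner), exactly as the trace's "user error in the design of an application" note suggests.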
Information on the OTHER waiting sessions:
Session 391:
pid=58 serial=16572 audsid=52790041 user: 93/IAMS_USR
O/S info: user: , term: , ospid: 1234, machine: mac3023
program:
Current SQL Statement:
update spt_identity set created=:1, modified=:2, owner=:3, assigned_scope=:4, assigned_scope_path=:5, extended1=:6, extended2=:7, extended3=:8, extended4=:9, extended5=:10, extended6=:11, extended7=:12, extended8=:13, extended9=:14, extended10=:15, extended11=:16, extended12=:17, extended13=:18, extended14=:19, extended15=:20, extended16=:21, extended17=:22, extended18=:23, extended19=:24, extended20=:25, name=:26, description=:27, protected=:28, iiqlock=:29, attributes=:30, manager=:31, display_name=:32, firstname=:33, lastname=:34, email=:35, manager_status=:36, inactive=:37, last_login=:38, last_refresh=:39, password=:40, password_expiration=:41, password_history=:42, bundle_summary=:43, assigned_role_summary=:44, correlated=:45, auth_question_lock_start=:46, failed_auth_question_attempts=:47, controls_assigned_scope=:48, certifications=:49, activity_config=:50, preferences=:51, history=:52, scorecard=:53, uipreferences=:54, attribute_meta_data=:55, workgroup=:56 where id=:57
End of information on OTHER waiting sessions.
Current SQL statement for this session:
update spt_workflow_case set created=:1, modified=:2, owner=:3, assigned_scope=:4, assigned_scope_path=:5, stack=:6, attributes=:7, launcher=:8, host=:9, launched=:10, completed=:11, progress=:12, percent_complete=:13, type=:14, messages=:15, name=:16, description=:17, complete=:18, target_class=:19, target_id=:20, target_name=:21, workflow=:22 where id=:23 -
Very high log file sequential read and control file sequential read waits?
I have a 10.2.0.4 database with 5 Streams capture processes running to replicate data to another database. However, I am seeing very high
log file sequential read and control file sequential read waits from the capture processes. This is causing slowness, as the database is spending so much time on these wait events. From the AWR report:
Elapsed: 20.12 (mins)
DB Time: 67.04 (mins)
and From top 5 wait events
Event                          Waits    Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                                  1,712                             42.6
log file sequential read      99,909       683             7              17.0   System I/O
log file sync                 49,702       426             9              10.6   Commit
control file sequential read 262,625       384             1               9.6   System I/O
db file sequential read       41,528       378             9               9.4   User I/O
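To confirm it really is the capture processes accumulating these reads (a sketch using the standard 10g per-session wait view):

```sql
-- Per-session totals for the two suspect events, worst first;
-- the capture/LogMiner sessions should stand out in PROGRAM
SELECT s.sid, s.program, e.event, e.time_waited
FROM   v$session s
JOIN   v$session_event e ON e.sid = s.sid
WHERE  e.event IN ('log file sequential read', 'control file sequential read')
ORDER  BY e.time_waited DESC;
```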
Oracle support hasn't been of much help, other than wasting my 10 days and telling me to try this and try that.
Do you have Streams running in your environment, and are you experiencing this wait? Have you done anything to resolve these waits?
Thanks

Welcome to the forums.
There is insufficient information in what you have posted to know whether your analysis of the situation is correct, or anything about your Streams environment.
We don't know what you are replicating. Not size, not volume, not type of capture, not rules, etc.
We don't know the distance over which it is being replicated ... 10 ft. or 10 light years.
We don't have any AWR or ASH data to look at.
etc. etc. etc. If this is what you provided Oracle Support it is no wonder they were unable to help you.
To diagnose this problem, if one exists, requires someone on-site or with a very substantial body of data which you have not provided. The first step is to fill in the answers to all of the obvious first level questions. Then we will likely come back with a second level of questioning.
But when you do ... do not post here. Your questions are not "Database General" they are specific to Streams and there is a Streams forum specifically for them.
Thank you. -
Very high ASYNC_NETWORK_IO
Hi There; I’m an ‘Accidental DBA’ with a problem (is there any other kind?).
We seem to be getting very high ASYNC_NETWORK_IO; with a WaitCount of around 20 million hits per 24 hour period (total wait time around 9 hours in that same period).
I’ve spent a lot of time researching this wait, and as I understand it ASYNC_NETWORK_IO is most often caused by the application not consuming data fast enough; so we had a programmer work through the code and resolve all locations where we were using an IQueryable
in a for-next loop, converting them immediately into arrays (LINQ to SQL).
This seems to have made no difference.
I would like to set up an extended event trace to find which queries are generating this particular wait, but at an average of over 200 hits per second I’m concerned about performance impact such a trace might have.
I’m looking for suggestions as to how to move forward in resolving this issue.
My first question is, am I correct in thinking that the number of these waits that we are getting is excessively high?
Assuming that is the case how can I go about tracing the offending queries without hammering the system?
If (as I suspect) it is not individual queries that are causing the issue, what else should I be looking for?
Thanks in advance
Paul.

Hi chaps,
Thanks for the help.
We have about 100 concurrent users and they are always complaining about the application being slow, so the application is definitely not performing well.
Using activity monitor the cpu is normally at 20-60% and there are usually no more than 0-4 waiting tasks and a few hundred batch req/sec.
The only thing that stands out is the very high ASYNC_NETWORK_IO, in terms of both wait time & wait count.
Raju, thank you for the link; I had read that blog post before in my research. The server is using only about 2.5% of the available bandwidth (1 Gb). Our network admin assures me that the network is set up correctly. While it is possible that there are a
few badly written queries, most of them are quite lean. Almost all of our processes use LINQ to SQL, and there are no bulk data loads.
You also said:
>I’ve spent a lot of time researching this wait and as I understand it ASYNC_NETWORK_IO
>is most often caused by the application not consuming data fast enough; so we had a
>programmer work through the code and resolve all locations where we are using a IQueryable
>in a for next loop and converting them immediately into arrays (Linq To SQL).
I'm not familiar with what kind of SQL that generates.
I asked about a design issue of too much data before; if you are using a low-level interface then the opposite is a question as well. Even for modest amounts of data, if you somehow use server-side cursors, or otherwise end up sending SQL commands for
just one row at a time, then you can get slow response and high async waits. I'm also not sure what you mean by having the programmer "resolve" these locations.
David also asks a good question: whether there are just a couple of waits that throw off the totals. A similar but more design-oriented question is whether your app has some little widget that is always tickling SQL Server for an update; 100 users issuing
a one-line query once a second, to fill some tiny counter on the user screen, can have this same kind of effect.
Finally on the perceived app slowness *again* I would ask about design issues, I've seen apps that were very cleverly doing async, background data loads on ten hidden panels while the user gazed at their data. This was very heavily loading the system
for basically no good reason, but it wasn't SQL Server's fault.
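One lightweight way to tie the wait to specific statements without a full trace (a sketch, assuming SQL Server 2005+ DMVs are available) is to snapshot the requests currently stuck on it:

```sql
-- Requests waiting on ASYNC_NETWORK_IO right now, with their SQL text;
-- poll this a few times during a slow period to spot repeat offenders
SELECT r.session_id, r.wait_time, t.text
FROM   sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE  r.wait_type = 'ASYNC_NETWORK_IO';
```

This costs far less than an extended-event trace at 200+ waits per second, at the price of only seeing waits that are in flight at the moment of the query.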
Josh -
Very High processing time in STAD
Hi
I have a problem in my NW04 BI system.
Users are occasionally experiencing high response times while using the HTTP interface to work with BI reports.
If I display the statistical records in STAD, I can see that while using the SAPMHTTP program the "processing time" and therefore "Total time in workprocs" are very high, but the other times in the record are very low. CPU time is very low. Below is the detailed analysis.
CPU time 94 ms
RFC+CPIC time 0 ms
Total time in workprocs 481.566 ms
Response time 481.566 ms
Wait for work process 0 ms
Processing time 481.093 ms
Load time 1 ms
Generating time 0 ms
Roll (in+wait) time 0 ms
Database request time 226 ms
Enqueue time 0 ms
DB procedure call time 246 ms
Number Roll ins 1
Roll outs 1
Enqueues 8
Load time Program 1 ms
Screen 0 ms
CUA interf. 0 ms
Roll time Out 0 ms
In 0 ms
Wait 0 ms
Frontend No.roundtrips 0
GUI time 0 ms
Net time 0 ms
No. of DB procedure calls 1
Can anyone tell me what is going on in the system, or how I can go further to analyze this? You might assume that this is a CPU bottleneck, but it is not: I am using 4 x 64-bit processors and 12 GB memory, with a load of 5-10% when this problem occurs.
Best Regards
Sindri

Why should the database be a problem? It is only:
> Database request time 226 ms
Go to the details (double-click) of the line with problems. In the details you should have an 'http' button in the action line. There you can check the HTTP details, which should tell you where the time was spent.
As always, repeat your measurements a few times to see whether the behaviour is reproducible.
Siegfried -
Very high "load average" in top
Hi,
our OES11SP1 two-server-cluster (fully patched) shows a very high "load
average" (>50, up to 110) in top in some circumstances. There are no
problems in normal operation, but administrator actions like shutdown or
cluster migrate might trigger the problem.
For example when I enter 'halt', then there is the following line in
/var/log/messages:
Sep 12 20:27:18 srv1 shutdown[14675]: shutting down for system halt
more than 20 minutes later:
Sep 12 20:51:19 srv1 init: Switching to runlevel: 0
Within these 20 minutes nothing happens, but "load average" goes up to at
least 50, with ndsd at the top. Access to storage-related tools and commands is
not possible; for example 'nss /pool' hangs without any output.
This happens on nearly every shutdown, but from time to time it doesn't. The
same will sometimes be triggered by a cluster migrate.
This only happens with our OES11SP1 cluster, it does not happen with OES11
and OES2SP3; the only other difference I'm aware of: Novell CIFS is only
running on the OES11SP1 cluster.
Any ideas?
Thanks,
Mirko

Sorry for the delay; it seems it's a bad habit of mine to ask questions
immediately before holidays...
Yes, these servers have replicas, all of them... Cache size is set to 195328
KB, which is about twice the DIB size. IIRC this was a recommendation I read
somewhere at Novell. But I'll check that information again.
Thanks,
Mirko
kjhurni wrote:
>
> Mirko Guldner;2283539 Wrote:
>> top shows ndsd on top - but it's there in normal operation too, so I
>> don't
>> know if this means something.. (?) And it's not always the CPU which is
>> at
>> 100% - I have an example screenshot with: load average 50.20, 51.61,
>> 41.0
>> 3.2%us, 1.0%sy, 0.0%ni, 77.0%id 18%wa 0.0%hi 0.3%si 0.0%st. But this is
>> only
>> an example - this differs.
>>
>> Thanks,
>> Mirko
>>
>> kjhurni wrote:
>>
>> >
>> >
>> > Which process(es) does top show as being the culprit?
>> >
>> > In the past (on OES2 SP3) we had issues with CIFS causing ncp to
>> cause
>> > high utilization, but that was fixed a while ago.
>> >
>> > --Kevin
>> >
>> >
>
> I have seen ncp issues cause high ndsd utilization, but we've not yet
> upgraded our cluster or DS servers to OES11 yet (waiting for new
> hardware to go in place first).
>
> Out of curiosity, are the servers with high utilization also replica
> servers? For some reason, during one of our upgrades on a replica
> server (we have a server that contains all R/W copies of everything),
> the cache size got set down really low and that caused all sorts of
> issues.
>
> Maybe one of my collegues will wander by and offer additional insight,
> as this may be eDir related and/or NCP related. Not sure if triggering
> a core manually would help (but you'd have to send that to Novell and
> open an SR to get it read).
>
> IF you suspect CIFS, do you have the ability to temporarily shut off
> CIFS for like a few days to see if that's the culprit?
>
> -
Over the last few months my Mac has developed a worrying habit.
Within the first few minutes of starting it up (perhaps on average about 50% of the time) it will completely lock up. This may happen at the Log In screen (if I start up the Mac & then leave it for a while) or during normal use.
Often the first symptom is a very high pitched (but quiet) whining noise that seems to come from the loud speaker on the front of the Mac. The pointer may freeze at this point, or it may still be moveable for 5 or 10 seconds before it freezes. It sometimes turns into the spinning beach ball during this. Once locked up the only way I can restart the Mac is to hold down the power button on the front for a few seconds to completely reboot the machine.
Once the Mac has restarted, it usually behaves normally, almost always for the rest of the day. The initial lock up & resulting restart only normally seem to happen the first time I use the Mac that day.
The only peripherals attached to the Mac (apart from the display, keyboard & mouse) are an ADSL modem, a USB printer and a pair of Apple Pro speakers, and this setup hasn't changed since long before the problems started, so I'm confident that I can discount the peripherals causing problems. I doubt that unplugging the speakers, for example, would have any effect.
I've run Disc Utility, OnyX and DiscWarrior without anything major cropping up. My instincts (I've been troubleshooting Mac problems for 16 years) tell me that I have a fundamental hardware problem, possibly with one of the 4 RAM DIMMs installed.
The RAM configuration is shown in the attached screen grab.
I'm considering removing one DIMM, running with 1.5GB of RAM rather than 2GB for a while, and repeating with a different DIMM removed each time until I can hopefully isolate the dodgy DIMM.
Do people feel this is a sensible approach, or should I try something else first?
Many thanks.

Sometimes visual inspection will show bulging tops/sides (of capacitors); my guess is that if it is one, it's most likely in the PSU.
Possible cheap fix, You can convert an ATX PSU for use on a G4...
http://atxg4.com/mdd.html
http://atxg4.com/ -
Very high noise my Dell Studio 1558 laptop
Hi, it's me again. I've just installed Arch and I'm still configuring it.
OK, let's start.
My problem is very high noise from my laptop. I think this is the graphics card's fault, because I had similar trouble while I used Ubuntu. I think so because I installed laptop-mode-tools and it didn't help. Now I am using the open-source driver xf86-video-ati.
My graphic card is :
02:00.0 VGA compatible controller: ATI Technologies Inc Manhattan [Mobility Radeon HD 5400 Series]
Do you think installing the closed-source ATI driver would help?
Could something else be the culprit?
I hope you understand and can help me.

The closed-source ATI driver will probably help, but before that, try to set a power profile in the open-source one. It fixed the problem for me:
# switch power management to manual profile selection
echo profile > /sys/class/drm/card0/device/power_method
# force the lowest-power (quietest) profile
echo low > /sys/class/drm/card0/device/power_profile
If low causes instabilities and display problems for you, try mid.
For more info about this, see https://wiki.archlinux.org/index.php/AT … MS_enabled
My favourite settings are low profile on battery and dynpm on AC.
Last edited by eXine (2011-12-22 21:52:30) -
How to handle very high volume msgs in XI
I have a scenario where 6 million msgs/hour (XML) have to be picked up from the source and sent to the target.
How do we handle this high volume of data? Is there any good practice?
Please suggest.

Hi Anubhav,
if you just have to send the source file to the target system without doing any mapping, then don't use XI for it, because you have a very high volume of msgs, and under such a load XI will start putting your msgs in wait status, and the Java engine may go down because of insufficient memory.
So better to have an FTP utility on the target system: connect to the source system with it and move the files from source to target via FTP.
But if you have to do a mapping from source msg to target msg, then use XI; but try doing the mapping with graphical mapping, and don't go for ABAP mapping, to reduce memory usage for your scenario.
Thanks,
Rajeev Gupta -
Latency is very high when SELECT statements are running for LONG
We have a simple downstream Streams replication environment (the archive log is shipped from the source; CAPTURE & APPLY run on the destination DB).
Whenever there is a long-running SELECT statement on the TARGET, the latency becomes very high.
SGA_MAX_SIZE = 8GB
STREAMS_POOL_SIZE=2GB
APPLY parallelism = 4
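For reference, apply latency in a downstream setup like this can be read from the coordinator view (a sketch; verify the column names against your 10g release):

```sql
-- Seconds between redo creation on the source and apply on the target
SELECT (hwm_time - hwm_message_create_time) * 86400 AS latency_seconds
FROM   v$streams_apply_coordinator;
```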
How can I resolve this issue?

Is the log file shipped but not acknowledged? -- No.
Is the log file not shipped? -- It is shipped.
Is the log file acknowledged but not applied? -- Yes... but the apply process was not stopped; it may be slow or waiting for something?
It is a 10g environment. I will run AWR... but what should I look for in AWR? -
Very high cpu utilization with mq broker
Hi all,
I see very high CPU utilization (400% on an 8-CPU server) when I connect consumers to OpenMQ. It increases by close to 100% for every consumer I add. Slowly, the consumers come to a halt, as the producers are sending messages at a good rate too.
Environment Setup
Glassfish version 2.1
com.sun.messaging.jmq Version Information Product Compatibility Version: 4.3 Protocol Version: 4.3 Target JMS API Version: 1.1
Cluster set up using persistent storage. Snippet from the broker log:
Java Runtime: 1.6.0_14 Sun Microsystems Inc. /home/user/foundation/jdk-1.6/jre
[06/Apr/2011:12:48:44 EDT] IMQ_HOME=/home/user/foundation/sges/imq
[06/Apr/2011:12:48:44 EDT] IMQ_VARHOME=/home/user/foundation/installation/node-agent-server1/server1/imq
[06/Apr/2011:12:48:44 EDT] Linux 2.6.18-164.10.1.el5xen i386 server1 (8 cpu) user
[06/Apr/2011:12:48:44 EDT] Java Heap Size: max=394432k, current=193920k
[06/Apr/2011:12:48:44 EDT] Arguments: -javahome /home/user/foundation/jdk-1.6 -Dimq.autocreate.queue=false -Dimq.autocreate.topic=false -Dimq.cluster.masterbroker=mq://server1:37676/ -Dimq.cluster.brokerlist=mq://server1:37676/,mq://server2:37676/ -Dimq.cluster.nowaitForMasterBroker=true -varhome /home/user/foundation/installation/node-agent-server1/server1/imq -startRmiRegistry -rmiRegistryPort 37776 -Dimq.imqcmd.user=admin -passfile /tmp/asmq5711749746025968663.tmp -save -name clusterservercom -port 37676 -bgnd -silent
[06/Apr/2011:12:48:44 EDT] [B1004]: Starting the portmapper service using tcp [ 37676, 50, * ] with min threads 1 and max threads of 1
[06/Apr/2011:12:48:45 EDT] [B1060]: Loading persistent data...
I followed the steps in http://middlewaremagic.com/weblogic/?p=4884 to narrow it down to the threads that were causing the high CPU. Both were around 94%.
Following are the stacks for those threads.
"Thread-jms[224]" prio=10 tid=0xd635f400 nid=0x5665 runnable [0xd18fe000]
   java.lang.Thread.State: RUNNABLE
        at com.sun.messaging.jmq.jmsserver.data.TransactionList.isConsumedInTransaction(TransactionList.java:697)
        at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:918)
        - locked <0xf3d35730> (a java.util.Collections$SynchronizedMap)
        at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:810)
        at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.destroyConsumer(ConsumerHandler.java:577)
        at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.handle(ConsumerHandler.java:422)
        at com.sun.messaging.jmq.jmsserver.data.PacketRouter.handleMessage(PacketRouter.java:181)
        at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.readData(IMQIPConnection.java:1489)
        at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.process(IMQIPConnection.java:644)
        at com.sun.messaging.jmq.jmsserver.service.imq.OperationRunnable.process(OperationRunnable.java:170)
        at com.sun.messaging.jmq.jmsserver.util.pool.BasicRunnable.run(BasicRunnable.java:493)
        at java.lang.Thread.run(Thread.java:619)
   Locked ownable synchronizers:
        - None

"Thread-jms[214]" prio=10 tid=0xd56c8000 nid=0x566c waiting for monitor entry [0xd2838000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at com.sun.messaging.jmq.jmsserver.data.TransactionInformation.isConsumedMessage(TransactionList.java:2544)
        - locked <0xdbeeb538> (a com.sun.messaging.jmq.jmsserver.data.TransactionInformation)
        at com.sun.messaging.jmq.jmsserver.data.TransactionList.isConsumedInTransaction(TransactionList.java:697)
        at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:918)
        - locked <0xe4c9abf0> (a java.util.Collections$SynchronizedMap)
        at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:810)
        at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.destroyConsumer(ConsumerHandler.java:577)
        at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.handle(ConsumerHandler.java:422)
        at com.sun.messaging.jmq.jmsserver.data.PacketRouter.handleMessage(PacketRouter.java:181)
        at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.readData(IMQIPConnection.java:1489)
        at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.process(IMQIPConnection.java:644)
        at com.sun.messaging.jmq.jmsserver.service.imq.OperationRunnable.process(OperationRunnable.java:170)
        at com.sun.messaging.jmq.jmsserver.util.pool.BasicRunnable.run(BasicRunnable.java:493)
        at java.lang.Thread.run(Thread.java:619)
   Locked ownable synchronizers:
        - None

"Thread-jms[213]" prio=10 tid=0xd65be800 nid=0x5670 runnable [0xd1a28000]
   java.lang.Thread.State: RUNNABLE
        at com.sun.messaging.jmq.jmsserver.data.TransactionList.isConsumedInTransaction(TransactionList.java:697)
        at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:918)
        - locked <0xe4c4bad8> (a java.util.Collections$SynchronizedMap)
        at com.sun.messaging.jmq.jmsserver.core.Session.detatchConsumer(Session.java:810)
        at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.destroyConsumer(ConsumerHandler.java:577)
        at com.sun.messaging.jmq.jmsserver.data.handlers.ConsumerHandler.handle(ConsumerHandler.java:422)
        at com.sun.messaging.jmq.jmsserver.data.PacketRouter.handleMessage(PacketRouter.java:181)
        at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.readData(IMQIPConnection.java:1489)
        at com.sun.messaging.jmq.jmsserver.service.imq.IMQIPConnection.process(IMQIPConnection.java:644)
        at com.sun.messaging.jmq.jmsserver.service.imq.OperationRunnable.process(OperationRunnable.java:170)
        at com.sun.messaging.jmq.jmsserver.util.pool.BasicRunnable.run(BasicRunnable.java:493)
        at java.lang.Thread.run(Thread.java:619)
   Locked ownable synchronizers:
        - None
Any ideas will be appreciated.
--

Thanks ak, for the response.
Yes, the messages are consumed in transactions. I set imq.txn.reapLimit=200 in Start Arguments in jvm configuration.
I verified that it is being set in the log.txt file for the broker:
-Dimq.autocreate.queue=false -Dimq.autocreate.topic=false -Dimq.txn.reapLimit=250
It did not make any difference. Do I need to set this property somewhere else ?
As far as upgrading MQ is concerned, I am using Glassfish 2.1, and I think MQ 4.3 is packaged with it. Can you suggest a safe way to upgrade to OpenMQ 4.5 in a running environment? I can bring down the cluster temporarily. Can I just change the jar file somewhere to use MQ 4.5?
Here is the snippet of the consumer code :
I create the Connection in @PostConstruct and close it in @PreDestroy, so that I don't have to do it every time.
private ResultMessage[] doRetrieve(String username, String password, String jndiDestination, String filter, int maxMessages, long timeout, RetrieveType type)
throws InvalidCredentialsException, InvalidFilterException, ConsumerException {
// Resources
Session session = null;
try {
if (log.isTraceEnabled()) log.trace("Creating transacted session with JMS broker.");
session = connection.createSession(true, Session.SESSION_TRANSACTED);
// Locate bound destination and create consumer
if (log.isTraceEnabled()) log.trace("Searching for named destination: " + jndiDestination);
Destination destination = (Destination) ic.lookup(jndiDestination);
if (log.isTraceEnabled()) log.trace("Creating consumer for named destination " + jndiDestination);
MessageConsumer consumer = (filter == null || filter.trim().length() == 0) ? session.createConsumer(destination) : session.createConsumer(destination, filter);
if (log.isTraceEnabled()) log.trace("Starting JMS connection.");
connection.start();
// Consume messages
if (log.isDebugEnabled()) log.trace("Creating retrieval containers.");
List<ResultMessage> processedMessages = new ArrayList<ResultMessage>(maxMessages);
BytesMessage jmsMessage = null;
for (int i = 0 ; i < maxMessages ; i++) {
// Attempt message retrieve
if (log.isTraceEnabled()) log.trace("Attempting retrieval: " + i);
switch (type) {
    case BLOCKING :
        jmsMessage = (BytesMessage) consumer.receive();
        break;
    case IMMEDIATE :
        jmsMessage = (BytesMessage) consumer.receiveNoWait();
        break;
    case TIMED :
        jmsMessage = (BytesMessage) consumer.receive(timeout);
        break;
}
// Process retrieved message
if (jmsMessage != null) {
if (log.isTraceEnabled()) log.trace("Message retrieved\n" + jmsMessage);
// Extract message
if (log.isTraceEnabled()) log.trace("Extracting result message container from JMS message.");
byte[] extracted = new byte[(int) jmsMessage.getBodyLength()];
jmsMessage.readBytes(extracted);
    // Decompress message
    if (jmsMessage.propertyExists(COMPRESSED_HEADER) && jmsMessage.getBooleanProperty(COMPRESSED_HEADER)) {
        if (log.isTraceEnabled()) log.trace("Decompressing message.");
        extracted = decompress(extracted);
    }
    // Done processing message
    if (log.isTraceEnabled()) log.trace("Message added to retrieval container.");
    String signature = jmsMessage.getStringProperty(DIGITAL_SIGNATURE);
    processedMessages.add(new ResultMessage(extracted, signature));
} else {
    if (log.isTraceEnabled()) log.trace("No message was available.");
}
} // end of retrieval loop
// Package return container
if (log.isTraceEnabled()) log.trace("Packing retrieved messages to return.");
ResultMessage[] collectorMessages = new ResultMessage[processedMessages.size()];
for (int i = 0 ; i < collectorMessages.length ; i++)
collectorMessages[i] = processedMessages.get(i);
if (log.isTraceEnabled()) log.trace("Returning " + collectorMessages.length + " messages.");
return collectorMessages;
} catch (NamingException ex) {
    sessionContext.setRollbackOnly();
    log.error("Unable to locate named queue: " + jndiDestination, ex);
    throw new ConsumerException("Unable to locate named queue: " + jndiDestination, ex);
} catch (InvalidSelectorException ex) {
    sessionContext.setRollbackOnly();
    log.error("Invalid filter: " + filter, ex);
    throw new InvalidFilterException("Invalid filter: " + filter, ex);
} catch (IOException ex) {
    sessionContext.setRollbackOnly();
    log.error("Message decompression failed.", ex);
    throw new ConsumerException("Message decompression failed.", ex);
} catch (GeneralSecurityException ex) {
    sessionContext.setRollbackOnly();
    log.error("Message decryption failed.", ex);
    throw new ConsumerException("Message decryption failed.", ex);
} catch (JMSException ex) {
    sessionContext.setRollbackOnly();
    log.error("Unable to consume messages.", ex);
    throw new ConsumerException("Unable to consume messages.", ex);
} catch (Throwable ex) {
    sessionContext.setRollbackOnly();
    log.error("Unexpected error.", ex);
    throw new ConsumerException("Unexpected error.", ex);
} finally {
    try {
        if (session != null) session.close();
    } catch (JMSException ex) {
        log.error("Unexpected error closing session.", ex);
    }
}
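The decompress helper the loop calls is not shown in the post, and the COMPRESSED_HEADER property does not say which codec the producer uses. A minimal sketch, assuming GZIP compression (the class name MessageCompression and the matching compress helper are hypothetical, added here only so the round trip can be demonstrated):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class MessageCompression {

    // Inflate a GZIP-compressed message body back to its original bytes.
    public static byte[] decompress(byte[] compressed) throws IOException {
        GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1)
            out.write(buffer, 0, read);
        in.close();
        return out.toByteArray();
    }

    // Producer-side counterpart, shown only for round-trip testing.
    public static byte[] compress(byte[] raw) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(bytes);
        gzip.write(raw);
        gzip.close();
        return bytes.toByteArray();
    }
}
```

If the producer uses a different codec (ZLIB via Deflater, for example), only the stream classes change; the byte-array plumbing stays the same.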
Thanks for your help.
Edited by: vineet on Apr 7, 2011 10:06 AM -
XML select query causing very high CPU usage.
Hi All,
In our Oracle 10.2.0.4 two-node RAC we are seeing very high CPU usage, and all of the top CPU-consuming processes are executing the SQL below. These statements are also waiting on some gc wait events, as shown below.
SELECT B.PACKET_ID FROM CM_PACKET_ALT_KEY B, CM_ALT_KEY_TYPE C, TABLE(XMLSEQUENCE ( EXTRACT (:B1 , '/AlternateKeys/AlternateKey') )) T
WHERE B.ALT_KEY_TYPE_ID = C.ALT_KEY_TYPE_ID AND C.ALT_KEY_TYPE_NAME = EXTRACTVALUE (VALUE (T), '/AlternateKey/@keyType')
AND B.ALT_KEY_VALUE = EXTRACTVALUE (VALUE (T), '/AlternateKey')
AND NVL (B.CHILD_BROKER_CODE, '6209870F57C254D6E04400306E4A78B0') =
NVL (EXTRACTVALUE (VALUE (T), '/AlternateKey/@broker'), '6209870F57C254D6E04400306E4A78B0')
SQL> select sid,event,state from gv$session where state='WAITING' and event not like '%SQL*Net%';
SID EVENT STATE
66 jobq slave wait WAITING
124 gc buffer busy WAITING
143 gc buffer busy WAITING
147 db file sequential read WAITING
222 Streams AQ: qmn slave idle wait WAITING
266 gc buffer busy WAITING
280 gc buffer busy WAITING
314 gc cr request WAITING
317 gc buffer busy WAITING
392 gc buffer busy WAITING
428 gc buffer busy WAITING
471 gc buffer busy WAITING
518 Streams AQ: waiting for time management or cleanup tasks WAITING
524 Streams AQ: qmn coordinator idle wait WAITING
527 rdbms ipc message WAITING
528 rdbms ipc message WAITING
532 rdbms ipc message WAITING
537 rdbms ipc message WAITING
538 rdbms ipc message WAITING
539 rdbms ipc message WAITING
540 rdbms ipc message WAITING
541 smon timer WAITING
542 rdbms ipc message WAITING
543 rdbms ipc message WAITING
544 rdbms ipc message WAITING
545 rdbms ipc message WAITING
546 rdbms ipc message WAITING
547 gcs remote message WAITING
548 gcs remote message WAITING
549 gcs remote message WAITING
550 gcs remote message WAITING
551 ges remote message WAITING
552 rdbms ipc message WAITING
553 rdbms ipc message WAITING
554 DIAG idle wait WAITING
555 pmon timer WAITING
79 jobq slave wait WAITING
117 gc buffer busy WAITING
163 PX Deq: Execute Reply WAITING
205 db file parallel read WAITING
247 gc current request WAITING
279 jobq slave wait WAITING
319 LNS ASYNC end of log WAITING
343 jobq slave wait WAITING
348 direct path read WAITING
372 db file scattered read WAITING
475 jobq slave wait WAITING
494 gc cr request WAITING
516 Streams AQ: qmn slave idle wait WAITING
518 Streams AQ: waiting for time management or cleanup tasks WAITING
523 Streams AQ: qmn coordinator idle wait WAITING
528 rdbms ipc message WAITING
529 rdbms ipc message WAITING
530 Streams AQ: waiting for messages in the queue WAITING
532 rdbms ipc message WAITING
537 rdbms ipc message WAITING
538 rdbms ipc message WAITING
539 rdbms ipc message WAITING
540 rdbms ipc message WAITING
541 smon timer WAITING
542 rdbms ipc message WAITING
543 rdbms ipc message WAITING
544 rdbms ipc message WAITING
545 rdbms ipc message WAITING
546 rdbms ipc message WAITING
547 gcs remote message WAITING
548 gcs remote message WAITING
549 gcs remote message WAITING
550 gcs remote message WAITING
551 ges remote message WAITING
552 rdbms ipc message WAITING
553 rdbms ipc message WAITING
554 DIAG idle wait WAITING
555 pmon timer WAITING
I am not at all able to understand what this SQL is. I think it is related to some XML datatype.
I am also not able to generate an execution plan for this SQL using EXPLAIN PLAN; it fails with the error ORA-00932: inconsistent datatypes: expected - got -.
Please help me in this issue...
How can I generate the execution plan?
Can this type of XML-based query cause high gc wait events and buffer busy wait events?
How can I tune this query?
How can I confirm that this is the only query causing the high CPU usage?
Our servers have 64 GB RAM and 16 CPUs.
The OS is Solaris 5.10, with UDP as the interconnect protocol.
-Yasser
I found some more XML queries, as shown below.
SELECT XMLELEMENT("Resource", XMLATTRIBUTES(RAWTOHEX(RMR.RESOURCE_ID) AS "resourceID", RMO.OWNER_CODE AS "ownerCode", RMR.MIME_TYPE AS "mimeType",RMR.FILE_SIZE AS "fileSize", RMR.RESOURCE_STATUS AS "status"), (SELECT XMLAGG(XMLELEMENT("ResourceLocation", XMLATTRIBUTES(RAWTOHEX(RMRP.REPOSITORY_ID) AS "repositoryID", RAWTOHEX(DIRECTORY_ID) AS "directoryID", RESOURCE_STATE AS "state", RMRO.RETRIEVAL_SEQ AS "sequence"), XMLFOREST(FULL_PATH AS "RemotePath"))ORDER BY RMRO.RETRIEVAL_SEQ) FROM RM_RESOURCE_PATH RMRP, RM_RETRIEVAL_ORDER RMRO, RM_LOCATION RML WHERE RMRP.RESOURCE_ID = RMR.RESOURCE_ID AND RMRP.REPOSITORY_ID = RMRO.REPOSITORY_ID AND RMRO.LOCATION_ID = RML.LOCATION_ID AND RML.LOCATION_CODE = :B2 ) AS "Locations") FROM RM_RESOURCE RMR, RM_OWNER RMO WHERE RMR.OWNER_ID = RMO.OWNER_ID AND RMR.RESOURCE_ID = HEXTORAW(:B1 )
SELECT XMLELEMENT ( "Resources", XMLAGG(XMLELEMENT ( "Resource", XMLATTRIBUTES (B.RESOURCE_ID AS "id"), XMLELEMENT ("ContentType", C.CONTENT_TYPE_CODE), XMLELEMENT ("TextExtractStatus", B.TEXT_EXTRACTED_STATUS), XMLELEMENT ("MimeType", B.MIME_TYPE), XMLELEMENT ("NumberPages", TO_CHAR (B.NUM_PAGES)), XMLELEMENT ("FileSize", TO_CHAR (B.FILE_SIZE)), XMLELEMENT ("Status", B.STATUS), XMLELEMENT ("ContentFormat", D.CONTENT_FORMAT_CODE), G.ALTKEY )) ) FROM CM_PACKET A, CM_RESOURCE B, CM_REF_CONTENT_TYPE C, CM_REF_CONTENT_FORMAT D, ( SELECT XMLELEMENT ( "AlternateKeys", XMLAGG(XMLELEMENT ( "AlternateKey", XMLATTRIBUTES ( H.ALT_KEY_TYPE_NAME AS "keyType", E.CHILD_BROKER_CODE AS "broker", E.VERSION AS "version" ), E.ALT_KEY_VALUE )) ) ALTKEY, E.RESOURCE_ID RES_ID FROM CM_RESOURCE_ALT_KEY E, CM_RESOURCE F, CM_ALT_KEY_TYPE H WHERE E.RESOURCE_ID = F.RESOURCE_ID(+) AND F.PACKET_ID = HEXTORAW (:B1 ) AND E.ALT_KEY_TYPE_ID = H.ALT_KEY_TYPE_ID GROUP BY E.RESOURCE_ID) G WHERE A.PACKET_ID = HEXTORAW (:B1
SELECT XMLELEMENT ("Tagging", XMLAGG (GROUPEDCAT)) FROM ( SELECT XMLELEMENT ( "TaggingCategory", XMLATTRIBUTES (CATEGORY1 AS "categoryType"), XMLAGG (LISTVALUES) ) GROUPEDCAT FROM (SELECT EXTRACTVALUE ( VALUE (T), '/TaggingCategory/@categoryType' ) CATEGORY1, XMLCONCAT(EXTRACT ( VALUE (T), '/TaggingCategory/TaggingValue' )) LISTVALUES FROM TABLE(XMLSEQUENCE(EXTRACT ( :B1 , '/Tagging/TaggingCategory' ))) T) GROUP BY CATEGORY1)
SELECT XMLCONCAT ( :B2 , DI_CONTENT_PKG.GET_ENUM_TAGGING_FN (:B1 ) ) FROM DUAL
SELECT XMLCONCAT (:B2 , :B1 ) FROM DUAL
SELECT * FROM EQ_RAW_TAG_ERROR A WHERE TAG_LIST_ID = :B2 AND EXTRACTVALUE (A.RAW_TAG_XML, '/TaggingValues/TaggingValue/Value' ) = :B1 AND A.STATUS = 'NR'
SELECT RAWTOHEX (S.PACKET_ID) AS PACKET_ID, PS.PACKET_STATUS_DESC, S.LAST_UPDATE AS LAST_UPDATE, S.USER_ID, S.USER_COMMENT, MAX (T.ALT_KEY_VALUE) AS ALTKEY, 'Y' AS IS_PACKET FROM EQ_PACKET S, CM_PACKET_ALT_KEY T, CM_REF_PACKET_STATUS PS WHERE S.STATUS_ID = PS.PACKET_STATUS_ID AND S.PACKET_ID = T.PACKET_ID AND NOT EXISTS (SELECT 1 FROM CM_RESOURCE RES WHERE RES.PACKET_ID = S.PACKET_ID AND EXISTS (SELECT 1 FROM CM_REF_CONTENT_FORMAT CF WHERE CF.CONTENT_FORMAT_ID = RES.CONTENT_FORMAT AND CF.CONTENT_FORMAT_CODE = 'I_FILE')) GROUP BY RAWTOHEX (S.PACKET_ID), PS.PACKET_STATUS_DESC, S.LAST_UPDATE, S.USER_ID, S.USER_COMMENT UNION SELECT RAWTOHEX (A.FATAL_ERROR_ID) AS PACKET_ID, C.PACKET_STATUS_DESC, A.OCCURRENCE_DATE AS LAST_UPDATE, '' AS USER_ID, '' AS USER_COMMENT, RAWTOHEX (A.FATAL_ERROR_ID) AS ALTKEY, 'N' AS IS_PACKET FROM EQ_FATAL_ERROR A, EQ_ERROR_MSG B, CM_REF_PACKET_STATUS C, EQ_SEVERITYD WHERE A.PACKET_ID IS NULL AND A.STATUS = 'NR' AND A.ERROR_MSG_ID = B.ERROR_MSG_ID AND B.SEVERITY_I
SELECT /*+ INDEX(e) INDEX(a) INDEX(c)*/ XMLAGG(XMLELEMENT ( "TaggingCategory", XMLATTRIBUTES ( G.TAG_CATEGORY_CODE AS "categoryType" ), XMLELEMENT ("TaggingValue", XMLATTRIBUTES (C.IS_PRIMARY AS "primary", H.ORIGIN_CODE AS "origin"), XMLAGG(XMLELEMENT ( "Value", XMLATTRIBUTES ( F.TAG_LIST_CODE AS "listType" ), E.TAG_VALUE )) ) )) FROM TABLE (CAST (:B1 AS T_TAG_MAP_HIERARCHY_TAB)) A, TABLE (CAST (:B2 AS T_ENUM_TAG_TAB)) C, REM_TAG_VALUE E, REM_TAG_LIST F, REM_TAG_CATEGORY G, CM_ORIGIN H WHERE E.TAG_VALUE_ID = C.TAG_VALUE_ID AND F.TAG_LIST_ID = E.TAG_LIST_ID AND G.TAGGING_CATEGORY_ID = F.TAGGING_CATEGORY_ID AND H.ORIGIN_ID = C.ORIGIN_ID AND C.ENUM_TAG_ID = A.MAPPED_ENUM_TAG_ID GROUP BY C.IS_PRIMARY, H.ORIGIN_CODE, G.TAG_CATEGORY_CODE START WITH A.MAPPED_ENUM_TAG_ID = HEXTORAW (:B3 ) CONNECT BY PRIOR A.MAPPED_ENUM_TAG_ID = A.ENUM_TAG_ID
SELECT /*+ INDEX(e) */ XMLAGG(XMLELEMENT ( "TaggingCategory", XMLATTRIBUTES ( G.TAG_CATEGORY_CODE AS "categoryType" ), XMLELEMENT ( "TaggingValue", XMLATTRIBUTES (C.IS_PRIMARY AS "primary", H.ORIGIN_CODE AS "origin"), XMLAGG(XMLCONCAT ( XMLELEMENT ( "Value", XMLATTRIBUTES ( F.TAG_LIST_CODE AS "listType" ), E.TAG_VALUE ), CASE WHEN LEVEL = 1 THEN :B4 ELSE NULL END )) ) )) FROM TABLE (CAST (:B1 AS T_TAG_MAP_HIERARCHY_TAB)) A, TABLE (CAST (:B2 AS T_ENUM_TAG_TAB)) C, REM_TAG_VALUE E, REM_TAG_LIST F, REM_TAG_CATEGORY G, CM_ORIGIN H WHERE E.TAG_VALUE_ID = C.TAG_VALUE_ID AND F.TAG_LIST_ID = E.TAG_LIST_ID AND G.TAGGING_CATEGORY_ID = F.TAGGING_CATEGORY_ID AND H.ORIGIN_ID = C.ORIGIN_ID AND C.ENUM_TAG_ID = A.MAPPED_ENUM_TAG_ID GROUP BY G.TAG_CATEGORY_CODE, C.IS_PRIMARY, H.ORIGIN_CODE START WITH A.MAPPED_ENUM_TAG_ID = HEXTORAW (:B3 ) CONNECT BY PRIOR A.MAPPED_ENUM_TAG_ID = A.ENUM_TAG_ID
By observing the above SQL queries, I found some hints forcing index usage.
I think the XML schema is already created, and it is progressing as you stated above. Please correct me if I am wrong.
I found all of these SQL statements in the AWR report, and all of them are very high resource-consuming queries.
And I am really sorry if I am irritating you by asking all these basic questions about XML.
-Yasser
Edited by: YasserRACDBA on Nov 17, 2009 3:39 PM
Did syntax alignment. -
Why does my 4S stay warm and display very high usage?
I have an iPhone 4S and the battery life is unbelievably poor. Sitting in my pocket on standby it drains 7-10% per hour. However, if I go to settings > general > usage it shows a very high amount of usage - say 40 minutes usage for every 1 hour standby. On top of this the phone is almost always warm to the touch. Obviously it's not meant to be this way.
I have tried restoring to factory settings. I have tried recovery/dfu restoring. Even with adding no data or apps to the basic restore, I'm still seeing the same strange usage and battery drain.
On top of this I've also installed iOS 5.0.1 betas 1 and 2. Again I saw no improvement.
Any ideas on what further I can try? I'd really love to be able to get through a day without having to constantly have the phone on charge.
Just in case anyone comes here wondering about the answer to my issues: I got the phone replaced, and the replacement has much better battery life with the exact same software, so I'm guessing that I just had a defective phone.
It's weird though as all the signs pointed to it being a software issue. Oh well, it's not like I hadn't tried everything, so I just got it replaced (gotta love that customer service). It was interesting watching the Genius say stuff like "ah but you can fix the battery life problem by turning off X..." [goes through settings] "oh, you already have that turned off"
The usage numbers still seem a bit high on the new one, but on day 1 it's managed 14h27m standby, 4h24m usage (real usage is more like 2h) and it's on 33% battery remaining at nearly midnight. That's just about acceptable, and an upgrade to 5.0.1 should get it running even better I presume. The old phone would probably have died a good few hours ago, so this one is about twice as good.