Pm_tick delay of .. ms exceeds .. ms

I have Sun Cluster 3.2 on x86 running on my PC.
I am getting numerous messages of the type "pm_tick delay of .. ms exceeds .. ms".
What is the problem?
Thanks

You probably have a single-node cluster in a virtual machine. The fine-grained timing in virtual machines is somewhat suboptimal and usually not comparable to physical boxes.
Oracle Solaris Cluster was designed with the requirement of reliable timing within the machines. This warning is to be expected in virtual machines until they provide reliable timing.
So this message should be filtered out at the syslog level, otherwise it will clutter your /var/adm/messages. I would suggest configuring the kernel severity to err in /etc/syslog.conf.
After doing this you have to refresh your syslog service.
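For example (a minimal sketch; your default selector line may differ, and the fields in /etc/syslog.conf must be tab-separated), change kern.debug to kern.err on the /var/adm/messages line so that kern.notice messages such as pm_tick are no longer logged:
*.err;kern.err;daemon.notice;mail.crit          /var/adm/messages
and then, on Solaris 10 and later, make syslogd re-read its configuration with:
svcadm refresh svc:/system/system-log:default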
Hope that helps
Detlef

Similar Messages

  • Suncluster 3.2 on Vmware panic problem

    Hello All,
    This issue might have been discussed already, but looking for any known solution from anyone.
    I have installed Sun Cluster 3.2 on VMware (hardware: AMD 64 X2, 4 GB RAM). Everything went smoothly. The cluster is installed and configured, and the quorum is configured (a shared disk backed by a vmdk file). Suddenly it began to get errors as below and the nodes started panicking.
    Dec 13 05:33:53 vmnode5 genunix: [ID 313806 kern.notice] NOTICE: pm_tick delay of 15281 ms exceeds 2147 ms
    Dec 13 05:33:53 vmnode5 genunix: [ID 313806 kern.notice] NOTICE: pm_tick delay of 15267 ms exceeds 2147 ms
    Dec 13 05:33:53 vmnode5 genunix: [ID 313806 kern.notice] NOTICE: pm_tick delay of 15280 ms exceeds 2147 ms
    Dec 13 05:33:53 vmnode5 genunix: [ID 313806 kern.notice] NOTICE: pm_tick delay of 15266 ms exceeds 2147 ms
    Dec 13 05:33:53 vmnode5 genunix: [ID 313806 kern.notice] NOTICE: pm_tick delay of 15267 ms exceeds 2147 ms
    I have searched and even updated the patches to 120012-14. Still no luck. Please advise if anyone knows a solution for this.
    Regards
    LP

    Thanks for posting this. It's interesting that you got this to work.
    The rules for what is used for quorum devices (QDs) and other shared devices are as follows:
    _2 paths to the device_
    QD: SCSI-2, with PGRe.
    non-QDs: SCSI-2 (*no* PGRe is needed for shared devices that are not QDs).
    [I.e., PGRe is only needed for QDs.]
    _>2 paths to the device_
    QDs and non-QDs: SCSI-3 PGR.
    (PGRe = persistent group reservation emulation.)
    So SCSI-3 should only be needed if you have a >2 node cluster or more than 2 paths to your storage. If you use MPxIO then the dual paths would be hidden.
    So, in theory, you should be able to try out RAC.
    Let us know how you get on!
    Tim
    ---

  • SC 3.1u4 on VMWARE

    Hi, as I understand from previous postings there is no support for SC 3.1u4 on 32-bit x86. Thus I wonder whether there is any experience running SC 3.1u4 on top of Sol 10u1 in a 64-bit VMware virtual machine. This would then be a perfect environment for testing.
    Thanks, Marc

    I had limited success with a Solaris 9 09/04 x86-based 2-node cluster (with the Oracle server agent running)
    under VMware Server 1.0, where SCSI-2 is supported for quorum. But bear in mind, it can never be
    used for anything serious, and absolutely not for production, for three reasons:
    (1) the clock delay is a constant annoyance, and even generates pm_tick delays that bring the cluster down in a panic.
    (2) the pcn0 public interface automatically joins the sc_ipmp0 group and constantly fails. You have to tweak /etc/default/mpathd to survive (an example is sketched below).
    (3) you have to be patient enough to endure the pain and suffering of numerous reboots to get the cluster running,
    and a dozen times more effort to fail over the Oracle server successfully.
    If you can gracefully resolve those three issues, you will be an asset in building Solaris clusters under VMware.
    Your experience will be valuable to share.
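    Regarding (2), the /etc/default/mpathd tweak is usually a matter of relaxing the IPMP probe behaviour; a minimal sketch (the larger value is illustrative, the others are the shipped defaults):
    # /etc/default/mpathd
    FAILURE_DETECTION_TIME=30000   # default is 10000 ms; a larger value tolerates VM clock hiccups
    FAILBACK=yes
    TRACK_INTERFACES_ONLY_WITH_GROUPS=yes
    As far as I recall, in.mpathd re-reads this file on a SIGHUP (pkill -HUP in.mpathd); otherwise a reboot picks it up.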
    P.S. I have not tried VMware Workstation 5.0, but the 4.0 (likely 4.5.2?) Workstation does not support SCSI disks.

  • Audit Commit Delay exceeded, written a copy to OS Audit Trail

    OS=Redhat 5
    DB=11.1.0.7.0
    We are running a 2-node RAC and are seeing the following error in the alert log file. Based on my research, this may be a bug, but I am not convinced it is. Also, this error is only occurring on one of the nodes. Any ideas?
    AUD: Audit Commit Delay exceeded, written a copy to OS Audit Trail
    Mon Feb 11 18:23:54 2013
    AUD: Audit Commit Delay exceeded, written a copy to OS Audit Trail
    Mon Feb 11 18:23:54 2013
    [the same two lines repeat dozens of times in the alert log]
    AUD: Audit Commit Delay exceeded, written a copy to OS Audit Trail

    After further investigation, it turns out there was something wrong with the Network Attached Storage. As a result, the database was not able to write to this storage, which explains why we were seeing the Audit Commit Delay exceeded error.
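    For anyone hitting the same message, a quick way to check where auditing is being written (run from SQL*Plus with a privileged account; the parameter names are standard, the values are site-specific):
    SHOW PARAMETER audit_trail
    SHOW PARAMETER audit_file_dest
    This helps confirm whether the audit destinations sit on the storage that is misbehaving.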

  • Delay time data at Daily Production Report

    Hi all
    We have created the Daily Production report, which has the delay details of all departments. The delay details come from notifications. The problem is that users require the shift-wise delay according to the production.
    In standard SAP we cannot keep the shift at notification level (a user exit also helps, but the time cannot be assigned). Shift A is 7 am-3 pm, B is 3 pm-11 pm, C is 11 pm-7 am. The malfunction start time may sometimes run over from the A shift into the B shift, but the delay calculation gives only one figure. Without hard coding, how can we assign the delay hours shift-wise?
    At notification level, will capacity requirements appear per shift?
    please suggest

    Hi,
    1. Select a capacity category for which the actual capacity requirements are to be calculated on the Capacities tab page in the resource.
    2. Choose ActCapReqmnts.
    3. The Settings for Determining the Actual Capacity Requirements dialog box appears.
    4. Set the Calc. actual cap. reqmts indicator.
    5. If the actual capacity requirements are to be calculated from activities:
    Enter one activity for each actual capacity requirement in Actual capacity requirements from activities. Determine the standard value parameter that identifies the corresponding activity.
    regards,
    Venkatesan Anandan

  • How can I set up a delayed analog trigger on PCI 6115 DAQ

    I have an S-Series PCI 6115 DAQ which I’m running with Labview. I’m using it to measure signals from an acoustic emission sensor and two force transducers. I’d like to set up a delayed analog trigger which will start acquisition on all three channels a period of time after a selected channel’s voltage exceeds a threshold.
    Currently I’m using the AI Config VI in line with the AI Start VI and AI Read VI to capture data after an analog hardware trigger occurs. A software trigger probably wouldn't work because I have to sample my data at 10 MS/sec. My setup works fine for triggering without any delay or skip counts. However, if I set the delay or skip count in the additional trigger parameter field of the AI Start VI, there is no effect, and the device still starts capturing data immediately after the trigger is received. What is the cause of this, and how can I get around it?
    Also, is it possible to sample the channels of a PCI-6115 DAQ at different rates? Right now, I’m sampling all my channels at 10MS/sec and throwing away data on all channels except one. However, this seems relatively slow and eventually I would like to attempt pseudo-real time control using my data.

    rpursley8 is right about needing to get the counters involved if you want a hardware-timed delay in your application.
    Concerning whether or not you can sample at different rates, check this document out.
    Sampling Different Channels at Different Rates with NI-DAQmx
    Otis
    Training and Certification
    Product Support Engineer
    National Instruments

  • Why do I receive "4.3.2 connection rate limit exceeded" error message using the mail merge extension and what can be done about it?

    I am using TB 31.3.0 with Mail Merge 3.9.1. I routinely send an email to 435 members of a volunteer emergency responders group that I coordinate. I do so using a .csv list with mail merge. While there were no problems in the past, more recently the mail merge function will hang after sending a varied number of messages successfully and I get the following error message:
    "An error occurred sending mail. The mail server sent an incorrect greeting: 4.3.2 connection rate exceeded.."
    There is an "OK" button in the error message pop up that can be clicked to resume the mail merge process. It would appear that if I click the "OK" button immediately whenever the error message is received I must do so frequently and some members do not receive their email. If I delay clicking the "OK" button, I found that I only needed to click the "OK" button twice to send to all the members.
    An on line search suggests this is the result of some sort of throttling by my ISP, Sonic.net. There is also this comment: "If you are receiving this error, you are likely using mailing list software which cannot decipher the temporary fail codes. If so, you will need to set your software to slow down its delivery rate and/or reduce the number of active connections per remote host."
    I am not super technical. Is it realistic to think that I can tweak mail merge so that I do not have to babysit my email to this group?

    Then I suggest you send using a mail provider that is not actively trying to block your outgoing mail, or use a Yahoo / Google Groups mailing list feature so you only send a single mail.
    Or use a free account from the likes of http://www.ymlp.com/ who limit free-account mailing lists to 1000 subscribers. (I Googled them this morning.)

  • Excessive (?) cluster delays during shutdown of storage enabled node.

    We are experiencing significant delays when shutting down a storage enabled node. At the moment, this is happening in a benchmark environment. If these delays were to occur in production, however, they would push us well outside of our acceptable response times, so we are looking for ways to reduce/eliminate the delays.
    Some background:
    - We're running in a 'grid' style arrangement with a dedicated cache tier.
    - We're running our benchmarks with a vanilla distributed cache -- binary storage, no backups, no operations other than put/get.
    - We're allocating a relatively large number of partitions (1973), basing that number on the total potential cluster storage and the '50MB per partition' rule.
    - We're using JSW to manage startup/shutdown, calling DefaultCacheServer.main() to start the cache server, and using the shutdown hook (from the operational config) to shutdown the instance.
    - We're currently running all of the dedicated cache JVMs on a single machine (that won't be the case in production, of course), with a relatively higher ratio of JVMs to cores --> about 2 to 1.
    - We're using a simple benchmarking client that is issuing a combination of puts/gets against the distributed cache. The ids for these puts/gets are randomized (completely synthetic, i know).
    - We're currently handling all operations on the distributed service thread (i.e. thread count is zero).
    What we see:
    - When adding a new node to a cluster under steady load (~50% CPU idle avg), there is a very slight degradation, but only very slight. There is no apparent pause, and the maximum operation times against the cluster might barely exceed ~100 ms.
    - When later removing that node from the cluster (killing the JVM, triggering the Coherence-supplied shutdown hook), there is an obvious, extended pause. During this time, the maximum operation times against the cluster are as high as 5, 10, or even 15 seconds.
    At the beginning of the pause, a client will see this message:
    2010-07-13 22:23:53.227/55.738 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service Management with senior member 1
    During the length of the pause, the cache server logging indicates that primary partitions are being shuffled around.
    When the partition shuffle is complete, the clients become immediately responsive, and display these messages:
    2010-07-13 22:23:58.935/61.446 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member 8 left service hibL2-distributed with senior member 1
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): MemberLeft notification for Member 8 received from Member(Id=8, Timestamp=2010-07-13 22:23:21.378, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server)
    2010-07-13 22:23:58.973/61.484 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): Member(Id=8, Timestamp=2010-07-13 22:23:58.973, Address=x.x.x.x:8001, MachineId=47282, Location=site:xxx.com,machine:xxx,process:30552,member:xxx-S02, Role=server) left Cluster with senior member 1
    2010-07-13 22:23:59.135/61.646 Oracle Coherence GE 3.5.3/465 <D5> (thread=Cluster, member=10): TcpRing: disconnected from member 8 due to the peer departure
    Note that there was almost nothing actually in the entire cluster-wide cache at this point -- maybe 10 MB of data at most.
    Any thoughts on how we could eliminate (or nearly eliminate) these pauses on shutdown?
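    For reference, the service thread count mentioned above lives in the distributed scheme of the cache configuration; a minimal sketch (element names per the Coherence 3.5 cache-config schema, scheme name and value illustrative):
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <service-name>hibL2-distributed</service-name>
      <!-- worker threads for the distributed service; 0 means all work runs on the service thread -->
      <thread-count>4</thread-count>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>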

    Increasing the number of threads associated with the distributed service does not seem to have a noticeable effect. I might try it in a larger-scale test, just to make sure, but initial indications are not positive.
    From the client side, the operations seem hung behind the DistributedCache$BinaryMap.waitForPartitionRedistribution() method. The call stack is listed below.
    "main" prio=10 tid=0x09a75400 nid=0x6f02 in Object.wait() [0xb7452000]
    java.lang.Thread.State: TIMED_WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForPartitionRedistribution(DistributedCache.CDB:96)
    - locked <0x9765c938> (a com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap$Contention)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.waitForRedistribution(DistributedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.ensureRequestTarget(DistributedCache.CDB:21)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.get(DistributedCache.CDB:16)
    at com.tangosol.util.ConverterCollections$ConverterMap.get(ConverterCollections.java:1547)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.get(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.util.SafeNamedCache.get(SafeNamedCache.CDB:1)
    at com.ea.nova.coherence.lt.GetRandomTask.main(GetRandomTask.java:90)
    Any help appreciated!

  • Clustered role 'Cluster Group' has exceeded its failover threshold.

    Hello.
    I’m hoping to get some help with a cluster issue I’m having using Windows Storage Server 2012.
    When the cluster is created my Cluster Core Resources are all happy and online.
    I can move the Cluster Name using “Move Core Cluster Resources” between the two nodes without any problems.
    If I select ‘Simulate Failure’ on the IP Address resource, it works the first time.
    If I do it again shortly afterwards, it fails and I get Event IDs 1254, 1205 and 1069.
    Event ID 1254
    Clustered role 'Cluster Group' has exceeded its failover threshold. 
    It has exhausted the configured number of failover attempts within the failover period of time allotted to it and will be left in a failed state. 
    No additional attempts will be made to bring the role online or fail it over to another node in the cluster. 
    Please check the events associated with the failure.  After the issues causing the failure are resolved the role can be brought online manually or the cluster may attempt to bring it online again after the restart delay period.
    Event ID 1205
    The Cluster service failed to bring clustered service or application 'Cluster Group' completely online or offline. One or more resources may be in a failed state. This may impact the availability of the clustered service or application.
    Event ID 1069
    Cluster resource 'Cluster IP Address' of type 'IP Address' in clustered role 'Cluster Group' failed.
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it. 
    Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.
    Basically I’m trying to simulate a network failure to make sure the failover kicks in.
    If I click on it and ‘Bring Online’ it comes up fine.
    Where do I find this threshold policy and set it to initiate failover if the IP Address resource fails?
    Thank you in advance for your help.

    Hi,
    The failover threshold is the number of times the group can fail over within the number of hours specified by the failover period. For example, if a group's failover threshold is set to 5 and its failover period to 3, then once the group has failed over five times within three hours the clustering software stops attempting
    to bring the group online and leaves the resources within the group in their current state. For example, if the IP Address resource is brought online but the Network Name resource fails, the group is left offline, but the IP Address resource is left online.
    To configure thresholds for a resource:
    Right-click the cluster resource and then select 'Properties'.
    Click 'Advanced'.
    Select 'Do not restart' if the cluster service should not attempt to restart; 'Restart' is the default.
    If 'Restart' is selected:
    Affect the Group: uncheck to prevent a failure of the selected resource from causing the server group to fail over.
    Threshold: the number of times the cluster service will attempt to restart the resource; Period is the amount of time in seconds between retries.
    Do not modify the 'LooksAlive' and 'IsAlive' settings.
    Unless necessary, do not alter the 'Pending Timeout'. This is the amount of time the resource can stay in the online pending or offline pending state before the cluster service puts it into either the offline or failed state.
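    The group-level failover threshold and period that the events refer to can also be inspected and changed from PowerShell; a quick sketch (the group name and values are illustrative):
    # Show the current policy for the core cluster group
    Get-ClusterGroup "Cluster Group" | Format-List Name,FailoverThreshold,FailoverPeriod
    # Allow up to 5 failovers within a 6-hour period
    (Get-ClusterGroup "Cluster Group").FailoverThreshold = 5
    (Get-ClusterGroup "Cluster Group").FailoverPeriod = 6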
    For more information please refer to following MS articles:
    Windows Failover Clustering Overview
    http://blogs.technet.com/b/rob/archive/2008/05/07/failover-clustering.aspx
    Tuning Failover Cluster Network Thresholds
    http://blogs.msdn.com/b/clustering/archive/2012/11/21/10370765.aspx
    Failover cluster (group) maximum failures limit
    http://blogs.msdn.com/b/arvindsh/archive/2012/03/09/failover-cluster-group-maximum-failures-limit.aspx
    Lawrence
    TechNet Community Support

  • How can a http response be delayed?

    Hi,
    I have a "simple" problem to solve but I am new to J2EE so I hope to find someone who has already faced with this problem.
    I know a servlet can receive an HTTP request from a client and prepare the response. The response will be sent as soon as the servlet returns. The same servlet serves the requests of many users, so if there are 100 users requesting a servlet I guess the requests are queued and there will be a response coming out of the web server for each request, in the same order.
    Please correct me if I say wrong things.
    Now I would like to write a java code that receives the request but waits for a trigger event to send the response, without blocking other users.
    I mean, a user sends a request, a servlet should forward the request to a thread (what else?) and exit without sending any response. The thread (I thought about a thread per user) will wait for a trigger event, and ONLY if the event happens will the response be sent to the requesting client.
    In the meantime other users can send their requests and receive their responses when their trigger events have happened.
    1. Can J2EE technology allow me to do that?
    2. How can I let the thread send the response to the right client?
    3. Http packets do not contain the info about the client, how can I trigger the sending of the response?
    I had a look to pushlet and comet but the problem I have seems to me different from let the server start a communication with a client. In my case the paradigm request-response is kept but the response is delayed. Of course the time-out should be increased (I have read that time out is settable in the client).
    To be more clear I'm adding two examples:
    1. one client (an applet) sends its request, the server receives the http packet and it will start a timer of 5 sec. Only when the timer expires the response will be sent. In the meantime other clients perform their requests and they all will be answered after 5 secs from their request (because there is a timer for each request).
    2. the trigger is the number of requests received: if the server receives 10 requests from 10 clients then all 10 clients will receive the response. If the server receives fewer than 10 requests within the time-out period, a time-out error is issued.
    Please could you help me?
    Thanks in advance.

    maxqua72 wrote:
    Hi,
    I have a "simple" problem to solve but I am new to J2EE so I hope to find someone who has already faced with this problem.
    I know a servlet can receive an http request from a client and prepare the response. The response will be sent as soon as the servlet returns. Not exactly. The response is sent when the contents of the output stream are flushed to the client. This may be right away, or it may not be. This usually does happen before the servlet returns, but you have control of when that happens.
    The same servlet serves the requests of many users, so if there are 100 users requesting a servlet I guess the requests are queued and there will be a response coming out of the web server for each request in the same order. Again, not really true. Requests are handled in the order in which they are received, but that does not mean that the first in is the first out. It just means the first in will begin before the second in. The second in could very well finish before the first one.
    Please correct me if I say wrong things.
    Now I would like to write a java code that receives the request but waits for a trigger event to send the response, without blocking other users.
    I mean, a user sends a request, a servlet should forward the request to a thread (what else?) and exit without sending any response. The thread (I thought about a thread for each user) will wait for a trigger event and ONLY if the event happens the response will be sent to the requesting client.
    In the meantime other users can send their requests and receive their response when their trigger events has happened.Each request will have its own thread. There is no need to make your own. You could delay the response as long as you like without making your own thread.
    This assumes you do not implement the 'SingleThreadModel' deprecated interface on your servlet.
    >
    1. Can J2EE technology allow me to do that?
    2. How can I let the thread send the response to the right client?
    3. Http packets do not contain the info about the client, how can I trigger the sending of the response?Yes, JEE can do that. Because each request has its own thread, you can do your delay right in the servlet's service method (doXXX). You would then have the same access to the request/response as always. Alternatively, you could pass the request/response to whatever method/object that will do your work.
    Note that using a new thread for this will actually be a detriment, since that will allow the thread the servlet is called in to return, which will end the request cycle and destroy the response object. So you should do it in the same thread that your request is made in.
    >
    I had a look to pushlet and comet but the problem I have seems to me different from let the server start a communication with a client. In my case the paradigm request-response is kept but the response is delayed.This is actually the same thing as what pushlets do. Pushlets require the user to make an initial request, then the Pushlets hold on to the initial request's thread by never closing the response stream or returning from the service method. It lets Pushlets then continuously send data to the client. Your system would be simpler, you would receive the request, open the response stream, maybe send initial data to client, then delay. When the delay finishes send the rest of the content and close the response stream and let the request die.
    Of course the time-out should be increased (I have read that time out is settable in the client).That is true, but is a nightmare if you expect your client base to be more than a few people whom you know. A better option would be to regularly send small bits of 'keep alive' packets so the client knows the page is still active (which may also help prevent the client from navigating away from your site because it would appear frozen). What you could do is send small bits of javascript that update a specific portion of the page (like a count of current request, or a countdown until activation... whatever seems appropriate). Or just send nonsense javascript that doesn't affect the client at all (but which doesn't break your page either).
    To be more clear I'm adding two examples:
    1. one client (an applet) sends its request, the server receives the http packet and it will start a timer of 5 sec. Only when the timer expires the response will be sent. In the meantime other clients perform their requests and they all will be answered after 5 secs from their request (because there is a timer for each request).No problem. Adding a Thread.sleep(5000) will do this nicely.
    >
    2. the trigger is the number of requests received: if the server receives 10 requests from 10 clients then all 10 clients will receive the response. If the server receives fewer than 10 requests in the time-out period, a time-out error is issued. Again, possible, but you would need to devise a means of locking and unlocking threads based on client count. Two quickie methods would be to keep a member variable count of current requests. Each request would use synchronized blocks to first increment the count, then, in a loop, check the count, wait if it is < 10, or end the synchronized block if it is >= 10.
        public void doGet(...) {
            // countLock and count are instance fields of the servlet
            int localCount = 0;
            // Count this request, then recall how many total requests are already made
            synchronized (countLock) {
                this.count++;
                localCount = this.count;
            }
            // Loop until this thread knows there are 10 requests
            while (localCount < 10) {
                // wait for a while, and let other threads work
                try { Thread.sleep(100); } catch (InterruptedException ie) { /* don't care */ }
                // get an updated count of requests
                synchronized (countLock) {
                    localCount = this.count;
                }
            } // end while
            // do the rest of the work
            // Remove this request from the count
            synchronized (countLock) {
                this.count--;
            }
        }
    A second, better approach would use a 'request registrar' which each request adds itself to. The registrar blocks each request on the condition that 10 requests come in, then releases them all. This would use the tools available in java.util.concurrent to work (java 5+). Untested example:
    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    public class RequestRegistrar {
        private final ReentrantLock requestLock = new ReentrantLock();
        private final Condition requestCountMet = requestLock.newCondition();
        private int count;
        private long generation; // bumped each time a batch of 10 is released
        private static final int TARGET_COUNT = 10;
        private static final RequestRegistrar instance = new RequestRegistrar();
        public static RequestRegistrar getInstance() { return instance; }
        private RequestRegistrar() { }
        public void registerRequest() {
            this.requestLock.lock();
            try {
                this.count++;
                if (this.count < TARGET_COUNT) {
                    final long myGeneration = this.generation;
                    // wait until the batch this request belongs to has been released
                    while (this.generation == myGeneration) {
                        try {
                            this.requestCountMet.await();
                        } catch (InterruptedException ie) {
                            /* don't care, will just loop back and await again */
                        }
                    } // end loop of waiting
                } else { // the target count has been met
                    this.count = 0;      // reset the count for the next batch
                    this.generation++;   // release everyone waiting on this batch
                    this.requestCountMet.signalAll();
                }
            } finally {
                this.requestLock.unlock();
            }
        }
    }
    // Then your requests would all do:
        public void doGet(...) {
            RequestRegistrar.getInstance().registerRequest(); // waits for 10 requests to be registered
            // The rest of your work
        }
    Undoubtedly there is a lot of work to fix this up, but if you understand Locks and conditions it won't be hard (and if you don't, perhaps now is a good time to learn...)
    >
    Please could you help me?
    Thanks in advance.

  • Trigger email for exceeded budget

    Hi,
    I want to know how to trigger an email (and also maintain the person name/user ID) to the person responsible if the budget is exceeded.
    There is a standard action no. 2, 'Warning with MAIL to person responsible', in 'Fund Management Availability Control: Tolerance Limit'. The transaction code is OF20.
    Points would be rewarded.
    Thanks
    Suresh

    Hello Suresh,
    There is a functionality covered in SAP by the name Alert Management. It recognizes predefined critical situations and informs interested or responsible parties by sending them an alert without delay. Such critical situations may be an important customer terminating a contract or a budget being exceeded, for example. The alerts are delivered to the recipients in their alert inboxes, which are located in the enterprise portal. They can also be delivered using other channels, such as by Internet mail or to mobile devices.
    Alert Management helps prevent delays in the processing of critical situations, because the time between discovering and processing such situations is reduced considerably.
    Implementation Considerations: Alert Management is an ideal solution if you can identify specific business or technical situations that are critical and could jeopardize efficient operation, and you want specific parties to be informed if these situations arise.
    Integration: The Alert Framework is provided as part of the SAP Web Application Server. The application must define its own alert categories and implement the triggering of the alert instances to realize Alert Management. Alerts can also be triggered by external alert providers. They are all sent to the alert inboxes of the alert recipients, but can additionally be sent by other channels, such as by Internet mail, SMS, or to external alert systems. You must configure and schedule the processing of the alerts to meet your requirements.
    Features: An application triggers an alert of a particular alert category based on an important or critical business or technical situation. The alert recipients are determined either by the application, by an administrator, or using a subscription procedure. An alert outlining the situation is delivered to the recipients without delay. The alert is delivered to the recipients in their alert inbox within the enterprise portal, and can also be delivered using other channels if the recipients have made the appropriate configuration. If the receipt of the alert is not confirmed by any of the recipients, the alert can be sent to an escalation recipient.
    Constraints: The following are not incorporated in Alert Management:
    Feedback to the triggering application. It is possible to model feedback to the application, such as confirming that a subsequent activity has been executed, using SAP Business Workflow.
    Merging of alerts that are related from a content perspective.
    Hope I had been able to help you out. Please assign points.
    Rgds
    Manish

  • Help!! ORA-01000: maximum open cursors exceeded

    Dear All,
    Here is my software information:
    Platform: WLP7.0sp2
    OS : solaris8
    And I have two connection Pools, their settings are:
    Initial Capacity: 10
    Maximum Capacity: 10
    Capacity Increment: 1
    Login Delay Seconds: 0 seconds
    Refresh Period: 0 minutes
    Shrink Period: 15 minutes
    Prepared Statement Cache Size: 5
    When I start my WebLogic Portal server, the following exception occurs:
    java.sql.SQLException: ORA-01000: maximum open cursors exceeded
    ORA-06512: at "SYS.STANDARD", line 1014
    ORA-06512: at "TU_PORTLET_P13N", line 2
    ORA-04088: error during execution of trigger 'TU_PORTLET_P13N'
    at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:169)
    at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:208)
    at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:543)
    at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1405)
    at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteFetch(TTC7Protocol.java:822)
    at oracle.jdbc.driver.OracleStatement.executeNonQuery(OracleStatement.java:1602)
    at oracle.jdbc.driver.OracleStatement.doExecuteOther(OracleStatement.java:1527)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:2045)
    at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:39
    5)
    at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:446)
    at weblogic.jdbc.pool.Statement.execute(Statement.java:274)
    at weblogic.jdbc.rmi.internal.PreparedStatementImpl.execute(PreparedStatementImpl.java:378)
    at weblogic.jdbc.rmi.SerialPreparedStatement.execute(SerialPreparedStatement.java:401)
    at com.bea.portal.manager.internal.persistence.jdbc.JdbcPortalPersistenceHelper.addOrUpdate
    PageP13nPortlets(JdbcPortalPersistenceHelper.java:785)
    at com.bea.portal.manager.internal.persistence.jdbc.JdbcPortalPersistenceHelper.addOrUpdate
    PortalP13nPages(JdbcPortalPersistenceHelper.java:508)
    at com.bea.portal.manager.internal.persistence.jdbc.JdbcPortalPersistenceManager.addOrUpdat
    ePortalPersonalization(JdbcPortalPersistenceManager.java:414)
    at com.bea.portal.manager.internal.persistence.MainPersistenceManager.addOrUpdatePortalPers
    onalization(MainPersistenceManager.java:104)
    at com.bea.portal.manager.internal.PortalPersistenceManager.updatePortalModel(PortalPersist
    enceManager.java:330)
    at com.bea.portal.manager.internal.PortalPersistenceManager.createDataItem(PortalPersistenc
    eManager.java:199)
    at com.bea.p13n.management.data.repository.internal.AbstractDataRepository.handleDataItemMe
    ssage(AbstractDataRepository.java:814)
    at com.bea.p13n.management.data.repository.internal.AbstractDataRepository.onDataSyncMessag
    e(AbstractDataRepository.java:990)
    at com.bea.p13n.management.data.repository.internal.AbstractDataRepository.executeMessage(A
    bstractDataRepository.java:252)
    at com.bea.p13n.management.data.message.internal.JvmCommunicationPipe.sendMessage(JvmCommun
    icationPipe.java:116)
    at com.bea.p13n.management.data.repository.internal.AbstractDataRepository.onSyncRequestRes
    ultMessage(AbstractDataRepository.java:1185)
    at com.bea.p13n.management.data.repository.internal.AbstractDataRepository.executeMessage(A
    bstractDataRepository.java:261)
    at com.bea.p13n.management.data.message.internal.JvmCommunicationPipe.sendMessage(JvmCommun
    icationPipe.java:116)
    at com.bea.p13n.management.data.repository.internal.AbstractDataRepository.onSyncRequestMes
    sage(AbstractDataRepository.java:1086)
    at com.bea.p13n.management.data.repository.internal.AbstractDataRepository.executeMessage(A
    bstractDataRepository.java:257)
    at com.bea.p13n.management.data.message.internal.JvmCommunicationPipe.sendMessage(JvmCommun
    icationPipe.java:116)
    at com.bea.p13n.management.data.repository.internal.AbstractDataRepository.notifyDataReposi
    tory(AbstractDataRepository.java:706)
    at com.bea.p13n.management.data.repository.internal.AbstractDataRepository.onRefreshMessage
    (AbstractDataRepository.java:912)
    at com.bea.p13n.management.data.repository.internal.AbstractDataRepository.executeMessage(A
    bstractDataRepository.java:247)
    at com.bea.p13n.management.data.message.internal.JvmCommunicationPipe.sendMessage(JvmCommun
    icationPipe.java:116)
    at com.bea.p13n.management.data.repository.internal.AbstractDataRepository.sendRefreshMessa
    geToThis(AbstractDataRepository.java:341)
    at com.bea.p13n.management.data.repository.internal.AbstractDataRepository.addNotifiedDataR
    epository(AbstractDataRepository.java:324)
    at com.bea.p13n.management.data.repository.internal.AbstractDataRepository.addNotifiedDataR
    epository(AbstractDataRepository.java:304)
    at com.bea.portal.manager.internal.DeploymentManager.initializePortal(DeploymentManager.jav
    a:142)
    at com.bea.portal.manager.internal.DeploymentManager.initialize(DeploymentManager.java:101)
    at com.bea.portal.manager.internal.PortalManagerDelegateImpl.<init>(PortalManagerDelegateIm
    pl.java:86)
    at com.bea.portal.manager.PortalFactory.createPortalManagerDelegate(PortalFactory.java:248)
    at com.bea.portal.manager.ejb.internal.PortalManagerBean.ejbCreate(PortalManagerBean.java:1
    47)
    Any ideas will be appreciated.
    Robert

    Robert Chao wrote:
    Dear All,
    Here is my software information:
    Platform: WLP7.0sp2
    OS : solaris8
    And I have two connection Pools, their settings are:
    Initial Capacity: 10
    Maximum Capacity: 10
    Capacity Increment: 1
    Login Delay Seconds: 0 seconds
    Refresh Period: 0 minutes
    Shrink Period: 15 minutes
    Prepared Statement Cache Size: 5
    Hi. You should either configure your DBMS to allow more open cursors per connection,
    or you can try setting the prepared statement cache size to zero.
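    For reference, the DBMS-side change is the Oracle open_cursors initialization parameter; something like the following (the value is illustrative and needs DBA privileges; on releases without an spfile, set it in init.ora and restart instead):
    ALTER SYSTEM SET open_cursors = 1000 SCOPE=BOTH;
    The prepared statement cache size is changed per connection pool in the WebLogic console.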
    Joe

  • Ni scope measurement time delay

    Using niscope measurement to measure time delay: how can I make the measurement more accurate? The measured values currently fluctuate over a wide range. Also, can you provide the specific algorithm/programming used for the measurement?

    The PXI 1031 is just the chassis and power supply.  It does not read anything.  So I am guessing that the 5105 is doing the measurements.
    What is the resistor value you are using to convert the current to voltage?  Have you verified that the resistor is actually close to the nominal value? Is the voltage in the current loop within the common mode range of the measurement device?  Do you have the loop grounded at two different points?  Does the pressure actually exceed 62 bar?  Does the current from the transducer actually go to 20 mA?
    Your calculation is correct for a 1-5 V input corresponding to 0-100 bar.
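    Spelled out (assuming a 250 ohm sense resistor, so the 4-20 mA loop gives 1-5 V, and a 0-100 bar transducer):
    pressure [bar] = (Vmeasured - 1 V) / (5 V - 1 V) * 100 bar
    For example, 3.48 V would correspond to about 62 bar.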
    You have several questions which all seem to be related to the same project.  Do you have a good overall plan for the total project? Did someone sit down and create a specification for the hardware and the software? Have the various subsystems been designed in detail before you started developing code and buying hardware?  Do you have a System Architect (Project Manager, or other title) who has a good overall idea of how this thing is supposed to work?
    Without that you will likely continue to be frustrated and not too productive.
    Lynn

  • Clustered role 'Availability Role' has exceeded its failover threshold

    I am getting this alert on SQL 2012 R2 SP1. So please kindly tell me the solution for the alert below on Windows failover clustering.
    Clustered role 'Availability Role' has exceeded its failover threshold. It has exhausted the configured number of failover attempts within the failover period of time allotted to it and will be left in a failed state. No additional attempts will be made to bring the role online or fail it over to another node in the cluster. Please check the events associated with the failure. After the issues causing the failure are resolved the role can be brought online manually or the cluster may attempt to bring it online again after the restart delay period.

    Hi Syed Tauseef Ahmed,
    Please offer more information, such as under what circumstances this issue occurs and what event ID you have got. The failover threshold is the number of times the group can fail over within the number of hours specified by the failover period.
    The related KB:
    Tuning Failover Cluster Network Thresholds
    http://blogs.msdn.com/b/clustering/archive/2012/11/21/10370765.aspx
    You can refer to the following similar thread for first-step troubleshooting:
    Clustered role 'Cluster Group' has exceeded its failover threshold.
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/4eb44f05-eb9b-448a-821b-359879141608/clustered-role-cluster-group-has-exceeded-its-failover-threshold
    I’m glad to be of help to you!

  • Traffic-shaping for delay sensitive traffic

    Hello, I would like to verify the use of a traffic-shaping policy within an MQC. I was told that you need to apply a shaping policy in order for QoS to always be engaged and not simply during times of congestion. This apparently is critical when you have apps like VoIP. 
    On a similar note, I remember reading on Cisco Press that you might NOT want to subject VoIP to any form of shaping, as this introduces delay and can cause jitter.
    Below is a sample config. If you can post an authoritative source on CCO that explains this I would greatly appreciate it.
    Regards,
    -Mike
    policy-map QoS-Policy
     class realtime
      priority 512
        police 512000 conform-action transmit  exceed-action drop
     class preferred
      bandwidth remaining percent 40
      random-detect dscp-based
     class missioncritical
      bandwidth remaining percent 39
      random-detect dscp-based
     class trans-apps
      bandwidth remaining percent 16
      random-detect dscp-based
     class general
      bandwidth remaining percent 1
      random-detect dscp-based
     class class-default
      bandwidth remaining percent 4
      random-detect dscp-based
    policy-map shape-20MB
     class class-default
      shape average 2000000
      service-policy QoS-Policy
    interface Serial0/0/0
     service-policy output shape-20MB

    I was told that you need to apply a shaping policy in order for QoS to always be engaged and not simply during times of congestion.
    Nope.
    You only need to shape when you're dealing with a path where you know the end-to-end bandwidth is less than the egress interface's physical bandwidth and where you cannot manage congestion further downstream along the end-to-end path.
    On a similar note, i remember reading up on Ciscopress that you might NOT want to subject VoIP to any form of Shaping as this introdues delay and can cause Jitter.
    Semi-true.
    The problem can be mitigated by decreasing the shaper's Tc. Also, if the shaper doesn't account for L2 overhead (and I believe many do not), you'll need to shape "slower" than the nominal bandwidth. The major problem with the latter is that L2 overhead varies, as a percentage, based on packet size. So you can either allow for the worst case, which best guarantees the VoIP service but tends to give up much of the available bandwidth, or you can shape for the average case, which will make VoIP latency and jitter more variable, but usually not so much as to exceed its service requirements.
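    As a rough illustration of the Tc point: class-based shaping uses Tc = Bc / CIR, so you can shrink the interval by supplying an explicit Bc on the shape command (numbers illustrative, no L2 overhead accounted for):
    policy-map shape-20MB
     class class-default
      shape average 1950000 9750
      service-policy QoS-Policy
    Here Bc = 9750 bits at a CIR of 1.95 Mbps gives Tc = 5 ms instead of the much larger default interval.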
    You can also bypass shaping for some traffic, but then you need to shape all your other traffic even slower to guarantee the non-shaped traffic bandwidth is always available.  As you're effectively reserving this bandwidth, it then becomes unavailable for your other traffic even when unused.
    An example of the latter:
    policy-map QoS-Policy
     class preferred
      bandwidth remaining percent 40
      random-detect dscp-based
     class missioncritical
      bandwidth remaining percent 39
      random-detect dscp-based
     class trans-apps
      bandwidth remaining percent 16
      random-detect dscp-based
     class general
      bandwidth remaining percent 1
      random-detect dscp-based
     class class-default
      bandwidth remaining percent 4
      random-detect dscp-based
    policy-map shape-20MB
     class realtime
      priority 512
        police 512000 conform-action transmit  exceed-action drop
     class class-default
      shape average 1950000
      service-policy QoS-Policy
    interface Serial0/0/0
     service-policy output shape-20MB
    NB: BTW, the above doesn't account for L2 overhead, and I wouldn't recommend it for other reasons, but it should show how you can bypass the shaper.
