Peak Load

I am looking for a way to repopulate some simple state information (a list of proxies) to a backup copy of one of my SOs once the primary SO fails over. I sent a request to the newsgroup a while back and received some great suggestions about writing to a file or to the DB. However, since the state information is data that CAN be recovered by the client app (i.e. node name, client id, and a pointer or proxy obtained through the passing of an anchored object), we decided not to write this data to persistent storage but rather to have the clients re-register themselves with the SO by sending the appropriate info over to the SO (including the proxy back to the client). My question is this: if the SO goes down while a large number of users are logged on (let's say 500) and they all re-register themselves on the backup SO at the same time (each passing 3 TextData objects and a proxy to the client), how much load will this cause, and will it be enough to hang the application?
Another question: how does one go about forcing an SO to fail over for testing purposes?
Thank you for your assistance!
George Vallas
Systems Engineer
EDS Medi-Cal - Systems
3215 Prospect Park Dr.
Rancho Cordova, CA 95670
Phone: (916)636-1183
mailto:[email protected]
To unsubscribe, email '[email protected]' with
'unsubscribe forte-users' as the body of the message.
Searchable thread archive <URL:http://pinehurst.sageit.com/listarchive/>

You can also use the audit features of the database to audit user connections/disconnections, and write a query to get what you need. Alternatively, with 9i/10g you can use a LOGON/LOGOFF trigger to record session connections and disconnections in an audit table, and then query that audit table to analyze the maximum number of connections. Got this from somewhere in the forum.
HTH,
Girish Sharma

Similar Messages

  • RE: Peak Load

    Hi George,
    500 users sending 4 not-very-heavy objects does not seem like a lot. I
    would guess that you would see some brief sluggishness in the clients,
    followed by normal behavior. Before you act on this, I recommend setting
    up a test to simulate it. The Forte Consulting group has done a lot of
    work on this issue and may have a lot to offer in setting up such a test.
    There are several ways to simulate the server going down. In increasing
    order of severity:
    * Shut down the partition in econsole/escript
    * Kill the process via the OS
    * Pull the network plug from the server
    * Shut down the subnet (or equivalent) where the server resides
    * Kill the OS
    * Cut the power to the server
    * Something more drastic?
    This brings up several other issues. First, know what level of fault
    tolerance you want to provide. For example, if you want to deal with the
    possibility of a subnet going down, don't replicate a partition to servers
    hanging off the same hub. Second, for your particular architecture, make
    sure that clients are aware when the server goes down. For example, if you
    have a service object of Message Dialog Duration, and a client makes a call
    to a server that is temporarily down, the client will not get a
    DistributedAccessException. The manuals and courses from the Forte
    Training department provide more details on which exceptions are thrown
    under which circumstances. Last, make sure that when your clients register
    (or re-register) they are not walking over each other's records.
    Load balancing aside, service objects are inherently multi-threaded, so
    you should make sure you have a mutex (via the IsShared property or a
    Mutex object) to control who writes to the set of clients.
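CSB's mutex point can be illustrated outside Forte. Below is a minimal Python sketch (class and method names are hypothetical, not Forte API) of a client registration table whose writers are serialized by a lock, so concurrent re-registrations cannot corrupt the set of clients:

```python
import threading

class ClientRegistry:
    """Hypothetical analogue of an SO's client table: concurrent
    re-registrations are serialized by a mutex, playing the role
    of Forte's IsShared property or a Mutex object."""

    def __init__(self):
        self._lock = threading.Lock()
        self._clients = {}

    def register(self, client_id, node_name, proxy):
        with self._lock:  # one writer at a time
            self._clients[client_id] = (node_name, proxy)

    def count(self):
        with self._lock:
            return len(self._clients)
```

With this guard in place, a burst of clients re-registering simultaneously after a failover simply queue briefly on the lock instead of walking over each other's entries.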
    Good Luck,
    CSB
    -----Original Message-----
    From: Vallas, George
    Sent: Tuesday, March 02, 1999 6:54 PM
    To: Forte News Group
    Subject: Peak Load
    I am looking for a way to repopulate some simple state information (a list
    of proxies) to a backup copy of one of my SOs once the primary SO fails
    over. I sent a request to the newsgroup a while back and received some
    great suggestions about writing to a file or to the DB. However, since the
    state information is data that CAN be recovered by the client app (i.e.
    node name, client id, and a pointer or proxy obtained through the passing
    of an anchored object), we decided not to write this data to persistent
    storage but rather to have the clients re-register themselves with the SO
    by sending the appropriate info over to the SO (including the proxy back
    to the client). My question is this: if the SO goes down while a large
    number of users are logged on (let's say 500) and they all re-register
    themselves on the backup SO at the same time (each passing 3 TextData
    objects and a proxy to the client), how much load will this cause, and
    will it be enough to hang the application?
    Another question: how does one go about forcing an SO to fail over for
    testing purposes?
    Thank you for your assistance!
    George Vallas
    Systems Engineer
    EDS Medi-Cal - Systems
    3215 Prospect Park Dr.
    Rancho Cordova, CA 95670
    Phone: (916)636-1183
    mailto:[email protected]

    Hi again,
    You don't need a nice operations team to look at the dependencies in
    production. In fact, it would not be a good idea if you use autostart. But
    you can synchronize your service objects. You can look at synchronization
    components (without source code: they're for sale) and a short user sample
    on http://perso.club-internet.fr/dnguyen/ (CmpSynchro is the component
    library and Sequence & Synchro is the sample).
    Hope this helps,
    Daniel Nguyen
    Freelance Forte Consultant
    http://perso.club-internet.fr/dnguyen/
    Peter Sham (HTHK - Assistant Manager - Software Development, IITB) wrote:
    Hi,
    Thanks for your reply. It's really interesting and intriguing. I once gave
    this some thought but dropped the idea soon. The reason is...
    If I implement question 1 with events, what if the SO misses the event,
    since events don't guarantee delivery? After all, to implement such a
    scenario, there would be a dependency in the startup sequence of the SOs.
    As I don't have a nice operations team to watch the system, I dropped this
    idea. My final decision is to implement it using database synchronization.
    Given the story, I would really appreciate it if you could share some
    ideas on these concerns too.
    Regards,
    Peter Sham.
    -----Original Message-----
    From: Dimitar Gospodinov [SMTP:[email protected]]
    Sent: Wednesday, March 03, 1999 6:14 PM
    To: Peter Sham (HTHK - Assistant Manager - Software Development, IITB)
    Subject: Re: Peak Load
    Hi,
    I am from Sergei's group too...
    For question 2 - it just listens for the RemoteAccessEvent. When the SO is
    down, all registered clients will receive this event. This event is posted
    only if the SO has session dialog duration.
    Regarding point 1, you can get a reference to all replicates by some
    simple protocol - for example, by posting an event that all replicates are
    registered for. Each replicate responds to this event by posting another
    event that contains a reference to the replicate. The SynchronizationMgrSO
    is registered for the second event, so it will receive all the events
    posted.
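Dimitar's two-event discovery protocol can be sketched with a toy in-process event bus. This is an illustration only: the names are invented, and unlike real distributed Forte events, delivery here is guaranteed.

```python
class EventBus:
    """Toy in-process stand-in for Forte's event mechanism."""

    def __init__(self):
        self._subs = {}

    def register(self, event, handler):
        self._subs.setdefault(event, []).append(handler)

    def post(self, event, payload=None):
        for handler in self._subs.get(event, []):
            handler(payload)

class Replica:
    """A NotificationMgrSO replica: answers the broadcast by posting
    a second event carrying a reference to itself."""

    def __init__(self, name, bus):
        self.name = name
        bus.register("AnnounceYourselves",
                     lambda _: bus.post("ReplicaRef", self))

class SyncMgrSO:
    """Registers for the second event, then broadcasts the first one,
    collecting a reference to every replica that responds."""

    def __init__(self, bus):
        self.replicas = []
        bus.register("ReplicaRef", self.replicas.append)
        bus.post("AnnounceYourselves")
```

After construction, `SyncMgrSO.replicas` holds every replica that was listening, which is the whole point of the protocol.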
    Hope it makes sense,
    Regards,
    Dimitar
    "Peter Sham (HTHK - Assistant Manager - Software Development, IITB)"
    wrote:
    Hi,
    I'm interested in your implementation and have some questions:
    1. How does SynchronizationMgrSO get a reference to all replicas of
    NotificationMgrSO?
    2. How does a NotificationMgrSO know when SynchronizationMgrSO has
    crashed?
    Best regards,
    Peter Sham.
    -----Original Message-----
    From: Sergei Sherstyuk [SMTP:[email protected]]
    Sent: Wednesday, March 03, 1999 12:19 PM
    To: Vallas, George
    Cc: Forte News Group
    Subject: Re: Peak Load
    I'm very new to Forte, but it happens that we had a similar problem in our
    labs - we had to develop a fault-tolerant NotificationManager. Actually,
    our first approach was the same as yours - to force clients to re-register
    themselves with the SO (NotificationManagerSO) after it restarted.
    But then we implemented another scheme. It may be more complex, but IMHO
    it is more robust and scalable.
    We have a load-balanced (!) NotificationMgrSO with 2 or 3 replicas and an
    additional SynchronizationMgrSO. Every call to subscribe/unsubscribe to
    NotificationMgrSO comes to a particular replica. This replica registers
    the subscription and then calls SynchronizationMgrSO to make the same
    changes in all the other replicas. SynchronizationMgrSO has references to
    every replica of NotificationMgrSO. SynchronizationMgrSO is fault
    tolerant, and if it fails and restarts, all replicas of NotificationMgrSO
    re-register themselves at the new instance of SynchronizationMgrSO - the
    same technique that you use for clients, but at the SO level, and hence
    without the overhead problem.
    Of course there are some issues with this approach, but it worked. The
    main advantage is that clients don't worry about their services being
    restarted.
    Maybe I don't understand your second question well, but we tested our
    application simply by shutting down the partitions containing the SOs from
    the Environment Console.
    Sincerely,
    Sergei Sherstyuk
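As a rough single-process sketch of Sergei's scheme (class and function names are illustrative, not Forte code): a subscribe call lands on one replica, which registers the subscription locally and then asks the synchronization manager to fan the change out to every replica it holds a reference to.

```python
class NotificationReplica:
    """One replica of the load-balanced NotificationMgrSO."""

    def __init__(self):
        self.subscribers = set()

    def apply(self, client_id):
        self.subscribers.add(client_id)

class SyncMgr:
    """Holds references to every replica and fans changes out."""

    def __init__(self, replicas):
        self.replicas = replicas

    def replicate(self, client_id):
        for r in self.replicas:
            r.apply(client_id)

def subscribe(replica, sync_mgr, client_id):
    # The replica that received the call registers locally, then
    # asks the synchronization manager to update all the others
    # (the receiving replica's apply() is an idempotent set-add,
    # so being hit twice is harmless).
    replica.apply(client_id)
    sync_mgr.replicate(client_id)
```

The payoff Sergei describes is visible here: after any single subscribe call, every replica holds the same subscriber set, so a failover to another replica loses nothing.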
    "Vallas, George" wrote:
    I am looking for a way to repopulate some simple state information (a list
    of proxies) to a backup copy of one of my SOs once the primary SO fails
    over. I sent a request to the newsgroup a while back and received some
    great suggestions about writing to a file or to the DB. However, since the
    state information is data that CAN be recovered by the client app (i.e.
    node name, client id, and a pointer or proxy obtained through the passing
    of an anchored object), we decided not to write this data to persistent
    storage but rather to have the clients re-register themselves with the SO
    by sending the appropriate info over to the SO (including the proxy back
    to the client). My question is this: if the SO goes down while a large
    number of users are logged on (let's say 500) and they all re-register
    themselves on the backup SO at the same time (each passing 3 TextData
    objects and a proxy to the client), how much load will this cause, and
    will it be enough to hang the application?
    Another question: how does one go about forcing an SO to fail over for
    testing purposes?
    Thank you for your assistance!
    George Vallas
    Systems Engineer
    EDS Medi-Cal - Systems
    3215 Prospect Park Dr.
    Rancho Cordova, CA 95670
    Phone: (916)636-1183
    mailto:[email protected]
    Dimitar Gospodinov
    Consultant
    International Business Corporation
    e-mail: [email protected]

  • Recording peak load with Labview

    I have an application where a cyclic load is applied to an object via a load cell. I need to record the peak load of every cycle and plot it on an XY graph versus the cycle number.
    However, the only trigger available to tell LabVIEW to look for a peak and add a count to the cycle number is when the load goes over a certain threshold. The load is normally over that threshold for 3/4 seconds.
    I am struggling to:
    1. Add a single count - i.e. if I use a case structure, the condition is true for 3/4 seconds, so the count is not single.
    2. Record the peak load - if I use a while loop, the live load-cell reading freezes at the point the loop starts.
    I appreciate it is probably very easy code and I'm being a bit thick - but can someone help, please?

    Hi,
    Attached is a sample VI that shows the two types of peak detector available side by side.  For the point by point graph I have also divided by the cycle time in samples and returned only the quotient that should give the cycle number.  Is this what you were after?
    Regards,
    James Mc
    ========
    CLA and cRIO Fanatic
    wiresmithtech.com/blog
    Attachments:
    Peak Detector Example.vi ‏32 KB
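The single-count/peak problem above is, at heart, rising-edge detection plus a running maximum: count a new cycle only on the sample where the load first crosses the threshold, then keep updating that cycle's peak while the load stays above it. A language-agnostic sketch in Python (the attached VI presumably implements the equivalent point by point):

```python
def cycle_peaks(samples, threshold):
    """Count one cycle per threshold crossing (rising edge only)
    and record the peak load of each excursion above the threshold.
    Returns a list of peaks; index + 1 is the cycle number."""
    peaks = []
    above = False
    for s in samples:
        if s > threshold:
            if not above:                    # rising edge: new cycle
                above = True
                peaks.append(s)
            else:                            # same cycle: track its max
                peaks[-1] = max(peaks[-1], s)
        else:
            above = False                    # re-arm for the next edge
    return peaks
```

The `above` flag is what makes the count single: the case-structure condition being true for several samples no longer matters, because only the transition from below to above increments the cycle count.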

  • Query tuning during peak load

    Dear all,
    10.2.0.4 on solaris 10
    We have a query which runs for 3-5 seconds at normal times, but under high load on the system it takes more than 10 seconds. We have a timeout set to 5000 ms; because of this, the application is unable to read the data at peak load times.
    Any idea how to proceed with this?
    Kai

    You have been asking this kind of question for more than three years here. Many times I have advised you to find out 'what it is waiting for', and still you are
    - too lazy
    - incapable
    (cross all that apply)
    to do any troubleshooting on your own.
    Don't you think it is about high time you stopped abusing this forum?
    Sybrand Bakker
    Senior Oracle DBA

    Sybrand,
    Words like these will hurt anyone - stop commenting. I can see here how Kais is expressing his feelings. We can tell the OP to close his threads, but we cannot get much more involved.
    Kais,
    Cool down. Please mention your findings on the high load: is it caused by CPU consumption or something else?
    Please close your threads in the future once they are answered.

  • Is there any issue in scheduling GATHER_STATS_JOB during peak load times?

    Hi All,
    The default DBMS_STATS job (GATHER_STATS_JOB) is running during our peak load time.
    Will it have any performance impact on normal database transactions?
    Is it better to reschedule its window?
    What problems can we expect during the job's run (object locks, high I/O due to table/index reads, high CPU usage, etc.)?
    Please help me find the answer; I could not find relevant information on the net.
    Thanks and regards
    Satish

    Satish V C wrote:
    I am struggling to find an appropriate period to run the job, as this database is active 24*7 globally. I want to know how the activity should be measured. Should it be based on the number of sessions, the number of transactions, the number of cursors, net I/O, or a combination of all of the above?

    Satish,
    if you have the AWR license you could check the AWR reports to find out when your database is most idle. In the 10g time model the most significant aspect is the DB time spent, so the period where the DB time is least might be a good candidate.
    You can also check the number of logical/physical I/Os performed per second, the number of sessions, and other ratios that are shown in the top part of the AWR report (e.g. the "Load Profile" section).
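Randolf's "least DB time" heuristic reduces to taking a minimum over snapshot windows. A hypothetical helper, assuming you have already extracted DB time per window from the AWR reports (the function and data shape are illustrative, not anything Oracle provides):

```python
def quietest_window(snapshots):
    """Given (window_label, db_time_seconds) pairs taken from AWR
    snapshot windows, return the label of the window with the least
    DB time - the candidate maintenance window described above."""
    return min(snapshots, key=lambda pair: pair[1])[0]
```

In practice you would feed this one pair per hourly snapshot over a representative week, since a 24*7 global system may have no window that is quiet every day.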
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • WebLogic Server hanging at high load

    Folks,
    We have a WebLogic Server 9.2 instance which runs fine, but after quite a while at high load we observe that the server goes down and the following exception appears in the server logs:
    ####<Nov 13, 2009 2:42:48 AM EST> <Error> <WebLogicServer> <ibmrxcptpnp-wb1> <AdminServer> <[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1258098168215> <BEA-000337> <[STUCK] ExecuteThread: '9' for queue: 'weblogic.kernel.Default (self-tuning)' has been busy for "617" seconds working on the request "[email protected]", which is more than the configured time (StuckThreadMaxTime) of "600" seconds. Stack trace:
         java.net.SocketOutputStream.socketWrite0(Native Method)
         java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:105)
         java.net.SocketOutputStream.write(SocketOutputStream.java:149)
         oracle.net.ns.DataPacket.send(Unknown Source)
         oracle.net.ns.NetOutputStream.flush(Unknown Source)
         oracle.net.ns.NetInputStream.getNextPacket(Unknown Source)
         oracle.net.ns.NetInputStream.read(Unknown Source)
         oracle.net.ns.NetInputStream.read(Unknown Source)
         oracle.net.ns.NetInputStream.read(Unknown Source)
         oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1099)
         oracle.jdbc.driver.T4CMAREngine.unmarshalSB1(T4CMAREngine.java:1070)
         oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:478)
         oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:216)
         oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:955)
         oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1060)
         oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:839)
         oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1132)
         oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3316)
         oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3422)
         weblogic.jdbc.common.internal.ConnectionEnv.test(ConnectionEnv.java:718)
         weblogic.jdbc.common.internal.ConnectionEnv.test(ConnectionEnv.java:460)
         weblogic.common.resourcepool.ResourcePoolImpl.checkResource(ResourcePoolImpl.java:1455)
         weblogic.common.resourcepool.ResourcePoolImpl.checkAndReturnResource(ResourcePoolImpl.java:1372)
         weblogic.common.resourcepool.ResourcePoolImpl.checkAndReturnResource(ResourcePoolImpl.java:1362)
         weblogic.common.resourcepool.ResourcePoolImpl.testUnusedResources(ResourcePoolImpl.java:1767)
         weblogic.common.resourcepool.ResourcePoolImpl.access$700(ResourcePoolImpl.java:37)
         weblogic.common.resourcepool.ResourcePoolImpl$ResourcePoolMaintanenceTask.timerExpired(ResourcePoolImpl.java:1935)
         weblogic.timers.internal.TimerImpl.run(TimerImpl.java:265)
         weblogic.work.ServerWorkManagerImpl$WorkAdapterImpl.run(ServerWorkManagerImpl.java:518)
         weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
         weblogic.work.ExecuteThread.run(ExecuteThread.java:181)
    >
    Any answers?

    Hi,
    I have a similar issue on ODSI 10gR3 (WLS 10.3). I am getting stuck threads, but in my case it is clear that the database is down (not at startup).
    Here is some more info:
    1. Test Connections On Reserve - Enabled
    2. Default values for other parameters
    Using the Oracle thin driver for the connection pool "MyPool" -
    ./wlserver_10.3/server/lib/ojdbc6.jar
    My question is:
    1. How do I avoid the stuck threads caused when the database is down? During peak load, the stuck threads are causing some service requests to fail.
    I did my homework, and the closest option I could find is to set statementTimeout (currently at the default value of -1, i.e. never time out). Oracle documentation (http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/htdocs/111070_readme.html) says that ojdbc6.jar supports the setQueryTimeout() method.
    Is there a better way to attack this problem?
    <Dec 21, 2009 12:11:18 AM EST> <Error> <JDBC> <BEA-001112> <Test "SELECT 1 FROM DUAL" set up for pool "MyPool" failed with exception: "java.sql.SQLRecoverableException: Io exception: Read failed: Connection timed out".>
    <Dec 21, 2009 12:14:27 AM EST> <Warning> <JDBC> <BEA-001129> <Received exception while creating connection for pool "MyPool": Io exception: The Network Adapter could not establish the connection>
    <Dec 21, 2009 12:18:38 AM EST> <Error> <WebLogicServer> <BEA-000337> <[STUCK] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)' has been busy for "600" seconds working on the request "[email protected]", which is more than the configured time (StuckThreadMaxTime) of "600" seconds. Stack trace:
    Thread-52 "[STUCK] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'" <alive, in native, suspended, waiting, priority=1, DAEMON> {
    -- Waiting for notification on: [email protected][fat lock]
    java.lang.Object.wait(Object.java:485)
    com.sun.jmx.remote.internal.ClientNotifForwarder.postReconnection(ClientNotifForwarder.java:304)
    javax.management.remote.rmi.RMIConnector$RMIClientCommunicatorAdmin.reconnectNotificationListeners(RMIConnector.java:1488)
    javax.management.remote.rmi.RMIConnector$RMIClientCommunicatorAdmin.doStart(RMIConnector.java:1568)
    com.sun.jmx.remote.internal.ClientCommunicatorAdmin.restart(ClientCommunicatorAdmin.java:72)
    com.sun.jmx.remote.internal.ClientCommunicatorAdmin.gotIOException(ClientCommunicatorAdmin.java:34)
    javax.management.remote.rmi.RMIConnector$RMIClientCommunicatorAdmin.gotIOException(RMIConnector.java:1420)
    javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:857)
    weblogic.management.mbeanservers.domainruntime.internal.ManagedMBeanServerConnection.getAttribute(ManagedMBeanServerConnection.java:288)
    javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:235)
    weblogic.management.jmx.MBeanServerInvocationHandler.doInvoke(MBeanServerInvocationHandler.java:477)
    weblogic.management.jmx.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:294)
    $Proxy69.getServerRuntime(Unknown Source)
    weblogic.management.mbeanservers.domainruntime.internal.DomainRuntimeServiceMBeanImpl.lookupServerRuntime(DomainRuntimeServiceMBeanImpl.java:242)
    sun.reflect.GeneratedMethodAccessor1880.invoke(Unknown Source)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    java.lang.reflect.Method.invoke(Method.java:575)
    weblogic.management.jmx.modelmbean.WLSModelMBean.invoke(WLSModelMBean.java:355)
    com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:831)
    com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
    weblogic.management.mbeanservers.domainruntime.internal.FederatedMBeanServerInterceptor.invoke(FederatedMBeanServerInterceptor.java:255)
    weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$16.run(WLSMBeanServerInterceptorBase.java:447)
    weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.invoke(WLSMBeanServerInterceptorBase.java:441)
    weblogic.management.mbeanservers.internal.SecurityMBeanMgmtOpsInterceptor.invoke(SecurityMBeanMgmtOpsInterceptor.java:55)
    weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase$16.run(WLSMBeanServerInterceptorBase.java:447)
    weblogic.management.jmx.mbeanserver.WLSMBeanServerInterceptorBase.invoke(WLSMBeanServerInterceptorBase.java:441)
    weblogic.management.mbeanservers.internal.SecurityInterceptor.invoke(SecurityInterceptor.java:437)
    weblogic.management.mbeanservers.internal.AuthenticatedSubjectInterceptor$10$1.run(AuthenticatedSubjectInterceptor.java:582)
    weblogic.management.mbeanservers.internal.AuthenticatedSubjectInterceptor$10.run(AuthenticatedSubjectInterceptor.java:576)
    weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:350)
    weblogic.management.mbeanservers.internal.AuthenticatedSubjectInterceptor.invoke(AuthenticatedSubjectInterceptor.java:570)
    weblogic.management.jmx.mbeanserver.WLSMBeanServer.invoke(WLSMBeanServer.java:305)
    javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1378)
    javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
    javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1264)
    javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1338)
    javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:761)
    javax.management.remote.rmi.RMIConnectionImpl_WLSkel.invoke(Unknown Source)
    weblogic.rmi.internal.ServerRequest.sendReceive(ServerRequest.java:136)
    weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:211)
    javax.management.remote.rmi.RMIConnectionImpl_1030_WLStub.invoke(Unknown Source)
    javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.invoke(RMIConnector.java:969)
    com.bea.diagnostics.server.MBeanServerUtil.processRecordsFromArchive(MBeanServerUtil.java:177)
    com.bea.diagnostics.server.MetricEarliestTimestampLocator$RecordProcessorDesc.identifyEarliestTimestamps(MetricEarliestTimestampLocator.java:261)
    com.bea.diagnostics.server.MetricEarliestTimestampLocator.identifyEarliestTimestamps(MetricEarliestTimestampLocator.java:84)
    com.bea.diagnostics.server.MetricEarliestTimestampLocator.timerExpired(MetricEarliestTimestampLocator.java:79)
    weblogic.timers.internal.TimerImpl.run(TimerImpl.java:253)
    weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516)
    weblogic.work.ExecuteThread.execute(ExecuteThread.java:198)
    weblogic.work.ExecuteThread.run(ExecuteThread.java:165)
    <Dec 21, 2009 12:24:04 AM EST> <Error> <JDBC> <BEA-001112> <Test "SELECT 1 FROM DUAL" set up for pool "MyPool" failed with exception: "java.sql.SQLRecoverableException: Io exception: Connection reset".>
    <Dec 21, 2009 12:24:04 AM EST> <Warning> <JDBC> <BEA-001129> <Received exception while creating connection for pool "MyPool": Io exception: The Network Adapter could not establish the connection>
    <Dec 21, 2009 12:24:04 AM EST> <Warning> <JDBC> <BEA-001129> <Received exception while creating connection for pool "MyPool": Io exception: The Network Adapter could not establish the connection>
    <Dec 21, 2009 12:24:05 AM EST> <Warning> <JDBC> <BEA-001129> <Received exception while creating connection for pool "MyPool": Io exception: The Network Adapter could not establish the connection>
    <Dec 21, 2009 12:24:07 AM EST> <Warning> <JDBC> <BEA-001129> <Received exception while creating connection for pool "MyPool": Io exception: The Network Adapter could not establish the connection>
    <Dec 21, 2009 12:24:12 AM EST> <Warning> <JDBC> <BEA-001129> <Received exception while creating connection for pool "MyPool": Io exception: The Network Adapter could not establish the connection>
    <Dec 21, 2009 12:24:17 AM EST> <Warning> <JDBC> <BEA-001129> <Received exception while creating connection for pool "MyPool": Io exception: The Network Adapter could not establish the connection>
    <Dec 21, 2009 12:24:22 AM EST> <Warning> <JDBC> <BEA-001129> <Received exception while creating connection for pool "MyPool": Io exception: The Network Adapter could not establish the connection>
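The statementTimeout/setQueryTimeout idea discussed above bounds a blocking call so a thread cannot sit in a socket read forever and trip the StuckThreadMaxTime watchdog. A language-agnostic sketch of the principle in Python (not WebLogic or JDBC code): run the blocking work on a worker thread and give the caller a deadline.

```python
import concurrent.futures
import time

def run_with_timeout(fn, timeout_s):
    """Run fn on a worker thread; raise TimeoutError instead of
    letting the calling thread block forever - the role that
    statementTimeout/setQueryTimeout plays for a JDBC call."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn)
        try:
            return future.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            future.cancel()  # no-op if already running, but harmless
            raise TimeoutError(f"call exceeded {timeout_s}s")
```

Note the caveat this sketch shares with real query timeouts: the underlying work may keep running in the background after the deadline fires; the timeout frees the caller, it does not necessarily kill the stuck operation.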

  • Load balancing Oracle Forms in Application Server

    Hi,
    We're currently planning a move from client-server forms to webforms running on Oracle Application Server (10.1.2.0.2). To meet our high-availability requirements, we plan to run our forms app servers active-active. We're coming a little unstuck, however, in determining the best method of load balancing requests between them.
    Currently the main sticking point is deciding whether to go for software or hardware load balancers. One of the main things we're unsure of is how much traffic we can expect to flow through the network. In our client-server environment we typically have 900-1000 active connections during peak load, and ~350 forms coming in at just over 300Mb in total.
    If you have any experience load balancing between Oracle Forms on Application Server, we'd appreciate it if you could share details of you setup and how it's performing; particularly:
    * Whether you have software/hardware load balancing
    * The number of connections this supports
    * Details of the volume of network traffic you have
    Thanks,
    Chris

    We have implemented both hardware load balancers and WebCache as a load balancer. Depending on the situation, either may be the best solution. In general we recommend hardware load balancers for production systems that require high availability. However, WebCache is very reliable as well, and a WebCache instance can be recovered very quickly in case of a failure (use the cloning facility to create a copy of the instance on new hardware within minutes).
    When using https for your applications we do not recommend WebCache, as there are several issues to cope with; one is performance (https acceleration facilities are often part of hardware load balancers).

  • Another temp question on MBPr 15...211F peak operating temp?

    Hello, I know this is a somewhat redundant question, but my own situation is slightly different from other situations.
    I purchased an MBPr a few days ago (11,3 with the 2.5 i7 quad). This is my fourth MBP. All have run hot, as I realize the aluminum casing works differently than a plastic casing, but this particular one runs a bit hotter than the 2011 8,2 (2.3 GHz i7, 16 GB RAM, 512 SSD, 1 GB GPU, OS X Mavericks) it replaced, and the fans wait for higher temperatures to ramp up. The CPU die temp can reach a peak of 211F under max load (measured with iStat Menus, which I assume is not totally accurate), and the fans wait until a higher temp to ramp up compared to my previous computers. By peak load I mean running statistical software to crunch a large dataset for an extended period, which maxes the CPU and has pushed every Mac I have ever owned above 200F for brief periods. The difference is that the other Macs were right around 200-204F, whereas this rMBP hits 210-211.
    Under moderate load, it maintains a reasonably cool working temp of 110-150F depending on what I am doing, which is more or less the same as my previous model. It manages to do this with much lower fan speeds, which I assume is due to the more efficient design of the Retina models. The outer casing remains cooler than on my previous model. Functionally this computer is fantastic, and it is not throttling CPU performance at these higher temps.
    The Apple Store is quite a drive, and Apple Tech Support told me they are unable to tell me if this is or is not within normal operating temps, which is rather frustrating as I am 80% sure this is not a problem at all. But given this is a $3,000 investment, I prefer to ask others.
    So I ask you, is this something worth being concerned about, or is this within acceptable operating parameters? TIA.

    I cannot give you a definitive answer, but be aware that there is a thermal protection circuit that will shut down the MBP before it commits hara-kiri.
    ZNickey wrote:
    The Apple Store is quite a drive, and Apple Tech Support told me they are unable to tell me if this is or is not within normal operating temps, which is rather frustrating as I am 80% sure this is not a problem at all.
    I am surprised that the Apple support would not commit to an answer.  If I were in your position, I would ask what is the thermal shut down temperature for your model MBP.  I would think they should be able to answer that question.
    Ciao.

  • High CPU Load with Alesis Multimix Usb 2.0 mixer

    Hi everybody,
    I'm still playing around with Logic Studio 9, mainly MainStage 2.1.1, for possible future live use.
    Today I tried to use my Alesis MultiMix 8 USB 2.0 just as the output device, using software instruments with an M-Audio Keystation.
    I always get CPU peak load when playing notes, with every patch, whether internal Apple instruments or third-party AU instruments; even when idle, CPU load goes to about 40%...
    If I revert back to internal output / internal input, CPU load drastically goes down to 10% idle and max 50-60% with internal instruments, max 80% with 2-3 layers of third-party AU instruments (Omnisphere, Lounge Lizard, M-Tron)...
    I know that the Alesis Mac drivers for Snow Leopard are at a beta stage, but am I the only one experiencing this, or is it a known problem?
    Thanks,
    Alex.

    Gotenks82 wrote:
    Hi Pan,
    I made a different test today...
    I know it's not the same, but I tried using a guitar rig in MainStage (guitar on channel 1 of the mixer), and even at 256 samples I ALWAYS get 100% CPU usage...
    Then I closed MainStage and opened NI Guitar Rig 3 standalone, using the Alesis as the interface, with 64 samples as the buffer, and I cannot go over 13-14% CPU usage (Hi quality enabled)... I know that MainStage adds more effects than the ones I selected in Guitar Rig 3, but the HUGE difference in CPU usage cannot be just that, I mean...
    It could be if one of the effects Mainstage was using is Space Designer.
    If it is using Space Designer try using a lesser reverb, like Gold.
    USB doesn't use that much more CPU so that's not the problem.
    You are comparing two separate guitar setups, correct? Or do you mean you're opening Guitar Rig 3 in Mainstage?
    pancenter-
    As I understand it you are opening completely different setups, correct?
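    One concrete reason buffer size dominates CPU load: the driver requests a filled buffer every buffer_size / sample_rate seconds, so each halving of the buffer doubles how often the render callback fires. A rough sketch of the arithmetic, assuming a 44.1 kHz sample rate (an assumption; check your audio device settings):

```python
SAMPLE_RATE = 44100  # Hz; assumed, not taken from the posts above

def callbacks_per_second(buffer_size):
    """How many times per second the driver asks for a filled buffer."""
    return SAMPLE_RATE / buffer_size

for size in (64, 128, 256, 512):
    print(size, round(callbacks_per_second(size)))
# 64 samples gives ~689 callbacks/s; 256 samples gives ~172 callbacks/s:
# a 4x difference in scheduling overhead before any DSP work is counted
```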

  • Database Initialization Parameter Setting - extrapolation on Load Testing

    We have a new application going live, hence a new database instance needs to be set up. The Oracle version is 10.2.0.5 (existing environment constraint).
    Load testing was conducted with an approx. 2000-user load. Per the test results, during peak load the number of open database sessions was 450. The expected user load in production is 5500, hence we expect approx. 1100 database sessions at that time.
    Due to constraints in the load testing environment, we cannot test for 6000 users. Hence, we have to extrapolate the database parameter settings for production.
    The SGA sizing & some other parameters in load testing are as below (with the following settings, performance is acceptable):
    sga_max_size 7.5 GB
    sga_target 6 GB
    db_cache_size 3 GB
    shared_pool_size 1.5 GB
    shared_pool_reserved_size 150 MB
    java_pool_size 0.2 GB
    large_pool_size 0.5 GB
    sort_area_size 0.5 MB
    streams_pool_size 48 MB
    pga_aggregate_target 4 GB
    processes 1200
    db_block_size 8K
    db_file_multiblock_read_count 16
    db_keep_cache_size 134217728
    fast_start_mttr_target 600
    open_links 25
    Please let me know how to size the database for production by extrapolation. Apart from processes and sessions, which parameters should I focus on?
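    As a starting point, the extrapolation described above is a simple ratio. A sketch of the arithmetic (treating session count as linear in user count is an assumption; the ratio often shifts at higher concurrency, so add headroom):

```python
tested_users = 2000      # users driven in the load test
tested_sessions = 450    # DB sessions observed at peak
target_users = 5500      # expected production load

# Sessions-per-user ratio observed in the load test
ratio = tested_sessions / tested_users
projected_sessions = target_users * ratio
print(round(projected_sessions))  # 1238: slightly above the ~1100 estimate,
                                  # so size processes/sessions beyond that
```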

    user8211187 wrote:
    We have a new application going live, hence a new database instance need to be set. The oracle version is 10.2.0.5 (exisiting env. constraint)
    The load testing is conducted for approx. 2000 user load. As per the test results, during the peak load, number of database sessions opened are 450. The expected user load is 5500 in production, hence we are expecting approx 1100 database sessions at that time.
    Due to constraints in load testing environment, we cannot test for 6000 users in production. Hence, we have to extrapolate the database parameter settings and put in production.
    The SGA sizing & some other parameters in Load Testing is as below (with the following setting in load, the performance is acceptable)
    sga_max_size 7.5 GB - Upon which metrics was 7.5 GB derived?

  • Load Testing - MDBs

    I have 30 instances of my MDB listening on a queue. All works fine for a small load of messages to the queue. But when I start feeding about 200 messages at the same time, there are some messages that stay in the queue and won't get processed until the TX times out. At that point, these messages are picked up again and get processed. These messages are not going into the section of the code where JMSRedelivered is true... but I'm wondering why they won't get processed when there were some free threads. At peak load, there are 30 messages in flight, and I can imagine the other messages waiting in the queue for some MDBs to be freed. But even after they are all done, they sit in the queue waiting for a timeout...
    Any ideas what may be wrong?
    Please let me know if I haven't posted enough info.

    Hi
    Thanks for the info.
    However, I think I've localized the problem (not the solution).
    If I understand correctly solveXPath is looking for .//FORM[@name='f1']/@action and will put the value of the action attribute into the variable web.formaction.f1
    The line with _adf.ctrl-state is the last value and is not actually needed or used in this case.
    For some reason .//FORM[@name='f1']/@action can't be found.
    I've tried various variants (without the leading dot, using @id instead of @name) but no luck yet.
    Any ideas ?
    Regards
    Paul
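    The failing lookup can be reproduced outside the tool. A minimal sketch with Python's standard-library ElementTree (the sample markup below is invented for illustration; the real page is not shown in the post). Note that element and attribute names are case-sensitive, a common reason an uppercase .//FORM[@name='f1'] query finds nothing when the served page actually uses lowercase form:

```python
import xml.etree.ElementTree as ET

# Hypothetical page fragment, standing in for the real response
html = """<HTML><BODY>
<FORM name="f1" action="/app/submit?_adf.ctrl-state=abc123"></FORM>
</BODY></HTML>"""

root = ET.fromstring(html)
# ElementTree supports a limited XPath subset: locate the element,
# then read the attribute off it
form = root.find(".//FORM[@name='f1']")
print(form.get("action"))  # /app/submit?_adf.ctrl-state=abc123

# With lowercase tags in the page, the uppercase query would return None
print(root.find(".//form[@name='f1']"))  # None
```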

  • Session bleed under heavy load...any suggestions?

    Hello.
    I'm working on an application that has user-sensitive data, and we are seeing session bleed under heavy load (i.e., users reporting seeing other users' data, error reports with missing session values, things along those lines). The app itself is typical stuff; a user logs in, they see information specific to their user account and do things with it. Some of that information comes from the session. This all seems to work fine under normal load (100 or fewer users), or with a few users testing, but fails under heavy load (1000+ concurrent users). We cannot reproduce it locally, nor can we see it when we log into the system ourselves and click around during peak load times.
    Here is some more detail. As I mentioned, we are storing certain user information in the session. We use an exclusive lock of the session scope to write that info, and a readonly lock of the session scope to read it (I am quadruple-checking this now). This app is running in a multi-instance clustered environment (all on the same server), CF8 with IIS. We are using J2EE session management, with sticky sessions and session replication on. We were seeing the session bleed before the clustering was introduced, however...
    One caveat is that a huge number of our users come from behind a proxy system, meaning they all have the same IP. I did some searching on this, but could not find any definitive information that it would create a problem with session variables.
    I was wondering if anyone else had seen this kind of problem and/or had any suggestions for dealing with it?
    Thanks.

    The jury is still out to a degree, but I think we've identified the culprit(s) of our session bleed, for anyone interested. It boiled down to two problems.
    1. Var scoping issues. Unfortunately this was a fairly old application, written before we strictly employed best practices on var-scoping variables within functions in all our CFCs. We've fixed the bad code and our session bleed problems seem to have stopped. There is a great utility for checking code for var scoping problems available at: http://varscoper.riaforge.org/
    2. A misunderstanding of how cflock with a timeout setting (and no throw-on-error) behaves. Aside from session bleed, it turned out we had another issue in there, which was expected session values missing altogether. The crux of the problem is that we had set our cflock read/write timeouts to 30 seconds. Under the extremely heavy load, requests were routinely exceeding those timeout thresholds. The locks were not set to throw on error, so when the timeout threshold was exceeded, the code within the lock ended up just being skipped. This was leading to missing data in the session. Temporarily we've simply increased the timeout setting to a large number, which has fixed our problem. Eventually we'll set these locks to throw on error and handle the exception in a more graceful manner.
    Hope this helps someone.
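    The skips-on-timeout behavior described in point 2 can be mimicked with any lock that offers a non-throwing timeout. A sketch in Python (standing in for CFML's cflock) showing how a silently skipped lock body translates directly into missing session data:

```python
import threading

session = {}
lock = threading.Lock()

def write_session(key, value, timeout):
    # Like cflock with no throw-on-error: if the lock is not acquired
    # in time, the guarded write is silently skipped
    if lock.acquire(timeout=timeout):
        try:
            session[key] = value
        finally:
            lock.release()
        return True
    return False  # timed out; nothing was written

# Simulate another request holding the lock past our timeout
lock.acquire()
ok = write_session("userid", 42, timeout=0.05)
lock.release()
print(ok, session)  # False {}  <- the session value never arrived

# With the lock free again, the same write succeeds
print(write_session("userid", 42, timeout=0.05), session)
```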

  • Weird Bug Still On Going

    I have recently upgraded my home PC with a new motherboard, CPU and RAM. Since then I have a weird bug that I can't seem to get my head around to fix.
    Basically, whenever I am in the Metro UI start screen, at random times when I left- or right-click it will hang my system with NO BSOD. My monitors go to sleep; my PC is still on and the fans are still spinning, as are the LEDs and my keyboard and mouse, but they are frozen.
    I can't remote onto it from my LogMeIn account to see what is going on.
    I have tried the following:
    changed power options to disable sleep timeout
    reinstalled fresh Windows 8 then updated to 8.1
    changed my main HDD and reinstalled Windows 8 then updated to 8.1
    installed and updated all of my hardware drivers (mouse, keyboard, motherboard chipset)
    It is very random and frustrating, as I am an IT engineer and I cannot figure out what is going on. I have estimated that my system at peak load pulls around 590W of power, and I have a 750W modular gaming PSU.
    I also have an NVIDIA GTX 295, pre-watercooled.
    I really cannot think what this could be and would really appreciate some guidance/assistance. In my opinion it is not to do with my PSU or GPU, as it ONLY does this in the Windows 8.1 Metro screen and nowhere else, not even the Metro screen in Windows 8 before I updated to 8.1.
    Many thanks in advance for this.
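    For what it's worth, the headroom arithmetic on the figures quoted above (both numbers are the poster's estimates, not measurements):

```python
estimated_peak_w = 590  # estimated system draw at peak load
psu_rating_w = 750      # PSU continuous rating

load_fraction = estimated_peak_w / psu_rating_w
print(f"{load_fraction:.0%}")  # 79%: within the rating, though near the top
# of the 50-80% band where most PSUs run coolest and most efficiently
```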

    Try to check whether this issue also happens in Safe Mode or the BIOS; if it still hangs there, then there is a high possibility of a hardware compatibility problem or a hardware fault itself.
    Try to remove all third-party hardware (printer, LAN cable, USB dongle) and do a power cycle to isolate a third-party hardware compatibility or hardware issue.
    If you suspect this is a Windows issue, I suggest creating a partition and installing Windows 7 or another OS to prove that this is not an HDD or other hardware-related issue.
    Because you have already reformatted your OS, I suggest troubleshooting from the hardware side first (HDD, RAM, SATA and power cables, etc.).

  • Memory Speed Wrong In Cpu-z I have ddr533

    Sorry for the double post, didn't mean to do that.
    WELL, I finally got my better memory to see how far I can push my Prescott, but the only thing is my memory speed is showing up as PC3200 in CPU-Z. I am running in dual channel and yes, I do have one stick in each slot (1 and 3). Can anyone help with this issue, and what setting do I set in the BIOS - auto? Any help would be great. And what would be good timings for this memory?

    UnderTheSun,
    I can run 3-4-3-7 with SPD set manually with PAT=Normal (Disable) and read the same in CPU-Z using BIOS 3.6. Do you always get 3-4-4-8 with SPD set manually?
    I don't have this problem with FSB above 250 MHz (testing with CC I can reach FSB 261 MHz).
    At what FSB did you have CAS set to 2.5 in the BIOS? Even if I set it manually in the BIOS with a CAS value of 2.5, CPU-Z always reads 3 with PAT=Normal (Disable).
    Funny thing with this mobo: the CAS value is always set to 2.5 if I turn PAT=Fast (Fast, Turbo and Ultra Turbo always have the same memory speed per Memtest86+), and the bad thing is I lose the CAS value of 3 in the BIOS, so I need to reflash the BIOS.
    I cannot change the CAS value to 2.5 with this mobo, as shown at the following link:
    http://www.anandtech.com/memory/showdoc.html?i=1867
    I would like to try OCZ PC3700 Rev.2 EL Gold Edition, which in my opinion works best with this 'P' series regarding DOT and manual OCing, as shown at the following link:
    http://www.anandtech.com/memory/showdoc.html?i=1940
    According to the OCZ office this DDR is discontinued, so I need to find it in an online store and hope I can get it.
    So finally I think we need to wait for a newer BIOS for the 'P' series.
    Casing Tt Xaser III Skull
    M/B 865 PE Neo-2 PFS Platinum Edition Bios Ver.3.6
    CPU:P4 2.4C (HT enable) and ThermalTake SubZero 4G
    (DOT Rank=Commander,Normal,2395 mHz - 2760 mHz) The PAT/MAT using Bios 3.6 doesn't perform properly.
    Memory: Corsair Twinx XMS 4000 pro 2x512 meg (dual-channel dimm 1 and 3).
    VGA:WInFAst A360 Ultra TDH(FX 5700 with 128 meg DDRII)
    HD:2 SATA Maxtor 80gig and 1 ATA Maxtor 80Gig
    CD/RW Yamaha and Pioneer DVD
    PSU:ThermalTake Silent PurePower 480W (W0010/Black)
    +5 V/40 A, +3.3 V/30 A, +12 V/18 A, -5 V/0.3 A, -12 V/0.8 A, +5 VSB/2 A Peak Load 550 W
    Window XP PRO SP1
    NEC FP2141SB
    Microsoft DesktopPro
    Sound Blaster Audigy 2 Platinum + Klipsch Promedia 4.1

  • Error execute report in R12

    Hi all,
    I think I'd better change this question into a share, because I've solved my problem :p.
    Below is my first post in this thread, before I found the solution:
    I hope I put this in the right forum. Before I ask the question, here is my environment:
    Operating system : Oracle Enterprise Linux 4 update 6
    RDBMS : 10.2.0.3.0
    Oracle Applications : 12.0.4
    Report Builder : 10.1.2.0.2
    I created a report using Oracle Report Developer, then uploaded it to my Apps machine. After setting up the executable and validation set I tested the report. At first the request completed with status Warning. Viewing the log I got a warning like this:
    REP-0004: Warning: Unable to open user preference file
    After browsing Metalink and OTN (such as in Reports Compilation Errors: REP-25200 / REP-0004 / REP-1430 and in Re: REP-0004 Error...), it seems the problem was that prefs.ora was not present in $HOME, so I copied the prefs.ora file from /apps/tech_st/10.1.2/tools/admin to $HOME and requested the report again. This time that error didn't show up and the request completed normal, but there was no output at all. The same report did show output when tested in Oracle Report Developer.
    I opened the report file in Report Developer and tried to compile it (Tools > File Conversion) from rdf to rdf using a different name, then copied the report over and tested again. This time the request completed normal but still no output. When I viewed the log I got this error message:
    Oracle error -6502: ORA-06502: PL/SQL: numeric or value error: associative array shape is not consistent with session parameters has been detected in fnd_global.put(PERMISSION_CODE, FND_PERMIT_0000).
    APP-FND-01564: Oracle error 6502 in FDXNC
    Cause: FDXNC failed due to ORA-06502: PL/SQL: numeric or value error: associative array shape is not consistent with session parameters
    ORA-06512: at "APPS.FND_GLOBAL", line 1233
    ORA-06512: at "APPS.FND_GLOBAL", line 1432
    ORA-06512: at line 1.
    The SQL statement being executed at the time of the error was: begin fnd_global.set_nls_context( p_nls_numeric_characters => :nc ); end; and was executed from the file &ERRFILE.
    The routine FDPREP was unable to set the numeric character to .,.
    So I tried another way: I recreated the report but tried to display only some fields. The fields must be put in a repeating frame, because if I tried to put them in a normal frame, Reports Developer displayed an error that I had placed the field below its frequency. After putting some fields in the repeating frame I tested the report again in Oracle Apps. This time the result was Completed Normal but still no data (0 bytes), though in Reports Developer it showed some data. After some tests I concluded that data is only displayed if all fields are placed in the same repeating frame at the same level, and cannot be broken down with a repeating frame inside a repeating frame (nested repeating frames), which is what you need for grouping, and grouping is what I want. Using the Reports Developer wizard to create the report gave me the same result: no data displayed.
    Now, anyone know how to solve that problem? thanks.
    and now for the solution :
    For REP-0004: Warning: Unable to open user preference file error message
    Just copy the prefs.ora file from /apps/tech_st/10.1.2/tools/admin to $HOME and the error message won't show up anymore. Though in my experience this error message can be ignored, further analysis would be good to know whether there is any impact on the system, performance, or anything else.
    For the no-data-displayed error
    Make sure the setup is right, especially the token field, as that was my problem. The token field was not the same as the bind variable in the report query; that's why the request completed normal with no data displayed :P It was very foolish of me to miss that simple thing.
    thank you all
    Message was edited by:
    UD

    mdtaylor wrote:
    Since you are on 10.2.0.3, you may want to also look at database patch 5890966 INTERMITTENT ORA-06502 DURING PEAK LOADING
    Associative Array Shape Is Not Consistent With Session Parameter at Peak Load     
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=467688.1
    Hi Michael,
    Yes, that patch is exactly what I got from Oracle Support; however, I'm having a problem applying it :P
    Here is what I usually do to apply a patch in E-Biz:
    1. Source the application environment using the environment file in APPL_TOP
    2. Enable maintenance mode using adadmin
    3. Apply patch using adpatch
    4. Disable maintenance mode using adadmin
    5. Start all the services
    How do I apply a patch for the database only? Reading the readme file, it said that the patch is applied using only the command:
    $ opatch apply
    However, we need to ensure that the directory containing the opatch script appears in $PATH. I found that opatch.pl exists in $ORACLE_HOME/OPatch, but I can't use opatch even though I had sourced the application environment or the database environment.
    I echoed $PATH and $ORACLE_HOME/OPatch (either ../db/tech_st/10.2.0 for the database or ../apps/tech_st/10.1.2/OPatch) and found it was not in $PATH.
    I assumed that I shouldn't recklessly change the application or database environment files just so that $ORACLE_HOME/OPatch appears in $PATH.
    However, if I try to run opatch directly from its folder, I get the following error:
    OPatch cannot find a valid oraInst.loc file to locate Central Inventory
    So my questions are :
    1. Do I need to source an environment before running opatch? If yes, which one: the application environment or the database environment?
    2. How do I set the Central Inventory location so that opatch recognizes it? Is it by sourcing the environment from question 1?
    3. Which opatch must I run? The one in the application tier or the one in the database tier?
    4. How do I run it? By typing the full path directly (e.g. /db/tech_st/10.2.0/OPatch/opatch)?
    Thx
