Errors with Queue exceed maximum capacity of: '65536' elements

I faced this issue recently, where the messaging bridges from an external domain were not able to connect to my domain.
When I checked the logs, I found one of the servers reporting the error below.
####<Feb 28, 2011 7:31:23 PM GMT> <Warning> <RMI> <dyh75a03> <managed06_orneop02> <ExecuteThread: '2' for queue: 'weblogic.socket.Muxer'> <<WLS Kernel>> <> <BEA-080003> <RuntimeException thrown by rmi server: weblogic.rmi.internal.BasicServerRef@a - hostID: '4275655823338432757S:dywlms06-orneop02.neo.openreach.co.uk:[61007,61007,-1,-1,61007,-1,-1,0,0]:dywlms01-orneop02.neo.openreach.co.uk:61002,dywlms02-orneop02.neo.openreach.co.uk:61003,dywlms03-orneop02.neo.openreach.co.uk:61004,dywlms04-orneop02.neo.openreach.co.uk:61005,dywlms05-orneop02.neo.openreach.co.uk:61006,dywlms06-orneop02.neo.openreach.co.uk:61007,dywlms07-orneop02.neo.openreach.co.uk:61008,dywlms08-orneop02.neo.openreach.co.uk:61009,dywlms09-orneop02.neo.openreach.co.uk:61010:orneop02:managed06_orneop02', oid: '10', implementation: 'weblogic.transaction.internal.CoordinatorImpl@f6ede1'
weblogic.utils.UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '65536' elements.
weblogic.utils.UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '65536' elements
at weblogic.utils.UnsyncCircularQueue.expandQueue(UnsyncCircularQueue.java:72)
at weblogic.utils.UnsyncCircularQueue.put(UnsyncCircularQueue.java:94)
at weblogic.kernel.ExecuteThreadManager.execute(ExecuteThreadManager.java:374)
at weblogic.kernel.Kernel.execute(Kernel.java:345)
at weblogic.rmi.internal.BasicServerRef.dispatch(BasicServerRef.java:312)
at weblogic.rjvm.RJVMImpl.dispatchRequest(RJVMImpl.java:1113)
at weblogic.rjvm.RJVMImpl.dispatch(RJVMImpl.java:1031)
at weblogic.rjvm.ConnectionManagerServer.handleRJVM(ConnectionManagerServer.java:225)
at weblogic.rjvm.ConnectionManager.dispatch(ConnectionManager.java:805)
at weblogic.rjvm.t3.T3JVMConnection.dispatch(T3JVMConnection.java:782)
at weblogic.socket.SocketMuxer.readReadySocketOnce(SocketMuxer.java:705)
at weblogic.socket.SocketMuxer.readReadySocket(SocketMuxer.java:651)
at weblogic.socket.PosixSocketMuxer.processSockets(PosixSocketMuxer.java:123)
at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:32)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:224)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:183)
>
As a workaround, I had to restart this server. After that, the bridges from the external domain were working again.
Has anyone come across this issue before?
Regards
Deepak

Found this issue again in one of the domains. The thread dumps show a Java-level deadlock: thread '3' holds one transaction-log IOBuffer and waits for the other, thread '6' holds that second buffer and waits for a ServerTransactionImpl monitor, and thread '0' holds that monitor while waiting for the first buffer, a three-way cycle.
Found one Java-level deadlock:
=============================
"ExecuteThread: '3' for queue: 'weblogic.kernel.Default'":
waiting to lock monitor 0868dea4 (object a4e99be8, a weblogic.transaction.internal.TransactionLoggerImpl$IOBuffer),
which is held by "ExecuteThread: '6' for queue: 'weblogic.kernel.Default'"
"ExecuteThread: '6' for queue: 'weblogic.kernel.Default'":
waiting to lock monitor 00255304 (object b23be050, a weblogic.transaction.internal.ServerTransactionImpl),
which is held by "ExecuteThread: '0' for queue: 'weblogic.kernel.Non-Blocking'"
"ExecuteThread: '0' for queue: 'weblogic.kernel.Non-Blocking'":
waiting to lock monitor 086677a4 (object a4e99c00, a weblogic.transaction.internal.TransactionLoggerImpl$IOBuffer),
which is held by "ExecuteThread: '3' for queue: 'weblogic.kernel.Default'"
Java stack information for the threads listed above:
===================================================
"ExecuteThread: '3' for queue: 'weblogic.kernel.Default'":
     at weblogic.transaction.internal.TransactionLoggerImpl$LogDisk.release(TransactionLoggerImpl.java:1323)
     - waiting to lock <a4e99be8> (a weblogic.transaction.internal.TransactionLoggerImpl$IOBuffer)
     - locked <a4e99c00> (a weblogic.transaction.internal.TransactionLoggerImpl$IOBuffer)
     at weblogic.transaction.internal.TransactionLoggerImpl.release(TransactionLoggerImpl.java:389)
     at weblogic.transaction.internal.ServerTransactionImpl.releaseLog(ServerTransactionImpl.java:2767)
     at weblogic.transaction.internal.ServerTransactionManagerImpl.remove(ServerTransactionManagerImpl.java:1466)
     at weblogic.transaction.internal.ServerTransactionImpl.afterCommittedStateHousekeeping(ServerTransactionImpl.java:2645)
     at weblogic.transaction.internal.ServerTransactionImpl.setCommitted(ServerTransactionImpl.java:2669)
     at weblogic.transaction.internal.ServerTransactionImpl.localCommit(ServerTransactionImpl.java:1875)
     at weblogic.transaction.internal.ServerTransactionImpl.localCommit(ServerTransactionImpl.java:1163)
     at weblogic.transaction.internal.SubCoordinatorImpl.startCommit(SubCoordinatorImpl.java:274)
     at weblogic.transaction.internal.CoordinatorImpl_WLSkel.invoke(Unknown Source)
     at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:492)
     at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:435)
     at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
     at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:147)
     at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:430)
     at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:35)
     at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:224)
     at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:183)
"ExecuteThread: '6' for queue: 'weblogic.kernel.Default'":
     at weblogic.transaction.internal.ServerTransactionImpl.onDisk(ServerTransactionImpl.java:836)
     - waiting to lock <b23be050> (a weblogic.transaction.internal.ServerTransactionImpl)
     at weblogic.transaction.internal.TransactionLoggerImpl$LogDisk.write(TransactionLoggerImpl.java:1252)
     - locked <a4e99be8> (a weblogic.transaction.internal.TransactionLoggerImpl$IOBuffer)
     - locked <a4e99bb8> (a weblogic.transaction.internal.TransactionLoggerImpl$LogDisk)
     at weblogic.transaction.internal.TransactionLoggerImpl.flushLog(TransactionLoggerImpl.java:614)
     at weblogic.transaction.internal.TransactionLoggerImpl.store(TransactionLoggerImpl.java:305)
     at weblogic.transaction.internal.ServerTransactionImpl.log(ServerTransactionImpl.java:1850)
     at weblogic.transaction.internal.ServerTransactionImpl.globalPrepare(ServerTransactionImpl.java:2118)
     at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:259)
     at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:228)
     at weblogic.transaction.internal.TransactionManagerImpl.commit(TransactionManagerImpl.java:303)
     at weblogic.jms.bridge.internal.MessagingBridge.onMessageInternal(MessagingBridge.java:1279)
     at weblogic.jms.bridge.internal.MessagingBridge.onMessage(MessagingBridge.java:1190)
     at weblogic.jms.adapter.JMSBaseConnection$29.run(JMSBaseConnection.java:1989)
     at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
     at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:147)
     at weblogic.jms.adapter.JMSBaseConnection.onMessage(JMSBaseConnection.java:1985)
     at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2686)
     at weblogic.jms.client.JMSSession.execute(JMSSession.java:2598)
     at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:224)
     at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:183)
"ExecuteThread: '0' for queue: 'weblogic.kernel.Non-Blocking'":
     at weblogic.transaction.internal.TransactionLoggerImpl$LogDisk.release(TransactionLoggerImpl.java:1322)
     - waiting to lock <a4e99c00> (a weblogic.transaction.internal.TransactionLoggerImpl$IOBuffer)
     at weblogic.transaction.internal.TransactionLoggerImpl.release(TransactionLoggerImpl.java:389)
     at weblogic.transaction.internal.ServerTransactionImpl.releaseLog(ServerTransactionImpl.java:2767)
     at weblogic.transaction.internal.ServerTransactionManagerImpl.remove(ServerTransactionManagerImpl.java:1466)
     at weblogic.transaction.internal.ServerTransactionImpl.setRolledBack(ServerTransactionImpl.java:2597)
     at weblogic.transaction.internal.ServerTransactionImpl.ackRollback(ServerTransactionImpl.java:1093)
     - locked <b23be050> (a weblogic.transaction.internal.ServerTransactionImpl)
     at weblogic.transaction.internal.CoordinatorImpl.ackRollback(CoordinatorImpl.java:298)
     at weblogic.transaction.internal.CoordinatorImpl_WLSkel.invoke(Unknown Source)
     at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:492)
     at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:435)
     at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
     at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:147)
     at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:430)
     at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:35)
     at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:224)
     at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:183)
Found 1 deadlock.

Similar Messages

  • BEA-000802 : Queue exceed maximum capacity of: '65536' elements.

    Hi,
    I am getting the following error in my WebLogic Server, version 10.3.3.0.
    After some time, the server stops reporting its state. Usually, when the server is running properly, its state is "OK", but after the following error messages my server shows no state and the applications stop running.
    ####<Nov 11, 2010 5:18:50 AM GMT+05:30> <Error> <Kernel> <APP1> <Server1> <[ACTIVE] ExecuteThread: '360' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1289432930650> <BEA-000802> <ExecuteRequest failed
    weblogic.utils.UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '65536' elements.
    weblogic.utils.UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '65536' elements
         at weblogic.utils.UnsyncCircularQueue$FullQueueException.<init>(UnsyncCircularQueue.java:147)
         at weblogic.utils.UnsyncCircularQueue$FullQueueException.<init>(UnsyncCircularQueue.java:144)
         at weblogic.utils.UnsyncCircularQueue.expandQueue(UnsyncCircularQueue.java:71)
         at weblogic.utils.UnsyncCircularQueue.put(UnsyncCircularQueue.java:94)
         at weblogic.rjvm.MsgAbbrevJVMConnection$WritingState.sendNow(MsgAbbrevJVMConnection.java:511)
         at weblogic.rjvm.MsgAbbrevJVMConnection.canSendMsg(MsgAbbrevJVMConnection.java)
         at weblogic.rjvm.MsgAbbrevJVMConnection.sendMsg(MsgAbbrevJVMConnection.java:181)
         at weblogic.rjvm.MsgAbbrevJVMConnection.sendMsg(MsgAbbrevJVMConnection.java:142)
         at weblogic.rjvm.ConnectionManager.sendMsg(ConnectionManager.java:593)
         at weblogic.rjvm.RJVMImpl.send(RJVMImpl.java:901)
         at weblogic.rjvm.MsgAbbrevOutputStream.flushAndSend(MsgAbbrevOutputStream.java:394)
         at weblogic.rjvm.MsgAbbrevOutputStream.sendOneWay(MsgAbbrevOutputStream.java:400)
         at weblogic.rjvm.BasicOutboundRequest.sendOneWay(BasicOutboundRequest.java:105)
         at weblogic.rmi.internal.BasicRemoteRef.sendOneway(BasicRemoteRef.java:271)
         at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:214)
         at weblogic.management.logging.DomainLogHandlerImpl_1033_WLStub.publishLogEntries(Unknown Source)
         at weblogic.logging.DomainLogBroadcasterClient$2.run(DomainLogBroadcasterClient.java:209)
         at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    I am also getting an out-of-memory warning:
    ####<Nov 11, 2010 7:34:46 AM GMT+05:30> <Critical> <Health> <APP1> <Server1> <weblogic.GCMonitor> <<anonymous>> <> <> <1289441086135> <BEA-310003> <Free memory in the server is 2,953,904 bytes. There is danger of OutOfMemoryError>
    My server arguments are:
    -XX:PermSize=256m -XX:MaxPermSize=256m -Xms1024M -Xmx2048M
    Please tell me where things are going wrong.

    This WLS instance can't handle such a heavy request load.
    See the following error:
    ####<Nov 11, 2010 5:18:50 AM GMT+05:30> <Error> <Kernel> <APP1> <Server1> <[ACTIVE] ExecuteThread: '360' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>>
    The ExecuteThread number is 360, so I think your instance is already far too busy.
    I suspect the OOME is a secondary issue.
    To solve this problem, I would consider the following:
    1. Check the total number of requests.
         - You probably need to re-size the WLS instances.
    2. Increase the WLS instance count.
         - Keep it under 100 ExecuteThreads per WLS instance.
    3. Find long-running requests, especially STUCK threads.
         - Tune those requests so they complete within the JTA timeout.
    4. Set -Xms and -Xmx to the same value.
         - This avoids heap resizing and helps performance.
    I hope this will be helpful to you.
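    For point 4, a minimal sketch of the start arguments, assuming a stock setDomainEnv.sh-style start script; the heap sizes are illustrative values, not a recommendation for this particular domain:
    # Illustrative only: equal -Xms/-Xmx avoids heap resizing under load
    JAVA_OPTIONS="${JAVA_OPTIONS} -XX:PermSize=256m -XX:MaxPermSize=256m -Xms2048m -Xmx2048m"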

  • UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '65536' elements

    Dear friends,
    Does anyone know about a problem like this: <BEA-002911> <WorkManager ClusterMessaging failed to schedule a request due to weblogic.utils.UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '65536' elements
    weblogic.utils.UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '65536' elements
    Why does this happen, and how can it be solved?
    Thanks a lot!

    Hi,
    WebLogic has a maximum size for its pending request queue, configured at 65536 elements. Once that is reached, it stops accepting incoming requests on your server socket port.
    This means that your WebLogic server / JVM has very likely depleted its internal thread pool, triggering WebLogic to queue incoming requests until the queue eventually reaches its maximum capacity.
    From a resolution perspective, you will need to investigate the root cause; thread dump generation and analysis will be required in your case.
    Please generate a JVM thread dump (kill -3 <pid> on a UNIX OS) and share details of the thread contention / problem pattern.
    Thanks.
    P-H
    http://javaeesupportpatterns.blogspot.com/
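    A minimal sketch of the capture step P-H describes, assuming a UNIX shell and that PID holds the WebLogic server's process id; the dumps land in the server's stdout log, and a few dumps spaced apart make the contention pattern visible:
    # Illustrative only: take several dumps so you can see which threads move
    for i in 1 2 3; do
        kill -3 "$PID"
        sleep 10
    done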

  • Queue exceed maximum capacity of '256' elements.

              Has anyone seen this one before??
              Thanks
              Thomas
              weblogic.utils.UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '256' elements.
              weblogic.utils.UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '256' elements
              at weblogic.utils.UnsyncCircularQueue.expandQueue(UnsyncCircularQueue.java:72)
              at weblogic.utils.UnsyncCircularQueue.put(UnsyncCircularQueue.java:94)
              at weblogic.kernel.ExecuteThreadManager.execute(ExecuteThreadManager.java:343)
              at weblogic.kernel.Kernel.execute(Kernel.java:338)
              at weblogic.rmi.internal.BasicServerRef.dispatch(BasicServerRef.java:304)
              at weblogic.rjvm.RJVMImpl.dispatchRequest(RJVMImpl.java:923)
              at weblogic.rjvm.RJVMImpl.dispatch(RJVMImpl.java:844)
              at weblogic.rjvm.ConnectionManagerServer.handleRJVM(ConnectionManagerServer.java:222)
              at weblogic.rjvm.ConnectionManager.dispatch(ConnectionManager.java:794)
              at weblogic.rjvm.t3.T3JVMConnection.dispatch(T3JVMConnection.java:570)
              at weblogic.socket.SocketMuxer.readReadySocketOnce(SocketMuxer.java:681)
              at weblogic.socket.SocketMuxer.readReadySocket(SocketMuxer.java:627)
              at weblogic.socket.PosixSocketMuxer.processSockets(PosixSocketMuxer.java:123)
              at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:32)
              at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
              at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
              

    Hi,
              The queue in question here is not a JMS queue. Instead, it is the "execute queue" of jobs waiting to be serviced by a thread in its thread pool.
              I think the size of these queues is configurable on the thread pool. Note that if a dead-lock is locking up threads and causing the problem, you will likely need to increase the number of threads, or dedicate server-side resources to their own thread pools, rather than (or in addition to) increasing the size of the execute queues. (A thread dump helps in diagnosing dead-locks.)
              This error is likely on a server-side thread pool, so there should be warnings/error messages in the server logs that describe what is happening.
              Tom
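              A hedged sketch of the knob Tom mentions for 8.1-era servers, where execute-queue sizing lives in config.xml; the element and attribute names follow the 8.1 schema, and the server name and values are purely illustrative:
              <!-- Illustrative only: more threads and an explicit queue length
                   for the default execute queue of server "myserver" -->
              <Server Name="myserver">
                <ExecuteQueue Name="weblogic.kernel.Default"
                              ThreadCount="25"
                              QueueLength="65536"/>
              </Server>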

  • Error:  Debug Component exceeds maximum size (65535 bytes)

    Hi All,
    It would seem that I have come across a limitation in the CAP file format for Java Card 2.2.1. When I run the Sun 2.2.1 converter I get the following output:
    Converting oncard package
    Java Card 2.2.1 Class File Converter, Version 1.3
    Copyright 2003 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms.
    error:  Debug Component exceeds maximum size (65535 bytes).
    Cap file generation failed.
    conversion completed with 1 errors and 0 warnings.
    This also happens with JCDK 2.2.2. Is there any way around this limitation? I can instruct the compiler not to generate some of the debug output, such as variable attributes, but that makes the debugger less than useful. I could not find anything in the JCVM spec for JC 2.2.1 that mentions this limitation (I could have missed it very easily if it is there). Is this a limitation that will be lifted in newer versions of the standard? It is starting to be a bit of a problem, as our code base has outgrown our development tools (no more debugger). Compiling and running without debug information works fine.
    Any information on this would be much appreciated.
    Cheers,
    Shane

    For those who are interested, I found this in the Java Card VM spec (u1 and u2 are 1- and 2-byte unsigned values, respectively). It appears to be a limitation of the CAP file format. We managed to work around the problem (parsing the debug component of the CAP file was very informative) by shortening package names, reducing exception handling, and removing classes we could do without.
    It looks like this limitation is gone in the JC 3.0 Connected Edition, which has no CAP files, but JC 2.2.2 and the JC 3.0 Classic Edition both have the same issue. It would be nice if there were a converter/debugger combination that removed this limitation; if anyone knows of such a combination, I am all ears! I know for a fact that JCOP does not :(
    Cheers,
    Shane
    *6.14 Debug Component*
    This section specifies the format for the Debug Component. The Debug Component contains all the metadata necessary for debugging a package on a suitably instrumented Java Card virtual machine. It is not required for executing Java Card programs in a non-debug environment.
    The Debug Component references the Class Component (Section 6.8 "Class Component”), Method Component (Section 6.9 "Method Component”), and Static Field Component (Section 6.10 "Static Field Component”). No components reference the Debug Component.
    The Debug Component is represented by the following structure:
    {code}
    debug_component {
    u1 tag
    u2 size
    u2 string_count
    utf8_info strings_table[string_count]
    u2 package_name_index
    u2 class_count
    class_debug_info classes[class_count]
    }
    {code}
    The items in the debug_component structure are defined as follows:
    *tag* The tag item has the value COMPONENT_Debug (12).
    *size* The number of bytes in the component, excluding the tag and size items. The value of size must be greater than zero. Because size is a two-byte unsigned value, the component can never exceed 65535 bytes, which is exactly the limit the converter reports.
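    As a compile-time mitigation along the lines Shane mentions, javac's standard -g options can trim the debug information before conversion; a hedged sketch, with a placeholder source path:
    # Illustrative only: keep line-number and source-file debug info but drop
    # local-variable tables, shrinking the Debug Component
    javac -g:lines,source src/com/example/MyApplet.java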

  • Full Export/Import Errors with Queue tables and ApEx

    I'm trying to take a full export of an existing database in order to build an identical copy in another database instance, but the import fails each time, causing problems with queue tables and ApEx tables.
    I have used both the export utility and Data Pump (both with partial and full exports) and the same problems are occurring.
    After import, the queue tables in my schema are unstable. They cannot be dropped using the queue admin packages, as those throw ORA-24002: QUEUE_TABLE <table> does not exist exceptions. Trying to drop the tables directly causes ORA-24005: must use DBMS_AQADM.DROP_QUEUE_TABLE to drop queue tables.
    As a result, the schema cannot be dropped at all unless manual data dictionary clean-up steps (as per MetaLink) are done.
    The ApEx import fails when creating foreign keys to WWV_FLOW_FILE_OBJECTS$PART. It creates the table OK, but for some reason the characters after the $ go missing, so the referencing tables try to refer to WWV_FLOW_FILE_OBJECTS$ only.
    I am exporting from Enterprise Edition 10.2.0.1 and importing into Standard edition 10.2.0.1, but we are not using any of the features not available in standard, and I doubt this would cause the issues I'm getting.
    Can anyone offer any advice on how I can resolve these problems so a full import will work reliably?

    Thanks for the lead!
    After digging around MetaLink some more, it sounds like I'm running into Bug 5875568 (MetaLink Note 5875568.8), which is in fact related to the multibyte character set. The bug is fixed in server patch set 10.2.0.4 and in release 11.1.0.6.
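    For reference, the clean-up path the ORA-24005 message points at is the AQ admin API; a hedged sketch, with placeholder schema and table names:
    -- Illustrative only: drop an orphaned queue table the supported way;
    -- force => TRUE also drops any queues still defined against it
    BEGIN
      DBMS_AQADM.DROP_QUEUE_TABLE(
        queue_table => 'MYSCHEMA.MY_QUEUE_TABLE',
        force       => TRUE);
    END;
    /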

  • Link Time error with queue STL

    Hi,
    I am getting a link-time error while building a program that uses an object file which, in turn, uses the <queue> template from the STL. I have provided a summary of the error below. I am not able to identify the problem; if somebody could help me in this regard, I would appreciate it.
    Undefined first referenced
    symbol in file
    void std::deque<MsgBuf,std::allocator<MsgBuf> >::__allocate_at_end() ../sunlib/libMqApi.a(MQ_GuardedQueue.o)
    __type_1 std::copy<std::deque<MsgBuf,std::allocator<MsgBuf> >::const_iterator,std::back_insert_iterator<std::d
    eque<MsgBuf,std::allocator<MsgBuf> > > >(__type_0,__type_0,__type_1) M1234.o
    std::deque<MsgBuf,std::allocator<MsgBuf> >::~deque() M1234.o
    void std::deque<double,std::allocator<double> >::__allocate_at_end() M1234.o
    Thanks

    Unfortunately, since compiler 5.0 is EOLed, you probably need to contact Sun's official support site if you have a contract with Sun.
    - Rose

  • Error with Contacts - "Exceeding iCloud limits"

    I have Outlook 2007 synched with iCloud. I normally don't have any trouble, but today I tried to alter a contact and got a popup that says "Exceeding iCloud Limits." Yet when I log into my iCloud, I'm not even using 1% of my allowed storage. I've tried to add more contacts and get the same error.
    So then I logged into icloud.com and tried to make the changes there, and when I do, I get a popup that says "Your changes to this contact couldn't be saved because of a server error. Please try again."
    Is anyone else having this problem? I did some tests in Calendar, and it is working fine.
    Thanks.

    Same deal. I edited a contact, and now there's a pop-up every time I open Outlook 07, refresh iCloud, etc., with 4.6+ GB of unused storage.
    My fix: I went to icloud.com (thank you) and deleted that contact completely, then went back to Outlook and deleted the contact there too. I closed Outlook, and when I opened it back up the error message was gone.
    I've edited other contacts to see if the same pop-up would appear ("cannot save changes, exceeding cloud limits"), and it seems to be okay. Finally.

  • Credit receipt exceeds Maximum amount allowed

    Hello,
    We were recently asked to update one of our expense types' "Default/Max. value", and also to set its "Amount type" to "Error message for exceeding maximum". We have now come across a problem: credit receipts uploaded to our system that are above the maximum amount allowed receive the error message, and because of it these receipts cannot be itemized with a personal expense down to the maximum, so the users have to enter them manually.
    Is there a way to keep the error message for the maximum value and be able to itemize the receipt even if the receipt is above the limit?
    - Edward Edge

    Hi,
    No, you cannot.
    The only option is to change the system error message to a warning and proceed.

  • Ssl certificate exceeds maximum length

    Here is the situation I am having....
    Whenever anyone (I have tried, as well as many of my friends and family) tries to log in to my mail server, they get an SSL error saying that the SSL certificate exceeds the maximum permissible length.
    my e-mail web login is page is located here --- https://mail.warezwaldo.us/mail/
    I know that this server is working properly, because I can get to that web page from my internal network (with no problems, after getting the standard certificate error and accepting the certificate), but I cannot get to it from the outside world.
    I have verified that SSL is enabled on the server as well as in the web browser. I have searched the web for possible solutions to this issue but have yet to find one.
    I am currently on an openSUSE 11.2 Linux laptop, and have also tried Ubuntu 8.04, 9.04, 9.10, and 10.04, Mint 7 & 8, and Windows Vista Home Premium & Windows 7, all using Firefox 3.5.9.
    Can anyone PLEASE HELP? I am trying to start a business providing secure e-mail with a web login page, and this issue is killing me.
    Steps I have taken so far: I have uninstalled and reinstalled the iRedMail system on servers running Ubuntu Server 8.04, 9.04, and 9.10, openSUSE Server 11.1 & 11.2, CentOS 5+, and Fedora 10, 11, and 12. I have also tried iRedOS, with and without updates, and they all give the same error: ssl certificate exceeds maximum permissible length.
    == URL of affected sites ==
    http://mail.warezwaldo.us/mail/

    OK, here is the solution to the issue I was having. After dealing with my ISP's crappy equipment, I figured out that the issue was being caused by the Qwest-provided Actiontec PK5000 DSL modem. Upon initial set-up I had used just Advanced Port Forwarding and had assigned ports 25, 110, 143, 443, 585, 993, and 995 to be forwarded to my mail server, ports 22 and 80 to the web server, and ports 53 and 953 to the DNS server. For 5 months this worked just fine, and then all of a sudden it stopped working.
    I dealt directly with Actiontec support staff and was told that I had found a "glitch" in their software; to say the least, they couldn't figure out what was causing the issue or whether there was a fix or workaround. After about 75 hours of troubleshooting, I found the fix/workaround:
    First: set up rules under Advanced Port Forwarding for the appropriate ports to the appropriate IPs.
    Second: create new rules in Application Forwarding and apply those newly created rules to the appropriate IPs.
    Third (this is MY RECOMMENDATION): replace the Actiontec PK5000 ASAP; it will stop working on you (this "glitch" came after the Advanced Port Forwarding rules alone had been in place and working fine for 5 months), so hope and pray that the modem and those rules last long enough for you to replace the DSL modem. I recommend the D-Link DSL-2540B; it was easy to set up and runs a lot cooler than the Actiontec M1000 and PK5000.

  • ORA-30951: Element or attribute at Xpath /AC//Doc[@] exceeds maximum length

    Hi All,
    I am trying to load XML files into a table using SQL*Loader, and I am getting the error:
    Record 1: Rejected - Error on table COMMONASSETCATALOG.
    ORA-30951: Element or attribute at Xpath /AC/T[1]/T[1]/T[1]/T[1]/T[1]/Doc[@] exceeds maximum length
    The <Doc> element, which is a child of <T>, contains an XML schema inside it.
    The Doc element is declared in the schema as:
    <xs:complexType name="AsDocType">
              <xs:annotation>
                   <xs:documentation>A (Doc)ument, a container for any type of file</xs:documentation>
              </xs:annotation>
              <xs:sequence minOccurs="0" maxOccurs="unbounded">
                   <xs:any namespace="##any" processContents="lax"/>
              </xs:sequence>
              <xs:attributeGroup ref="AsDocAtts"/>
         </xs:complexType>
    The size of the XML content in the <Doc> node is around 34 KB.
    Could you please let me know how to resolve this?
    Thanks
    Sateesh


  • Problem with security questions: "Exceeded Maximum Attempts" error and unable to reset via email

    Hello,
    I have a problem with the security questions, and I can't reset them via my email.
    The error was:
    Exceeded Maximum Attempts
    We apologize, but we were unable to verify your account information with the answers you provided to our security questions.
    You have made too many attempts to answer these questions. So, for security reasons, you will not be able to reset the password for the next eight hours.
    Click here for assistance.
    I waited more than eight hours and went back to my account, but it is the same (no change), and I can't find "forgot your answers".
    http://www.traidnt.net/vb/attachment...134863-333.jpg
    Can you help me please?

    Alternatives for Help Resetting Security Questions and Rescue Mail
         1. Apple ID- All about Apple ID security questions.
         2. Rescue email address and how to reset Apple ID security questions
         3. Apple ID- Contacting Apple for help with Apple ID account security.
         4. Fill out and submit this form. Select the topic, Account Security.
         5.  Call Apple Customer Service: Contacting Apple for support in your
              country and ask to speak to Account Security.
    How to Manage your Apple ID: Manage My Apple ID

  • On load, getting error:  Field in data file exceeds maximum length

    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0    Production
    TNS for Solaris: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    I'm trying to load a table, small in size (110 rows, 6 columns). One of the columns, called NOTES, errors when I run the load, saying that the column size exceeds the max limit. As you can see here, the table column is set to 4000 bytes:
    CREATE TABLE NRIS.NRN_REPORT_NOTES
    (
      NOTES_CN      VARCHAR2(40 BYTE)               DEFAULT sys_guid()            NOT NULL,
      REPORT_GROUP  VARCHAR2(100 BYTE)              NOT NULL,
      AREACODE      VARCHAR2(50 BYTE)               NOT NULL,
      ROUND         NUMBER(3)                       NOT NULL,
      NOTES         VARCHAR2(4000 BYTE),
      LAST_UPDATE   TIMESTAMP(6) WITH TIME ZONE     DEFAULT systimestamp          NOT NULL
    )
    TABLESPACE USERS
    RESULT_CACHE (MODE DEFAULT)
    PCTUSED    0
    PCTFREE    10
    INITRANS   1
    MAXTRANS   255
    STORAGE    (
                INITIAL          80K
                NEXT             1M
                MINEXTENTS       1
                MAXEXTENTS       UNLIMITED
                PCTINCREASE      0
                BUFFER_POOL      DEFAULT
                FLASH_CACHE      DEFAULT
                CELL_FLASH_CACHE DEFAULT
               )
    LOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;
    I did a little investigating, and it doesn't add up.
    When I run
    select max(lengthb(notes)) from NRIS.NRN_REPORT_NOTES
    I get a return of
    643
    That tells me that the largest instance of that column is only 643 bytes, yet EVERY insert is failing.
    Here is the loader file header and the first couple of inserts:
    LOAD DATA
    INFILE *
    BADFILE './NRIS.NRN_REPORT_NOTES.BAD'
    DISCARDFILE './NRIS.NRN_REPORT_NOTES.DSC'
    APPEND INTO TABLE NRIS.NRN_REPORT_NOTES
    Fields terminated by ";" Optionally enclosed by '|'
    (
      NOTES_CN,
      REPORT_GROUP,
      AREACODE,
      ROUND NULLIF (ROUND="NULL"),
      NOTES,
      LAST_UPDATE TIMESTAMP WITH TIME ZONE "MM/DD/YYYY HH24:MI:SS.FF9 TZR" NULLIF (LAST_UPDATE="NULL")
    )
    BEGINDATA
    |E2ACF256F01F46A7E0440003BA0F14C2|;|DEMOGRAPHICS|;|A01003|;3;|Demographic results show that 46 percent of visits are made by females.  Among racial and ethnic minorities, the most commonly encountered are Native American (4%) and Hispanic / Latino (2%).  The age distribution shows that the Bitterroot has a relatively small proportion of children under age 16 (14%) in the visiting population.  People over the age of 60 account for about 22% of visits.   Most of the visitation is from the local area.  More than 85% of visits come from people who live within 50 miles.|;07/29/2013 16:09:27.000000000 -06:00
    |E2ACF256F02046A7E0440003BA0F14C2|;|VISIT DESCRIPTION|;|A01003|;3;|Most visits to the Bitterroot are fairly short.  Over half of the visits last less than 3 hours.  The median length of visit to overnight sites is about 43 hours, or about 2 days.  The average Wilderness visit lasts only about 6 hours, although more than half of those visits are shorter than 3 hours long.   Most visits come from people who are fairly frequent visitors.  Over thirty percent are made by people who visit between 40 and 100 times per year.  Another 8 percent of visits are from people who report visiting more than 100 times per year.|;07/29/2013 16:09:27.000000000 -06:00
    |E2ACF256F02146A7E0440003BA0F14C2|;|ACTIVITIES|;|A01003|;3;|The most frequently reported primary activity is hiking/walking (42%), followed by downhill skiing (12%), and hunting (8%).  Over half of the visits report participating in relaxing and viewing scenery.|;07/29/2013 16:09:27.000000000 -06:00
    Here is the full beginning of the loader log, ending after the first row's result (they ALL show the same error):
    SQL*Loader: Release 10.2.0.4.0 - Production on Thu Aug 22 12:09:07 2013
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Control File:   NRIS.NRN_REPORT_NOTES.ctl
    Data File:      NRIS.NRN_REPORT_NOTES.ctl
      Bad File:     ./NRIS.NRN_REPORT_NOTES.BAD
      Discard File: ./NRIS.NRN_REPORT_NOTES.DSC
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array:     64 rows, maximum of 256000 bytes
    Continuation:    none specified
    Path used:      Conventional
    Table NRIS.NRN_REPORT_NOTES, loaded from every logical record.
    Insert option in effect for this table: APPEND
       Column Name                  Position   Len  Term Encl Datatype
    NOTES_CN                            FIRST     *   ;  O(|) CHARACTER
    REPORT_GROUP                         NEXT     *   ;  O(|) CHARACTER
    AREACODE                             NEXT     *   ;  O(|) CHARACTER
    ROUND                                NEXT     *   ;  O(|) CHARACTER
        NULL if ROUND = 0X4e554c4c(character 'NULL')
    NOTES                                NEXT     *   ;  O(|) CHARACTER
    LAST_UPDATE                          NEXT     *   ;  O(|) DATETIME MM/DD/YYYY HH24:MI:SS.FF9 TZR
        NULL if LAST_UPDATE = 0X4e554c4c(character 'NULL')
    Record 1: Rejected - Error on table NRIS.NRN_REPORT_NOTES, column NOTES.
    Field in data file exceeds maximum length...
    I am not seeing why this would be failing.

    Hi,
    The problem is that delimited data defaults to CHAR(255) in SQL*Loader. Very helpful, I know.
    What you need to do is tell sqlldr that the data is longer than this.
    So change NOTES to NOTES CHAR(4000) in your control file and it should work.
    Cheers,
    Harry
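    Applied to the control file above, the change is limited to the NOTES field; a hedged fragment of the field list (only the NOTES line differs from the original):
    -- Illustrative fragment: CHAR(4000) overrides sqlldr's CHAR(255)
    -- default for delimited fields
      ROUND NULLIF (ROUND="NULL"),
      NOTES CHAR(4000),
      LAST_UPDATE TIMESTAMP WITH TIME ZONE "MM/DD/YYYY HH24:MI:SS.FF9 TZR" NULLIF (LAST_UPDATE="NULL")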

  • Ctxload error DRG-11530: token exceeds maximum length

    I downloaded the 11g examples (formerly the Companion CD) with the supplied knowledge base (thesauri), unzipped it, installed it, and confirmed that the droldUS.dat file is there. Then I tried to use ctxload to create a default thesaurus using that file, as per the online documentation. It creates the default thesaurus but does not load the data, due to the error "DRG-11530: token exceeds maximum length"; apparently one of the terms is too long. But what can I use to edit the file? I tried Notepad, but the file was too big; I tried WordPad, but it was unreadable. I was able to create a default thesaurus using the much smaller sample thesaurus dr0thsus.txt, so I confirmed that there is nothing wrong with the syntax or privileges. Please see the copy of the run below. Is there a way to edit the droldUS.dat file, or a workaround, or am I not loading it correctly? Does the .dat file need to be loaded differently than the .txt file?
    CTXSYS@orcl_11g> select banner from v$version
      2  /
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    PL/SQL Release 11.1.0.6.0 - Production
    CORE    11.1.0.6.0      Production
    TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
    NLSRTL Version 11.1.0.6.0 - Production
    CTXSYS@orcl_11g> select count(*) from ctx_thesauri where ths_name = 'DEFAULT'
      2  /
      COUNT(*)
             0
    CTXSYS@orcl_11g> select count(*) from ctx_thes_phrases where thp_thesaurus = 'DEFAULT'
      2  /
      COUNT(*)
             0
    CTXSYS@orcl_11g> host ctxload -thes -user ctxsys/ctxsys@orcl -name default -file C:\app\Barbara\product\11.1.0\db_1\ctx\data\enlx\droldUS.dat
    Connecting...
    Creating thesaurus default...
    Thesaurus default created...
    Processing...
    DRG-11530: token exceeds maximum length
    Disconnected
    CTXSYS@orcl_11g> connect ctxsys/ctxsys@orcl
    Connected.
    CTXSYS@orcl_11g>
    CTXSYS@orcl_11g> select count(*) from ctx_thesauri where ths_name = 'DEFAULT'
      2  /
      COUNT(*)
             1
    CTXSYS@orcl_11g> select count(*) from ctx_thes_phrases where thp_thesaurus = 'DEFAULT'
      2  /
      COUNT(*)
             0
    CTXSYS@orcl_11g> exec ctx_thes.drop_thesaurus ('default')
    PL/SQL procedure successfully completed.
    CTXSYS@orcl_11g> host ctxload -thes -user ctxsys/ctxsys@orcl -name default -file C:\app\Barbara\product\11.1.0\db_1\ctx\sample\thes\dr0thsus.txt
    Connecting...
    Creating thesaurus default...
    Thesaurus default created...
    Processing...
    1000 lines processed
    2000 lines processed
    3000 lines processed
    4000 lines processed
    5000 lines processed
    6000 lines processed
    7000 lines processed
    8000 lines processed
    9000 lines processed
    10000 lines processed
    11000 lines processed
    12000 lines processed
    13000 lines processed
    14000 lines processed
    15000 lines processed
    16000 lines processed
    17000 lines processed
    18000 lines processed
    19000 lines processed
    20000 lines processed
    21000 lines processed
    21760 lines processed successfully
    Beginning insert...21760 lines inserted successfully
    Disconnected
    CTXSYS@orcl_11g> select count(*) from ctx_thesauri where ths_name = 'DEFAULT'
      2  /
      COUNT(*)
             1
    CTXSYS@orcl_11g> select count(*) from ctx_thes_phrases where thp_thesaurus = 'DEFAULT'
      2  /
      COUNT(*)
          9582
    CTXSYS@orcl_11g>

    Hi Roger,
    Thanks for the response. You are correct. I was confusing the terms thesaurus and knowledge base, which sometimes seem to be used interchangeably or synonymously, but are actually two different things. I read over the various sections of the documentation regarding the supplied knowledge base and supplied thesaurus more carefully and believe I understand now. Apparently, the dr0thsus.txt file that I did ultimately load using ctxload to create a default thesaurus is the supplied thesaurus that is intended to be used to create the default English thesaurus, which supports ctx_thes syn and such. The other droldUS.dat file that I mistakenly tried to load using ctxload is the supplied compiled knowledge base that supports ctx_doc themes and gist and such. In the past I have used ctx_thes.create_thesaurus to create a thesaurus, but using ctxload can also load a thesaurus from a text file with the data in a specified format. Once a thesaurus is loaded using ctxload, it can then be compiled using ctxkbtc to add it to the existing compiled knowledge base. So, the knowledge base is sort of a compilation of thesauri, which is what led to my confusion in terminology. I think I have it all straight in my mind now and hopefully this will help anybody else who searches for the same problem and finds this.
    Thanks,
    Barbara
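    For anyone following the same path, the two-step sequence Barbara describes, loading a custom thesaurus with ctxload and then compiling it into the knowledge base with ctxkbtc, looks roughly like this; a hedged sketch matching the Windows session above, with a placeholder thesaurus name and file:
    REM Illustrative only: load a custom thesaurus, then extend the
    REM compiled knowledge base with it
    ctxload -thes -user ctxsys/ctxsys@orcl -name mythes -file mythes.txt
    ctxkbtc -user ctxsys/ctxsys@orcl -name mythes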

  • Field in data file exceeds maximum length - CTL file error

    Hi,
    I am loading data into a new system using a CTL file, but I am getting the error 'Field in data file exceeds maximum length' for a few records; the other records are processed successfully. I have checked the length of the error record in the extract file, and it is less than the length of the target column, VARCHAR2(2000 BYTE). Below is an example of the error data:
    Hi Rebecca~I have just spoken to our finance department and they have agreed that the ABCs payments made can be allocated to the overdue invoices, can you send any future invoices direct to me so that I can get them paid on time.~Hope this is ok ~Thanks~Terry~
    Is this error caused by the special characters in the string?
    Below is the ctl file I am using,
    OPTIONS (SKIP=2)
    LOAD DATA
    CHARACTERSET WE8ISO8859P1
    INFILE  '$FILE'
    APPEND
    INTO TABLE "XXDM_DM_17_ONACCOUNT_REC_SRC"
    WHEN (1)!= 'FOOTER='
    FIELDS TERMINATED BY '|'
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS (
                                  <Column_name>,
                                  <Column_name>,
                                  COMMENTS,
                                  <Column_name>,
                                  <Column_name>
    )
    Thanks in advance,
    Aditya

    Hi,
    I suspect this is because of the built-in default length for character datatypes in sqlldr: it defaults to CHAR(255), taking no notice of the actual table definition.
    Try adding CHAR(2000) to your control file, so you end up with something like this:
    OPTIONS (SKIP=2)
    LOAD DATA
    CHARACTERSET WE8ISO8859P1
    INFILE  '$FILE'
    APPEND
    INTO TABLE "XXDM_DM_17_ONACCOUNT_REC_SRC"
    WHEN (1)!= 'FOOTER='
    FIELDS TERMINATED BY '|'
    OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS (
                                  <Column_name>,
                                  <Column_name>,
                                  COMMENTS CHAR(2000),
                                  <Column_name>,
                                  <Column_name>
    )
    Cheers,
    Harry
