UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '65536' elements

Dear friends,
Has anyone come across a problem like this? <BEA-002911> <WorkManager ClusterMessaging failed to schedule a request due to weblogic.utils.UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '65536' elements
weblogic.utils.UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '65536' elements
Why does this happen, and how can it be solved?
Thanks a lot!

Hi,
WebLogic's pending request queue has a maximum capacity of 65536 elements. Once that limit is reached, the server stops accepting incoming requests on its server socket port.
This means your WebLogic server / JVM has very likely depleted its internal thread pool, causing WebLogic to queue incoming requests until the queue eventually reached its maximum capacity.
From a resolution perspective, you will need to investigate the root cause; thread dump generation and analysis will be required in your case.
Please generate a JVM thread dump (kill -3 <pid> on UNIX) and share details on the thread contention / problem pattern.
Thanks.
P-H
http://javaeesupportpatterns.blogspot.com/
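If running kill -3 against the process is not convenient, a rough in-JVM alternative (a sketch only, Java 6 or later, not WebLogic-specific) is to dump all thread stacks through the standard ThreadMXBean API and print or log the result:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Sketch only: print a stack trace for every live thread, similar in spirit
// to the output of kill -3 / jstack. Call it from any code running inside
// the affected JVM (a servlet, a scheduled job, or a simple main()).
public class ThreadDumpSketch {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // true, true = include locked monitors and locked synchronizers
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            System.out.print(info);
        }
    }
}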

Similar Messages

  • Errors with Queue exceed maximum capacity of: '65536' elements

    I faced this issue recently, where the messaging bridges from an external domain were not able to connect to my domain.
    When I checked the logs, I found one of the servers reporting the error below.
    ####<Feb 28, 2011 7:31:23 PM GMT> <Warning> <RMI> <dyh75a03> <managed06_orneop02> <ExecuteThread: '2' for queue: 'weblogic.socket.Muxer'> <<WLS Kernel>> <> <
    BEA-080003> <RuntimeException thrown by rmi server: weblogic.rmi.internal.BasicServerRef@a - hostID: '4275655823338432757S:dywlms06-orneop02.neo.openreach.co
    .uk:[61007,61007,-1,-1,61007,-1,-1,0,0]:dywlms01-orneop02.neo.openreach.co.uk:61002,dywlms02-orneop02.neo.openreach.co.uk:61003,dywlms03-orneop02.neo.openrea
    ch.co.uk:61004,dywlms04-orneop02.neo.openreach.co.uk:61005,dywlms05-orneop02.neo.openreach.co.uk:61006,dywlms06-orneop02.neo.openreach.co.uk:61007,dywlms07-o
    rneop02.neo.openreach.co.uk:61008,dywlms08-orneop02.neo.openreach.co.uk:61009,dywlms09-orneop02.neo.openreach.co.uk:61010:orneop02:managed06_orneop02', oid:
    '10', implementation: 'weblogic.transaction.internal.CoordinatorImpl@f6ede1'
    weblogic.utils.UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '65536' elements.
    weblogic.utils.UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '65536' elements
    at weblogic.utils.UnsyncCircularQueue.expandQueue(UnsyncCircularQueue.java:72)
    at weblogic.utils.UnsyncCircularQueue.put(UnsyncCircularQueue.java:94)
    at weblogic.kernel.ExecuteThreadManager.execute(ExecuteThreadManager.java:374)
    at weblogic.kernel.Kernel.execute(Kernel.java:345)
    at weblogic.rmi.internal.BasicServerRef.dispatch(BasicServerRef.java:312)
    at weblogic.rjvm.RJVMImpl.dispatchRequest(RJVMImpl.java:1113)
    at weblogic.rjvm.RJVMImpl.dispatch(RJVMImpl.java:1031)
    at weblogic.rjvm.ConnectionManagerServer.handleRJVM(ConnectionManagerServer.java:225)
    at weblogic.rjvm.ConnectionManager.dispatch(ConnectionManager.java:805)
    at weblogic.rjvm.t3.T3JVMConnection.dispatch(T3JVMConnection.java:782)
    at weblogic.socket.SocketMuxer.readReadySocketOnce(SocketMuxer.java:705)
    at weblogic.socket.SocketMuxer.readReadySocket(SocketMuxer.java:651)
    at weblogic.socket.PosixSocketMuxer.processSockets(PosixSocketMuxer.java:123)
    at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:32)
    at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:224)
    at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:183)
    >
    As a workaround, I restarted this server. After that, the bridges from the external domain were working again.
    Has anyone come across this issue before?
    Regards
    Deepak

    I found this issue again in one of the domains. The thread dumps are showing deadlocks.
    Found one Java-level deadlock:
    =============================
    "ExecuteThread: '3' for queue: 'weblogic.kernel.Default'":
    waiting to lock monitor 0868dea4 (object a4e99be8, a weblogic.transaction.internal.TransactionLoggerImpl$IOBuffer),
    which is held by "ExecuteThread: '6' for queue: 'weblogic.kernel.Default'"
    "ExecuteThread: '6' for queue: 'weblogic.kernel.Default'":
    waiting to lock monitor 00255304 (object b23be050, a weblogic.transaction.internal.ServerTransactionImpl),
    which is held by "ExecuteThread: '0' for queue: 'weblogic.kernel.Non-Blocking'"
    "ExecuteThread: '0' for queue: 'weblogic.kernel.Non-Blocking'":
    waiting to lock monitor 086677a4 (object a4e99c00, a weblogic.transaction.internal.TransactionLoggerImpl$IOBuffer),
    which is held by "ExecuteThread: '3' for queue: 'weblogic.kernel.Default'"
    Java stack information for the threads listed above:
    ===================================================
    "ExecuteThread: '3' for queue: 'weblogic.kernel.Default'":
         at weblogic.transaction.internal.TransactionLoggerImpl$LogDisk.release(TransactionLoggerImpl.java:1323)
         - waiting to lock <a4e99be8> (a weblogic.transaction.internal.TransactionLoggerImpl$IOBuffer)
         - locked <a4e99c00> (a weblogic.transaction.internal.TransactionLoggerImpl$IOBuffer)
         at weblogic.transaction.internal.TransactionLoggerImpl.release(TransactionLoggerImpl.java:389)
         at weblogic.transaction.internal.ServerTransactionImpl.releaseLog(ServerTransactionImpl.java:2767)
         at weblogic.transaction.internal.ServerTransactionManagerImpl.remove(ServerTransactionManagerImpl.java:1466)
         at weblogic.transaction.internal.ServerTransactionImpl.afterCommittedStateHousekeeping(ServerTransactionImpl.java:2645)
         at weblogic.transaction.internal.ServerTransactionImpl.setCommitted(ServerTransactionImpl.java:2669)
         at weblogic.transaction.internal.ServerTransactionImpl.localCommit(ServerTransactionImpl.java:1875)
         at weblogic.transaction.internal.ServerTransactionImpl.localCommit(ServerTransactionImpl.java:1163)
         at weblogic.transaction.internal.SubCoordinatorImpl.startCommit(SubCoordinatorImpl.java:274)
         at weblogic.transaction.internal.CoordinatorImpl_WLSkel.invoke(Unknown Source)
         at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:492)
         at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:435)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:147)
         at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:430)
         at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:35)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:224)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:183)
    "ExecuteThread: '6' for queue: 'weblogic.kernel.Default'":
         at weblogic.transaction.internal.ServerTransactionImpl.onDisk(ServerTransactionImpl.java:836)
         - waiting to lock <b23be050> (a weblogic.transaction.internal.ServerTransactionImpl)
         at weblogic.transaction.internal.TransactionLoggerImpl$LogDisk.write(TransactionLoggerImpl.java:1252)
         - locked <a4e99be8> (a weblogic.transaction.internal.TransactionLoggerImpl$IOBuffer)
         - locked <a4e99bb8> (a weblogic.transaction.internal.TransactionLoggerImpl$LogDisk)
         at weblogic.transaction.internal.TransactionLoggerImpl.flushLog(TransactionLoggerImpl.java:614)
         at weblogic.transaction.internal.TransactionLoggerImpl.store(TransactionLoggerImpl.java:305)
         at weblogic.transaction.internal.ServerTransactionImpl.log(ServerTransactionImpl.java:1850)
         at weblogic.transaction.internal.ServerTransactionImpl.globalPrepare(ServerTransactionImpl.java:2118)
         at weblogic.transaction.internal.ServerTransactionImpl.internalCommit(ServerTransactionImpl.java:259)
         at weblogic.transaction.internal.ServerTransactionImpl.commit(ServerTransactionImpl.java:228)
         at weblogic.transaction.internal.TransactionManagerImpl.commit(TransactionManagerImpl.java:303)
         at weblogic.jms.bridge.internal.MessagingBridge.onMessageInternal(MessagingBridge.java:1279)
         at weblogic.jms.bridge.internal.MessagingBridge.onMessage(MessagingBridge.java:1190)
         at weblogic.jms.adapter.JMSBaseConnection$29.run(JMSBaseConnection.java:1989)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:147)
         at weblogic.jms.adapter.JMSBaseConnection.onMessage(JMSBaseConnection.java:1985)
         at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2686)
         at weblogic.jms.client.JMSSession.execute(JMSSession.java:2598)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:224)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:183)
    "ExecuteThread: '0' for queue: 'weblogic.kernel.Non-Blocking'":
         at weblogic.transaction.internal.TransactionLoggerImpl$LogDisk.release(TransactionLoggerImpl.java:1322)
         - waiting to lock <a4e99c00> (a weblogic.transaction.internal.TransactionLoggerImpl$IOBuffer)
         at weblogic.transaction.internal.TransactionLoggerImpl.release(TransactionLoggerImpl.java:389)
         at weblogic.transaction.internal.ServerTransactionImpl.releaseLog(ServerTransactionImpl.java:2767)
         at weblogic.transaction.internal.ServerTransactionManagerImpl.remove(ServerTransactionManagerImpl.java:1466)
         at weblogic.transaction.internal.ServerTransactionImpl.setRolledBack(ServerTransactionImpl.java:2597)
         at weblogic.transaction.internal.ServerTransactionImpl.ackRollback(ServerTransactionImpl.java:1093)
         - locked <b23be050> (a weblogic.transaction.internal.ServerTransactionImpl)
         at weblogic.transaction.internal.CoordinatorImpl.ackRollback(CoordinatorImpl.java:298)
         at weblogic.transaction.internal.CoordinatorImpl_WLSkel.invoke(Unknown Source)
         at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:492)
         at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:435)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:147)
         at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:430)
         at weblogic.rmi.internal.BasicExecuteRequest.execute(BasicExecuteRequest.java:35)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:224)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:183)
    Found 1 deadlock.
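    For a deadlock like the one above, the JVM can also be asked to report deadlocked threads programmatically. Below is a minimal sketch using the standard ThreadMXBean API (Java 6 or later, not WebLogic-specific); it reports roughly what the "Found one Java-level deadlock" section of a thread dump shows:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Sketch only: list threads deadlocked on monitors or java.util.concurrent locks.
    public class DeadlockCheck {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            long[] ids = mx.findDeadlockedThreads();   // returns null when no deadlock exists
            if (ids == null) {
                System.out.println("No deadlocked threads found.");
                return;
            }
            for (ThreadInfo info : mx.getThreadInfo(ids, Integer.MAX_VALUE)) {
                System.out.println(info.getThreadName() + " is blocked on " + info.getLockName()
                        + " held by " + info.getLockOwnerName());
            }
        }
    }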

  • BEA-000802 : Queue exceed maximum capacity of: '65536' elements.

    Hi,
    I am getting the following error in my WebLogic server; its version is 10.3.3.0.
    After some time the server stops reporting its state. Normally, when the server is running properly, its state is "OK", but after the following error messages my server shows no state at all and the applications are not running.
    ####<Nov 11, 2010 5:18:50 AM GMT+05:30> <Error> <Kernel> <APP1> <Server1> <[ACTIVE] ExecuteThread: '360' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1289432930650> <BEA-000802> <ExecuteRequest failed
    weblogic.utils.UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '65536' elements.
    weblogic.utils.UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity of: '65536' elements
         at weblogic.utils.UnsyncCircularQueue$FullQueueException.<init>(UnsyncCircularQueue.java:147)
         at weblogic.utils.UnsyncCircularQueue$FullQueueException.<init>(UnsyncCircularQueue.java:144)
         at weblogic.utils.UnsyncCircularQueue.expandQueue(UnsyncCircularQueue.java:71)
         at weblogic.utils.UnsyncCircularQueue.put(UnsyncCircularQueue.java:94)
         at weblogic.rjvm.MsgAbbrevJVMConnection$WritingState.sendNow(MsgAbbrevJVMConnection.java:511)
         at weblogic.rjvm.MsgAbbrevJVMConnection.canSendMsg(MsgAbbrevJVMConnection.java)
         at weblogic.rjvm.MsgAbbrevJVMConnection.sendMsg(MsgAbbrevJVMConnection.java:181)
         at weblogic.rjvm.MsgAbbrevJVMConnection.sendMsg(MsgAbbrevJVMConnection.java:142)
         at weblogic.rjvm.ConnectionManager.sendMsg(ConnectionManager.java:593)
         at weblogic.rjvm.RJVMImpl.send(RJVMImpl.java:901)
         at weblogic.rjvm.MsgAbbrevOutputStream.flushAndSend(MsgAbbrevOutputStream.java:394)
         at weblogic.rjvm.MsgAbbrevOutputStream.sendOneWay(MsgAbbrevOutputStream.java:400)
         at weblogic.rjvm.BasicOutboundRequest.sendOneWay(BasicOutboundRequest.java:105)
         at weblogic.rmi.internal.BasicRemoteRef.sendOneway(BasicRemoteRef.java:271)
         at weblogic.rmi.internal.BasicRemoteRef.invoke(BasicRemoteRef.java:214)
         at weblogic.management.logging.DomainLogHandlerImpl_1033_WLStub.publishLogEntries(Unknown Source)
         at weblogic.logging.DomainLogBroadcasterClient$2.run(DomainLogBroadcasterClient.java:209)
         at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    I am also getting an out-of-memory error warning:
    ####<Nov 11, 2010 7:34:46 AM GMT+05:30> <Critical> <Health> <APP1> <Server1> <weblogic.GCMonitor> <<anonymous>> <> <> <1289441086135> <BEA-310003> <Free memory in the server is 2,953,904 bytes. There is danger of OutOfMemoryError>
    My server arguments are
    -XX:PermSize=256m -XX:MaxPermSize=256m -Xms1024M -Xmx2048M
    Please tell me where things are going wrong.

    This WLS instance cannot handle such a large number of requests.
    See the following error:
    ####<Nov 11, 2010 5:18:50 AM GMT+05:30> <Error> <Kernel> <APP1> <Server1> <[ACTIVE] ExecuteThread: '360' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>>
    The ExecuteThread number is 360, so I think your instance is already far too busy.
    I suspect the OutOfMemoryError is a secondary issue.
    To solve this problem, I would consider the following:
    1. Check the total number of requests.
         - We probably need to re-size the WLS instances.
    2. Increase the number of WLS instances.
         - Keep it to fewer than 100 execute threads per WLS instance.
    3. Find long-running requests, especially STUCK threads.
         - Tune those requests so they complete within the JTA timeout.
    4. Set -Xms and -Xmx to the same value.
         - For performance (see the small heap-check sketch after this reply).
    I hope this is helpful to you.
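    A minimal sketch for point 4 (plain Java, no WebLogic APIs; maxMemory() only roughly corresponds to -Xmx): print the heap figures the running JVM actually sees, to confirm that the -Xms / -Xmx settings took effect.

    // Sketch only: report the heap limits the running JVM sees, in bytes.
    public class HeapSettingsCheck {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.println("max heap (approx. -Xmx)   : " + rt.maxMemory());
            System.out.println("currently committed heap  : " + rt.totalMemory());
            System.out.println("free within committed heap: " + rt.freeMemory());
        }
    }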

  • Queue exceed maximum capacity of '256' elements.

              Has anyone seen this one before??
              Thanks
              Thomas
              weblogic.utils.UnsyncCircularQueue$FullQueueException: Queue exceed maximum capacity
              of: '256' elements.
              weblogic.utils.UnsyncCircularQueue$FullQueueException:      of: '256' elements
              at weblogic.utils.UnsyncCircularQueue.expandQueue(UnsyncCircularQueue.java:72)
              at weblogic.utils.UnsyncCircularQueue.put(UnsyncCircularQueue.java:94)
              at weblogic.kernel.ExecuteThreadManager.execute(ExecuteThreadManager.java:343)
              at weblogic.kernel.Kernel.execute(Kernel.java:338)
              at weblogic.rmi.internal.BasicServerRef.dispatch(BasicServerRef.java:304)
              at weblogic.rjvm.RJVMImpl.dispatchRequest(RJVMImpl.java:923)
              at weblogic.rjvm.RJVMImpl.dispatch(RJVMImpl.java:844)
              at weblogic.rjvm.ConnectionManagerServer.handleRJVM(ConnectionManagerServer.java:222)
              at weblogic.rjvm.ConnectionManager.dispatch(ConnectionManager.java:794)
              at weblogic.rjvm.t3.T3JVMConnection.dispatch(T3JVMConnection.java:570)
              at weblogic.socket.SocketMuxer.readReadySocketOnce(SocketMuxer.java:681)
              at weblogic.socket.SocketMuxer.readReadySocket(SocketMuxer.java:627)
              at weblogic.socket.PosixSocketMuxer.processSockets(PosixSocketMuxer.java:123)
              at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:32)
              at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:197)
              at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:170)
              

    Hi,
              The queue in question here is not a JMS queue. Instead, it is the
              "execute queue" of jobs waiting to be serviced by a thread in its
              thread pool.
              I think the size of these queues is configurable on the thread pool.
              Note that if a deadlock is locking up threads and causing the problem,
              you will likely need to increase the number of threads, or dedicate
              server-side resources to their own thread pools, rather than (or in
              addition to) increasing the size of the execute queues. (A thread
              dump helps in diagnosing deadlocks.)
              This error is most likely on a server-side thread pool, so there
              should be warnings/error messages in the server logs that describe
              what is happening.
              Tom
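              The execute-queue behaviour Tom describes can be pictured with any
              bounded queue. The following is a minimal, purely illustrative Java
              sketch; the ArrayBlockingQueue and the capacity of 256 are an analogy
              for the kernel execute queue, not WebLogic internals:

              import java.util.concurrent.ArrayBlockingQueue;

              // Sketch only: once a bounded queue is full and nothing drains it,
              // further offers are rejected, which is conceptually what
              // FullQueueException signals for the kernel execute queue.
              public class BoundedQueueDemo {
                  public static void main(String[] args) {
                      ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<String>(256);
                      int rejected = 0;
                      for (int i = 0; i < 300; i++) {
                          if (!queue.offer("request-" + i)) {  // no consumer is draining the queue
                              rejected++;
                          }
                      }
                      System.out.println("queued=" + queue.size() + " rejected=" + rejected);
                  }
              }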

  • ORA-30951: Element or attribute at Xpath /AC//Doc[@] exceeds maximum length

    Hi All,
    I am trying to load XML files into a table using SQL*Loader and I am getting this error:
    Record 1: Rejected - Error on table COMMONASSETCATALOG.
    ORA-30951: Element or attribute at Xpath /AC/T[1]/T[1]/T[1]/T[1]/T[1]/Doc[@] exceeds maximum length
    The <Doc> element, which is a child of <T>, contains an XML schema inside it.
    The Doc element is declared in the schema as:
    <xs:complexType name="AsDocType">
              <xs:annotation>
                   <xs:documentation>A (Doc)ument, a container for any type of file</xs:documentation>
              </xs:annotation>
              <xs:sequence minOccurs="0" maxOccurs="unbounded">
                   <xs:any namespace="##any" processContents="lax"/>
              </xs:sequence>
              <xs:attributeGroup ref="AsDocAtts"/>
         </xs:complexType>
    The size of the XML content in the <Doc> node is around 34 KB.
    Could you please let me know how to resolve this?
    Thanks
    Sateesh

  • Forward to gmail/hotmail Event ID 3030 552 5.7.0 Number of Received: DATA headers exceeds maximum permitted

    A user wants me to forward his Exchange 2003 recipient’s email to his Gmail account. 
    In Active Directory Users and Computers I created a Contact, and in the recipient's "Delivery Options" I set "Forward to:" to that contact.
    All forwarded email that originates internally reaches his Gmail account, but only some of the email that comes from external domains makes it to Gmail.
    Most (but not all) external email, when forwarded to his Gmail account by Exchange 2003, produces an NDR (non-delivery report) saying "DATA headers exceeds maximum permitted" (shown below). I'm fairly sure this message is generated by Exchange 2003, because it
    is the same whether the external forwarding target is Gmail or Hotmail.
    Any ideas how to solve this?
    -------Error Msg in Outlook----------
    John Smith on 12/12/2014 5:26 PM
                The recipient could not be processed because it would violate the security policy in force
               <ourdomain.com #5.7.0 smtp;552 5.7.0 Number of 'Received:' DATA headers exceeds maximum permitted>
    “ourdomain.com”, above, is the name of our email domain.
    -------- Event Viewer Error-----------
    Event Type: Error
    Event Source: MSExchangeTransport
    Event Category: NDR 
    Event ID: 3030
    Date: 12/12/2014
    Time: 5:08:58 PM
    User: N/A
    Computer: WIN2K3
    Description:
    A non-delivery report with a status code of 5.7.0 was generated for recipient rfc822;[email protected] (Message-ID  <001301d01671$54abb8c0$fe032a40$@com>).

    Hi,
    Based on the error "Number of 'Received:' DATA headers exceeds maximum permitted", the message headers could be exceeding the message header size limit.
    Message header size limits apply to the total size of all message header fields that are present in a message. The size of the message body or attachments isn't considered. Because the header fields
    are plain text, the size of the header is determined by the number of characters in each header field and by the total number of header fields; each character of text consumes 1 byte.
    So, please check the message header size limit configured on the receive connector with the following cmdlet:
    Get-ReceiveConnector "Connector name" | FL MaxHeaderSize
    Then examine the problematic message's headers and compare them against this limit. If the message headers exceed the limit, you can use the following cmdlet to change the maximum header size:
    Set-ReceiveConnector "Connector Name" -MaxHeaderSize "value"
    The MaxHeaderSize parameter specifies, in bytes, the maximum size of the SMTP message header that the Receive connector accepts before it closes the connection. The default value is 65536 bytes. When you enter a value, qualify it with one of
    the following units:
    B (bytes)
    KB (kilobytes)
    MB (megabytes)
    GB (gigabytes)
    Unqualified values are treated as bytes. The valid input range for this parameter is from 1 through 2147483647 bytes.
    Note: Some third-party firewalls or proxy servers apply their own message header size limits. These firewalls or proxy servers may have difficulty processing messages that contain attachment file names longer than 50 characters
    or attachment file names that contain non-US-ASCII characters.
    Best Regards.

  • If Dimension exceeds maximum size then what will we do???

    Hi Experts,
                   If a dimension exceeds the maximum size, what do we do? How can the dimension size be increased in SSAS 2008 R2?
           I am using SQL Server 2008 R2.
                   I was asked this question in an interview, so could you explain with an example?
    Best Regards,
    sirikumar

    You can't exceed the maximum; otherwise you get an error. The maximum is a huge number:
    Object                                     Maximum sizes/numbers
    Databases in an instance                   2^31-1 = 2,147,483,647
    Dimensions in a database                   2^31-1 = 2,147,483,647
    Attributes in a dimension                  2^31-1 = 2,147,483,647
    Members in a dimension attribute           2^31-1 = 2,147,483,647
    User-defined hierarchies in a dimension    2^31-1 = 2,147,483,647
    Levels in a user-defined hierarchy         2^31-1 = 2,147,483,647
    Cubes in a database                        2^31-1 = 2,147,483,647
    LINK:
    Maximum Capacity Specifications (Analysis Services)
    Kalman Toth Database & OLAP Architect

  • Reached maximum capacity of pool "eis/PRAdapterCF", making "0" new resource

    Hi All,
    We configured WebLogic with Pega and I am getting the error below. Any idea?
    I enabled test connections on reserve and increased the test frequency to 900.
    ####<Nov 13, 2012 1:25:46 PM CST> <Info> <Common> <test4> <PEGA> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1352834746443> <BEA-000627> <Reached maximum capacity of pool "eis/PRAdapterCF", making "0" new resource instances instead of "1".>
    ####<Nov 13, 2012 1:25:46 PM CST> <Info> <Common> <test4> <PEGA> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1352834746444> <BEA-000628> <Created "1" resources for pool "eis/PRAdapterCF", out of which "1" are available and "0" are unavailable.>
    ####<Nov 13, 2012 1:25:46 PM CST> <Info> <Common> <test4> <PEGA> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1352834746749> <BEA-000627> <Reached maximum capacity of pool "eis/PRAdapterCF", making "0" new resource instances instead of "1".>
    ####<Nov 13, 2012 1:25:46 PM CST> <Info> <Common> <test4> <PEGA> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1352834746750> <BEA-000628> <Created "1" resources for pool "eis/PRAdapterCF", out of which "1" are available and "0" are unavailable.>
    ####<Nov 13, 2012 1:25:46 PM CST> <Info> <Common> <test4> <PEGA> <[ACTIVE] ExecuteThread: '9' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1352834746837> <BEA-
    Thanks,
    Edited by: 834617 on Nov 13, 2012 11:32 AM

    Hi Kal,
    We have already set the inactive connection timeout to 900, but it is still logging this info message.
    Thanks,
    Edited by: 834617 on Nov 28, 2012 10:39 AM

  • Field in data file exceeds maximum length

    Hi,
    I am trying to run the following SQL*Loader control job on my Oracle 11gR2 database. Running it results in the 'Field in data file exceeds maximum length' error message. The control file is listed below. Please suggest. Thanks.
    SQL*Loader gives me this error when I run it:
    Record 61: Rejected - Error on table RMS_TABLE, column GEOM.SDO_POINT.X.
    Field in data file exceeds maximum length.
    Here is my SQL*Loader control file:
    LOAD DATA
    INFILE *
    TRUNCATE
    CONTINUEIF NEXT(1:1) = '#'
    INTO TABLE RMS_TABLE
    FIELDS TERMINATED BY '|'
    TRAILING NULLCOLS (
       Status NULLIF Status = BLANKS,
       Score,
       Match_type NULLIF Match_type = BLANKS,
       Match_addr NULLIF Match_addr = BLANKS,
       Side NULLIF Side = BLANKS,
       User_fld NULLIF User_fld = BLANKS,
       Addr_type NULLIF Addr_type = BLANKS,
       ARC_Street NULLIF ARC_Street = BLANKS,
       ARC_City NULLIF ARC_City = BLANKS,
       ARC_State NULLIF ARC_State = BLANKS,
       ARC_ZIP NULLIF ARC_ZIP = BLANKS,
       INCIDENT_N NULLIF INCIDENT_N = BLANKS,
       CDATE NULLIF CDATE = BLANKS,
       CTIME NULLIF CTIME = BLANKS,
       DISTRICT NULLIF DISTRICT = BLANKS,
       LOCATION NULLIF LOCATION = BLANKS,
       MAPLOCATIO NULLIF MAPLOCATIO = BLANKS,
       LOCATION_T NULLIF LOCATION_T = BLANKS,
       DAYCODE NULLIF DAYCODE = BLANKS,
       CAUSE NULLIF CAUSE = BLANKS,
       GEOM COLUMN OBJECT
         SDO_GTYPE       INTEGER EXTERNAL,
         SDO_POINT COLUMN OBJECT
           (X            FLOAT EXTERNAL,
            Y            FLOAT EXTERNAL)
    CREATE TABLE RMS_TABLE (
      Status VARCHAR2(1),
      Score NUMBER,
      Match_type VARCHAR2(2),
      Match_addr VARCHAR2(120),
      Side VARCHAR2(1),
      User_fld VARCHAR2(120),
      Addr_type VARCHAR2(20),
      ARC_Street VARCHAR2(100),
      ARC_City VARCHAR2(40),
      ARC_State VARCHAR2(20),
      ARC_ZIP VARCHAR2(10),
      INCIDENT_N VARCHAR2(9),
      CDATE VARCHAR2(10),
      CTIME VARCHAR2(8),
      DISTRICT VARCHAR2(4),
      LOCATION VARCHAR2(128),
      MAPLOCATIO VARCHAR2(100),
      LOCATION_T VARCHAR2(42),
      DAYCODE VARCHAR2(1),
      CAUSE VARCHAR2(17),
      GEOM MDSYS.SDO_GEOMETRY);

    Hi,
    Looks like you have a problem with record 61 in your data file. Can you please post it in reply.
    Regards
    Ivan

  • ORA-01044: size of buffer bound to variable exceeds maximum

    Hello Oracle Gurus,
    I have a tricky problem.
    I have a stored procedure which has to return more than 100,000 records. In my stored procedure, I have "TABLE OF VARCHAR2(512) INDEX BY BINARY_INTEGER". It fails when I try to get 80,000 records.
    I get an error "ORA-01044: size 40960000 of buffer bound to variable exceeds maximum 33554432"
    A simple calculation shows that 512*80000=40960000.
    The Oracle documentation suggests reducing the buffer size (i.e., the number of records being returned or the size of the variable).
    But reducing the number of records returned or the size of the variable is not possible because of our product design constraints.
    Are there any other options like changing some database startup parameters to solve this problem?
    Thanks,
    Sridhar

    We are migrating an application running on Oracle 8i to 9i and found the same problem with some of the stored procedures.
    Our setup:
    + Oracle 9.2.0.3.0
    + VB6 Application using OLEDB for Oracle ...
    + MDAC 2.8 msdaora.dll - 2.80.1022.0 (srv03_rtm.030324-2048)
    I am calling a stored procedure from VB like this one:
    {? = call trev.p_planung.GET_ALL_KONTEN(?,?,{resultset 3611, l_konto_id, l_name,l_ro_id, l_beschreibung, l_typ, l_plg_id})}
    If I set the "resultset" parameter beyond a certain limit, I eventually get this ORA-01044 error. This happens even when the number of records actually returned is smaller than the value supplied in the resultset parameter (I set the "resultset" parameter manually in the stored procedure string). E.g.:
    resultset = 1000 -> ORA-06513: PL/SQL: index for PL/SQL table out of range for host language array
    resultset = 2000 -> OK (actual return: 1043 recordsets)
    resultset = 3000 -> ORA-01044: size 6000000 of buffer bound to variable exceeds maximum 4194304
    resultset = 3500 -> ORA-01044: size 7000000 of buffer bound to variable exceeds maximum 4194304
    ... therefore one record works out to 7000000/3500 = 2000 bytes here.
    In Oracle 8i we never had this problem. As this is a huge application using a lot of stored procedures, changing all "select" stored procedures to "get data by chunks" (suggested in some forum threads on OTN) is not an option.
    Interestingly, I can call the stored procedure above with the same parameters as given in VB from e.g. Quest SQL Navigator or SQL*Plus and successfully retrieve all the data!
    Is there any other known solution to this problem in Oracle 9i? Is it possible to increase the maximum buffer size (the Oracle documentation for ORA-01044 only says: Action: Reduce the buffer size.)? Which buffer size is meant here, and which part of the communication chain supplies this buffer?
    Any help highly appreciated!
    Sincerely,
    Sven Bombach
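    The "get data by chunks" approach mentioned above (ruled out here for design reasons) would look roughly like the JDBC sketch below; the connection URL, credentials, query and fetch size are placeholders, not taken from the original post, and the Oracle JDBC driver is assumed to be on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Sketch only: stream rows with a modest fetch size instead of binding one
    // huge PL/SQL index-by table on the client, so no single 40 MB buffer is needed.
    public class ChunkedFetchSketch {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "password"); // placeholders
            PreparedStatement ps = con.prepareStatement(
                    "SELECT konto_text FROM konten");   // hypothetical query
            ps.setFetchSize(1000);                      // rows fetched per round trip
            ResultSet rs = ps.executeQuery();
            int count = 0;
            while (rs.next()) {
                rs.getString(1);                        // process each row as it arrives
                count++;
            }
            System.out.println("rows processed: " + count);
            rs.close();
            ps.close();
            con.close();
        }
    }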

  • On load, getting error:  Field in data file exceeds maximum length

    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0    Production
    TNS for Solaris: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    I'm trying to load a small table (110 rows, 6 columns). One of the columns, NOTES, errors when I run the load, saying the column size exceeds the maximum limit. As you can see below, the table column is defined as 4000 bytes:
    CREATE TABLE NRIS.NRN_REPORT_NOTES (
      NOTES_CN      VARCHAR2(40 BYTE)               DEFAULT sys_guid()            NOT NULL,
      REPORT_GROUP  VARCHAR2(100 BYTE)              NOT NULL,
      AREACODE      VARCHAR2(50 BYTE)               NOT NULL,
      ROUND         NUMBER(3)                       NOT NULL,
      NOTES         VARCHAR2(4000 BYTE),
      LAST_UPDATE   TIMESTAMP(6) WITH TIME ZONE     DEFAULT systimestamp          NOT NULL
    )
    TABLESPACE USERS
    RESULT_CACHE (MODE DEFAULT)
    PCTUSED    0
    PCTFREE    10
    INITRANS   1
    MAXTRANS   255
    STORAGE    (
                INITIAL          80K
                NEXT             1M
                MINEXTENTS       1
                MAXEXTENTS       UNLIMITED
                PCTINCREASE      0
                BUFFER_POOL      DEFAULT
                FLASH_CACHE      DEFAULT
                CELL_FLASH_CACHE DEFAULT
               )
    LOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;
    I did a little investigating, and it doesn't add up.
    When I run
    select max(lengthb(notes)) from NRIS.NRN_REPORT_NOTES
    I get a return of
    643
    That tells me the largest instance of that column is only 643 bytes, yet EVERY insert is failing.
    Here is the loader file header and the first couple of inserts:
    LOAD DATA
    INFILE *
    BADFILE './NRIS.NRN_REPORT_NOTES.BAD'
    DISCARDFILE './NRIS.NRN_REPORT_NOTES.DSC'
    APPEND INTO TABLE NRIS.NRN_REPORT_NOTES
    Fields terminated by ";" Optionally enclosed by '|'
    (
      NOTES_CN,
      REPORT_GROUP,
      AREACODE,
      ROUND NULLIF (ROUND="NULL"),
      NOTES,
      LAST_UPDATE TIMESTAMP WITH TIME ZONE "MM/DD/YYYY HH24:MI:SS.FF9 TZR" NULLIF (LAST_UPDATE="NULL")
    )
    BEGINDATA
    |E2ACF256F01F46A7E0440003BA0F14C2|;|DEMOGRAPHICS|;|A01003|;3;|Demographic results show that 46 percent of visits are made by females.  Among racial and ethnic minorities, the most commonly encountered are Native American (4%) and Hispanic / Latino (2%).  The age distribution shows that the Bitterroot has a relatively small proportion of children under age 16 (14%) in the visiting population.  People over the age of 60 account for about 22% of visits.   Most of the visitation is from the local area.  More than 85% of visits come from people who live within 50 miles.|;07/29/2013 16:09:27.000000000 -06:00
    |E2ACF256F02046A7E0440003BA0F14C2|;|VISIT DESCRIPTION|;|A01003|;3;|Most visits to the Bitterroot are fairly short.  Over half of the visits last less than 3 hours.  The median length of visit to overnight sites is about 43 hours, or about 2 days.  The average Wilderness visit lasts only about 6 hours, although more than half of those visits are shorter than 3 hours long.   Most visits come from people who are fairly frequent visitors.  Over thirty percent are made by people who visit between 40 and 100 times per year.  Another 8 percent of visits are from people who report visiting more than 100 times per year.|;07/29/2013 16:09:27.000000000 -06:00
    |E2ACF256F02146A7E0440003BA0F14C2|;|ACTIVITIES|;|A01003|;3;|The most frequently reported primary activity is hiking/walking (42%), followed by downhill skiing (12%), and hunting (8%).  Over half of the visits report participating in relaxing and viewing scenery.|;07/29/2013 16:09:27.000000000 -06:00
    Here is the full beginning of the loader log, ending after the first row return.  (They ALL say the same error)
    SQL*Loader: Release 10.2.0.4.0 - Production on Thu Aug 22 12:09:07 2013
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Control File:   NRIS.NRN_REPORT_NOTES.ctl
    Data File:      NRIS.NRN_REPORT_NOTES.ctl
      Bad File:     ./NRIS.NRN_REPORT_NOTES.BAD
      Discard File: ./NRIS.NRN_REPORT_NOTES.DSC
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 50
    Bind array:     64 rows, maximum of 256000 bytes
    Continuation:    none specified
    Path used:      Conventional
    Table NRIS.NRN_REPORT_NOTES, loaded from every logical record.
    Insert option in effect for this table: APPEND
       Column Name                  Position   Len  Term Encl Datatype
    NOTES_CN                            FIRST     *   ;  O(|) CHARACTER
    REPORT_GROUP                         NEXT     *   ;  O(|) CHARACTER
    AREACODE                             NEXT     *   ;  O(|) CHARACTER
    ROUND                                NEXT     *   ;  O(|) CHARACTER
        NULL if ROUND = 0X4e554c4c(character 'NULL')
    NOTES                                NEXT     *   ;  O(|) CHARACTER
    LAST_UPDATE                          NEXT     *   ;  O(|) DATETIME MM/DD/YYYY HH24:MI:SS.FF9 TZR
        NULL if LAST_UPDATE = 0X4e554c4c(character 'NULL')
    Record 1: Rejected - Error on table NRIS.NRN_REPORT_NOTES, column NOTES.
    Field in data file exceeds maximum length...
    I am not seeing why this would be failing.

    Hi,
    The problem is that delimited data defaults to CHAR(255)... very helpful, I know.
    What you need to do is tell SQL*Loader that the data is longer than that:
    change NOTES to NOTES CHAR(4000) in your control file and it should work.
    Cheers,
    Harry

  • Column name length exceeds maximum allowed

    Hello,
    I get this error when I am trying to create a table: ERROR: Column name length exceeds maximum allowed length (30).
    Is it possible to extend this length beyond 30? By the way, I am using Oracle 11g.
    Regards,
    Moussa El Tayeb
    about.me/MoussaEltayeb

    Hello,
    Oracle also has some limits. For more information see the logical limits:
    http://docs.oracle.com/cd/E14072_01/server.112/e10820/limits.htm
    regards
    Peter

  • Lax validation errors on schema import ('version' exceeds maximum length)

    I have a schema as per below. I'm trying to import it into Oracle 10.2.0.2.0, but I'm getting the following lax validation error:
    Error loading ora_business_rule.xsd:ORA-30951: Element or attribute at Xpath /schema[@version] exceeds maximum length
    I can fix it by shortening the attribute, but I'd like to know why it occurs. As far as I can tell, the W3C standard imposes no limit on the size of schema attributes. Which makes me wonder: does Oracle impose limits on the length of all attributes, or is this specific to 'version'? If there is a limit, what is the upper bound (in bytes)? Where is this documented?
    Cheers,
    Daniel
    <?xml version="1.0" encoding="UTF-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:br="http://foo.com/BusinessRule_PSG_V001" targetNamespace="http://foo.com/BusinessRule_PSG_V001" elementFormDefault="qualified" attributeFormDefault="unqualified" version="last committed on $LastChangedDate: 2006-05-19 11:00:52 +1000 (Fri, 19 May 2006) $">
         <xs:element name="edit">
              <xs:complexType>
                   <xs:sequence>
                        <xs:element name="edit_id" type="xs:string"/>
                        <xs:element ref="br:business_rule"/>
                   </xs:sequence>
              </xs:complexType>
         </xs:element>
         <xs:element name="derivation">
              <xs:complexType>
                   <xs:sequence>
                        <xs:element name="derivation_id" type="xs:string"/>
                        <xs:element ref="br:derivation_type"/>
                        <xs:element ref="br:business_rule"/>
                   </xs:sequence>
              </xs:complexType>
         </xs:element>
         <xs:element name="derivation_type">
              <xs:simpleType>
                   <xs:restriction base="xs:NMTOKENS">
                        <xs:enumeration value="complex"/>
                        <xs:enumeration value="format"/>
                        <xs:enumeration value="formula"/>
                        <xs:enumeration value="recode"/>
                        <xs:enumeration value="SAS code"/>
                        <xs:enumeration value="transfer"/>
                        <xs:enumeration value="count"/>
                        <xs:enumeration value="sum"/>
                        <xs:enumeration value="max"/>
                        <xs:enumeration value="min"/>
                   </xs:restriction>
              </xs:simpleType>
         </xs:element>
         <xs:element name="business_rule"></xs:element>
    </xs:schema>

    Oops -- sorry, it's a decision we took when looking at Version.
    When we registered the Schema for Schemas during the XDB bootstrap, the Version attribute was mapped to VARCHAR2(12):
    SQL> desc xdb.xdb$schema_T
    Name                                      Null?    Type
    SCHEMA_URL                                         VARCHAR2(700)
    TARGET_NAMESPACE                                   VARCHAR2(2000)
    VERSION                                            VARCHAR2(12)
    NUM_PROPS                                          NUMBER(38)
    FINAL_DEFAULT                                      XDB.XDB$DERIVATIONCHOICE
    BLOCK_DEFAULT                                      XDB.XDB$DERIVATIONCHOICE
    ELEMENT_FORM_DFLT                                  XDB.XDB$FORMCHOICE
    ATTRIBUTE_FORM_DFLT                                XDB.XDB$FORMCHOICE
    ELEMENTS                                           XDB.XDB$XMLTYPE_REF_LIST_T
    SIMPLE_TYPE                                        XDB.XDB$XMLTYPE_REF_LIST_T
    COMPLEX_TYPES                                      XDB.XDB$XMLTYPE_REF_LIST_T
    ATTRIBUTES                                         XDB.XDB$XMLTYPE_REF_LIST_T
    IMPORTS                                            XDB.XDB$IMPORT_LIST_T
    INCLUDES                                           XDB.XDB$INCLUDE_LIST_T
    FLAGS                                              RAW(4)
    SYS_XDBPD$                                         XDB.XDB$RAW_LIST_T
    ANNOTATIONS                                        XDB.XDB$ANNOTATION_LIST_T
    MAP_TO_NCHAR                                       RAW(1)
    MAP_TO_LOB                                         RAW(1)
    GROUPS                                             XDB.XDB$XMLTYPE_REF_LIST_T
    ATTRGROUPS                                         XDB.XDB$XMLTYPE_REF_LIST_T
    ID                                                 VARCHAR2(256)
    VARRAY_AS_TAB                                      RAW(1)
    SCHEMA_OWNER                                       VARCHAR2(30)
    NOTATIONS                                          XDB.XDB$NOTATION_LIST_T
    LANG                                               VARCHAR2(4000)
    SQL>

  • SDO_ORDINATES.X.Field in data file exceeds maximum length

    Hi All,
    While loading data from a .SHP file into Oracle Spatial through the SHP2SDO tool, the following error message appears:
    Error message:
    Record 54284: Rejected - Error on table GEO_PARCEL_CENTROID, column CENTROID_GEOM.SDO_ORDINATES.X.
    Field in data file exceeds maximum length.
    I read somewhere that this is because SQL*Loader defaults the column length to 255 characters, but I am not sure how to change the column size in the control file because it is an object data type, and I am not sure whether that explanation is correct.
    The control file is shown below:
    LOAD DATA
    INFILE geo_parcel_centroid.dat
    TRUNCATE
    CONTINUEIF NEXT(1:1) = '#'
    INTO TABLE GEO_PARCEL_CENTROID
    FIELDS TERMINATED BY '|'
    TRAILING NULLCOLS (
    CENTROID_ID INTEGER EXTERNAL,
    APN_NUMBER      NULLIF APN_NUMBER = BLANKS,
    PROPERTY_A      NULLIF PROPERTY_A = BLANKS,
    PROPERTY_C      NULLIF PROPERTY_C = BLANKS,
    OWNER_NAME      NULLIF OWNER_NAME = BLANKS,
    THOMAS_GRI      NULLIF THOMAS_GRI = BLANKS,
    MAIL_ADDRE      NULLIF MAIL_ADDRE = BLANKS,
    MAIL_CITY_      NULLIF MAIL_CITY_ = BLANKS,
    MSLINK,
    MAPID,
    GMRotation,
    CENTROID_GEOM COLUMN OBJECT
    SDO_GTYPE INTEGER EXTERNAL,
    SDO_ELEM_INFO VARRAY TERMINATED BY '|/'
    (X FLOAT EXTERNAL),
    SDO_ORDINATES VARRAY TERMINATED BY '|/'
    (X FLOAT EXTERNAL)
    Any help on this would be appreciated.
    Thanks,
    [email protected]

  • SQL Loader - Field in data file exceeds maximum length

    Dear All,
    I have a file which has more than 4000 characters in a field, and I wish to load the data into a table with field length = 4000, but I receive the error
    Field in data file exceeds maximum length
    The table creation script and control file are given below.
    Table creation script:
    CREATE TABLE "TEST_TAB"
        "STR"  VARCHAR2(4000 BYTE),
        "STR2" VARCHAR2(4000 BYTE),
        "STR3" VARCHAR2(4000 BYTE)
      );Control file:
    LOAD DATA
    INFILE 'C:\table_export.txt'
    APPEND INTO TABLE TEST_TAB
    FIELDS TERMINATED BY '|'
    TRAILING NULLCOLS
    ( STR CHAR(4000) "SUBSTR(:STR,1,4000)" ,
    STR2 CHAR(4000) "SUBSTR(:STR2,1,4000)" ,
    STR3 CHAR(4000) "SUBSTR(:STR3,1,4000)"
    )
    Log:
    SQL*Loader: Release 10.2.0.1.0 - Production on Mon Jul 26 16:06:25 2010
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Control File:   C:\TEST_TAB.CTL
    Data File:      C:\table_export.txt
      Bad File:     C:\TEST_TAB.BAD
      Discard File:  none specified
    (Allow all discards)
    Number to load: ALL
    Number to skip: 0
    Errors allowed: 0
    Bind array:     64 rows, maximum of 256000 bytes
    Continuation:    none specified
    Path used:      Conventional
    Table TEST_TAB, loaded from every logical record.
    Insert option in effect for this table: APPEND
    TRAILING NULLCOLS option in effect
       Column Name                  Position   Len  Term Encl Datatype
    STR                                 FIRST  4000   |       CHARACTER           
        SQL string for column : "SUBSTR(:STR,1,4000)"
    STR2                                 NEXT  4000   |       CHARACTER           
        SQL string for column : "SUBSTR(:STR2,1,4000)"
    STR3                                 NEXT  4000   |       CHARACTER           
        SQL string for column : "SUBSTR(:STR3,1,4000)"
    value used for ROWS parameter changed from 64 to 21
    Record 1: Rejected - Error on table TEST_TAB, column STR.
    Field in data file exceeds maximum length
    MAXIMUM ERROR COUNT EXCEEDED - Above statistics reflect partial run.
    Table TEST_TAB:
      0 Rows successfully loaded.
      1 Row not loaded due to data errors.
      0 Rows not loaded because all WHEN clauses were failed.
      0 Rows not loaded because all fields were null.
    Space allocated for bind array:                 252126 bytes(21 rows)
    Read   buffer bytes: 1048576
    Total logical records skipped:          0
    Total logical records read:             1
    Total logical records rejected:         1
    Total logical records discarded:        0
    Run began on Mon Jul 26 16:06:25 2010
    Run ended on Mon Jul 26 16:06:25 2010
    Elapsed time was:     00:00:00.22
    CPU time was:         00:00:00.15
    Please suggest a way to get it done.
    Thanks for reading the post!
    *009*

    Hi Toni,
    Thanks for the reply.
    Do you mean this?
    CREATE TABLE "TEST"."TEST_TAB"
        "STR"  VARCHAR2(4001),
        "STR2" VARCHAR2(4001),
        "STR3" VARCHAR2(4001)
      );However this does not work as the error would be:
    Error at Command Line:8 Column:20
    Error report:
    SQL Error: ORA-00910: specified length too long for its datatype
    00910. 00000 -  "specified length too long for its datatype"
    *Cause:    for datatypes CHAR and RAW, the length specified was > 2000;
               otherwise, the length specified was > 4000.
    *Action:   use a shorter length or switch to a datatype permitting a
               longer length such as a VARCHAR2, LONG CHAR, or LONG RAW
    *009*
    Edited by: 009 on Jul 28, 2010 6:15 AM
