Failed to allocate memory

hi everybody
I'm retrieving all the assets in the system into an internal table and converting it to CSV format using the FM SAP_CONVERT_TO_CSV_FORMAT; there are about 50,000 records.
When executing the program I get the error 'Failed to allocate memory' and the program stops.
Is there a way I can avoid that?

hi
You can use this FM to download in CSV format:
DATA: m_c_file        TYPE string.
  CALL FUNCTION 'GUI_DOWNLOAD'
    EXPORTING
      filename                = 'C:\DATA\test.CSV'
      filetype                = 'ASC'
*      write_field_separator   = 'X'
    TABLES
      data_tab                = itab
    EXCEPTIONS
      file_write_error        = 1
      no_batch                = 2
      gui_refuse_filetransfer = 3
      invalid_type            = 4
      no_authority            = 5
      unknown_error           = 6
      header_not_allowed      = 7
      separator_not_allowed   = 8
      filesize_not_allowed    = 9
      header_too_long         = 10
      dp_error_create         = 11
      dp_error_send           = 12
      dp_error_write          = 13
      unknown_dp_error        = 14
      access_denied           = 15
      dp_out_of_memory        = 16
      disk_full               = 17
      dp_timeout              = 18
      file_not_found          = 19
      dataprovider_exception  = 20
      control_flush_error     = 21
      OTHERS                  = 22.
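If the dump happens while building the whole CSV table in memory, another way out is to process the data in packages: convert and download a few thousand rows at a time (GUI_DOWNLOAD can append to an existing file via its APPEND parameter). A minimal Python sketch of the same packaging pattern, purely to illustrate the idea (the function name and package size are assumptions, not SAP APIs):

```python
import csv

def write_in_packages(rows, path, package_size=5000):
    """Write rows to a CSV file in fixed-size packages so only one
    package is ever held in memory at a time."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        package = []
        for row in rows:
            package.append(row)
            if len(package) >= package_size:
                writer.writerows(package)   # flush this package to disk
                package = []
        if package:                          # final partial package
            writer.writerows(package)
```

The ABAP analogue would be to LOOP over package-sized subsets of the internal table and call the conversion and download per package, appending to the file.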

Similar Messages

  • Partition Failed Partition failed with the error: POSIX reports: The operation couldn't be completed. CAN NOT ALLOCATE MEMORY

I plugged a USB mass storage device into my MacBook Pro i7, then everything got stuck and I was forced to shut down; when I started it again the Mac failed to boot.
I tried the different options in Disk Utility from the Mac OS startup disk, but it still fails to repair, erase and partition. I tried to fix it with the steps listed in
https://discussions.apple.com/message/13288923#13288923
but it fails to partition with the message:
Partition Failed
     Partition failed with the error: POSIX reports: The operation couldn't be completed. Cannot allocate memory
Please help urgently on this matter; I have stopped working.
    Kind Regards,
    pe


  • Crypto errors CTM ERROR: Failed to allocate x bytes of memory

    Hi There.
I am currently getting a strange error when trying to use any crypto services on our ASA 5520 (8.0.3).
    Initially I observed that a connected VPN had dropped.
    Then when I attempted to use ASDM or SSH I was blocked.
    In the end I opened telnet as a test and this was successful. Syslog also shows that traffic is passing as normal.
    The only obvious error I can see when observing various debug traces is this;
    FW02# CTM: rsa session with no priority allocated @ 0xCF1FBBA0
    CTM: Session 0xCF1FBBA0 uses a nlite (Nitrox Lite) as its hardware engine
    CTM: rsa context allocated for session 0xCF1FBBA0
    CTM: rsa session with no priority allocated @ 0xCE7A5EA8
    CTM: Session 0xCE7A5EA8 uses a nlite (Nitrox Lite) as its hardware engine
    CTM: rsa context allocated for session 0xCE7A5EA8
    CTM: rsa session with no priority allocated @ 0xCEF249D0
    CTM: Session 0xCEF249D0 uses a nlite (Nitrox Lite) as its hardware engine
    CTM: rsa context allocated for session 0xCEF249D0
    CTM: dh session with no priority allocated @ 0xCEF249D0
    CTM: Session 0xCEF249D0 uses a nlite (Nitrox Lite) as its hardware engine
    CTM: dh context allocated for session 0xCEF249D0
    CTM ERROR: Failed to allocate 279 bytes of memory, ctm_nlite_generate_dh_key_pair:183
    Has anyone seen anything like this before as I am lost?
    Mike

Thanks for that. It does look like it's out of crypto memory...
    DMA memory:
       Unused memory:                 23849516 bytes (30%)
       Crypto reserved memory:        20537556 bytes (26%)
         Crypto free:                       0 bytes ( 0%)
         Crypto used:                20537556 bytes (26%)
       Block reserved memory:         34669024 bytes (44%)
         Block free:                 30734752 bytes (39%)
         Block used:                  3934272 bytes ( 5%)
       Used memory:                     185120 bytes ( 0%)
Unless there is a way to specifically restart only the crypto engine or clear crypto memory, I guess I am looking at a reload?
    Mike

  • System.InsufficientMemoryException: Failed to allocate a managed memory buffer of 268435456 bytes. The amount of available memory may be low. --- System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.

    Appfabric 1.1 server setup on 3 Windows server 2008 R2 machines.
    Client windows 7 64 bit.
Because there is no bulk update, we are trying to persist around 4000 objects wrapped in a CLR object.
    [Serializable]
    public class CacheableCollection<T> : ICacheable, IEnumerable<T>
        where T : class, ICacheable
    {
        [DataMember]
        private Dictionary<string, T> _items;

        public IEnumerator<T> GetEnumerator()
        {
            return _items.Values.GetEnumerator();
        }

        IEnumerator IEnumerable.GetEnumerator()
        {
            return _items.Values.GetEnumerator();
        }

        [DataMember]
        public string CacheKey { get; private set; }

        public T this[string cacheKey] { get { return _items[cacheKey]; } }

        public CacheableCollection(string cacheKey, T[] items)
        {
            if (string.IsNullOrWhiteSpace(cacheKey))
                throw new ArgumentNullException("cacheKey", "Cache key not specified.");
            if (items == null || items.Length == 0)
                throw new ArgumentNullException("items", "Collection items not specified.");
            this.CacheKey = cacheKey;
            _items = items.ToDictionary(p => p.CacheKey, p => p);
        }
    }
    We tried with the following options on server and client
    Server:
     <advancedProperties>
                <partitionStoreConnectionSettings leadHostManagement="false" />
                <securityProperties mode="None" protectionLevel="None">
                    <authorization>
                        <allow users="[email protected]" />
                        <allow users="[email protected]" />
                    </authorization>
                </securityProperties>
                <transportProperties maxBufferSize="500000000" />
            </advancedProperties>
    Client: 
     <transportProperties connectionBufferSize="131072" maxBufferPoolSize="500000000"
                           maxBufferSize="838860800" maxOutputDelay="2" channelInitializationTimeout="60000"
                           receiveTimeout="600000"/>
    I see different people experiencing different memory size issues. What is the actual memory limit for an object that can be pushed to AppFabric?
    Can someone please help?
    Stack trace:
    Test method Anz.Cre.Pdc.Bootstrapper.Test.LoaderFuncCAOTest.AppFabPushAndRetrieveData threw exception: 
    System.InsufficientMemoryException: Failed to allocate a managed memory buffer of 268435456 bytes. The amount of available memory may be low. ---> System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
    System.Runtime.Fx.AllocateByteArray(Int32 size)
    System.Runtime.Fx.AllocateByteArray(Int32 size)
    System.Runtime.InternalBufferManager.PooledBufferManager.TakeBuffer(Int32 bufferSize)
    System.Runtime.BufferedOutputStream.ToArray(Int32& bufferSize)
    System.ServiceModel.Channels.BufferedMessageWriter.WriteMessage(Message message, BufferManager bufferManager, Int32 initialOffset, Int32 maxSizeQuota)
    System.ServiceModel.Channels.BinaryMessageEncoderFactory.BinaryMessageEncoder.WriteMessage(Message message, Int32 maxMessageSize, BufferManager bufferManager, Int32 messageOffset)
    System.ServiceModel.Channels.FramingDuplexSessionChannel.EncodeMessage(Message message)
    System.ServiceModel.Channels.FramingDuplexSessionChannel.OnSendCore(Message message, TimeSpan timeout)
    System.ServiceModel.Channels.TransportDuplexSessionChannel.OnSend(Message message, TimeSpan timeout)
    System.ServiceModel.Channels.OutputChannel.Send(Message message, TimeSpan timeout)
    Microsoft.ApplicationServer.Caching.WcfClientChannel.SendMessage(EndpointID endpoint, Message message, TimeSpan timeout, WaitCallback callback, Object state, Boolean async)
    Microsoft.ApplicationServer.Caching.WcfClientChannel.Send(EndpointID endpoint, Message message, TimeSpan timeout)
    Microsoft.ApplicationServer.Caching.WcfClientChannel.Send(EndpointID endpoint, Message message)
    Microsoft.ApplicationServer.Caching.DRM.SendRequest(EndpointID address, RequestBody request)
    Microsoft.ApplicationServer.Caching.RequestBody.Send()
    Microsoft.ApplicationServer.Caching.DRM.SendToDestination(RequestBody request, Boolean recordRequest)
    Microsoft.ApplicationServer.Caching.DRM.ProcessRequest(RequestBody request, Boolean recordRequest)
    Microsoft.ApplicationServer.Caching.DRM.ProcessRequest(RequestBody request, Object session)
    Microsoft.ApplicationServer.Caching.RoutingClient.SendMsgAndWait(RequestBody reqMsg)
    Microsoft.ApplicationServer.Caching.DataCache.SendReceive(RequestBody reqMsg)
    Microsoft.ApplicationServer.Caching.DataCache.ExecuteAPI(RequestBody reqMsg)
    Microsoft.ApplicationServer.Caching.DataCache.InternalPut(String key, Object value, DataCacheItemVersion oldVersion, TimeSpan timeout, DataCacheTag[] tags, String region)
    Microsoft.ApplicationServer.Caching.DataCache.Put(String key, Object value, String region)
    Anz.Cre.Pdc.DataCache.DataCacheAccess.Put[T](String cacheName, String regionName, T value) in C:\SVN\2.3_Drop3\app\Src\Anz.Cre.Pdc.DataCache\DataCacheAccess.cs: line 141
    Anz.Cre.Pdc.DataCache.DataCacheAccess.Put[T](String cacheName, String regionName, Boolean flushRegion, T value) in C:\SVN\2.3_Drop3\app\Src\Anz.Cre.Pdc.DataCache\DataCacheAccess.cs: line 372
    Anz.Cre.Pdc.Bootstrapper.Test.LoaderFuncCAOTest.AppFabPushAndRetrieveData() in C:\SVN\2.3_Drop3\app\Src\Anz.Cre.Pdc.Bootstrapper.Test\LoaderFuncCAOTest.cs: line 281

    Essentially what we are trying to do is the following.
    We have different kinds of objects in our baseline: objects of type CAO, Exposures and Limits that change every day after close of business.
    We want to push these different objects into respective named regions in the cache.
    Region Name     Objects
    CAO                   ienumerable<caos>
    Exposures           ienumerable<exposures>
    Limits                ienumerable<limits>
    We have a producer that pushes this data into the cache and consumers of this data acting on it when it is available.
    The issue we are facing: when we try to push around 4000 CAO objects (roughly 300 MB when serialized using XML), we get the above error. Increasing the buffer sizes on the client and the cache cluster didn't help.
    The alternative we are thinking about is chunking before pushing, because AppFabric doesn't support streaming. We might be able to push the data successfully if we chunk it, but what about the consumers? Wouldn't they face the same memory issue when they use getallobjectsinregion?
    We thought that if there were a way to discover the keys in a region, the consumers could fetch the objects one by one; however, there is no such API.
    The only option I see is AppFabric notifications, which MSDN says is not a reliable mechanism.
    Please help.
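Since there is no key-enumeration API, the chunking idea can be made symmetric for consumers by storing a small index entry alongside the chunks, under a key the consumers already know. A Python sketch of the pattern, with a plain dict standing in for the cache (the real DataCache Put/Get calls would replace the dict accesses; the chunk size and key naming are assumptions, not AppFabric APIs):

```python
import pickle

def put_chunked(cache, region, key, items, chunk_bytes=1_000_000):
    """Store a large collection as several small cache entries plus an
    index entry, instead of one huge serialized blob."""
    chunks, current, size = [], [], 0
    for item in items:
        blob_len = len(pickle.dumps(item))
        if current and size + blob_len > chunk_bytes:
            chunks.append(current)
            current, size = [], 0
        current.append(item)
        size += blob_len
    if current:
        chunks.append(current)
    for i, chunk in enumerate(chunks):
        cache[(region, f"{key}/chunk/{i}")] = pickle.dumps(chunk)
    # The index entry tells consumers how many chunks exist, so they can
    # fetch them one at a time instead of one giant get-all call.
    cache[(region, f"{key}/index")] = len(chunks)

def get_chunked(cache, region, key):
    """Yield items chunk by chunk; only one chunk is in memory at a time."""
    for i in range(cache[(region, f"{key}/index")]):
        yield from pickle.loads(cache[(region, f"{key}/chunk/{i}")])
```

This keeps every individual cache entry well under the transport buffer limit on both the producer and consumer sides.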

  • I get the following error when formatting an external hard drive. Partition failed with the error:  POSIX reports: The operation couldn't be completed. Cannot allocate memory

    I get the following error when formatting an external hard drive.
    Partition failed with the error: 
    POSIX reports: The operation couldn’t be completed. Cannot allocate memory
    I have a MacBook Pro 13" A1278, purchased around December 2010. I have a 3.5" Desktop Select II 1.5 TB HDD, purchased around February/March 2011 to use with the MacBook Pro. I formatted it and moved all my files there; I got it so that if my MacBook's hard drive were to break I wouldn't lose everything on my MacBook.
    Are there any fixes?

    First, try a system reset.  It cures many ills and it's quick, easy and harmless...
    Hold down the on/off switch and the Home button simultaneously until the screen blacks out or you see the Apple logo.  Ignore the "Slide to power off" text if it appears.  You will not lose any apps, data, music, movies, settings, etc.
    If the Reset doesn't work, try a Restore.  Note that it's nowhere near as quick as a Reset.  From iTunes, select the iPad/iPod and then select the Summary tab.  Follow directions for Restore and be sure to say "yes" to the backup.  You will be warned that all data (apps, music, movies, etc.) will be erased but, as the Restore finishes, you will be asked if you wish the contents of the backup to be copied to the iPad/iPod.  Again, say "yes."

  • U6678700.drv fails with "AutoPatch error: Unable to allocate memory in procedure aiumab()"

    I am running an adpatch session to apply the u6678700.drv upgrade driver to an 11.5.10.2 instance running on AIX 6.1.
    Just after the "JDBC connect string from AD_APPS_JDBC_URL is......" statement, I receive "AutoPatch error: Unable to allocate memory in procedure aiumab()."
    I have confirmed and reconfirmed that all O/S, Software, ulimit and kernel settings are at the required minimum values/levels per Document ID# 761569.1.
    What else can I check into as to the cause of this error?

    There are no entries in the DB Log of any relevance to the error.
    ADPATCH LOG FILE:
    Uploading PL/SQL direct execute exceptions from file ...
    Done uploading PL/SQL direct execute exceptions from file.
    Done validating PL/SQL direct execute exceptions file.
    JDBC connect string from AD_APPS_JDBC_URL is
    (DESCRIPTION=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=lajeffeux03)(PORT=1521)))(CONNECT_DATA=(SID=UPG)))
    AutoPatch error:
    Unable to allocate memory in procedure aiumab().
    You should check the file
    /oracle/d01/oraupg/apps/apps_st/appl/admin/UPG/log/6678700_adpatch.log
    for errors.

  • Can't restore iPod classic 60GB video.  Cannot Allocate Memory - Disk Utility

    Sorry if this has been asked before but I've been searching the boards and am yet to find a result.
    I have an old iPod classic 60GB (video) which has a dodgy HDD.  It was clicking and in a constant sad face/charging loop.  I've managed to get it to display the "Please restore iPod" message and it is now being recognised by my MacBook.  I've tried to restore it via iTunes, but when I try that I get an error message that says "The iPod could not be restored because it is busy".
    I've tried to format the iPod using Disk Utility too, but I get an error:
    Disk Erase failed with the error:
    POSIX reports: The operation couldn't be completed. Cannot allocate memory
    The oddest thing is that under Disk Utility my iPod drive is showing up as a 2TB drive.
    All I'm really trying to do is restore the iPod so I can try to recover some of the tracks using Disk Drill.  Can anybody help?

    See if you can find a Windows-based PC and try to do a Restore there.

  • Urgent help with ORA-01062: unable to allocate memory for define buffer

    Hello, Folks!
    I have C++ code using the OCI API that runs both on
    Windows and on SPARC
    (the same C++ code compiled and running on both platforms),
    issuing the same query.
    On Windows everything is OK, but on SPARC
    it fails...
    The Oracle Server is installed on a Windows 2003 machine.
    Both client and server Oracle versions are 10.2.0.1.0.
    The code runs on SPARC (Oracle Instant Client is installed).
    The query is a simple SELECT that selects only one field
    of type VARCHAR2(4000) (the same problem happens with any
    string-type field larger than 150 characters).
    The error occurs when calling the OCIDefineByPos method,
    when associating an item in the select list with the type and output
    data buffer.
    The error message is: ORA-01062: unable to allocate memory for define
    buffer
    (This error message signifies that I need to use a piecewise operation...)
    But it happens even if I make this VARCHAR2 field only slightly larger
    than 150.
    It is not reasonable to use a piecewise fetch for such small field sizes.
    Maybe there is a configuration setting that can enlarge this limit?
    I know that I wrote here a very superficial description.
    If somebody knows something about this issue, please help.
    Thanks

    I had some special luck today after searching for a solution for weeks :) I have found one.
    When I get the size of the OCI field in the following expression,
    l_nResult = OCIAttrGet(l_oParam->pOCIHandle(), OCI_DTYPE_PARAM, &(orFieldMD.m_nSize), NULL, OCI_ATTR_DATA_SIZE, m_oOCIErrInfo.pOCIError());
    orFieldMD.m_nSize was of type ub4, but according to the manual it must be ub2.
    As a result, the number returned was very large (junk in the upper bytes) and I passed this value to OCIDefineByPos.
    Now I have changed the type and everything is working!
    On Windows there was no problem with this expression :)
    Thanks
    Issahar
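The bug described here, a 2-byte (ub2) attribute written into a variable declared as 4 bytes (ub4) so the upper bytes keep stale junk, can be sketched in Python (struct stands in for the C memory layout; the byte values are illustrative, not anything OCI produces):

```python
import struct

# Pretend this is an uninitialized 4-byte stack variable (junk contents).
buf = bytearray(b"\xde\xad\xbe\xef")

# OCIAttrGet with OCI_ATTR_DATA_SIZE writes only a ub2 (2 bytes)...
struct.pack_into("<H", buf, 0, 4000)          # column size of VARCHAR2(4000)

# ...but if the caller declared the variable as ub4, it reads 4 bytes back.
(junk_size,) = struct.unpack_from("<I", buf, 0)
(real_size,) = struct.unpack_from("<H", buf, 0)

# junk_size keeps the stale upper bytes, so the define-buffer request is
# enormous, which is exactly what makes the allocation in OCIDefineByPos fail.
```

Why Windows "worked" is then just luck: the junk bytes on that platform happened to be zero.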

  • In my server RMAN can't allocate memory to the virtual instance

    Hi
    I want to restore my database in a new server
    First I want to restore the spfile from the controlfile autobackup; there is no spfile or pfile of my database on the new server.
    I do these steps:
    1. export ORACLE_SID=Sales
    2. rman target /
    3. rman > set dbid 817528985
    4. rman > startup force nomount
    after step 4 I get this error :
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of startup command at 09/22/2010 14:03:21
    RMAN-04014: startup failed: ORA-04031: unable to allocate 28704 bytes of shared memory ("shared pool","unknown object","sga heap(1,0)","kebm test replies")
    I get this error while I have about 80 GB of free memory on the new server,
    and my parameters in /etc/sysctl.conf file is :
    kernel.sysrq = 0
    kernel.core_uses_pid = 1
    kernel.shmall = 5242880
    kernel.shmmax = 42949672960
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 1048576
    net.core.rmem_max = 1048576
    net.core.wmem_default = 262144
    net.core.wmem_max = 262144
    I should add that I can create a new database on that server with dbca whose
    sga_max_size is 9632M and pga_aggregate_target is 3206M.
    Do you know what is wrong? Do you know why RMAN can't allocate memory to the virtual instance?
    thanks

    The database could not start because of low memory on the system, or because sga_max_size is set to a high value, so the system could not allocate that much memory since it does not have that much free. There may be other reasons, such as OS limits on memory usage.
    Set a lower amount of memory for the database.
    Eg:
    SQL> alter system set sga_max_size=1600M scope=spfile;
    System altered.
    SQL> alter system set sga_target=1600M;
    System altered.
    Refer to this link; it will be useful: http://arjudba.blogspot.com/2008/05/startup-fails-with-ora-27102-out-of.html
    Otherwise, manually copy the spfile/pfile to the new server.
    Refer to this link; it may be useful to you: http://oracleinstance.blogspot.com/2010/08/disaster-recovery-using-rman-demo.html
    Edited by: rajeysh on Sep 22, 2010 4:40 PM

  • Cannot allocate memory error

    Hello,
    I am using:
    Oracle: Berkeley DB XML 2.5.16: (December 22, 2009)
    Berkeley DB 4.8.26: (December 18, 2009)
    When attempting to open a container in an environment where the application process has been running for a while I get the following error:
    XmlDatabaseError: XmlDatabaseError 12, Error: Cannot allocate memory
    The Python bindings do not have support for errcall. I have added errcall support to the bindings but I do not appear to be receiving any message (but it is new code so I do not trust it completely).
    Running the dbxml shell tool I get:
    [root@localhost db]# dbxml
    Joined existing environment
    dbxml> open test.dbxml
    stdin:1: openContainer failed, Error: Cannot allocate memory
    dbxml>
    I don't appear to be out of locks:
    127786  Last allocated locker ID
    0x7fffffff      Current maximum unused locker ID
    9       Number of lock modes
    20000   Maximum number of locks possible
    10000   Maximum number of lockers possible
    20000   Maximum number of lock objects possible
    1       Number of lock object partitions
    639     Number of current locks
    691     Maximum number of locks at any one time
    10      Maximum number of locks in any one bucket
    0       Maximum number of locks stolen by for an empty partition
    0       Maximum number of locks stolen for any one partition
    1282    Number of current lockers
    1320    Maximum number of lockers at any one time
    571     Number of current lock objects
    622     Maximum number of lock objects at any one time
    3       Maximum number of lock objects in any one bucket
    0       Maximum number of objects stolen by for an empty partition
    0       Maximum number of objects stolen for any one partition
    998958  Total number of locks requested
    997936  Total number of locks released
    0       Total number of locks upgraded
    13055   Total number of locks downgraded
    0       Lock requests not available due to conflicts, for which we waited
    0       Lock requests not available due to conflicts, for which we did not wait
    0       Number of deadlocks
    0       Lock timeout value
    0       Number of locks that have timed out
    0       Transaction timeout value
    0       Number of transactions that have timed out
    10MB 152KB      The size of the lock region
    0       The number of partition locks that required waiting (0%)
    0       The maximum number of times any partition lock was waited for (0%)
    0       The number of object queue operations that required waiting (0%)
    0       The number of locker allocations that required waiting (0%)
    0       The number of region locks that required waiting (0%)
    3       Maximum hash bucket length
    Or transactions:
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    35511/10059360  File/offset for last checkpoint LSN
    Thu Aug 12 17:25:26 2010        Checkpoint timestamp
    0x80007128      Last transaction ID allocated
    100     Maximum number of active transactions configured
    0       Active transactions
    3       Maximum active transactions
    28968   Number of transactions begun
    16699   Number of transactions aborted
    12269   Number of transactions committed
    0       Snapshot transactions
    0       Maximum snapshot transactions
    0       Number of transactions restored
    40KB    Transaction region size
    0       The number of region locks that required waiting (0%)
    Active transactions:
    Or mutexes:
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    6MB 112KB       Mutex region size
    0       The number of region locks that required waiting (0%)
    4       Mutex alignment
    1       Mutex test-and-set spins
    50000   Mutex total count
    39829   Mutex free count
    10171   Mutex in-use count
    10201   Mutex maximum in-use count
    Mutex counts
    39829   Unallocated
    509     db handle
    1       env dblist
    2       env handle
    1       env region
    4       lock region
    691     logical lock
    1       log filename
    1       log flush
    2       log region
    815     mpoolfile handle
    3896    mpool buffer
    63      mpool filehandle
    17      mpool file bucket
    1       mpool handle
    4099    mpool hash bucket
    1       mpool region
    1       mutex region
    62      sequence
    1       twister
    1       txn active list
    1       transaction checkpoint
    1       txn region
    Any ideas?

    See unable to allocate memory for mutex; resize mutex region
    Edited by: Vitaliy Katochka on Nov 23, 2010 4:31 PM
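The linked answer's fix is to enlarge the mutex region so that allocation no longer fails. As a sketch (the parameter name and value are assumptions to verify against your Berkeley DB 4.8 documentation), this can be set in the environment home's DB_CONFIG file before reopening the environment:

```
# DB_CONFIG in the environment home directory; the value is illustrative
mutex_set_max 100000
```

All processes joining the environment pick this up the next time the environment is recreated.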

  • POSIX reports: The operation couldn't be completed. Cannot allocate memory

    Hi all.
    Please forgive my naivety. I'm a newbie here.
    I have searched endlessly to find a solution to my problem, but cannot seem to find one.
    I have just replaced a hard drive on a 13" Macbook pro (purchased end of 2009). It developed a fault, and would not boot. When turned on, would only produce a blank screen with a flashing folder featuring a question mark.
    When going through disk utility, the system could not detect any internal hard drive, which made me believe it was faulty.
    I ordered a 500GB Toshiba Hard Drive, and installed it as it should be. In Disk Utility, the system detects the Toshiba hard drive (it comes up in the list you'd expect it to). Whenever I go to 'Erase' or 'Partition', however, I get the following error message.
    "Disk Erase/Partition failed
    Disk Erase/Partition failed with the error:
    POSIX reports: The operation couldn't be completed. Cannot allocate memory"
    Is anybody able to offer this novice any suggestions as to what I might do? I have tried some "Terminal" entries, but they don't seem to help in the slightest.
    Thanks in advance,
    L.
    P.S. The Macbook is using Snow Leopard.

    Hi again OGELTHORPE.
    I've already run two hardware tests. One quick one and one extended one. It was unable to pick anything up.
    It's like everything is running perfectly, but when I go to erase the HDD or partition it, it pops up with the error message. Even when I try to erase and partition using "Terminal", I received the same error message.
    It's beginning to do my head in, but will see if replacing the SATA cable will make any difference. If it doesn't, I'll hazard a guess it could be the Logic Board.
    Thanks again for your suggestions!
    L.

  • WANTED: java.io.IOException: Cannot allocate memory

    hi all!
    has anybody seen, or does anybody know about, this stack trace:
    java.io.IOException: Cannot allocate memory at
    java.net.PlainDatagramSocketImpl.send(Native Method) at
    java.net.DatagramSocket.send(DatagramSocket.java:581) at
    I have searched the corresponding PlainDatagramSocketImpl.c implementation and didn't find anything about this message.
    Facts: it has nothing to do with the max packet size (65k) or so.
    best regards
    michael

    Could it be that this is the call/place where the error occurs? (Because the java/io/IOException is thrown here!)
    PlainDatagramSocketImpl's send calls [Linux 2.4.21] socket.c sys_sendto (see below).
    java/net/PlainDatagramSocketImpl.c
    /*
     * Send the datagram.
     * If we are connected it's possible that sendto will return
     * ECONNREFUSED indicating that an ICMP port unreachable has
     * been received.
     */
    ret = NET_SendTo(fd, fullPacket, packetBufferLen, 0,
                     (struct sockaddr *)rmtaddrP, len);
    if (ret < 0) {
        switch (ret) {
        case JVM_IO_ERR:
            if (errno == ECONNREFUSED) {
                JNU_ThrowByName(env, JNU_JAVANETPKG "PortUnreachableException",
                                "ICMP Port Unreachable");
            } else {
                NET_ThrowByNameWithLastError(env, "java/io/IOException", "sendto failed");
            }
            break;
        case JVM_IO_INTR:
            JNU_ThrowByName(env, "java/io/InterruptedIOException",
                            "operation interrupted");
            break;
        }
    }
    if (mallocedPacket) {
        free(fullPacket);
    }
    return;
    Linux 2.4.21, socket.c:
    asmlinkage long sys_sendto(int fd, void * buff, size_t len, unsigned flags,
                   struct sockaddr *addr, int addr_len)
    {
         struct socket *sock;
         char address[MAX_SOCK_ADDR];
         int err;
         struct msghdr msg;
         struct iovec iov;

         sock = sockfd_lookup(fd, &err);
         if (!sock)
              goto out;
         iov.iov_base = buff;
         iov.iov_len = len;
         msg.msg_name = NULL;
         msg.msg_iov = &iov;
         msg.msg_iovlen = 1;
         msg.msg_control = NULL;
         msg.msg_controllen = 0;
         msg.msg_namelen = 0;
         if (addr) {
              err = move_addr_to_kernel(addr, addr_len, address);
              if (err < 0)
                   goto out_put;
              msg.msg_name = address;
              msg.msg_namelen = addr_len;
         }
         if (sock->file->f_flags & O_NONBLOCK)
              flags |= MSG_DONTWAIT;
         msg.msg_flags = flags;
         err = sock_sendmsg(sock, &msg, len);

    out_put:
         sockfd_put(sock);
    out:
         return err;
    }
    int sock_sendmsg(struct socket *sock, struct msghdr *msg, int size)
    {
         int err;
         struct scm_cookie scm;

         err = scm_send(sock, msg, &scm);
         if (err >= 0) {
              err = sock->ops->sendmsg(sock, msg, size, &scm);
              scm_destroy(&scm);
         }
         return err;
    }

  • Java.lang.OutOfMemoryError: Cannot allocate memory in tsStartJavaThread

    Running Java Application on Web logic managed server fails with following error:
    java.lang.OutOfMemoryError: Cannot allocate memory in tsStartJavaThread (lifecycle.c:1096).
    Java heap 1G reserved, 741076K committed
    Paged memory=26548K/3145712K.
    Your Java heap size might be set too high.
    Try to reduce the Java heap size using -Xmx:<size> (e.g. "-Xmx128m").
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:640)
    at java.util.concurrent.ThreadPoolExecutor.addIfUnderCorePoolSize(ThreadPoolExecutor.java:703)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:652)
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
    Memory doesn't seem to be an issue, because -Xmx = 1GB is specified in the VM args.
    This application needs only 200 MB to run (obtained by running the application in Eclipse and
    checking the heap memory usage).
    Not sure what's causing this error. The application runs as a single (main) thread,
    and towards the end of the program multiple threads (they do JDBC tasks)
    are spawned. In this particular case, 3 threads were about to be launched when
    this error occurred. Please help in pointing out what the issue is and how it
    can be resolved.
    Here are further details on Jrockit version and VM arguments:
    Following JRockit is used on the Weblogic machine.
    $java -version
    java version "1.6.0_22"
    Java(TM) SE Runtime Environment (build 1.6.0_22-b04)
    Oracle JRockit(R) (build R28.1.1-14-139783-1.6.0_22-20101206-0241-linux-ia32, compiled mode)
    Following are the JVM arguments:
    java -jrockit -Xms512m -Xmx1024m -Xss10240k
    Thanks in advance.

    Noting that you are using a JRockit VM, and a rather old one at that...
    Thread stacks take native (non-heap) memory, so a large -Xmx can actually starve thread creation: on a 32-bit VM the Java heap, the VM itself, and every 10 MB stack from -Xss10240k all compete for the same address space. That is why the error output itself suggests reducing the heap size.
    You might want to verify the command line options you are using. You might also want to find out what happens if you use a smaller -Xss or -Xmx, or move to a 64-bit VM.
    And if all else fails you can use a thread pool rather than trying to create separate threads.
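    A minimal sketch of that thread-pool suggestion, using java.util.concurrent (available since Java 5, so it works on the poster's 1.6 VM); the Callable body is a hypothetical stand-in for the poster's JDBC work:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A fixed pool caps the number of live threads (and thus stack memory)
// no matter how many tasks are submitted.
public class PoolSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3); // at most 3 worker threads
        List<Future<String>> results = new ArrayList<Future<String>>();
        for (int i = 0; i < 9; i++) {
            final int task = i;
            // Hypothetical stand-in for a JDBC task
            results.add(pool.submit(new Callable<String>() {
                public String call() {
                    return "task " + task + " done";
                }
            }));
        }
        for (Future<String> f : results) {
            System.out.println(f.get()); // waits for each task, in submission order
        }
        pool.shutdown();
    }
}
```

    Nine tasks run here, but only three threads (and three stacks) ever exist at once.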

  • TNS:cannot allocate memory - is there a limit to the number of databases on one box

    Hi,
    Server Spec:
    Windows 2008 R2 x64 with all updates applied
    Oracle 11.2.0.1 Standard Edition with Patch 16 applied (which matches all of the live databases version)
    10GB RAM
    I have a standby server which has all my standby databases on it
    I have recently added about 8 new standby databases, taking the total to 102, and since then I keep getting an intermittent TNS error: TNS-12531: TNS:cannot allocate memory
    I have scripts that loop through all the databases one at a time, they start the standby database, apply new archived redo logs and shut the database down
    This has worked perfectly for years, but I started getting the issues after adding a few more databases; it fails about 1 in 3 times, so it is not consistent.
    With all of the databases just idling the server is using 6GB out of the 10GB available RAM so there is plenty of free memory
    If I remove half a dozen databases from the scripts it starts to work again.
    I created 2 new listeners and split the databases so half run on one listener and half on the other, it still has intermittent failures
    Server has been restarted many times and sometimes will run through a couple of times before failing and sometimes will fail on the first run through
    Failures are normally towards the end of the script, ie. has applied the logs to 95 databases and fails on the 96th!
    I've reordered the databases in the script and it always fails towards the end, not on a specific database
    I have also added restarts to the listeners during the scripts which made no difference
    I have now amended my scripts so the actual Windows service is not running for any database; the script does a net start before applying the logs and a net stop afterwards. It now runs through without a problem, but I would like to get to the bottom of the issue and keep the Windows services running, as starting and stopping them adds time to the job of applying the logs across all the databases
    Thanks
    Robert

    Osama_mustafa wrote:
    Just as a note, don't add this value by yourself; you have to depend on the document.
    I'm not sure I understand? There are no real suggestions in the doc, only one example, TNS-12531 On Windows 64-bit [ID 1384337.1]:
    You need to check the third value for the registry entry \HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems\:
    E.g.: SharedSection=1024,20480,1024
    The third argument is the maximum amount of heap memory allocated to non-interactive desktops. Increase this third value and check when the TNS error disappears. There is no optimum value; it varies from one system to another.
    A restart of the system may be required for the changes to take effect.
    I have set the third number to 2048 and it has been running fine since.

  • Mmap: Cannot allocate memory

    Hi,
    I am running Berkeley DB XML 2.4.16 on Red Hat Enterprise Linux AS release 3 (Taroon Update 6).
    I have a small test program (Java) that configures the environment with the following.
    config.setAllowCreate(true);
    config.setInitializeCache(true);
    config.setTransactional(true);
    config.setInitializeLocking(true);
    config.setInitializeLogging(true);
    config.setVerbose(VerboseConfig.DEADLOCK, true);
    config.setVerbose(VerboseConfig.RECOVERY, true);
    config.setVerbose(VerboseConfig.WAITSFOR, true);
    config.setVerbose(VerboseConfig.REGISTER, true);
    config.setRegister(true);
    config.setRunRecovery(true);
    The environment data directory has a DB_CONFIG with the following single line.
    set_cachesize 1 524288000 3
    When run, it produces the following error.
    mmap: Cannot allocate memory
    PANIC: Cannot allocate memory
    unable to join the environment
    What is somewhat bizarre is that I also have an equivalent C++ application that has no problem opening the environment with even a 2.5GB cache size.
    I have also observed this on other flavors of Linux and also with earlier versions of Berkeley DB XML.
    Any insight would be helpful.
    Thanks,
    Neil
    Edited by: vannote on Apr 15, 2009 3:21 PM
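    For reference, decoding that DB_CONFIG line shows how much memory the JVM has to map. (Reading this as a 32-bit address-space limit is my guess: Linux JVMs of that era were commonly 32-bit, while the C++ build may not have been, which would explain why it can map 2.5GB.)

```java
// Hedged arithmetic: "set_cachesize 1 524288000 3" means
// gbytes=1, bytes=524288000, split across ncache=3 regions.
public class CacheSize {
    public static void main(String[] args) {
        long gbytes = 1L;
        long bytes = 524288000L;
        long ncache = 3L;

        long total = gbytes * 1024L * 1024L * 1024L + bytes; // total cache in bytes
        long perRegion = total / ncache;                     // each region gets mmap'd

        System.out.println("Total cache: " + total + " bytes (" + (total >> 20) + " MB)");
        System.out.println("Per region: " + perRegion + " bytes");
    }
}
```

    Roughly 1.5 GB of cache mappings on top of the Java heap and the VM itself is already close to what a 32-bit process can hold, which would line up with the ~1.8 GB ceiling described in the follow-up below.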

    Hi Lauren,
    I can confirm that the DB_CONFIG is in the environment directory and it is indeed being read as the cache files created in the environment directory do reflect the DB_CONFIG values on a successful open.
    If I set the cache size values in the DB_CONFIG to ~1.8GB or less, the Java version of the application has no problem opening the environment. Any value larger than that, it fails with the previously mentioned mmap error.
    If I remove the DB_CONFIG and explicitly set the cache size/count in code I observe the same behavior.
    Thanks,
    Neil
    Additional Information...
    Yes, I noticed that I contradicted myself and am able to get a larger cache size today than yesterday (1.8 vs. less than 1.5). I realized my test machine is heavily resource-deprived. This would not be the case on the target machine, which has 16GB of RAM and plenty of swap space available, and yet that machine can't go beyond a ~1GB cache size.
    But I guess the constrained resources shouldn't influence a file mapping on a machine with plenty of disk space, as the native version of the application confirms by behaving as expected.
    Edited by: vannote on Apr 16, 2009 8:01 AM - Additional Information
    Edited by: vannote on Apr 16, 2009 8:41 AM - Spelling
