Resizing a block of memory

I need to be able to resize a block of memory. Is there a function in Java similar to realloc in C?

Java memory management is completely different from C...
no untyped memory allocation is possible, so there's nothing
quite like malloc or realloc.
The closest equivalent is to use byte arrays... and when
you need to 'resize' such an array, to allocate a new (bigger)
byte array and then copy the contents from the existing one, usually using
System.arraycopy(). So, something like:
static byte[] realloc(byte[] orig, int newSize) {
    if (newSize <= orig.length) {
        return orig;
    }
    byte[] newArray = new byte[newSize];
    System.arraycopy(orig, 0, newArray, 0, orig.length);
    return newArray;
}
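For what it's worth, since Java 6 the standard library does this allocate-and-copy in one call, java.util.Arrays.copyOf. A minimal sketch:

```java
import java.util.Arrays;

public class ReallocDemo {
    public static void main(String[] args) {
        byte[] orig = {1, 2, 3};
        // "Grow" the block: allocates a new 5-element array,
        // copies the old contents, and zero-fills the tail.
        byte[] bigger = Arrays.copyOf(orig, 5);
        System.out.println(bigger.length); // 5
        System.out.println(bigger[4]);     // 0
    }
}
```

Note that, unlike the snippet above, Arrays.copyOf will also truncate when the new size is smaller than the original.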

Similar Messages

  • Can you prevent a block of memory from being swapped?

    I'm wondering if there's a way to "lock" a section of memory in Java so that it will not be written out to the swap file by the OS.
Let's say you're writing a client application that will connect to a server and allow the user (after authentication, of course) to download sensitive information to the client machine and view it. Because the client machine may be in an insecure environment (e.g., a public library terminal or a cybercafe), it is desirable to ensure that no trace of the information remains, even in the swap file. The data in question is largely text, so it wouldn't be more than a few hundred KB; therefore I can't imagine it would be a problem to reserve this memory (i.e., it's not as though we're starving out other applications running on the same client machine by reserving ALL physical memory).
    Basically I'm looking for something in the API that lets me say to the OS: "don't swap out this block of memory, no matter what". Is there a way to do this?
    I found this in the bug database but it doesn't sound entirely like what I'm looking for (and it's marked as "will not fix" anyway):
    http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4852696

    The simple answer is "no".
    This is a valid concern about the use of crypto libraries in Java.
Because Java has garbage collection, which can copy anything to any memory region during a collection, in practice crypto library writers choose not to bother with non-swapped memory.
Implementing non-swapped memory in some operating systems, such as Windows, is not a simple task: I believe that in Windows you need to write a device driver and a Windows service, like the "Protected Storage Service" that ships with Windows. "Protected Storage" is not widely used in Windows - it implements only the CryptEncryptData and CryptDecryptData APIs. The real crypto work is done in normal memory, as you can see if you check the CryptoAPI providers that come with Windows.
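A common partial mitigation - it limits exposure but, as the answer explains, cannot stop the GC or the OS from copying the data elsewhere - is to keep the secret in a mutable array instead of a String and wipe it as soon as it is no longer needed. A sketch:

```java
import java.util.Arrays;

public class WipeSecret {
    public static void main(String[] args) {
        // char[] rather than String: Strings are immutable and may
        // linger in the heap until collected; an array can be overwritten.
        char[] secret = {'s', '3', 'c', 'r', '3', 't'};
        try {
            // ... use the secret here ...
        } finally {
            Arrays.fill(secret, '\0'); // wipe this copy of the plaintext
        }
        System.out.println((int) secret[0]); // 0
    }
}
```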

  • Initialize block of memory in CIN

    Hello, I am having quite a time trying to initialize a block of memory and then hand a pointer to that memory back to LabVIEW.
    1. I pass an uInt8 Array of 3000 elements (6kB) into my LabVIEW CIN
    2. I prepare a block of memory in the CIN.
    3. I initialize the memory by loading a static table into the block of memory
    4. I WANT to move the initialized memory space into the uInt8 Array, starting at index[0]
The code below compiles, but when I run the CIN in LabVIEW, LabVIEW crashes.
    What am I doing wrong?
    Thank you for any help,
    Victor
    /* Typedefs */
    typedef struct {
        int32 dimSize;
        uInt8 globalData2[1];
    } TD1;
    typedef TD1 **TD1Hdl;
    typedef struct _LANGUAGE_ENTRY {
        UCHAR Language;
        PVOID pTable;
    } LANGUAGE_ENTRY;
    extern LTABLE LANGUAGE_TABLE;
    void *memPtr;
    extern "C" {
    MgErr CINRun(TD1Hdl globalData);
    }
    MgErr CINRun(TD1Hdl globalData)
    {
        uInt16 usSize;
        LANGUAGE_ENTRY MyLanguages[2];
        usSize = 5576; // GlobalDataSize
        memPtr = malloc(usSize);
        memset(memPtr, 0, sizeof(memPtr)); // note: this zeroes only sizeof(void*) bytes, not usSize
        MyLanguages[0].Language = ENGLISH;
        MyLanguages[0].pTable = (void *)&LANGUAGE_TABLE;
        MyLanguages[1].Language = 0;
        MyLanguages[1].pTable = 0;
        Initialize(MyLanguages, memPtr);
        MoveBlock(memPtr, (*globalData)->globalData2, usSize);
        return noErr;
    }

    Hi Victor,
    I think the reason LabVIEW is crashing is because it isn't happy with the pointer you are giving it or the way you are giving it. I personally am not that familiar with C code, but my suggestion would be to use the malloc function. The LabVIEW help on the Memory Manager (Fundamentals » Calling Code Written in Text-Based Programming Languages » Concepts » CINs » LabVIEW Manager Routines) is a great place to start. It describes the functions to use to pass pointers as parameters and links to the Memory Manager page which talks about Using Pointers for Dynamic Memory Allocation. You could also create the array and specify the memory to use for it rather than the other way around and then move the array there. I hope this helps!
    Stephanie

  • How to discard blocks from memory ?

    Hi,
    How to discard blocks from memory ?
    Thank you.

    To add to what has already been posted: the full Oracle version should be included on all posts, since many features and most bugs are version dependent.
    Oracle will flush dirty blocks from the buffer cache and overlay unchanged blocks as needed to fit new requests to the buffer cache. It also reuses space in the shared pool, so the question of which blocks you want to flush from memory and why can be important to getting the most appropriate answer to your questions.
    On a production instance it should be unnecessary to manually flush either the buffer cache or the shared pool as a normal part of operations.
    HTH -- Mark D Powell --

  • Getting UI resize event block my VI

    Hi,
    I am trying to resize a chart when the user resizes the VI's UI.
    Unfortunately, the option "Resize with the front panel" in the Edit menu is unavailable for my chart (maybe because it's in a tab panel). So one way to resize it is to catch the "UI resize" event and manage the size of my graph myself.
    But once I enter my event, I never seem to leave the event structure.
    Actually, just to stop my VI with the famous "stop button", I have to do the following:
    I launch my VI
    I resize my UI
    I press the stop button (its state changes but my VI doesn't stop)
    I have to resize my UI again for the stop button's state to be considered and my VI to stop.
    I am certainly doing something wrong ;-(
    My event structure is inside the main "while" loop.
    I did not use the option "block the front face until this event ends*"
    Thanks for all.
    *excuse for the translation, I use a french version.

    Hello,
    I imagine that in your code, the STOP button is wired directly to the while loop's stop condition. Having run the test myself, I see behavior similar to what you describe in your question.
    It is better to add a '"stop": Value Changed' event case, including the button's terminal inside that event case, and then wire the stop button to the loop's stop condition.
    I am attaching a VI saved under LV6.1 to illustrate this answer.
    Hope this helps !
    Julien
    Attachments:
    Gestion_d'événements.vi ‏27 KB

  • How do I copy a block of memory (image) in LV array

    Hi,
    I need to read and copy image data from memory into a LabVIEW array. I use the HWaccess VI (reading one byte at a time), but the loop time is too long. Is there any solution, such as a specific DLL, that performs a complete copy into a LabVIEW array?
    Thanks
    Francois.
    My system: P4/Win2000/LV 6.0/IMACVISION

    If you just want to copy an IMAQ Image into a 2D array, I recommend using the IMAQ ImageToArray.vi.
    If you have NI Vision installed, then you could also use the IMAQ GetImagePixelPtr.vi to return the memory location of the first pixel and reference the rest from there.
    Kyle V
    Applications Engineer
    National Instruments
    www.ni.com/ask

  • My phone was stolen. I have put it in Lost Mode, but I can see that whoever has it is changing the phone's name and can use it until I activate Lost Mode again. Can I do this permanently? How can I cancel iCloud backup for the stolen phone? It blocks iCloud memory.

    My phone was stolen. I have put it in Lost Mode, but I can see that whoever has it is changing the phone's name and can use it until I activate Lost Mode again. Can I do this permanently? How can I cancel iCloud backup for the stolen phone? I cannot delete the stolen phone's backup (it says the backup is in use), and this is keeping memory occupied in iCloud, so I cannot back up my other devices.

    http://support.apple.com/kb/ht2526

  • Resizing by blocks

    Hello,
    I'm creating a custom JComponent that has lots of lines, like a JList, but they are all part of the same component. I want to make it resizable, but in a way that it only grows or shrinks if there's space for each line to change its size, because the lines need to be all the same size. For example: if there are 10 lines, then resizing its container by up to 9 pixels wouldn't change anything.
    I've considered using a ComponentListener to watch when the JPanel (where this component is in) is resized, but the API says that's not a good idea.
    The problem is finding the way to make the getPreferredSize() and getMinimumSize() work, or finding an alternative solution. I'm using a GridBagLayout.

    I hope I understood what you were asking for. Weirdly enough, reading the source code of FlowLayout really helped me. Using a GridBagLayout is another option, but I don't like them too much.

    import javax.swing.*;
    import java.awt.*;

    /**
     * User: weebib
     * Date: 15 janv. 2005
     * Time: 17:39:25
     */
    public class MultiLine {
         private static class ListLayout implements LayoutManager {
              public void removeLayoutComponent(Component comp) {}
              public void addLayoutComponent(String name, Component comp) {}

              public void layoutContainer(Container target) {
                   synchronized (target.getTreeLock()) {
                        Insets insets = target.getInsets();
                        int nmembers = target.getComponentCount();
                        int x = insets.left;
                        int y = insets.top;
                        int height = target.getHeight() / nmembers;
                        for (int i = 0; i < nmembers; i++) {
                             Component m = target.getComponent(i);
                             if (m.isVisible()) {
                                  m.setBounds(x, y, target.getWidth(), height);
                                  y += height;
                             }
                        }
                   }
              }

              public Dimension preferredLayoutSize(Container target) {
                   synchronized (target.getTreeLock()) {
                        Dimension dim = new Dimension(0, 0);
                        int nmembers = target.getComponentCount();
                        for (int i = 0; i < nmembers; i++) {
                             Component m = target.getComponent(i);
                             if (m.isVisible()) {
                                  Dimension d = m.getPreferredSize();
                                  dim.height += d.height;
                                  dim.width = Math.max(dim.width, d.width);
                             }
                        }
                        Insets insets = target.getInsets();
                        dim.width += insets.left + insets.right;
                        dim.height += insets.top + insets.bottom;
                        return dim;
                   }
              }

              public Dimension minimumLayoutSize(Container target) {
                   synchronized (target.getTreeLock()) {
                        Dimension dim = new Dimension(0, 0);
                        int nmembers = target.getComponentCount();
                        for (int i = 0; i < nmembers; i++) {
                             Component m = target.getComponent(i);
                             if (m.isVisible()) {
                                  Dimension d = m.getMinimumSize();
                                  dim.height += d.height;
                                  dim.width = Math.max(dim.width, d.width);
                             }
                        }
                        Insets insets = target.getInsets();
                        dim.width += insets.left + insets.right;
                        dim.height += insets.top + insets.bottom;
                        return dim;
                   }
              }
         }

         public static void main(String[] args) {
              try {
                   UIManager.setLookAndFeel(UIManager.getCrossPlatformLookAndFeelClassName());
              } catch (Exception e) {
                   e.printStackTrace();
              }
              final JFrame frame = new JFrame(MultiLine.class.getName());
              frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
              JPanel panel = new JPanel(new ListLayout());
              for (int i = 0; i < 3; i++) {
                   panel.add(new JLabel("label"));
                   panel.add(new JTextField(20));
                   JPanel subPanel = new JPanel();
                   subPanel.setOpaque(true);
                   subPanel.setBackground(Color.RED);
                   panel.add(subPanel);
                   panel.add(new JTextArea());
              }
              frame.setContentPane(new JScrollPane(panel));
              SwingUtilities.invokeLater(new Runnable() {
                   public void run() {
                        frame.setSize(400, 300);
                        frame.show();
                   }
              });
         }
    }

  • My scrollbars disappear when resizing my LabView8.6 Block Diagram window

    My design can't be seen in its entirety within the Block Diagram window, so I need to use the vertical and horizontal scrollbars to navigate the design.  The scrollbars disappear if I resize the Block Diagram window past "some point".  If I shrink the window very small, the scrollbars reappear.  Other programs I've generated don't seem to have this problem.  Any suggestions?

    Hi,
    Check the scrollbar settings: right-click on the vertical/horizontal scrollbars and change the settings. If the problem still exists, you can send me your VI and I'll have a look to find the exact problem.
    regards
    sunil 

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, and we are working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error:
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I have done some Google and found that we need to add something in Essbase.cfg file like below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the data below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my question is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file when the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500  CALCLOCKBLOCKDEFAULT 200  CALCLOCKBLOCKLOW 50 
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    The support doc is saying to change your config file so that those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't then it is worth investigating if this is simply due to standard growth or a recent change that has made an unexpected significant impact.

  • ICMP Timeout Alarm due to TCP Protocol Memory Allocation Failure ?

    Hello Experts ,
      >> Device uptime suggests there was no reboot
    ABCSwitch uptime is 28 weeks, 13 hours, 50 minutes
    System returned to ROM by power-on
    System restarted at 13:09:45 UTC Mon Aug 5 2013
    System image file is "flash:c2950-i6k2l2q4-mz.121-22.EA12.bin"
    >> But observed logs mentioning Memory Allocation Failure for TCP Protocol Process ( Process ID 43) due to Memory Fragmentation
    003943: Feb 18 02:14:27.393 UTC: %SYS-2-MALLOCFAIL: Memory allocation of 36000 bytes failed from 0x801E876C, alignment 0
    Pool: Processor Free: 120384 Cause: Memory fragmentation
    Alternate Pool: I/O Free: 682800 Cause: Memory fragmentation
    -Process= "TCP Protocols", ipl= 0, pid= 43
    -Traceback= 801C422C 801C9ED0 801C5264 801E8774 801E4CDC 801D9A8C 8022E324 8022E4BC
    003944: Feb 18 02:14:27.397 UTC: %SYS-2-CFORKMEM: Process creation of TCP Command failed (no memory).
    -Process= "TCP Protocols", ipl= 0, pid= 43
    -Traceback= 801E4D54 801D9A8C 8022E324 8022E4BC
    Cisco documentation for troubleshooting memory issues on Cisco IOS 12.1 (http://www.cisco.com/c/en/us/support/docs/ios-nx-os-software/ios-software-releases-121-mainline/6507-mallocfail.html#tshoot4) suggests the TCP Protocols process could not be started because memory was fragmented:
    Memory Fragmentation Problem or Bug
    This situation means that a process has consumed a large amount of processor memory and then released most or all of it, leaving fragments of memory still allocated either by this process, or by other processes that allocated memory during the problem. If the same event occurs several times, the memory may fragment into very small blocks, to the point where all processes requiring a larger block of memory cannot get the amount of memory that they need. This may affect router operation to the extent that you cannot connect to the router and get a prompt if the memory is badly fragmented.
    This problem is characterized by a low value in the "Largest" column (under 20,000 bytes) of the show memory command, but a sufficient value in the "Freed" column (1MB or more), or some other wide disparity between the two columns. This may happen when the router gets very low on memory, since there is no defragmentation routine in the IOS.
    If you suspect memory fragmentation, shut down some interfaces. This may free the fragmented blocks. If this works, the memory is behaving normally, and all you have to do is add more memory. If shutting down interfaces doesn't help, it may be a bug. The best course of action is to contact your Cisco support representative with the information you have collected.
    >>Further TCP -3- FORKFAIL logs were seen
    003945: Feb 18 02:14:27.401 UTC: %TCP-3-FORKFAIL: Failed to start a process to negotiate options.
    -Traceback= 8022E33C 8022E4BC
    003946: Feb 18 02:14:27.585 UTC: %TCP-3-FORKFAIL: Failed to start a process to negotiate options.
    -Traceback= 8022E33C 8022E4BC
    003947: Feb 18 02:14:27.761 UTC: %TCP-3-FORKFAIL: Failed to start a process to negotiate options.
    -Traceback= 8022E33C 8022E4BC
    003948: Feb 18 02:14:27.929 UTC: %TCP-3-FORKFAIL: Failed to start a process to negotiate options.
    -Traceback= 8022E33C 8022E4BC
    003949: Feb 18 02:14:29.149 UTC: %TCP-3-FORKFAIL: Failed to start a process to negotiate options.
    -Traceback= 8022E33C 8022E4BC
    The error explanation in the Cisco documentation (http://www.cisco.com/c/en/us/td/docs/ios/12_2sx/system/messages/122sxsms/sm2sx09.html#wp1022051) suggests that TCP handles from a client could not be created or initialized:
    Error Message %TCP-3-FORKFAIL: Failed to start a process to negotiate options.
    Explanation The system failed to create a process to handle requests  from a client. This condition could be caused by insufficient  memory.
    Recommended Action Reduce other system activity to ease  memory demands.
    But I am still not sure what the exact root cause is, because:
    1. The GET/GETNEXT/GETBULK messages from the SNMP manager (here, IBM Tivoli Netcool) use the default SNMP port 161, which is UDP, not TCP.
    2. If it is an ICMP polling failure from IBM Tivoli Netcool: ICMP is protocol number 1 in the Internet layer of the TCP/IP protocol suite, and TCP is protocol number 6 in the transport layer of the TCP/IP protocol suite.
    So I am still not sure how a TCP protocol process failure could have caused an ICMP timeout. Please help!
    Could you please help me on what TCP Protocol Process handles in a Cisco Switch ?
    Regards,
    Anup

  • Memory leak in string class

    We have developed our application in a Solaris 10 environment. While running Purify on it, it shows a leak in the string class. This leak is incremental, so the process size keeps increasing. If we replace string with a char array, the leak disappears and the process size becomes stable.
    Following is the snapshot of the memory leak stack reported by Purify:
    MLK: 4505 bytes leaked in 85 blocks
    * This memory was allocated from:
    malloc [rtlib.o]
    operator new(unsigned) [libCrun.so.1]
    void*operator new(unsigned) [rtlib.o]
    __rwstd::__string_ref<char,std::char_traits<char>,std::allocator<char> >*std::basic_string<char,std::char_traits<char>,std::allocator<char> >::__getRep(unsigned,unsigned) [libCstd.so.1]
    char*std::basic_string<char,std::char_traits<char>,std::allocator<char> >::replace(unsigned,unsigned,const char*,unsigned,unsigned,unsigned) [libCstd.so.1]
    std::basic_string<char,std::char_traits<char>,std::allocator<char> >&std::basic_string<char,std::char_traits<char>,std::allocator<char> >::operator=(const char*) [libCstd.so.1]
    Has anyone faced this problem earlier or is there any patch available for this?

    Over time, we have found and fixed memory leaks in the C++ runtime libraries.
    Get the latest patches for the compiler you are using. (You didn't say which one.) You can find all patches here:
    [http://developers.sun.com/sunstudio/downloads/patches/index.jsp]
    Also get the latest Solaris patch for the C++ runtime libraries, listed on the same web page.
    If that doesn't fix the problem, please file a bug report at
    [http://bugs.sun.com]
    with a test case that can be compiled and run to demonstrate the problem.

  • AVL tree in shared memory

    Can anyone help? I am not being deliberately stupid. I can create an AVL tree, and I can create a shared memory area, but how do I put the AVL tree into the shared memory area?

    You are probably creating the nodes of the tree using malloc.
    Malloc is just handing you an area of memory. You already
    have a block of memory, which is the shared memory block.
    If you go to the part of the AVL program that you already have
    and replace the malloc calls with calls to a routine that you
    will write to give you blocks of your shared memory then you
    are golden.
    However, note that you may also have other details to take
    care of, such as making sure that the programs with which you
    are sharing the memory block do not try to read or write the
    block at the same time as someone else is using it.
    You can use a mutex, semaphore, or readers/writer lock for this
    purpose.
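The suballocation routine the answer describes can be sketched as a simple bump allocator (illustrative code, shown here in Java; in C the block would be the mapped shared segment). It hands out offsets rather than pointers, which is what you would want to store in the AVL nodes anyway, since different processes may map the shared segment at different addresses:

```java
/** Minimal bump allocator over one pre-allocated block: the kind of
 *  routine the answer suggests writing in place of malloc. Illustrative
 *  only - real shared memory would come from shmget/mmap, and there is
 *  no free() here. */
public class BumpAllocator {
    private final byte[] block; // stands in for the shared-memory segment
    private int next = 0;       // offset of the next unused byte

    BumpAllocator(int size) { this.block = new byte[size]; }

    /** Returns the offset of a fresh region of `size` bytes, or -1 when full. */
    int alloc(int size) {
        if (next + size > block.length) return -1;
        int offset = next;
        next += size;
        return offset;
    }

    public static void main(String[] args) {
        BumpAllocator shm = new BumpAllocator(64);
        System.out.println(shm.alloc(24)); // 0   (first "node")
        System.out.println(shm.alloc(24)); // 24  (second "node")
        System.out.println(shm.alloc(24)); // -1  (segment exhausted)
    }
}
```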

  • Releasing Memory

    The following observations were made in the Windows NT Task Manager for Oracle.exe:
    Initially, the Oracle.exe process was at CPU 00 and MEM USAGE 52604 K.
    Then an INSERT is issued for about 50,000 records:
    Oracle.exe goes to CPU 99 and MEM USAGE grows continuously from 52604 K to 106644 K.
    After COMMIT:
    Oracle.exe is at CPU 00 and MEM USAGE remains 106644 K.
    Again an INSERT is issued for 50,000 records:
    Oracle.exe goes to CPU 99 and MEM USAGE grows from 106644 K to 160684 K.
    After COMMIT:
    Oracle.exe is at CPU 00 and MEM USAGE remains at 160684 K.
    Can this 160684 K of memory be released without restarting the database?

    Assuming you've configured your SGA and PGA appropriately for your machine, what's the concern? Oracle is going to cache the most recently modified blocks; the memory will be allocated to another Oracle process should it be needed.
    Justin

  • RE: (forte-users) memory management

    Brenda,
    When a partition starts, it reserves the MinimumAllocation. Within this
    memory space, objects are created and more and more of this memory is
    actually used. When objects are no longer referenced, they remain in memory
    and the space they occupy remains unusable.
    When the amount of free memory drops below a certain point, the garbage
    collector kicks in, which will free the space occupied by all objects that
    are no longer referenced.
    If garbage collecting can't free enough memory to hold the additional data
    loaded into memory, then the partition will request another block of memory,
    equal to the IncrementAllocation size. The partition will try to stay within
    this new boundary by garbage collecting every time the available part of this
    memory drops below a certain point. If the partition can't free enough
    memory, it will again request another block of memory.
    This process repeats itself until the partition reaches MaximumAllocation.
    If that amount of memory still isn't enough, then the partition crashes.
    Instrument ActivePages shows the memory reserved by the partition.
    AllocatedPages shows the part of that memory actually used.
    AvailablePages shows the part of that memory which is free.
    Note that once memory is requested from the operating system, it's never
    released again. Within this memory owned by the partition, the part actually
    used will always be smaller. But this part will increase steadily, until the
    garbage collector is started and a part of it is freed again.
    There are some settings that determine when the garbage collector is
    started, but I'm not sure which ones they are.
    The garbage collector can be started from TOOL using
    "task.Part.OperatingSystem.RecoverMemory()", but I'm not sure if that will
    always actually start the garbage collector.
    If you track AllocatedPages of a partition, it's always growing, even if the
    partition isn't doing anything. I don't know why.
    If you add AllocatedPages and AvailablePages, you should get the value of
    ActivePages, but you won't. You always get a lower number and sometimes even
    considerably lower. I don't know why.
    Pascal Rottier
    Atos Origin Nederland (BAS/West End User Computing)
    Tel. +31 (0)10-2661223
    Fax. +31 (0)10-2661199
    E-mail: Pascal.Rottiernl.origin-it.com
    ++++++++++++++++++++++++++++
    Philip Morris (Afd. MIS)
    Tel. +31 (0)164-295149
    Fax. +31 (0)164-294444
    E-mail: Rottier.Pascalpmintl.ch
    -----Original Message-----
    From: Brenda Cumming [mailto:brenda_cummingtranscanada.com]
    Sent: Tuesday, January 23, 2001 6:40 PM
    To: Forte User group
    Subject: (forte-users) memory management
    I have been reading up on memory management and the
    OperatingSystemAgent, and could use some clarification...
    When a partition is brought online, is the ActivePages value set to the
    MinimumAllocation value, and expanded as required?
    And what is the difference between the ExpandAtPercent and
    ContractAtPercent functions?
    Thanks in advance,
    Brenda
    For the archives, go to: http://lists.xpedior.com/forte-users and use
    the login: forte and the password: archive. To unsubscribe, send in a new
    email the word: 'Unsubscribe' to: forte-users-requestlists.xpedior.com
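    The partition behaviour Pascal describes maps closely onto the JVM heap. As a
    hedged Java analogue (not Forte TOOL), requesting a collection from code is
    similar in spirit to `task.Part.OperatingSystem.RecoverMemory()`, and like
    Forte, the JVM treats the request as a hint and may not collect immediately:

    ```java
    // Sketch: allocate garbage, drop the references, then hint the JVM
    // to run the garbage collector and observe free heap before/after.
    public class GcDemo {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            byte[][] junk = new byte[100][];
            for (int i = 0; i < junk.length; i++) {
                junk[i] = new byte[1024 * 1024];  // roughly 100 MB of short-lived data
            }
            junk = null;                           // objects are now unreferenced
            long before = rt.freeMemory();
            System.gc();                           // hint only; collection is not guaranteed
            long after = rt.freeMemory();
            System.out.println("free before GC hint: " + before);
            System.out.println("free after  GC hint: " + after);
        }
    }
    ```

    The printed numbers vary by JVM and heap settings, which mirrors Pascal's
    point that the collector's trigger conditions are tunable and opaque.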

    The Forte runtime is millions of lines of compiled C++ code, packaged into
    shared libraries (DLLs) which are a number of megabytes in size. The
    space is taken by the application binary, plus the loaded DLLs, plus
    whatever the current size of garbage-collected memory is.
    Forte allocates a garbage-collected heap that must be bigger than the size
    of the allocated objects. So if you start with an 8MB heap, you will always
    have at least 8MB allocated, no matter what objects you actually
    instantiate. See "Memory Issues" in the Forte System Management Guide.
    -tdc
    Tom Childers
    iPlanet Integration Server Engineering
    At 10:37 PM 6/11/01 +0200, [email protected] wrote:
    Hi all,
    I was wondering if anyone had any experience in deploying clients on NT
    concerning
    the memory use of these client apps.
    What is the influence of the various compiler options (optimum
    performance, memory use etc)?
    We see that a lot of memory is taken by the Forte client apps (as shown in
    the NT Task Manager) compared to other native Windows apps. For example, an
    executable of approx 4 MB takes up to 15 MB of memory. When I look at the
    objects retained in memory after garbage collection, these amount to about
    2 MB. Where do the other MBs come from?
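    The same reserved-versus-used bookkeeping can be observed on the JVM with
    standard `java.lang.Runtime` calls. A rough sketch, mapping the Forte
    instruments onto their JVM analogues (the exact numbers depend on the JVM
    and on heap flags such as -Xms/-Xmx):

    ```java
    // Sketch: totalMemory() is the heap reserved from the OS (like ActivePages),
    // totalMemory() - freeMemory() is the part holding live objects (like
    // AllocatedPages). Reserved heap normally stays at least at the initial
    // size, which is why a process can look large in the Task Manager even
    // when few live objects remain.
    public class HeapReport {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long reserved = rt.totalMemory();        // heap obtained from the OS
            long used = reserved - rt.freeMemory();  // portion occupied by objects
            long limit = rt.maxMemory();             // ceiling the heap may grow to
            System.out.printf("reserved=%d used=%d limit=%d%n", reserved, used, limit);
        }
    }
    ```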
