Memory allocation and release

Hi,
I'm having some doubts about memory issues, like allocation, release, EEPROM and RAM:
Question 1:
private void method1() {
    byte[] a = new byte[10];
    byte[] b = JCSystem.makeTransientByteArray(...);
    byte c;
}
When will some memory be allocated to variables a, b and c, and when will that memory be released?
Question 2:
The JCRE (until 2.2 at least) doesn't have a garbage collector, but if the card itself has that mechanism, will the applet automatically use it?
Thanks in advance!
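
For what it's worth, Java Card 2.2 does expose the card's optional object-deletion mechanism through JCSystem. A minimal, hedged sketch, assuming the feature is present on the card; the class and field names are only illustrative:

import javacard.framework.JCSystem;

public class GcHint {
    private byte[] big = new byte[100];   // persistent object we no longer need

    void releaseBuffer() {
        big = null;                                   // make the array unreachable
        if (JCSystem.isObjectDeletionSupported()) {   // object deletion is an optional feature
            JCSystem.requestObjectDeletion();         // ask the JCRE to reclaim unreachable objects
        }
    }
}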

It's not a question of how many EEPROM writes are done each day/hour/minute/second. It's a question of whether the data must be saved across sessions. RAM is mainly used for intermediate computations & session data. EEPROM is used to store persistent info (user info, credit, phonebook, etc...).
RAM is also a good way to optimize processing time. If you have to manipulate a lot of persistent data during an APDU, it's a good idea to copy everything in a "cache" (transient buffer) and/or local variables, do all of your processing on the cached values, and then perform the persistent write at the end of the command.
As to your last question on how much RAM is acceptable, it depends on the context. If you know that your applet will be alone on the card, feel free to use as much as the platform can give you. If not, try to be reasonable. Cryptography-intensive applets generally use a lot of RAM to store intermediate computation results.
From personal experience, I've written very simple applets that needed about 20 transient bytes, and complex ones that needed up to 1500 transient bytes. If you really need to set a limit, 200 bytes is already a considerable amount of transient space and should be more than enough for most applets. But then again, my guess is as good as any.
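
To make the caching idea above concrete, here is a minimal Java Card sketch; the class name, field names and the 32-byte record size are just illustrative:

import javacard.framework.JCSystem;
import javacard.framework.Util;

public class CacheDemo {
    private byte[] record;    // persistent, lives in EEPROM
    private byte[] scratch;   // transient, lives in RAM, cleared on deselect

    CacheDemo() {
        record = new byte[32];                                // allocated in EEPROM
        scratch = JCSystem.makeTransientByteArray((short) 32,
                JCSystem.CLEAR_ON_DESELECT);                  // allocated in RAM
    }

    void process() {
        // 1. Copy the persistent data into the RAM cache.
        Util.arrayCopyNonAtomic(record, (short) 0, scratch, (short) 0, (short) 32);
        // 2. Do all intermediate work on 'scratch' (cheap RAM writes).
        // 3. Perform one persistent (EEPROM) write at the end of the command.
        Util.arrayCopy(scratch, (short) 0, record, (short) 0, (short) 32);
    }
}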

Similar Messages

  • External memory allocation and management using C / LabVIEW 8.20 poor scalability

    Hi,
    I have multiple C functions that I need to interface. I need
    to support numeric scalars, strings and booleans and 1-4 dimensional
    arrays of these. The programming problem I am trying to avoid is that I have multiple different functions in my DLLs that all take as input or return all of these datatypes. I can create a polymorphic interface for all these functions, but I end up having about 100 interface VIs for each of my C functions. This was still somehow acceptable in LabVIEW 8.0, but in LabVIEW 8.2 all these polymorphic VIs in my LVOOP project get read into memory at project open. So I have close to 1000 VIs read into memory whenever I open my project. It now takes about ten minutes to open the project, and some 150 MB of memory is consumed instantly. I still need to expand my C interface library, and LabVIEW simply doesn't scale up to meet the needs of my project anymore.
    I currently reserve my LabVIEW datatypes using the DSNewHandle and DSNewPtr functions. I then initialize the allocated memory blocks correctly and return the handles to LabVIEW. The LabVIEW compiler interprets the Call Library Function Node terminals of my memory block as a specific data type.
    So what I thought was the following. I don't want the LabVIEW compiler to interpret the data type at compile time. What I want to do is return a handle to the memory structure together with some metadata describing the data type. Then all of my many functions would return this kind of handle; let's call it a data handle. I can later convert this handle into a real datatype, either by typecasting it somehow or by passing it back to C code and expecting a certain type as a return. This way I can reduce the number of needed interface VIs to close to 100, which is still acceptable (i.e. LabVIEW 8.2 doesn't freeze).
    So I practically need functionality similar to what a variant provides. I cannot use variants, since I need to avoid making memory copies, and when I convert to and from a variant my memory consumption increases threefold. I handle arrays that consume almost all available memory, and I cannot accept that memory is consumed ineffectively.
    The question is: can I use the DSNewPtr and DSNewHandle functions to reserve a memory block but not return a LabVIEW structure of that size? Does LabVIEW garbage collection automatically decide to dispose of my block if I don't return it from my C code immediately but only later, at the next call to C code? Can I typecast a 1D U8 array to an array of any dimensionality and any numeric data type without a memory copy (i.e. does typecast work the way it works in C)?
    If I cannot find a solution to this LabVIEW 8.20 scalability issue, I have to really consider transferring our project from LabVIEW to some other development environment like C++ or one of the .NET languages.
    Regards,
    Tomi
    Tomi Maila

    I have to answer myself since nobody else has answered yet. I came up with one solution that relies on LabVIEW queues. Queues of different types are all referred to the same way and can also be typecast from one type to another. This means that one can use single-element queues as a kind of variant data type, which is quite safe. However, one copy of the data is made when you enqueue and dequeue the data.
    See the attached image for details.
    Tomi Maila
    Attachments:
    variant.PNG 9 KB

  • [iphone sdk] memory allocations and application sandboxing

    folks,
    does the OS automatically de-allocate any memory allocated when my app exits? The reason I ask is that the phone seems to get slower and slower over time, with more crashes. A hard restart seems to fix the problem.
    i'm guessing that it is because i'm not cleaning things up on exit or something, but maybe there is something else wrong.
    john

    The academic answer is: it shouldn't matter how much memory you leak in your app after the app has been closed. I can't speak to how the device functions because I can't test on one yet! But it's UNIX under the hood, and that means each process is assigned its own address space. Any memory allocated to a process is completely reclaimed when the process exits.
    I'm not sure what changes Apple made to the VM kernel subsystem for the iPhone. Unix is already tried and tested in this arena -- so if it's the default Darwin VM I would be very surprised if this is a bug. But since this is embedded they may have added some "shortcuts" for performance and efficiency... hard to say. Since you have the device, are you able to do any system-level diagnostics? Does the device lose free memory the more you start/stop your app?
    Also -- the device has 128MB of RAM. The 8 or 16GB is storage, which isn't used for RAM. The specs are hard to find, but I think I found the answer through Google on the amount of RAM in the iPhone and iPod Touch.
    Cheers,
    George

  • Short term memory allocator and Cache memory is out of memory

    Hi,
    I have three NW 6.5 SP8 servers in production. One of these, the one which holds Pervasive SQL 9.7, began to show the following errors:
    Cache memory allocator out of available memory.
    Short term memory allocator is out of memory.
    360396 attempts to get more memory failed.
    request size in bytes 1048576 from Module SERVER.NLM
    I show here segstats.txt:
    *** Memory Pool Configuration for : DBASE_SERVER
    Time and date : 02:42:36 AM 12/02/2012
    Server version : NetWare 6.5 Support Pack 8
    Server uptime : 11d 04h 35m 28s
    SEG.NLM uptime : 0d 00h 01m 17s
    SEG.NLM version : v2.00.17
    Original Memory : 4,292,812,800 bytes (4.00 GB)
    ESM Memory : 805,302,272 bytes (768.0 MB)
    0xFFFFFFFF --------------------------------------------------------------
    | Kernel Reserved Space |
    | |
    | Size : 180,355,071 bytes (172.0 MB) |
    | |
    0xF5400000 --------------------------------------------------------------
    | User Address Space (L!=P) |
    | |
    | User Pool Size : 671,088,640 bytes (640.0 MB) |
    | High Water Mark : 559,710,208 bytes (533.8 MB) |
    | PM Pages In Use : 1,855,488 bytes (1.8 MB) |
    | |
    0xCD400000 --------------------------------------------------------------
    | Virtual Memory Address Space (L!=P) |
    | |
    | VM Address Space : 2,369,781,760 bytes (2.21 GB) |
    | Available : 801,435,648 bytes (764.3 MB) |
    | Total VM Pages : 800,870,400 bytes (763.8 MB) |
    | Free Clean VM : 785,563,648 bytes (749.2 MB) |
    | Free Cache VM : 15,306,752 bytes (14.6 MB) |
    | Total LP Pages : 0 bytes (0 KB) |
    | Free Clean LP : 0 bytes (0 KB) |
    | Free Cache LP : 0 bytes (0 KB) |
    | Free Dirty : 0 bytes (0 KB) |
    | NLM Memory In Use : 1,767,256,064 bytes (1.65 GB) |
    | NLM/VM Memory : 1,751,785,472 bytes (1.63 GB) |
    | Largest Segment : 2,097,152 bytes (2.0 MB) |
    | Lowest Kernel Page: 0 bytes (0 KB) |
    | : [0x00000000] |
    | High Water Mark : 2,243,096,576 bytes (2.09 GB) |
    | Alloc Failures : 370,804 |
    | |
    0x40000000 --------------------------------------------------------------
    | File System Address Space (L==P or L!=P) |
    | |
    | FS Address Space : 1,067,290,624 bytes (1017.8 MB) |
    | Available : 108,978,176 bytes (103.9 MB) |
    | Largest Segment : 3,362,816 bytes (3.2 MB) |
    | |
    | NSS Memory (85%) : 613,683,200 bytes (585.3 MB) |
    | NSS (avail cache) : 610,455,552 bytes (582.2 MB) |
    | |
    0x00627000 --------------------------------------------------------------
    | DOS / SERVER.NLM |
    | |
    | Size : 6,451,200 bytes (6.2 MB) |
    | |
    0x00000000 --------------------------------------------------------------
    Total NLMs loaded on the server: 307
    Top 20 Memory Consuming NLMs
    NLM Name Version Date Total NLM Memory
    ================================================== =============
    1. NWMKDE.NLM 9.70.07 Nov 14, 2008 813,035,623 bytes (775.4 MB)
    2. SERVER.NLM 5.70.08 Oct 3, 2008 467,216,096 bytes (445.6 MB)
    3. NSS.NLM 3.27.02 Nov 11, 2009 203,168,848 bytes (193.8 MB)
    4. NCPL.NLM 3.02 May 6, 2009 41,854,837 bytes (39.9 MB)
    5. NWSQLMGR.NLM 9.70.07 Nov 14, 2008 39,309,132 bytes (37.5 MB)
    6. DS.NLM 20217.07 Jan 30, 2009 24,851,303 bytes (23.7 MB)
    7. APACHE2.NLM 2.00.63 Apr 25, 2008 19,863,493 bytes (18.9 MB)
    8. CIOS.NLM 1.60 Feb 12, 2008 10,569,567 bytes (10.1 MB)
    9. OWCIMOMD.NLM 3.02 Nov 27, 2007 9,318,616 bytes (8.9 MB)
    10. APRLIB.NLM 0.09.17 Apr 25, 2008 8,959,760 bytes (8.5 MB)
    11. APACHE2.NLM 2.00.63 Apr 25, 2008 7,702,469 bytes (7.3 MB)
    12. FATFS.NLM 1.24 Aug 27, 2007 5,859,413 bytes (5.6 MB)
    13. NWPA.NLM 3.21.02 Oct 29, 2008 4,990,686 bytes (4.8 MB)
    14. PKI.NLM 3.32 Aug 25, 2008 4,069,957 bytes (3.9 MB)
    15. WS2_32.NLM 6.24.01 Feb 14, 2008 3,623,596 bytes (3.5 MB)
    16. NWMPM100.NLM 9.70.07 Nov 14, 2008 3,597,747 bytes (3.4 MB)
    17. NWODBCEI.NLM 9.70.07 Nov 14, 2008 3,459,159 bytes (3.3 MB)
    18. PORTAL.NLM 4.03 Sep 22, 2008 3,404,576 bytes (3.2 MB)
    19. JVM.NLM 1.43 Oct 16, 2008 2,701,919 bytes (2.6 MB)
    20. NLDAP.NLM 20218.11 Jan 30, 2009 2,579,131 bytes (2.5 MB)
    Top 20 NLM - Memory Trends
    NLM Name Original Memory Current Change
    ================================================== =========
    1. NWMKDE.NLM 842,068,071 bytes 813,035,623 bytes -27.7 MB
    2. SERVER.NLM 463,894,240 bytes 467,216,096 bytes 3.2 MB
    3. NSS.NLM 203,168,848 bytes 203,168,848 bytes 0 KB
    4. NCPL.NLM 41,850,741 bytes 41,854,837 bytes 4 KB
    5. NWSQLMGR.NLM 39,092,044 bytes 39,309,132 bytes 212 KB
    6. DS.NLM 24,896,359 bytes 24,851,303 bytes -44 KB
    7. APACHE2.NLM 19,855,301 bytes 19,863,493 bytes 8 KB
    8. CIOS.NLM 10,569,567 bytes 10,569,567 bytes 0 KB
    9. OWCIMOMD.NLM 9,277,656 bytes 9,318,616 bytes 40 KB
    10. APRLIB.NLM 8,959,760 bytes 8,959,760 bytes 0 KB
    11. APACHE2.NLM 7,702,469 bytes 7,702,469 bytes 0 KB
    12. FATFS.NLM 5,859,413 bytes 5,859,413 bytes 0 KB
    13. NWPA.NLM 4,957,918 bytes 4,990,686 bytes 32 KB
    14. PKI.NLM 4,135,493 bytes 4,069,957 bytes -64 KB
    15. WS2_32.NLM 3,619,500 bytes 3,623,596 bytes 4 KB
    16. NWMPM100.NLM 3,597,747 bytes 3,597,747 bytes 0 KB
    17. NWODBCEI.NLM 3,459,159 bytes 3,459,159 bytes 0 KB
    18. PORTAL.NLM 3,400,480 bytes 3,404,576 bytes 4 KB
    19. JVM.NLM 2,701,919 bytes 2,701,919 bytes 0 KB
    20. NLDAP.NLM 2,505,403 bytes 2,579,131 bytes 72 KB
    Logical Memory Summary Information
    ================================================== ===============================
    File System Cache Information
    FS Cache Free : 4,591,616 bytes (4.4 MB)
    FS Cache Fragmented : 104,386,560 bytes (99.6 MB)
    FS Cache Largest Segment : 3,362,816 bytes (3.2 MB)
    Logical System Cache Information
    LS Cache Free : 0 bytes (0 KB)
    LS Cache Fragmented : 722,448,384 bytes (689.0 MB)
    LS OS Reserved Data : 333,455,360 bytes (318.0 MB)
    LS Cache Largest Segment : 2,097,152 bytes (2.0 MB)
    LS Cache Largest Position : 2DE00000
    Summary Statistics
    Total Address Space : 4,294,967,296 bytes (4.00 GB)
    Total Free : 4,591,616 bytes (4.4 MB)
    Total Fragmented : 826,834,944 bytes (788.5 MB)
    Highest Physical Address : CFE53000
    User Space : 671,088,640 bytes (640.0 MB)
    User Space (High Water Mark) : 559,710,208 bytes (533.8 MB)
    NLM Memory (High Water Mark) : 2,243,096,576 bytes (2.09 GB)
    Kernel Address Space In Use : 2,572,759,040 bytes (2.40 GB)
    Available Kernel Address Space : 43,929,600 bytes (41.9 MB)
    Memory Summary Screen (.ms)
    ================================================== ===============================
    KNOWN MEMORY Bytes Pages Bytes Pages
    Server: 3487425552 851422 Video: 8192 2
    Dos: 86000 20 Other: 131072 32
    FS CACHE KERNEL NLM MEMORY
    Original: 3483172864 850384 Code: 46854144 11439
    Current: 108978176 26606 Data: 27242496 6651
    Dirty: 0 0 Sh Code: 49152 12
    Largest seg: 3362816 821 Sh Data: 20480 5
    Non-Movable: 81920 20 Help: 172032 42
    Other: 4235538432 4292855635 Message: 1236992 302
    Avail NSS: 610439168 149033 Alloc L!=P: 1661366272 405607
    Movable: 8192 2 Alloc L==P: 14843904 3624
    Total: 1751785472 427682
    VM SYSTEM
    Free clean VM: 785563648 191788
    Free clean LP: 0 0
    Free cache VM: 15306752 3737
    Free cache LP: 0 0
    Free dirty: 0 0
    In use: 1855488 453
    Total: 801435648 195663
    Memory Configuration (set parameters)
    ================================================== ==============================
    Auto Tune Server Memory = ON
    File Cache Maximum Size = 1073741825
    File Service Memory Optimization = 1
    Logical Space Compression = 1
    Garbage Collection Interval (ON) = 299.9 seconds
    VM Garbage Collector Period (ON) = 300.0 seconds
    server -u<number> = 671088640
    NSS Configuration File:
    C:\NWSERVER\NSSSTART.CFG
    File does not exist,
    or is zero byte in size.
    DS Configuration File:
    SYS:\_NETWARE\_NDSDB.INI
    File does not exist,
    or is zero byte in size.
    TSAFS Memory Information/Configuration
    ================================================== ==============================
    Cache Memory Threshold : 1%
    Read Buffer Size : 65536 bytes
    Max Data Sets for Read Ahead : 2
    Read Threads Per Job : 4
    NSS Memory Information/Configuration
    ================================================== ==============================
    Current NSS Memory Settings
    Cache Balance Percentage : 85%
    Cache Memory Allocated : 585.3 MB
    Available Cache from NSS : 582.2 MB
    Current NSS Caching Percentages
    Buffer cache hit percentage : 63%
    Name Tree cache hit percentage : 94%
    File cache hit percentage : 99%
    NSS Flush Status: Not Flushed
    Server High/Low Water Mark Values
    ================================================== ==============================
    NLM Memory High Water Mark = 2,243,096,576 bytes
    File System High Water Mark = 443,108 bytes
    User Space Information:
    User Space High Water Mark = 559,710,208 bytes
    Committed Pages High Water Mark = 87 pages
    Mapped VM Pages High Water Mark = 3,875 pages
    Reserved Pages High Water Mark = 400,103 pages
    Swapped Pages High Water Mark = 3,785 pages
    Available Low Water Mark = 294,670,336
    ESM Memory High Water Mark = 173 pages
    It seems that SERVER.NLM is growing without limits. When that occurs, I get the errors mentioned above.
    Though NWMKDE seems to have grown, it remains steady around the values shown.
    I'm not brave enough to apply memcalc's recommended fixes, because the following line:
    set file cache maximum size=822083584
    returns an error saying the minimum value should be 1073741824.
    Can someone help me, because I'm completely blind here.
    Thanks in advance.
    Gabriel

    I take it this is primarily a database server, in which case it's OK that Btrieve is using so much memory? You wouldn't want this to be a general file server too. Is the memory error causing any actual problem?
    Server is asking for only 1mb, and due to fragmentation there is little free memory (actually 2mb left, which is a little odd, but neither here nor there).
    Also, let's see your bti.cfg, which is the Btrieve config file. I'll paste in below an ArcServe TID on Btrieve using excessive memory:
    Symptoms
    Btrieve was upgraded to version 8.5 during the installation of ARCserve r11.1. The cachesize in the BTI.cfg microkernel section is at 20 MB (20480). (Pervasive would like this setting set to 20% of the server memory or the database size, whichever is less.) The server will keep adding 20 additional megabytes of memory to the total amount of memory the server is using for database transactions after each backup job. This can be verified by performing the following at the server console:
    LOAD MONITOR
    Scroll down to System Resources under Available Options and hit enter.
    Scroll down to Alloc Memory (Bytes) and hit enter.
    Locate NWMKDE.nlm in the Resource Tags list.
    Sort by memory bytes and you will slowly see NWMKDE.NLM move to the top of the usage list. Unless the server is rebooted, the small memory allocations stay at the increased amount.
    Explanation
    Starting with Btrieve version 8.5 and higher, Pervasive has been working to make the Btrieve database more dynamic. They have created a two-tier memory allocation approach. The first level is controlled by the cache size setting in the BTI.cfg. If this becomes inadequate, the second level will be accessed. The default setting for the second level is 60% of the server's total memory.
    The following line in the BTI.cfg will control the second level of memory caching:
    MaxCacheUsage=60; default is 60% of memory.
    An example would be a server with 100 MB of memory and the following settings in sys:\system\bti.cfg:
    [microkernel]
    cachesize=20480
    MaxCacheUsage=60
    This will cause the nwmkde.nlm to use 20 MB (20480) of memory initially and grow up to 60 percent of the total server memory or 60 MB.
    Now you also have to throw Max worker threads into the mix. A setting of Max worker threads = 3 in the BTI.cfg > Btrieve Communications Manager section will also use server memory. It will use 1 MB per thread. In this example, 3 Megs of additional memory will be used. That will bring the total amount of memory used by nwmkde.nlm to 20 MB (20480) + 3 MB = 23 MB when the server is first booted. After running some backups, this number could go up to as high as 60 MB (60% of server memory) if the server dynamically requires it.
    Resolution
    The MaxCacheUsage=60 setting must be set down from this 60% number. Pervasive recommends setting this from 0 to 20. The server needs to be rebooted for this change to take effect.

  • How to check actual allocated and used memory for a Java process in Solaris?

    Hi,
    I'm testing performance for a Java application on Solaris 10, and I would like to know how to measure the actual memory allocated and used by the Java process.
    I'm setting -Xms512m -Xmx512m for my Java process and I use the prstat command to monitor its memory. But when I run prstat to check, the SIZE column shows more than my setting (644 instead of 512).
    So I don't know the actual memory that the Java process uses. (In this case, does it mean the process uses more memory (644) than the setting (512)?)
    Thank you.

    With Xms/Xmx you specify the Java heap size. On top of that comes the permanent generation (default max size 64m) and the C part of the process (the JVM itself with all its libraries and data).
    With "ps -e -o pid,vsz,rss,args" you get the virtual and set resident size of your processes.
    Nick.
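
    If you also want to see the picture from inside the JVM (the Java heap and non-heap areas, not the native part of the process), a small sketch like the following can help; it uses only the standard java.lang.management API:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapProbe {
        public static void main(String[] args) {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            // Heap as seen by the JVM itself (bounded by -Xms/-Xmx)
            MemoryUsage heap = mem.getHeapMemoryUsage();
            // Non-heap covers areas such as the permanent generation mentioned above
            MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
            System.out.println("Heap used/committed/max: "
                    + heap.getUsed() + " / " + heap.getCommitted() + " / " + heap.getMax());
            System.out.println("Non-heap used/committed: "
                    + nonHeap.getUsed() + " / " + nonHeap.getCommitted());
        }
    }

    The gap between what prstat reports and the heap maximum is roughly the native side of the process that Nick describes.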

  • How can I get the memory allocation info of a java thread?

    Now I am going to write a program to monitor the execution of Java threads. But it seems that the classes in the standard edition of the JDK do not provide facilities to get information such as the memory allocation and CPU time of a running thread. How can I do it? Can I use JNI or JVMDI to get them? If so, how?

    Thanks a lot. I just browsed the JVMPI specification. It is interesting and it seems that I can get the information I need. However, if I want to get the information in my own program, I mean, if I want to build a class which uses a JNI method to invoke the functions written with JVMPI and then forwards the data to upper-layer objects, can it be done?
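
    For what it's worth, on later JDKs this kind of information is exposed through java.lang.management without going through JVMPI. A rough sketch; note the allocated-bytes call is a com.sun.management extension and therefore HotSpot-specific:

    import java.lang.management.ManagementFactory;

    public class ThreadProbe {
        public static void main(String[] args) {
            // Standard API: per-thread CPU time in nanoseconds
            java.lang.management.ThreadMXBean std = ManagementFactory.getThreadMXBean();
            long id = Thread.currentThread().getId();
            if (std.isThreadCpuTimeSupported()) {
                System.out.println("CPU time (ns): " + std.getThreadCpuTime(id));
            }

            // HotSpot extension: bytes allocated by the thread so far
            com.sun.management.ThreadMXBean hotspot = (com.sun.management.ThreadMXBean) std;
            if (hotspot.isThreadAllocatedMemorySupported()) {
                System.out.println("Allocated bytes: " + hotspot.getThreadAllocatedBytes(id));
            }
        }
    }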

  • LabVIEW Memory Allocation

    Hey,
    Is it possible to allocate a predefined amount of RAM and accumulate data into it?
    Before going into detail: I am currently looking to write inspection results into a database for statistical analysis. I expect it will always take some time to write to the database for each component/iteration, so I decided to accumulate all the data in memory and write it in one shot.
    In detail, the user inputs the memory size via a front panel control. Let us assume writing one row of string information occupies "XX" bytes. (I'm not yet sure how to calculate the memory size of a 1D string array of 10 elements, with a maximum of 20 characters per string.) Dividing the user-input memory size by the size of one row gives the maximum number of rows we can buffer, say "N".
    Use a for loop with "N" iterations to accumulate the 1D info into a 2D array (auto-indexing) and write it to the database in one shot.
    Any help or direction would help a lot. 
    Waiting for the reply 
    Sasi.
    Certified LabVIEW Associate Developer
    If you can DREAM it, You can DO it - Walt Disney

     As far as I know, LabVIEW handles memory allocation internally and we don't have an option to allocate it ourselves. There might be a way using a Windows DLL, but there is no direct function, at least.
    Since you said you are going to use a for loop: in this case LabVIEW pre-allocates the memory depending on the data type, so you don't have to worry about that. For details about the memory according to the data type you can check this link.
    The best solution is the one you find it by yourself
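
    If it helps to see the accumulate-then-flush idea outside of LabVIEW, a rough Java/JDBC sketch of the same pattern could look like this; the in-memory H2 URL, table and column names are made up purely for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.ArrayList;
    import java.util.List;

    public class BatchedInsert {
        public static void main(String[] args) throws Exception {
            // Accumulate all rows in memory first (the "pre-allocated buffer")
            List<String[]> rows = new ArrayList<>();
            for (int i = 0; i < 1000; i++) {
                rows.add(new String[] { "component-" + i, "PASS" });
            }

            // Then write everything in one shot using a JDBC batch
            try (Connection con = DriverManager.getConnection("jdbc:h2:mem:test")) {
                con.createStatement().execute(
                        "CREATE TABLE results(component VARCHAR(64), verdict VARCHAR(8))");
                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO results(component, verdict) VALUES (?, ?)")) {
                    for (String[] row : rows) {
                        ps.setString(1, row[0]);
                        ps.setString(2, row[1]);
                        ps.addBatch();
                    }
                    ps.executeBatch();   // one round of database writes instead of 1000
                }
            }
        }
    }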

  • When syncing iOS devices to iTunes, the memory allocations shown in iTunes are all over the place

    There must be threads about this, but apologies for my inability to find any.
    As the title says: I plug my iPhone or iPad into my iMac, and when I look at the memory allocation as displayed in iTunes, 6GB or whatever are free. Then I sync again, and suddenly 8GB are free. Sync again, and now 2GB are free - even though nothing has changed. I get this with both iOS devices.
    Additionally, often what iTunes shows has no relationship to what the device says when I go into Settings: last week iTunes believed there were 0GB in Photos - because after all I had erased them! - but the iPad recorded 25GB, somehow.
    Is this a known issue, or is something screwy with my system?
    Thanks in advance!

    Thanks, but that's not really the issue. I do manage it manually - I've got a lot of smart lists, smart photo albums, and so on, all set up to sync with the iOS devices.
    The problem is that the iPhone/iPad memory, as displayed in iTunes when the device is plugged in, seems to be very inaccurate and variable. When I plug the iPhone or iPad in, iTunes tells me (for example) that, with current settings, 5GB will be free. It syncs, and then suddenly 2GB are free. I sync again - changing nothing, so there should be no change in memory allocation - and suddenly 6GB are free. Somehow, despite syncing no books at all, there's a bizarre 60MB "Book" that iTunes tells me is sometimes on the iPhone, and sometimes not. That little stretch of purple just suddenly appears some days.
    At times it tells me it's too full and can't sync at all. So I wipe, for example, 6GB off the "too full" iOS device, and after doing that suddenly 10GB are free, so I put the 6GB back on no problem. And after I add that 6GB back on, maybe there are 8GB free, or maybe 2GB, or maybe 4GB. In my world, 10GB-6GB=4GB, but iTunes is using quantum arithmetic or something.
    Wiping a lot of photos off was great fun. I had iTunes unsync and remove all photos - so, in that line at the bottom, all the photo memory allocation disappeared. Cool. Mission accomplished. Then I clicked to another tab in iTunes, and suddenly the photo memory jumped back up to 15GB. Looked back in the Photos tab - nope, I'd told it to sync no photos at all. Checked the iPhone itself: oh, neat. 25GB of photos, despite not syncing any. Eventually, after repeating the process several times, all the photos were eventually gone, but the whole process was bizarrely complicated.
    Basically, when I plug the iPhone or iPad into iTunes, iTunes seems to have no idea what the memory allocation actually is - which occasionally makes syncing complicated.
    It's not a giant issue, but it's quite annoying - and I'm wondering if this is just me, or if this is a common issue. I think this issue may also be discussed in some of the "can't delete songs" or "can't delete photos" threads, but some of those threads are so long and cover so many different problems that they're kind of hard to read.

  • Applets and memory not being released by Java Plug-in

    Hi.
    I am experiencing a strange memory-management behavior of the Java Plug-in with Java Applets. The Java Plug-in seems not to release memory allocated for non-static member variables of the applet-derived class upon destroy() of the applet itself.
    I have built a simple "TestMemory" applet, which allocates a 55-megabyte byte array upon init(). The byte array is a non-static member of the applet-derived class. With the standard Java Plug-in configuration (64 MB of max JVM heap space), this applet executes correctly the first time, but it throws an OutOfMemoryException when pressing the "Reload / Refresh" browser button or when pressing the "Back" and then the "Forward" browser buttons. In my opinion, this is not expected behavior. When the applet is destroyed, the non-static byte array member should be automatically invalidated and recollected. Shouldn't it?
    Here is the complete applet code:
    // ===================================================
    import java.awt.*;
    import javax.swing.*;
    public class TestMemory extends JApplet
    {
      private JLabel label = null;
      private byte[] testArray = null;
      // Construct the applet
      public TestMemory()
      {
      }
      // Initialize the applet
      public void init()
      {
        try
        {
          // Initialize the applet's GUI
          guiInit();
          // Instantiate a 55 MB array
          // WARNING: with the standard Java Plug-in configuration (i.e., 64 MB of
          // max JVM heap space) the following line of code runs fine the FIRST time the
          // applet is executed. Then, if I press the "Back" button on the web browser,
          // then press "Forward", an OutOfMemoryException is thrown. The same result
          // is obtained by pressing the "Reload / Refresh" browser button.
          // NOTE: the OutOfMemoryException is not thrown if I add "testArray = null;"
          // to the destroy() applet method.
          testArray = new byte[55 * 1024 * 1024];
          // Do something on the array...
          for (int i = 0; i < testArray.length; i++)
            testArray[i] = 1;
          System.out.println("Test Array Initialized!");
        }
        catch (Exception e)
        {
          e.printStackTrace();
        }
      }
      // Component initialization
      private void guiInit() throws Exception
      {
        setSize(new Dimension(400, 300));
        getContentPane().setLayout(new BorderLayout());
        label = new JLabel("Test Memory Applet");
        getContentPane().add(label, BorderLayout.CENTER);
      }
      // Start the applet
      public void start()
      {
        // Do nothing
      }
      // Stop the applet
      public void stop()
      {
        // Do nothing
      }
      // Destroy the applet
      public void destroy()
      {
        // If the line below is uncommented, the OutOfMemoryException is NOT thrown
        // testArray = null;
      }
      // Get Applet information
      public String getAppletInfo()
      {
        return "Test Memory Applet";
      }
    }
    // ===================================================
    Everything works fine if I set the byte array to "null" upon destroy(), but does this mean that I have to manually set to null all applet's member variables upon destroy()? I believe this should not be a requirement for non-static members...
    I am able to reproduce this problem on the following PC configurations:
    * Windows XP, both JRE v1.6.0 and JRE v1.5.0_11, both with MSIE and with Firefox
    * Linux (Sun Java Desktop), JRE v1.6.0, Mozilla browser
    * Mac OS X v10.4, JRE v1.5.0_06, Safari browser
    Your comments would be really appreciated.
    Thank you in advance for your feedback.
    Regards,
    Marco.

    Hi Marco,
    my guess as to why the JPI would keep references around, if it does keep them, is that it probably is an implementation side effect. A lot of things are cached in the name of performance, and it is easy to leave things lying around in your cache. Maybe the page with the associated images/applets is kept in the browser cache until the browser needs some memory, and if the browser memory manager is not co-operating with the JPI/JVM memory manager, the browser is not out of memory (and thus not releasing its caches) while the JVM may be out of memory. Thus the browser indirectly keeps a reference that it really does not need. This reference could be indirect, through some 'applet context' or whatever the browser uses to interact with the JPI; I don't really know any of these details, I'm just imagining what must/could be going on there. Browsers are amazingly complicated beasts.
    This behaviour that you are observing, whether its origin is something like I speculated or not, is not nice, but I would not expect it to be fixed even if you filed a bug report. I guess we are left with releasing all significant memory structures in destroy(). A simple way to code this is not to store anything in the member fields of the applet but in a separate class; then all one has to do is null that one reference from the applet to that class in the destroy() method, and everything will be released when necessary. This way it is not easy to forget to release things.
    Hey, here is a simple, imaginary way in which the browser could cause this problem:
    The browser, of course, needs a reference to the applet; call it m_Applet here. Presume the following helper function:
    Applet instantiateAndInit(Class appletClass) {
        Applet applet = appletClass.newInstance();
        applet.init();
        return applet;
    }
    When the browser sees the applet tag it instantiates and inits the new applet as follows:
    m_Applet = instantiateAndInit(appletClass);
    As you can readily see, the second time the instantiation occurs, m_Applet holds the reference to the old applet until after the new instance is created and initialized. This would not cause a memory leak, but it would require twice the memory needed by the applet to avoid OutOfMemory. I guess it is not fair to call this sort of thing a bug, but it is questionable design. In real life it is probably not this blatant, but something like it could happen. You could try, if you like, allocating less than 32 MB in your init(). If you then do not run out of memory, it is an indication that there are at most two instances of your applet around, and thus it could well be something like I've speculated here.
    br Kusti
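
    A minimal sketch of the "separate data class" pattern Kusti describes; the class and field names are just illustrative:

    import javax.swing.JApplet;

    // All heavy state lives in one helper object, so destroy() only has to
    // drop a single reference for the whole graph to become collectable.
    class AppletData {
        byte[] testArray = new byte[55 * 1024 * 1024];
    }

    public class TestMemory2 extends JApplet {
        private AppletData data;

        public void init() {
            data = new AppletData();
        }

        public void destroy() {
            data = null;   // one null, and everything reachable only through 'data' can be reclaimed
        }
    }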

  • Templates and Dynamic Memory Allocation

    Hi , I was reading a detailed article about templates and I came across the following paragraph
    template<class T, size_t N>
    class Stack {
        T data[N]; // Fixed capacity is N
        size_t count;
    public:
        void push(const T& t);
    };
    "You must provide a compile-time constant value for the parameter N when you request an instance of this template, such as Stack<int, 100> myFixedStack;
    Because the value of N is known at compile time, the underlying array (data) can be placed on the run time stack instead of on the free store.
    This can improve runtime performance by avoiding the overhead associated with dynamic memory allocation.
    Now, in the above paragraph, what does
    "This can improve runtime performance by avoiding the overhead associated with dynamic memory allocation." mean? What does this overhead mean?
    I am a bit puzzled and I would really appreciate it if someone could explain to me what this sentence means. Thanks...

    The run-time memory model of a C or C++ program consists of statically allocated data, automatically allocated data, and dynamically allocated data.
    Data objects (e.g. variables) declared at namespace scope (which includes global scope) are statically allocated. Data objects local to a function that are declared static are also statically allocated. Static allocation means the storage for the data is available when the program is loaded, even before it begins to run. The data remains allocated until after the program exits.
    Data objects local to a function that are not declared static are automatically allocated when the function starts to run. Example:
    int foo() { int i; ... }
    Variable i does not exist until function foo begins to run, at which time space for it appears automatically. Each new invocation of foo gets its own location for i, independent of other invocations of foo. Automatic allocation is usually referred to as stack allocation, since that is the usual implementation method: an area of storage that works like a stack, referenced by a dedicated machine register. Allocating the automatic data consists of adding (or subtracting) a value to the stack register. Popping the stack involves only subtracting (or adding) a value to the stack register. When the function exits, the stack is popped, releasing storage for all its automatic data.
    Dynamically allocated storage is acquired by an explicit use of a new-expression, or a call to an allocation function like malloc(). Example:
    int* ip = new int[100]; // allocate space for 100 integers
    double* id = (double*)malloc(100*sizeof(double)); // allocate space for 100 doubles
    Dynamic storage is not released until you release it explicitly via a delete-expression or a call to free(). Managing the "heap", the area from where dynamic storage is acquired, and to which it is released, can be quite time-consuming.
    Your example of a Stack class (not to be confused with the program stack that is part of the C or C++ implementation) uses a fixed-size (that is, fixed at the point of template instance creation) automatically-allocated array to act as a stack data type. It has the advantage of taking zero time to allocate and release the space for the array. It has the disadvantages of any fixed-size array: it can waste space, or result in a program failure when you try to put N+1 objects into it, and it cannot be re-sized once created.

  • [iPhone SDK] Nib vs. JIT allocation: memory pressure and programmer time

    I'm going to ask a question that there's no single answer to. Instead, I'm more interested to know how different people approach the same problem.
    Lately I've been finding I'm of two minds when I have a UI element or view controller that isn't always used. On the one hand, the iPhone is a fairly memory-constrained environment, and the rules for developing for it are pretty explicit: don't use memory until you need it. That suggests "just in time" or "lazy" allocation: don't allocate it until you actually need it for the first time. After that, hold onto it in case you need it again, but be ready to free it if you receive a memory warning. My code is full of methods with names like "ensureViewerLoaded" which I call just before I need the object in question to allocate it if necessary.
    (For context: my applications are generally UIKit-based utility or data-browsing/editing apps with little graphical or computational complexity and very little memory pressure overall. So far, I've only released one app; at its greediest, it consumes about 1 MB. This isn't because I'm a brilliant programmer--although I do try to be parsimonious--it's just because the app doesn't need much memory to do what it's supposed to do.)
    But on the other hand, allocating those same objects in nib files (or, in other cases, loading them from plists or object archives) makes me more nimble. I've found that it reduces the weight of my code, paring it down to the logic I really need to be paying attention to. That lets me code faster--which means I can modify faster and release faster. And one thing I'm finding is that on the App Store, more releases means more sales. And more sales means I get to buy a MacBook Air with an SSD sooner, and thus save my back a few pounds of strain and my patience a few seconds of compilation. ;^)
    I guess what I'm asking is, how are you resolving these sorts of trade-offs? Where are you trading phone resources for programmer resources, and where are you deciding not to? And how is the type of app you're writing influencing that decision?

    My applications don't need much memory either, so I let everything happen in the nib file. I typically have about three views to worry about, and nothing significantly graphical in any of them.

  • Free Stmt Handle does NOT release memory allocated in DefineByPos[LongRaw]

    My question is: why does freeing the statement handle not release the memory OCI allocated during OCIDefineByPos for the "Long Raw" data type?
    Please notice the bold fonts in the dtrace logging. Ptr is the pointer returned by malloc. Size is the memory size allocated.
    I use OCI to fetch records; each round I fetch 1000 records. I found that memory increases round by round, so I used dtrace to probe the memory malloc/free. At the end of each round, I free the handle of the SQL statement. According to the documentation, all the sub-handles belonging to the statement handle should be freed as well. But I noticed that memory allocated during OCIDefineByPos for Long Raw did not get released.
    1. The SQL sent to the DB each round is the same.
    2. I do the "Define" each round, so it's actually a "re-define" each round.
    The length of the Long Raw is 512K. Here's the dtrace result when I call OCIDefineByPos. There's a 2 MB memory allocation; I don't know the reason.
    CPU ID FUNCTION:NAME
    3 45971 free:entry Ptr=0x247652d0 Size=0 TS=1568035638893113 FreeTime=2008 Jul 20 23:26:09
    libc.so.1`free
    libclntsh.so.10.1`sktsfFree+0x18
    libclntsh.so.10.1`kpummfpg+0xb6
    libclntsh.so.10.1`kghfrempty+0x17c
    libclntsh.so.10.1`kghgex+0x13c
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kpuhhalo+0x1fd
    libclntsh.so.10.1`kpuex_reallocTempBufOnly+0x5f
    libclntsh.so.10.1`kpuertb_reallocTempBuf+0x36
    libclntsh.so.10.1`kpudefn+0x2d1
    libclntsh.so.10.1`kpudfn+0x39e
    libclntsh.so.10.1`OCIDefineByPos+0x38
    scrubber`_ZN29clsDatabasePhysicalHostOracle6DefineEN7voyager15eBayTypeDefEnumEiPhiPs11EnumCharSet+0x17f
    3 45966 malloc:return Ptr=0x252ad100 Size=524356 TS=1568035638916654 AllocTime=2008 Jul 20 23:26:09
    libc.so.1`malloc+0x49
    libclntsh.so.10.1`kpummapg+0xcc
    libclntsh.so.10.1`kghgex+0x1aa
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kpuhhalo+0x1fd
    libclntsh.so.10.1`kpuex_reallocTempBufOnly+0x5f
    libclntsh.so.10.1`kpuertb_reallocTempBuf+0x36
    libclntsh.so.10.1`kpudefn+0x2d1
    libclntsh.so.10.1`kpudfn+0x39e
    libclntsh.so.10.1`OCIDefineByPos+0x38
    scrubber`_ZN29clsDatabasePhysicalHostOracle6DefineEN7voyager15eBayTypeDefEnumEiPhiPs11EnumCharSet+0x17f
    scrubber`_ZN18clsDatabaseManager6DefineEN7voyager15eBayTypeDefEnumEiPhiPs+0x13c
    scrubber`_ZN17clsDatabaseOracle13DefineLongRawEiPhiPs+0x2f
    3 45971 free:entry Ptr=0x2476ab30 Size=0 TS=1568035639035363 FreeTime=2008 Jul 20 23:26:09
    libc.so.1`free
    libclntsh.so.10.1`sktsfFree+0x18
    libclntsh.so.10.1`kpummfpg+0xb6
    libclntsh.so.10.1`kghfrempty+0x17c
    libclntsh.so.10.1`kghgex+0x13c
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kpuhhalo+0x1fd
    libclntsh.so.10.1`kpuertb_reallocTempBuf+0x8f
    libclntsh.so.10.1`kpudefn+0x2d1
    libclntsh.so.10.1`kpudfn+0x39e
    libclntsh.so.10.1`OCIDefineByPos+0x38
    scrubber`_ZN29clsDatabasePhysicalHostOracle6DefineEN7voyager15eBayTypeDefEnumEiPhiPs11EnumCharSet+0x17f
    scrubber`_ZN18clsDatabaseManager6DefineEN7voyager15eBayTypeDefEnumEiPhiPs+0x13c
    3 45966 malloc:return Ptr=0x2532d150 Size=2097220 TS=1568035639040090 AllocTime=2008 Jul 20 23:26:09
    libc.so.1`malloc+0x49
    libclntsh.so.10.1`kpummapg+0xcc
    libclntsh.so.10.1`kghgex+0x1aa
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kghgex+0x1e6
    libclntsh.so.10.1`kghfnd+0x28a
    libclntsh.so.10.1`kghalo+0x669
    libclntsh.so.10.1`kpuhhalo+0x1fd
    libclntsh.so.10.1`kpuertb_reallocTempBuf+0x8f
    libclntsh.so.10.1`kpudefn+0x2d1
    libclntsh.so.10.1`kpudfn+0x39e
    libclntsh.so.10.1`OCIDefineByPos+0x38
    scrubber`_ZN29clsDatabasePhysicalHostOracle6DefineEN7voyager15eBayTypeDefEnumEiPhiPs11EnumCharSet+0x17f
    scrubber`_ZN18clsDatabaseManager6DefineEN7voyager15eBayTypeDefEnumEiPhiPs+0x13c
    scrubber`_ZN17clsDatabaseOracle13DefineLongRawEiPhiPs+0x2f
    scrubber`_ZN5dblib11DbExtractor18DefineLongRawFieldEiPhiPs+0x2b

    Assumption : You are using OCIHandleFree((dvoid*)DBctx->stmthp,(ub4)OCI_HTYPE_STMT);
    There are two scenarios here: the LONG RAW mapping and the release of memory.
    Unlike the other data types, LONG RAW memory mapping is different, since the data type requires more memory chunks.
    45971 free:entry Ptr=0x2476ab30 Size=0 TS=1568035639035363 FreeTime=2008 Jul 20 23:26:09
    libc.so.1`free
    From the above statements it is clear that free is called. Hence I suspect the memory leak is in another area of the code (like native storage).
    Moreover, the standard OS behavior is that it will not release the heap/stack until the process completes, even if free() is called by the process. This is done for performance in memory management. Having said that, it should not increase memory drastically when not required.

  • Memory still occupied after removeAllObjects and releasing NSMutableArray

    Hi Guys,
    I am trying to read a file into an NSString and then create an NSMutableArray to store the contents divided by "\n". However, after releasing the array, it looks like the memory is still not released on the iPhone simulator. The memory increased to 20 MB after allocating the array and was still occupied after releasing it. The NSString's memory is released with no problem. No memory leaks.
    Any help will be appreciated.
    //read in test.txt
    NSString *defaultDBPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"test.txt"];
    NSError *error;
    NSString *testtest = [[NSString alloc] initWithContentsOfFile:defaultDBPath encoding:NSASCIIStringEncoding error:&error];
    NSMutableArray *Rna = [[NSMutableArray alloc] initWithArray: [testtest componentsSeparatedByString:@"\n"]];
    [testtest release];
    //remove all temp arrays
    [Rna removeAllObjects];
    [Rna release];
    Message was edited by: wli061

    I have tried on the device, with the same result. The reason should be that the autoreleased array was not released yet. I even tried an autorelease pool, but no luck. Here is more of the code:
    1. In the app delegate, I initialize a MyViewController.
    - (void)applicationDidFinishLaunching:(UIApplication *)application {
    // Override point for customization after app launch
    MyViewController *aViewController = [[MyViewController alloc] initWithNibName:@"ControllerView" bundle:[NSBundle mainBundle]];
    self.myViewController = aViewController;
    [aViewController release];
    UIView *controllersView = [myViewController view];
    [window addSubview:controllersView];
    [window makeKeyAndVisible];
    }
    - (void)dealloc {
    [myViewController release];
    [window release];
    [super dealloc];
    }
    2. In MyViewController, I have a function linked to a button.
    - (IBAction)RnaRns:(id)sender {
    NSString *defaultDBPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"rna.crs"];
    NSError *error;
    NSString *testtest = [[NSString alloc] initWithContentsOfFile:defaultDBPath encoding:NSASCIIStringEncoding error:&error];
    NSArray *Rnaarray = [[testtest componentsSeparatedByString:@"\n"] autorelease];
    [testtest release];
    }
    I wondered whether the Rnaarray could only be released when MyViewController was released. I did try putting - (IBAction)RnaRns:(id)sender in a separate object, then initializing it in MyViewController and calling the function, then releasing the object. However, the Rnaarray was still not released. So the real question is how we make sure the autoreleased array is released, and when. An autorelease pool is not working for this case either. I cannot afford to lose this memory on the iPhone. Thanks.

  • B1IF Memory consumption and SQL Release

    Hi All,
    the problem I have at the moment is that SQL Server does not seem to release any memory from B1IF transaction/SQL query related usage.
    One of our clients is set up with 40 GB RAM; 28 GB of that is used by SQL Server alone, of which about 16 GB relates to SAP B1 and the rest is due to B1IF transactions and queries.
    It seems like SQL Server does not release any memory. Several posts I have come across relate only to memory management for B1IF (Tomcat 6), but I can't seem to find anything relating to B1IF's SQL-side memory usage and ways to limit or recycle the unused memory.
    This is causing our customers' systems to lag and slow down in general.
    Any help with this problem would be greatly appreciated.
    Thanks,
    Gideon

    Hi Gideon
    AFAIK you don't manage SQL Server's memory usage on a per-application basis. In other words: SQL Server knows best how much memory to use and how much each application should consume. In order to help your OS, you should set a maximum memory limit for SQL Server:
    SQL Server not releasing memory after query executes - Stack Overflow
    Kind regards,
    Radek

  • SWFLoader.unloadAndStop() - does it unload swf and release memory?

    Hi all,
    Do SWFLoader's FP10 unloadAndStop() and GC really unload the SWF?
    I have the simplest test case possible, where the parent app creates a new SWFLoader, loads a sub-app and then unloads it.
    Every time it does this, I see memory grow. The sub-app is tiny:
    <?xml version="1.0" encoding="utf-8"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
        <mx:Label id="idLabel" color="blue" text="This is embedded application."/>   
    </mx:Application>
    The parent application is also simple
    public function loadSwf(name:String):void
    {
      loader.addEventListener(Event.COMPLETE, completeHandle, false, 0, true);
      loader.showBusyCursor = true;
      loader.scaleContent = true;
      loader.source = name;     // name is subappswf.swf
      loader.load();
    }
    private function completeHandle(event:Event):void
    {
      loader.removeEventListener(Event.COMPLETE, completeHandle);
      loader.unloadAndStop(true);
    }
    The profiler shows decent peak memory, and even cumulative memory does not seem horrible. There are no objects hanging around in the sub-application except SystemManager. But Google Chrome's Task Manager shows the real picture: Shockwave Flash memory grows, and as soon as it reaches a threshold of ~0.5 GB, the application becomes unresponsive and eventually the Flash Player crashes.
    What else should be done to free up memory?
    Regards,
    Ilya

    This is interesting...
    /*** Unloads an image or SWF file. After this method returns the
    * <code>source</code> property will be null. This is only supported
    * if the host Flash Player is version 10 or greater. If the host Flash
    * Player is less than version 10, then this method will unload the
    * content the same way as if <code>source</code> was set to null.
    * This method attempts to unload SWF files by removing references to
    * EventDispatcher, NetConnection, Timer, Sound, or Video objects of the
    * child SWF file. As a result, the following occurs for the child SWF file
    * and the child SWF file's display list:
    * <ul>
    * <li>Sounds are stopped.</li>
    * <li>Stage event listeners are removed.</li>
    * <li>Event listeners for <code>enterFrame</code>,
    * <code>frameConstructed</code>, <code>exitFrame</code>,
    * <code>activate</code> and <code>deactivate</code> are removed.</li>
    * <li>Timers are stopped.</li>
    * <li>Camera and Microphone instances are detached</li>
    * <li>Movie clips are stopped.</li>
    * </ul>
    * @param invokeGarbageCollector Provides a hint to the garbage collector to run
    * on the child SWF objects (<code>true</code>) or not (<code>false</code>).
    * If you are unloading many objects asynchronously, setting the
    * <code>gc</code> parameter to <code>false</code> might improve application
    * performance. However, if the parameter is set to <code>false</code>, media
    * and display objects of the child SWF file might persist in memory after
    * the child SWF has been unloaded.
    */
    public function unloadAndStop(invokeGarbageCollector:Boolean = true):void
    {
        useUnloadAndStop = true;
        unloadAndStopGC = invokeGarbageCollector;
        source = null; // this will cause an unload unless autoload is true
        if (!autoLoad)
            load(null);
    }
    It means that if autoLoad is true (the default), the unload will not happen and the sub-app will persist in memory.
    Regards,
    Ilya
