External memory allocation and management using C / LabVIEW 8.20 poor scalability

Hi,
I have multiple C functions that I need to interface. I need
to support numeric scalars, strings and booleans, and 1-4 dimensional
arrays of these. The programming problem I am trying to avoid is that I have
multiple different functions in my DLLs that all take as input or
return all these datatypes. I can create a polymorphic interface
for all these functions, but I end up having about 100 interface VIs
for each of my C functions. This was still somehow acceptable in LabVIEW
8.0, but in LabVIEW 8.2 all these polymorphic VIs in my LVOOP project
get read into memory at project open. So I have close to 1000 VIs read into memory whenever I open my project. It now takes about ten minutes to
open the project, and some 150 MB of memory is consumed instantly. I
still need to expand my C interface library, and LabVIEW simply doesn't
scale up to meet the needs of my project anymore.
I currently
reserve my LabVIEW datatypes using the DSNewHandle and DSNewPtr functions.
I then initialize the allocated memory blocks correctly and return the
handles to LabVIEW. The LabVIEW compiler interprets the Call Library Function
Node terminals of my memory block as a specific data type.
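To make the handle layout concrete: a LabVIEW 1-D numeric array handle is a pointer to a pointer to a block whose first int32 holds the element count, followed by the data. The sketch below models that layout with plain malloc only so it is self-contained; real interface code must allocate with DSNewHandle from extcode.h so LabVIEW's memory manager owns the block, and the `LVArray1D` names here are my own illustration.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative model of a LabVIEW 1-D array of doubles:
   handle -> pointer -> { int32 dimSize; double data[dimSize]; }.
   Real code must use DSNewHandle() from extcode.h so LabVIEW's
   memory manager owns the block; malloc() keeps this sketch
   self-contained. */
typedef struct {
    int32_t dimSize;     /* number of elements   */
    double  elt[1];      /* payload starts here  */
} LVArray1D;
typedef LVArray1D **LVArray1DHdl;

/* Allocate and zero-initialize a handle for n doubles. */
static LVArray1DHdl alloc_1d_dbl(int32_t n)
{
    LVArray1D *blk = malloc(offsetof(LVArray1D, elt) + (size_t)n * sizeof(double));
    LVArray1DHdl hdl = malloc(sizeof *hdl);
    blk->dimSize = n;
    memset(blk->elt, 0, (size_t)n * sizeof(double));
    *hdl = blk;
    return hdl;          /* handed back to LabVIEW via a CLFN terminal */
}
```

The Call Library Function Node terminal typed as "Array (handle)" then interprets this block as the corresponding LabVIEW array.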
So
what I thought was the following. I don't want the LabVIEW compiler to
interpret the data type at compile time. What I want to do is return
a handle to the memory structure together with some metadata describing
the data type. Then all of my many functions would return this kind of
handle; let's call it a data handle. I can later convert this
handle into a real datatype either by typecasting it somehow or by
passing it back to C code and expecting a certain type as a return.
This way I can reduce the number of needed interface VIs to about 100,
which is still acceptable (i.e. LabVIEW 8.2 doesn't freeze).
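On the C side, the proposed data handle could be sketched as a struct pairing the raw block with metadata; the type codes and field names below are my own illustration, not a LabVIEW API. Every DLL function returns the same `DataHandle*`, and a later call checks the metadata before casting the block to a concrete type:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical generic "data handle": one opaque handle plus
   metadata describing the payload, so every DLL function can
   return the same type and the caller resolves it later. */
typedef enum { DT_U8 = 0, DT_I32, DT_DBL, DT_STR, DT_BOOL } DataType;

typedef struct {
    DataType type;       /* element type of the payload     */
    int32_t  ndims;      /* 0 for scalars, 1..4 for arrays  */
    int32_t  dims[4];    /* extent of each dimension        */
    void    *block;      /* the raw memory block            */
} DataHandle;

static DataHandle *make_handle(DataType t, int32_t ndims,
                               const int32_t *dims, size_t elt_size)
{
    DataHandle *h = malloc(sizeof *h);
    size_t n = 1;
    h->type = t;
    h->ndims = ndims;
    memset(h->dims, 0, sizeof h->dims);
    for (int32_t i = 0; i < ndims; ++i) {
        h->dims[i] = dims[i];
        n *= (size_t)dims[i];
    }
    h->block = calloc(n, elt_size);  /* zero-initialized payload */
    return h;
}

/* Later, C code checks the metadata before casting the block. */
static double *as_dbl_2d(DataHandle *h, int32_t *rows, int32_t *cols)
{
    if (h->type != DT_DBL || h->ndims != 2) return NULL;
    *rows = h->dims[0];
    *cols = h->dims[1];
    return (double *)h->block;
}
```

With one accessor per concrete type, the interface VI count scales with the number of types rather than with (functions × types).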
So
I practically need functionality similar to what a variant provides. I cannot use
variants, since I need to avoid making memory copies; when I convert
to and from a variant, my memory consumption increases threefold. I
handle arrays that consume almost all available memory, and I cannot
accept memory being consumed ineffectively.
The questions are:
Can I use the DSNewPtr and DSNewHandle functions to reserve a memory block
but not return a LabVIEW structure of that size? Does LabVIEW
garbage collection automatically decide to dispose of my block if I don't
return it from my C code immediately but only later, at the next call
into C? Can I typecast a 1D U8 array to an array of any dimensionality and any numeric data type without a memory copy (i.e. does Typecast work the way a cast works in C)?
If I cannot find a solution to this LabVIEW 8.20 scalability issue, I really have to consider transferring our project from LabVIEW to some other development environment such as C++ or one of the .NET languages.
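For comparison, this is what the no-copy reinterpretation looks like in C: the bytes stay where they are and only the pointer type changes. (As far as I know, LabVIEW's Typecast primitive instead operates on the flattened big-endian representation and generally does make a copy, so the two are not equivalent; the sketch below shows only the C behavior the question refers to.)

```c
#include <stdint.h>
#include <stdlib.h>

/* Reinterpret a 1-D u8 buffer as int32 in place: no data is moved.
   Casting through a pointer like this technically bends C's strict
   aliasing rules; malloc'd storage at least guarantees alignment. */
static int32_t *as_i32(uint8_t *bytes)
{
    return (int32_t *)bytes;
}

/* Build an 8-byte buffer holding the int32 values 1 and 2
   (assuming a little-endian machine). */
static uint8_t *demo_buffer(void)
{
    uint8_t *b = calloc(8, 1);
    b[0] = 1;     /* low byte of the first int32  */
    b[4] = 2;     /* low byte of the second int32 */
    return b;
}
```

After `as_i32`, the int32 view aliases the very same storage as the u8 buffer; nothing is allocated or copied.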
Regards,
Tomi Maila

I have to answer myself since nobody else has answered yet. I came up with one solution that relies on LabVIEW queues. Queues of different types are all referenced the same way and can also be typecast from one type to another. This means that one can use single-element queues as a kind of variant data type, which is quite safe. However, one copy of the data is made when you enqueue and dequeue the data.
See the attached image for details.
Tomi Maila
Attachments:
variant.PNG (9 KB)

Similar Messages

  • Can I keep photos on external hard drive and still use iPhoto?

    I have virtually no space left on my eMac's 80GB hard drive. For a long time, I have kept my sizable iTunes library on an external hard drive connected to the eMac. It takes a bit of extra time to open up the iTunes library and to upload music from newly purchased CDs, but generally the external hard drive and eMac perform well together. My question is: Can I do the same with my photos (put them on the external hard drive) and still use iPhoto to access them and edit them? I have checked some of the Apple knowledge base documents, and there seem to be warnings against this. I am currently using iPhoto 6.0.6. I have purchased iPhoto 7.0 (with iLife '08) but haven't put the new software on the computer because there is not enough space. I would like to straighten out this space problem and get the benefits of the new software. (I have backed up quite a few of the photos onto CDs, but I like to have them handy--therefore on the eMac or on a hard drive connected to the eMac.) Thanks ahead to anyone with some good ideas on this topic.

    winesmile
    Yes it is, but you cannot access them at the same time, as iPhoto can only open one library at a time.
    So, a solution might be to have the entire library on the external, and to carry the smaller subset on the laptop.
    To do this, follow the instructions above to move the library, but instead of deleting the library on the laptop (in the final step), put the pics from the earlier years into the iPhoto trash and empty it. (iPhoto Menu -> Empty Trash. Hint: Don't do them all in one go! Better a few at a time.)
    Now you have the full library on the external (and if you want to access the full library, use that) and the 07 & 08 years on your laptop.
    To switch between libraries, simply hold down the option (or alt) key when launching.
    As you add photos to the Library you can use iPhoto Library Manager to move pics and albums /events between libraries, and so keep them up-to-date with each other.
    Please note this is the only way to sync the libraries, there is no app that can do this automatically.
    Regards
    TD

  • HT1473 Help I just moved my music files to a external hard drive and am using the new crappy version ( i know my opinion) of itunes and cant add the files to my libray it gives me the add file to library option but not the add folder to library option wha

    Help! I just moved my music files to an external hard drive and am using the new crappy version (I know, my opinion) of iTunes, and I can't add the files to my library. It gives me the "add file to library" option but not the "add folder to library" option. What am I doing wrong?

    In iTunes 11, uncheck the setting in the iTunes Preferences panel: "Advanced > Copy Files to iTunes Media folder when adding to Library"

  • I bought an external hard drive and now use it as time machine. I copied my photos to it. Can I now delete my iphotos?

    My photo library is full. I bought an external hard drive and am using it as Time Machine. I copied my photo library to the external hard drive and burned all of my pictures onto DVDs. If I delete my photos from my computer, will the photos still be on the external hard drive and saved in Time Machine? I am paranoid about losing my photos (memories).

    I think you mean Your Hard Disc is Full as opposed to iPhoto.
    agrech21 wrote:
    My photo library is full.
    With regard to your iPhoto Library... you can move it to another external drive and use it from there, and then you can use TM to back up that drive as well as your Mac.
    1)  Move iPhoto Library to External Drive
    See  >  iPhoto: How to move the Library to an EHD
    First... Make sure the EHD is Formatted Mac OS Extended (journaled)...
    Format, Erase, or Reformat a drive
    Quit iPhoto if open.
    Open your Pictures folder and select the iPhoto Library.
    Drag the iPhoto Library to the External Hard Drive.
    To Use iPhoto on an External Drive...
    Hold down the Option key on the keyboard and open iPhoto. Keep the Option key held down until you are prompted to create or choose an iPhoto Library.
    Click Choose Library.
    Locate and select the iPhoto Library in its new location.
    And also make sure that all is working to your Satisfaction before Deleting anything.

  • Space Allocated and Space used for Table.

    Hi,
    Is there any way we can find the space allocated and space used per table?
    I know USER_TABLESPACES helps us find it tablespace-wise, but I would like to know the space utilized by each table in a specific tablespace.

    Check this link from Tom Kyte about it.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:2092735390859556::::P11_QUESTION_ID:231414051079

  • Short term memory allocator and cache memory allocator are out of memory

    Hi,
    I have three NW 6.5 SP8 servers in production. One of these, the one which hosts Pervasive SQL 9.7, began to show the following errors:
    Cache memory allocator out of available memory.
    Short term memory allocator is out of memory.
    360396 attempts to get more memory failed.
    request size in bytes 1048576 from Module SERVER.NLM
    I show here segstats.txt:
    *** Memory Pool Configuration for : DBASE_SERVER
    Time and date : 02:42:36 AM 12/02/2012
    Server version : NetWare 6.5 Support Pack 8
    Server uptime : 11d 04h 35m 28s
    SEG.NLM uptime : 0d 00h 01m 17s
    SEG.NLM version : v2.00.17
    Original Memory : 4,292,812,800 bytes (4.00 GB)
    ESM Memory : 805,302,272 bytes (768.0 MB)
    0xFFFFFFFF --------------------------------------------------------------
    | Kernel Reserved Space |
    | |
    | Size : 180,355,071 bytes (172.0 MB) |
    | |
    0xF5400000 --------------------------------------------------------------
    | User Address Space (L!=P) |
    | |
    | User Pool Size : 671,088,640 bytes (640.0 MB) |
    | High Water Mark : 559,710,208 bytes (533.8 MB) |
    | PM Pages In Use : 1,855,488 bytes (1.8 MB) |
    | |
    0xCD400000 --------------------------------------------------------------
    | Virtual Memory Address Space (L!=P) |
    | |
    | VM Address Space : 2,369,781,760 bytes (2.21 GB) |
    | Available : 801,435,648 bytes (764.3 MB) |
    | Total VM Pages : 800,870,400 bytes (763.8 MB) |
    | Free Clean VM : 785,563,648 bytes (749.2 MB) |
    | Free Cache VM : 15,306,752 bytes (14.6 MB) |
    | Total LP Pages : 0 bytes (0 KB) |
    | Free Clean LP : 0 bytes (0 KB) |
    | Free Cache LP : 0 bytes (0 KB) |
    | Free Dirty : 0 bytes (0 KB) |
    | NLM Memory In Use : 1,767,256,064 bytes (1.65 GB) |
    | NLM/VM Memory : 1,751,785,472 bytes (1.63 GB) |
    | Largest Segment : 2,097,152 bytes (2.0 MB) |
    | Lowest Kernel Page: 0 bytes (0 KB) |
    | : [0x00000000] |
    | High Water Mark : 2,243,096,576 bytes (2.09 GB) |
    | Alloc Failures : 370,804 |
    | |
    0x40000000 --------------------------------------------------------------
    | File System Address Space (L==P or L!=P) |
    | |
    | FS Address Space : 1,067,290,624 bytes (1017.8 MB) |
    | Available : 108,978,176 bytes (103.9 MB) |
    | Largest Segment : 3,362,816 bytes (3.2 MB) |
    | |
    | NSS Memory (85%) : 613,683,200 bytes (585.3 MB) |
    | NSS (avail cache) : 610,455,552 bytes (582.2 MB) |
    | |
    0x00627000 --------------------------------------------------------------
    | DOS / SERVER.NLM |
    | |
    | Size : 6,451,200 bytes (6.2 MB) |
    | |
    0x00000000 --------------------------------------------------------------
    Total NLMs loaded on the server: 307
    Top 20 Memory Consuming NLMs
    NLM Name Version Date Total NLM Memory
    ================================================== =============
    1. NWMKDE.NLM 9.70.07 Nov 14, 2008 813,035,623 bytes (775.4 MB)
    2. SERVER.NLM 5.70.08 Oct 3, 2008 467,216,096 bytes (445.6 MB)
    3. NSS.NLM 3.27.02 Nov 11, 2009 203,168,848 bytes (193.8 MB)
    4. NCPL.NLM 3.02 May 6, 2009 41,854,837 bytes (39.9 MB)
    5. NWSQLMGR.NLM 9.70.07 Nov 14, 2008 39,309,132 bytes (37.5 MB)
    6. DS.NLM 20217.07 Jan 30, 2009 24,851,303 bytes (23.7 MB)
    7. APACHE2.NLM 2.00.63 Apr 25, 2008 19,863,493 bytes (18.9 MB)
    8. CIOS.NLM 1.60 Feb 12, 2008 10,569,567 bytes (10.1 MB)
    9. OWCIMOMD.NLM 3.02 Nov 27, 2007 9,318,616 bytes (8.9 MB)
    10. APRLIB.NLM 0.09.17 Apr 25, 2008 8,959,760 bytes (8.5 MB)
    11. APACHE2.NLM 2.00.63 Apr 25, 2008 7,702,469 bytes (7.3 MB)
    12. FATFS.NLM 1.24 Aug 27, 2007 5,859,413 bytes (5.6 MB)
    13. NWPA.NLM 3.21.02 Oct 29, 2008 4,990,686 bytes (4.8 MB)
    14. PKI.NLM 3.32 Aug 25, 2008 4,069,957 bytes (3.9 MB)
    15. WS2_32.NLM 6.24.01 Feb 14, 2008 3,623,596 bytes (3.5 MB)
    16. NWMPM100.NLM 9.70.07 Nov 14, 2008 3,597,747 bytes (3.4 MB)
    17. NWODBCEI.NLM 9.70.07 Nov 14, 2008 3,459,159 bytes (3.3 MB)
    18. PORTAL.NLM 4.03 Sep 22, 2008 3,404,576 bytes (3.2 MB)
    19. JVM.NLM 1.43 Oct 16, 2008 2,701,919 bytes (2.6 MB)
    20. NLDAP.NLM 20218.11 Jan 30, 2009 2,579,131 bytes (2.5 MB)
    Top 20 NLM - Memory Trends
    NLM Name Original Memory Current Change
    ================================================== =========
    1. NWMKDE.NLM 842,068,071 bytes 813,035,623 bytes -27.7 MB
    2. SERVER.NLM 463,894,240 bytes 467,216,096 bytes 3.2 MB
    3. NSS.NLM 203,168,848 bytes 203,168,848 bytes 0 KB
    4. NCPL.NLM 41,850,741 bytes 41,854,837 bytes 4 KB
    5. NWSQLMGR.NLM 39,092,044 bytes 39,309,132 bytes 212 KB
    6. DS.NLM 24,896,359 bytes 24,851,303 bytes -44 KB
    7. APACHE2.NLM 19,855,301 bytes 19,863,493 bytes 8 KB
    8. CIOS.NLM 10,569,567 bytes 10,569,567 bytes 0 KB
    9. OWCIMOMD.NLM 9,277,656 bytes 9,318,616 bytes 40 KB
    10. APRLIB.NLM 8,959,760 bytes 8,959,760 bytes 0 KB
    11. APACHE2.NLM 7,702,469 bytes 7,702,469 bytes 0 KB
    12. FATFS.NLM 5,859,413 bytes 5,859,413 bytes 0 KB
    13. NWPA.NLM 4,957,918 bytes 4,990,686 bytes 32 KB
    14. PKI.NLM 4,135,493 bytes 4,069,957 bytes -64 KB
    15. WS2_32.NLM 3,619,500 bytes 3,623,596 bytes 4 KB
    16. NWMPM100.NLM 3,597,747 bytes 3,597,747 bytes 0 KB
    17. NWODBCEI.NLM 3,459,159 bytes 3,459,159 bytes 0 KB
    18. PORTAL.NLM 3,400,480 bytes 3,404,576 bytes 4 KB
    19. JVM.NLM 2,701,919 bytes 2,701,919 bytes 0 KB
    20. NLDAP.NLM 2,505,403 bytes 2,579,131 bytes 72 KB
    Logical Memory Summary Information
    ================================================== ===============================
    File System Cache Information
    FS Cache Free : 4,591,616 bytes (4.4 MB)
    FS Cache Fragmented : 104,386,560 bytes (99.6 MB)
    FS Cache Largest Segment : 3,362,816 bytes (3.2 MB)
    Logical System Cache Information
    LS Cache Free : 0 bytes (0 KB)
    LS Cache Fragmented : 722,448,384 bytes (689.0 MB)
    LS OS Reserved Data : 333,455,360 bytes (318.0 MB)
    LS Cache Largest Segment : 2,097,152 bytes (2.0 MB)
    LS Cache Largest Position : 2DE00000
    Summary Statistics
    Total Address Space : 4,294,967,296 bytes (4.00 GB)
    Total Free : 4,591,616 bytes (4.4 MB)
    Total Fragmented : 826,834,944 bytes (788.5 MB)
    Highest Physical Address : CFE53000
    User Space : 671,088,640 bytes (640.0 MB)
    User Space (High Water Mark) : 559,710,208 bytes (533.8 MB)
    NLM Memory (High Water Mark) : 2,243,096,576 bytes (2.09 GB)
    Kernel Address Space In Use : 2,572,759,040 bytes (2.40 GB)
    Available Kernel Address Space : 43,929,600 bytes (41.9 MB)
    Memory Summary Screen (.ms)
    ================================================== ===============================
    KNOWN MEMORY Bytes Pages Bytes Pages
    Server: 3487425552 851422 Video: 8192 2
    Dos: 86000 20 Other: 131072 32
    FS CACHE KERNEL NLM MEMORY
    Original: 3483172864 850384 Code: 46854144 11439
    Current: 108978176 26606 Data: 27242496 6651
    Dirty: 0 0 Sh Code: 49152 12
    Largest seg: 3362816 821 Sh Data: 20480 5
    Non-Movable: 81920 20 Help: 172032 42
    Other: 4235538432 4292855635 Message: 1236992 302
    Avail NSS: 610439168 149033 Alloc L!=P: 1661366272 405607
    Movable: 8192 2 Alloc L==P: 14843904 3624
    Total: 1751785472 427682
    VM SYSTEM
    Free clean VM: 785563648 191788
    Free clean LP: 0 0
    Free cache VM: 15306752 3737
    Free cache LP: 0 0
    Free dirty: 0 0
    In use: 1855488 453
    Total: 801435648 195663
    Memory Configuration (set parameters)
    ================================================== ==============================
    Auto Tune Server Memory = ON
    File Cache Maximum Size = 1073741825
    File Service Memory Optimization = 1
    Logical Space Compression = 1
    Garbage Collection Interval (ON) = 299.9 seconds
    VM Garbage Collector Period (ON) = 300.0 seconds
    server -u<number> = 671088640
    NSS Configuration File:
    C:\NWSERVER\NSSSTART.CFG
    File does not exist,
    or is zero byte in size.
    DS Configuration File:
    SYS:\_NETWARE\_NDSDB.INI
    File does not exist,
    or is zero byte in size.
    TSAFS Memory Information/Configuration
    ================================================== ==============================
    Cache Memory Threshold : 1%
    Read Buffer Size : 65536 bytes
    Max Data Sets for Read Ahead : 2
    Read Threads Per Job : 4
    NSS Memory Information/Configuration
    ================================================== ==============================
    Current NSS Memory Settings
    Cache Balance Percentage : 85%
    Cache Memory Allocated : 585.3 MB
    Available Cache from NSS : 582.2 MB
    Current NSS Caching Percentages
    Buffer cache hit percentage : 63%
    Name Tree cache hit percentage : 94%
    File cache hit percentage : 99%
    NSS Flush Status: Not Flushed
    Server High/Low Water Mark Values
    ================================================== ==============================
    NLM Memory High Water Mark = 2,243,096,576 bytes
    File System High Water Mark = 443,108 bytes
    User Space Information:
    User Space High Water Mark = 559,710,208 bytes
    Committed Pages High Water Mark = 87 pages
    Mapped VM Pages High Water Mark = 3,875 pages
    Reserved Pages High Water Mark = 400,103 pages
    Swapped Pages High Water Mark = 3,785 pages
    Available Low Water Mark = 294,670,336
    ESM Memory High Water Mark = 173 pages
    It seems that server.nlm is growing without limits. When that occurs, I have the mentioned errors.
    Though NWMKDE seems to have grown, it remains steady around the values shown.
    I'm not brave enough to apply memcalc's recommended fixes, because the following line:
    set file cache maximum size=822083584
    returns an error saying the minimum value should be 1073741824.
    Can someone help me because I'm completely blind here.
    Thanks in advance.
    Gabriel

    I take it this is primarily a database server, in which case it's OK that Btrieve is using so much memory? You wouldn't want this to be a general file server too. Is the memory error causing any actual problem?
    Server is asking for only 1 MB, and due to fragmentation there is little free memory (actually 2 MB left, which is a little odd, but neither here nor there).
    Also, let's see your bti.cfg, which is the Btrieve config file. I'll paste in below an ArcServe TID on Btrieve using excessive memory:
    Symptoms
    Btrieve was upgraded to version 8.5 during the installation of ARCserve r11.1. The cachesize in the BTI.cfg microkernel section is at 20 MB (20480). (Pervasive would like this setting placed at 20% of the server memory or the database size, whichever is less.) The server will keep adding 20 additional megabytes of memory to the total amount of memory the server is using for database transactions after each backup job. This can be verified by performing the following at the server console:
    LOAD MONITOR
    Scroll down to System Resources under Available Options and hit enter.
    Scroll down to Alloc Memory (Bytes) and hit enter.
    Locate NWMKDE.nlm in the Resource Tags list.
    Sort by memory bytes and you will slowly see nwmkde.nlm move to the top of the usage list. Unless the server is rebooted, the small memory allocations stay at the increased amount.
    Explanation
    Starting with Btrieve version 8.5 and higher, Pervasive has been working to make the Btrieve database more dynamic. They have created a two-tier memory allocation approach. The first level is controlled by the cache size setting in the BTI.cfg. If this becomes inadequate, the second level will be accessed. The default setting for the second level is 60% of the server's total memory.
    The following line in the BTI.cfg will control the second level of memory caching:
    MaxCacheUsage=60; default is 60% of memory.
    An example would be a server with 100 MB of memory and the following settings in sys:\system\bti.cfg:
    [microkernel]
    cachesize=20480
    MaxCacheUsage=60
    This will cause the nwmkde.nlm to use 20 MB (20480) of memory initially and grow up to 60 percent of the total server memory or 60 MB.
    Now you also have to throw Max worker threads into the mix. A setting of Max worker threads = 3 in the BTI.cfg > Btrieve Communications Manager section will also use server memory. It will use 1 MB per thread. In this example, 3 Megs of additional memory will be used. That will bring the total amount of memory used by nwmkde.nlm to 20 MB (20480) + 3 MB = 23 MB when the server is first booted. After running some backups, this number could go up to as high as 60 MB (60% of server memory) if the server dynamically requires it.
    Resolution
    The MaxCacheUsage=60 setting must be set down from this 60% number. Pervasive recommends setting this from 0 to 20. The server needs to be rebooted for this change to take effect.

  • Memory Allocation problem when using JNI

    For a project we need to interface LabWindows/CVI / TestStand with an application written in Java; we are using JNI. The code uses JNI_CreateJavaVM to start a JVM to run the Java interface code. The code ran for some time, but now (without any obvious change, neither on the CVI side nor on the Java side) JNI_CreateJavaVM fails with a -4 error code, which means the start of the JVM failed due to a memory allocation failure. First investigation showed that even if Windows Task Manager shows about 600 MB of free physical memory, you can allocate in CVI only about a 250 MB single block at the time we are calling JNI_CreateJavaVM. That might be a little too little, as we need to pass -Xmx192m to the JVM to run our code. Unfortunately, just increasing the physical memory of that machine from 1.5 GB to 2 GB doesn't change anything. The free memory shown by Task Manager increases, but the allocatable memory block size does not. Are there any tricks to optimize CVI/TestStand for this use case? Or maybe known problems with JNI?
    Solved!
    Go to Solution.

    hi,
    have you tried other functions to allocate memory?
    The -Xmx option only sets the maximum heap size. You can try to use -Xms, which sets the initial Java heap size.

  • [iphone sdk] memory allocations and application sandboxing

    folks,
    does the OS automatically de-allocate any memory allocated when my app exits? Reason I ask is the phone seems to get slower and slower over time, with more crashes. A hard restart seems to fix the problem.
    i'm guessing that it is because i'm not cleaning things up on exit or something, but maybe there is something else wrong.
    john

    The academic answer is: It shouldn't matter how much memory you leak in your app after the app has been closed. I can't speak to how the device functions because I can't test on one yet! But it's UNIX under the hood, and that means each process is assigned its own address space. Any memory allocated to a process is completely reclaimed when the process exits.
    I'm not sure what changes Apple made to the VM kernel subsystem for the iPhone. Unix is already tried and tested in this arena -- so if it's the default Darwin VM, I would be very surprised if this is a bug. But since this is embedded, they may have added some "shortcuts" for performance and efficiency... hard to say. Since you have the device, are you able to do any system-level diagnostics? Does the device lose free memory the more you start/stop your app?
    Also -- the device has 128MB of RAM. The 8 or 16GB is storage, which isn't used for RAM. The specs are hard to find, but I think I found the answer through Google on the amount of RAM in the iPhone and iPod Touch.
    Cheers,
    George

  • Will TimesTen degrade performance if memory allocated close to use up?

    If I insert data until it nearly uses up the memory allocated to TimesTen, will performance be affected?
    Message was edited by:
    carfield

    There are two kinds of disk files used by TimesTen. Transaction log files (dsname.logn) which are created as transactions are executed. These are normally purged automatically by checkpoints (unless you have disabled checkpointing...) but incorrect use of some features such as replication, XLA, AWT cache groups and incremental backups can prevent log files being purged automatically.
    The other type of file used are the checkpoint files (there are 2 of them). These are each the size of the datastore (PermSize + maybe 20 Mb) and they will exist until you drop the entire datastore (ttDestroy). That is normal and correct behavior. Although the files may start out very small they will grow as you add data to the database. Even dropping all the tables etc. will not cause the files to shrink. This is just an artifact of how checkpoints are structured and is not a problem. Most other databases (including Oracle) behave similarly - they do not somehow free parts of database files back to the O/S just because some data is deleted or a table is dropped.
    I'm afraid I do not see what the issue is here...
    Chris

  • HT204382 what software do i need to play movies on avi file . have connected a external memory disk and all the movies are in avi format.

    What software do I need to install in order to be able
    to play movies in .avi format?
    I have connected an external memory disk and all the entertainment
    files are in .avi format.
    The answer I get: does not support

    VLC is a good choice. While I don't know if it will play your files, Perian is a helper app for Quicktime which allows you to play a lot of things in the Quicktime Player.
    There is also Flip4Mac which allows you to play windows media files in Quicktime.
    Finally, note that the extension on a video file doesn't really tell you what you need to play it. The .avi, .mpg, etc. is only the wrapper around the actual video. The video is encoded by some means and needs to be decoded. The video inside the .avi can be encoded in one of many different schemes.
    VLC, Perian, Flip4Mac, and many others, are all able to decode various flavors of encoding. Having all of those in your toolbox will help to allow you to play most of the things you'll find.

  • Memory allocation and release

    Hi,
    i'm having some doubts on the memory issues, like allocation, release, EEPROM and RAM:
    Question 1:
    private method1()
    byte[]a = new byte[10];
    byte[]b = JCSystem.makeTransientByteArray(...);
    byte c;
    When will some memory be allocated to variables a, b and c, and when will that memory be released?
    Question 2:
    JCRE (until 2.2 at least) doesn't have a garbage collector, but if the card itself has that mechanism, will the applet automatically use it?
    Thanks in advance!

    It's not a question of how many EEPROM writes are done each day/hour/minute/second. It's a question of whether the data must be saved across sessions. RAM is mainly used for intermediate computations and session data. EEPROM is used to store persistent info (user info, credit, phonebook, etc...).
    RAM is also a good way to optimize processing time. If you have to manipulate a lot of persistent data during an APDU, it's a good idea to copy everything in a "cache" (transient buffer) and/or local variables, do all of your processing on the cached values, and then perform the persistent write at the end of the command.
    As to your last question on how much RAM is acceptable, it depends on the context. If you know that your applet will be alone, feel free to use as much as the platform can give you. If not, try to be reasonable. Cryptographic intensive applets generally use a lot of RAM to store intermediate computation results.
    From personal experience, I've written very simple applets that needed about 20 transient bytes, and complex ones that needed up to 1500 transient bytes. If you really need to set a limit, 200 bytes is already a considerable amount of transient space and should be more than enough for most applets. But then again, my guess is as good as any.

  • I want to put my iTunes library on an external hard drive but leave my favourite songs on the computer to access without the external hard drive and when using iMovie. Is there a way to do this?

    I have filled my hard drive, half with music, half with photos. I want to take my music off, and put it onto an external hard drive. However, I would still like to have access to favourite songs and playlists when the external hard drive isn't plugged in. I would also like to access this music to use with iMovie. Is there a way to do this?

    You will have to engage a split library, which means you will have to start answering questions on this forum, because you will need to learn a lot about how iTunes works in order not to have a big mess at the end of it all. It also won't be easy when the time comes to relocate it all to different drives.
    You can go to advanced preferences and turn off "organize media" and "copy items to media folder when adding items to library". Next, read about how to selectively consolidate items to the external drive.
    Sept. 2010, Consolidate selected content - https://discussions.apple.com/thread/2589812 and April 2014, https://discussions.apple.com/message/25414357
    "...selected the new tracks directly in iTunes, Control-clicked on the selection, and saw that now you can consolidate selected items." - http://hints.macworld.com/article.php?story=20090919000326840
    Remember that when you start iTunes without the external drive turned on iTunes will present you with a bunch of broken links.  If you have automatic downloads enabled with iCloud you may have to turn that off to prevent iTunes from repopulating your drive trying to deal with those "missing" tracks.
    If you add tracks in bunches to iTunes and want them to go to different media folders you will need to either add them by holding down the option key while dragging if the desired media folder is not the one set in preferences, or by changing the media folder in preferences.  Changing media folder preferences only applies to new files added, not old ones.
    Okay, I am going to stop typing because you might just reply to me, "Oh, okay, forget it," and I don't need the exercise.
    What are the iTunes library files? - http://support.apple.com/kb/HT1660
    More on iTunes library files and what they do - http://en.wikipedia.org/wiki/ITunes#Media_management
    What are all those iTunes files? - http://www.macworld.com/article/139974/2009/04/itunes_files.html
    Where are my iTunes files located? - http://support.apple.com/kb/ht1391

  • I'm going to buy a iMac soon,I'm wondering if I can save files from my windows pc to a external hard drive and then use the files on my mac?

    As above, if anyone can recommend an external hard drive that will also be much appreciated.

    Depends on what the long-term plans for the EHD are. I'd recommend any of these if you are looking for features and high quality. Personally I use the LaCies (I have 4), but OWC or G-Tech would also be welcome on my desk.
    G-Tech G-DRIVE series
    LaCie d2 Quadra series
    OWC Mercury Elite Pro series
    As for your PC files, that totally depends on the file type and whether they can be read by comparable OS X apps. I'd suggest bookmarking and reading Switch 101

  • I think I've got a memory leak and could use some advice

    We've got ourselves a sick server/application and I'd like to gather a little community advice if I may. I believe the evidence supports a memory leak in my application somewhere and would love to hear a second opinion and/or suggestions.
    The issue has been that used memory (as seen by FusionReactor) will climb up to about 90%+ and then the service will start to queue requests and eventually stop processing them altogether. A service restart will bring everything back up again, and it could run for 2 days or 2 hours before the issue repeats itself. Due to the inconsistent uptime, I can't be sure whether it's some troublesome bit of code that runs only occasionally or something that's a core part of the application. My current plan is to review the heap graph on the "sick" server and look for sudden jumps in memory usage, then review the IIS logs for requests at those times to try and establish a pattern. If anyone has some better suggestions though, I'm all ears! The following are some facts about this situation that may be useful.
    The "sick" server:
    - CF 9.0.1.274733 Standard
    - FusionReactor 4.0.9
    - Win2k8 Web R2 (IIS7.5)
    - Dual Xeon 2.8GHz CPUs
    - 4GB RAM
    JVM Config (same on "sick" and "good" servers):
    - Initial and max heap: 1536 MB
    -server -Xss10m -Dsun.io.useCanonCaches=false -XX:PermSize=192m  -XX:MaxPermSize=256m -XX:+UseParNewGC -Xincgc -Xbatch -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib -Dcoldfusion.dotnet.disableautoconversion=true
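For readers less familiar with these switches, here is the same argument list annotated. The annotations are mine, not the original poster's; consult your JVM's documentation for the authoritative meanings:

```
-server                                  # use the server (optimizing) HotSpot compiler
-Xss10m                                  # 10 MB stack per thread (default is typically well under 1 MB)
-Dsun.io.useCanonCaches=false            # disable file-path canonicalization caching
-XX:PermSize=192m -XX:MaxPermSize=256m   # initial/max permanent generation (classes, interned strings)
-XX:+UseParNewGC                         # parallel collector for the young generation
-Xincgc                                  # incremental collection of the old generation
-Xbatch                                  # compile methods in the foreground, not in a background thread
-Dcoldfusion.rootDir={application.home}/../
-Dcoldfusion.libPath={application.home}/../lib
-Dcoldfusion.dotnet.disableautoconversion=true
```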
    What I believe a "healthy" server graph should look like (from "good" server):
    And the "sick" server graph looks like this:

    @AmericanWebDesign, I would concur with BKBK (in his subsequent reply) that a more reasonable explanation for what you’re seeing (in the growth of heap) is something using and holding memory, which is not unusual for the shared variable scopes: session, application, and/or server. And the most common is sessions.
    If that’s enough to get you going, great. But I suspect most people need a little more info. If this matter were easy and straightforward, it could be solved in a tweet, but it’s not, so it can’t.
    Following are some more thoughts, addressing some of your concerns and hopefully pointing you in some new directions to find resolution. (I help people do it all the time, so the good news is that it can be done, and answers are out there for you.)
    Tracking Session Counts
    First, as for the observation we’re making about the potential impact of sessions, you may be inclined to say “but I don’t put that much in the session scope”. The real question to start with, though, is “how many sessions do you have?”, especially when memory use is high like that (which may be different from how many you have right now). I’ve helped many people solve such problems when we found they had tens or hundreds of thousands of sessions. How can you tell?
    a) Well, if you were on CF Enterprise, you could look at the Server Monitor. But since you’re not, you have a couple of choices.
    b) First, any CF shop could use a free tool called ServerStats, from Mark Lynch, which uses the undocumented servicefactory objects in CF to report a count of sessions, overall and per application, within an instance. Get it here: http://www.learnosity.com/techblog/index.cfm/2006/11/9/Hacking-CFMX--pulling-it-all-together-serverStats . You just drop the files (within the zip) into a web-accessible directory and run the one CFM page to get the answer instantly.
    c) Since you mention using FusionReactor 4.0.9, here’s another option: those using FR 4 (or 4.5, a free update for you since you’re on FR 4) can use its available (but separately installed) FusionReactor Extensions for CF, a free plugin (for FR, at http://www.fusion-reactor.com/fr/plugins/frec.cfm). It causes FR to grab that session count (among many other really useful things about CF) to log it every 5 seconds, which can be amazingly helpful. And yes, FREC can grab that info whether one is on CF Standard or Enterprise.
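Under the hood, all of these tools do some version of the same thing: keep a live count of active sessions, often broken down per application. A minimal, stand-alone sketch of that idea in plain Java (CF exposes the real numbers through its internal service objects; the class and method names below are illustrative, not CF's API):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: a per-application registry of live session counts,
// the kind of number ServerStats or FREC would report for a CF instance.
public class SessionCounter {
    private final ConcurrentHashMap<String, AtomicInteger> byApp = new ConcurrentHashMap<>();

    public void sessionCreated(String appName) {
        byApp.computeIfAbsent(appName, k -> new AtomicInteger()).incrementAndGet();
    }

    public void sessionDestroyed(String appName) {
        AtomicInteger n = byApp.get(appName);
        if (n != null) n.decrementAndGet();
    }

    public int count(String appName) {
        AtomicInteger n = byApp.get(appName);
        return n == null ? 0 : n.get();
    }

    public int total() {
        return byApp.values().stream().mapToInt(AtomicInteger::get).sum();
    }
}
```

In a real servlet container the created/destroyed calls would come from session lifecycle listeners; the point is only that the count is cheap to keep and very revealing when it reaches tens of thousands.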
    And let’s say you find you do have tens of thousands of sessions (or more). You may wonder, “how does that happen?“ The most common explanation is spiders and bots hitting your site (from legit or unexpected search engines and others). Some of these visit your site perhaps daily to gather up the content of all the pages of your site, crawling through every page. Each such page hit will create a new session. For more on why and how (and some mitigation), see:
    http://www.carehart.org/blog/client/index.cfm/2006/10/4/bots_and_spiders_and_poor_CF_performance
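Mitigations along those lines usually start by recognizing bots from the request's User-Agent header, so they can be given a very short (or no) session. A rough sketch of that check; the pattern list is deliberately incomplete and illustrative only:

```java
import java.util.regex.Pattern;

// Illustrative bot check: real deployments use longer, maintained UA lists.
public class BotDetector {
    private static final Pattern BOT_UA = Pattern.compile(
            "(?i)(bot|crawl|spider|slurp|mediapartners)");

    public static boolean isBot(String userAgent) {
        // Treat a missing User-Agent as a bot too: browsers always send one.
        return userAgent == null || BOT_UA.matcher(userAgent).find();
    }
}
```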
    About “high memory”
    All that said, I’d not necessarily conclude so readily that your “bad” memory graph is “bad”. It could just be “different”.
    Indeed, you say you plan to “look for sudden jumps in memory usage“, but if you look at your “bad” graph, it simply builds very slowly. I’d think this supports the notion that BKBK and I are asserting: that this is not some one request that “goes crazy” and uses lots of memory, but instead is the “death by a thousand cuts” as memory use builds slowly.  Even then, I’d not jump at a concern that “memory was high”.
    What really matters, when memory is “high”, is whether you (or the JVM) can do a GC (garbage collection) to recover some (or perhaps much) of that “high, used memory”. Because it’s possible that while it “was” in use in the past (as the graph shows), it might no longer be “in use” at the moment.
    Since you have FR, you can use its “System Metrics page” to do a GC, using the trash can in the top left corner of the top right-most memory graph. (Those with the CFSM can do a GC on its “Memory Usage Summary” page, and SeeFusion users can do it on its front page.)
    If you do a GC, and memory drops a lot, then you had memory that “had been” but no longer “still was” in use, and so the high memory shown was not a problem. And the JVM can sometimes be lazy (because it’s busy) about getting around to doing a GC, so this is not that unusual. (That said, I see you have added the -Xincgc arg to your JVM. Do you realize that tells the JVM to use the incremental collector? Do you really want that? I understand that people trade JVM args like baseball cards, trying to solve problems for each other, but I’d argue that’s not the place to start. In fact, I rarely find that any new JVM args are needed to solve most problems.)
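The same before/after comparison can be made programmatically. A tiny stand-alone sketch (keeping in mind that System.gc() is only a hint to the JVM, and FR's trash-can button is the easier route on a live CF server):

```java
// Sketch: how much of the "used" heap survives an explicit GC request?
public class GcCheck {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        // Allocate and immediately drop some garbage so there is something to collect.
        for (int i = 0; i < 1000; i++) {
            byte[] junk = new byte[64 * 1024];
        }
        long usedBefore = rt.totalMemory() - rt.freeMemory();
        System.gc();                  // a request, not a command
        Thread.sleep(200);            // give the collector a moment
        long usedAfter = rt.totalMemory() - rt.freeMemory();
        System.out.printf("used before: %d KB, after: %d KB%n",
                usedBefore / 1024, usedAfter / 1024);
    }
}
```

If "after" is much lower than "before", the high reading was mostly collectable garbage, not a leak.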
    (Speaking of which, why did you set the -Xss value? And do you know whether you were raising or lowering it from the default?)
    Are you really getting “outofmemory” errors?
    But certainly, if you do hit a problem where (as you say) you find requests hanging, etc., then you will want to get to the bottom of that. And if indeed you are getting “outofmemory” problems, you need to solve those. To confirm whether that’s the case, you’ll really want to look at the CF logs (specifically the console or “out” logs). For more on finding those logs, as well as a general discussion of memory issues (understanding/resolving them), see:
    http://www.carehart.org/blog/client/index.cfm/2010/11/3/when_memory_problems_arent_what_they_seem_part_1
    This is the first of a planned series of blog entries (which I’ve not yet finished) on memory issues which you may find additionally helpful.
    But I’ll note that you could have other explanations for “hanging requests” which may not necessarily be related to memory.
    Are you really getting “queued” requests?
    You also say that “the service will start to queue requests and eventually stop processing them altogether”. I’m curious: do you really mean “queuing”, in the sense of watching something in CF that tells you that? You can find a count of queued requests with tools like CFSTAT, JRun metrics, the CF Server Monitor, or again FREC. Are you seeing one of those? Or do you just mean that you find that requests no longer run?
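The distinction matters because "queued" has a precise meaning: the server holds waiting requests in a bounded queue, and only once that queue is full do new requests stop being accepted at all. A toy illustration of that two-stage failure using a standard Java thread pool (the sizes are arbitrary, not CF's actual settings):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueDemo {
    public static void main(String[] args) {
        // 2 workers, room for 4 queued requests, reject everything beyond that.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(4));
        int rejected = 0;
        for (int i = 0; i < 10; i++) {
            try {
                pool.execute(() -> {
                    try { Thread.sleep(500); } catch (InterruptedException e) { }
                });
            } catch (RejectedExecutionException e) {
                rejected++;          // queue full: request refused outright
            }
        }
        System.out.println("queued: " + pool.getQueue().size()
                + ", rejected: " + rejected);
        pool.shutdownNow();
    }
}
```

Two requests run, four wait in the queue, and the remaining four are rejected, which is the "stop processing altogether" stage.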
    I address matters related to requests hanging, and some ways to address them, in other entries:
    http://www.carehart.org/blog/client/index.cfm/2010/10/15/Lies_damned_lies_and_CF_timeouts
    http://www.carehart.org/blog/client/index.cfm/2009/6/24/easier_thread_dumps
    Other server differences
    You presented us a discussion of two servers, but you’ve left us in the dark on potential differences between them. First, you showed the specs for the “sick” server, but not the “good” one. Should we assume they are identical, as you said the jvm.config is?
    Also, is there any difference in the pattern of traffic (and/or the sites themselves) on the two servers? If they differ, then that could be where the explanation lies. Perhaps the sites on one are more inclined to be visited often by search engine spiders and bots (if those sites are more popular or have just become well known to search engines). There are still other potential differences that could explain things, but these are all enough to hopefully get you started.
    I do hope that this is helpful. I know it’s a lot to take in. Again, if it was easier to understand and explain, there wouldn’t be so much confusion. I do realize that many don’t like to read long emails (let alone write them), which only exacerbates the problem. Since all I do each day is help people resolve such problems (as an independent consultant, more at carehart.org/consulting), I like to share this info when I can (and when I have time to elaborate like this), especially when I think it may help someone facing these (very common) challenges.
    Let us know if it helps or raises more questions. :-)
    /charlie

  • Can I keep my itunes library on an external hard drive and still use it??

    Hello,
    I recently purchased a new laptop because my old computer went on the fritz. Luckily I backed up my library on my external hard drive. My question is: can I a) connect my iPod touch and iPhone to my new computer, and b) run my library strictly from my hard drive and not import everything to my new computer?
    thanks for the help!!

    tothatc wrote:
    Hello,
    I recently purchased a new laptop because my old computer went on the fritz. Luckily I backed up my library on my external hard drive. My question is: can I a) connect my iPod touch and iPhone to my new computer, and b) run my library strictly from my hard drive and not import everything to my new computer?
    thanks for the help!!
    On Windows, hold Shift (not Alt) while launching iTunes, select Choose Library..., and point it at the iTunes folder you copied to the external drive.
