Large Arrays and Memory

I'm working on code for a lab, and the lab has reported possible problems with LabVIEW eating through memory on long experiments.  Someone before me tried to fix the problem, but I am unsure whether her fix is actually helping.  (I'm more familiar with languages like C++ and had not used LabVIEW prior to this summer.)
Where I believe the problem lies is with an array inside a loop.  The arrays will be of different sizes depending on the experiment, so the code handles the array like this:
-> It is an array of a cluster of 2 elements.
-> The array is wired to a shift register.
-> The shift register is initialized before the loop starts by wiring a cluster of two 0's into it.
-> Each loop cycle, new data (a new cluster) is appended to the array using "Build Array".
There are several of these arrays, all being plotted, so after "Build Array" the code runs them through "Build Cluster Array" and wires the result to the corresponding plot (an XY Graph).
That used to be all of it, so the arrays would grow large and crash the program.  Someone before me added an option to clear the arrays, but I am unsure whether the way she designed it actually releases the memory, since the lab is still reporting some problems.  The user enters a number in a "Clear After:" control.  On every iteration that is a multiple of that number, the program passes the shift register an array with one element, set up the same way as the array used for initialization.
My concern is that the code never explicitly says to delete the array or release the memory.  It feels very similar to the situation in C++ where the programmer dynamically creates an array (using new) but never deallocates it (using delete), and instead just points the pointer somewhere else; there, the memory would still be tied up and unusable.
So I guess my question is: looking at the process above, do I need to use "Delete from Array" to release the memory and allow the program to run faster on longer experiments with large datasets, or does LabVIEW deallocate that memory automatically, in which case I should be looking elsewhere in my program for whatever slows everything down on longer experiments?
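To make my C++ mental model concrete, here is roughly the pattern I believe the block diagram implements, sketched in Java since I can't paste LabVIEW wiring as text (the names and numbers are mine, not from the VI):

    import java.util.ArrayList;
    import java.util.List;

    public class GrowingArrayAnalogy {
        // Each loop cycle appends one (x, y) "cluster", like "Build Array"
        // feeding a shift register; every clearAfter iterations the history
        // is replaced with a single element, like the "Clear After:" case.
        public static void main(String[] args) {
            int clearAfter = 1000;                      // the "Clear After:" control
            List<double[]> history = new ArrayList<>(); // the shift-register array
            history.add(new double[] {0.0, 0.0});       // init: one cluster of two 0's

            for (int i = 1; i <= 5000; i++) {
                double[] newPoint = {i * 0.1, Math.sin(i * 0.1)}; // new cluster
                history.add(newPoint);                            // "Build Array"

                if (i % clearAfter == 0) {
                    // The "clear" branch: the old list becomes unreachable and
                    // can be reclaimed automatically; nothing needs an explicit
                    // delete, but the list still regrows between clears.
                    history = new ArrayList<>();
                    history.add(new double[] {0.0, 0.0});
                }
            }
            System.out.println("points held at exit: " + history.size());
        }
    }

If that analogy holds, my worry is less about a true leak and more about the array constantly regrowing between clears.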
Thanks,
Val

I have attached a photo of the portion of code I was referring to.  It includes two screenshots so you can see all the possibilities in the two case structures.
The first picture shows a cycle that adds new data points and does not clear the array.
The second picture shows the program passing the array through (which it does every second cycle) and then "clearing" it (which, as I said above, I wasn't sure was correct).
(None of this is actually my code; I was hired to upgrade the lab from LabVIEW 5.1 to LabVIEW 2009, and they just asked me to look at this.  It seems to work fine on shorter experiments, on the order of a couple of hours.)  If you need anything else from me, don't hesitate to ask.
Thanks,
Val
Attachments:
loop.docx 105 KB

Similar Messages

  • Need help optimizing the writing of a very large array and streaming it to a file

    Hi,
    I have a very large array that I need to create and later write to a TDMS file. The array has 45 million entries, or 4.5x10^7 data points. These data points are of double format. The array is created by using a square pulse waveform generator and user-defined specifications of the delay, wait time, voltages, etc. 
    I'm not sure how to optimize the code so it doesn't take forever. It has currently been running for at least 40 minutes, and is still going, to create and write this array. I know there must be a better way; the array is large and consumes a lot of memory, but it's not absurdly large. The computer I'm running this on is running Windows Vista 32-bit, and has 4 GB RAM and an Intel Core 2 CPU @ 1.8 GHz. 
    I've read the "Managing Large Data Sets in LabVIEW" article (http://zone.ni.com/devzone/cda/tut/p/id/3625), but I'm unsure how to apply the principles here.  I believe the problem lies in making too many copies of the array, as creating and writing 1x10^6 values takes < 10 seconds, but writing 4x10^6 values, which should theoretically take < 40 seconds, takes minutes. 
    Is there a way to work with a reference of an array instead of a copy of an array?
    Attached is my current VI, Generate_Square_Pulse_With_TDMS_Stream.VI, and its two dependencies, although I doubt they are bottlenecking the program. 
    Any advice will be very much appreciated. 
    Thanks
    Attachments:
    Generate_Square_Pulse_With_TDMS_Stream.vi 13 KB
    Square_Pulse.vi 13 KB
    Write_TDMS_File.vi 27 KB

    Thanks Ravens Fan; using Replace Array Subset and initializing the array beforehand sped up the process immensely. I can now generate an array of 45,000,000 doubles in about one second.
    However, when I try to write all of that out to TDMS at the end, LabVIEW runs out of memory and crashes. Is it possible to write out the data in blocks and make sure memory is freed up before writing out the next block? I can use a simple loop to write out the blocks, but I'm unsure how to verify that memory has been cleared before proceeding.  Furthermore, is there a way to ensure that memory and all resources are freed up at the end of the waveform generation VI? 
    Attached is my new VI, and a refined TDMS write VI (I just disabled the file viewer at the end). Sorry that it's a tad bit messy at the moment, but most of that mess comes from doing some arithmetic to determine which indices to replace array subsets with. I currently have the TDMS write disabled.
    Just to clarify the above, I understand how to write out the data in blocks; my question is: how do I ensure that memory is freed up between subsequent writes, and how do I ensure that memory is freed up after execution of the VI?
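    In text form, the block-writing pattern I have in mind looks something like this Java sketch (the file name, block size, and waveform math are placeholders, not my actual code):

        import java.io.BufferedOutputStream;
        import java.io.DataOutputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;

        public class BlockedWriteSketch {
            public static void main(String[] args) throws IOException {
                final long total = 45000000L;           // 4.5e7 points, as above
                final int blockSize = 1000000;          // placeholder; tune as needed
                double[] block = new double[blockSize]; // ONE reusable block

                try (DataOutputStream out = new DataOutputStream(
                        new BufferedOutputStream(new FileOutputStream("pulse.bin")))) {
                    long written = 0;
                    while (written < total) {
                        int n = (int) Math.min(blockSize, total - written);
                        for (int i = 0; i < n; i++) {
                            // placeholder square pulse, one value per sample
                            block[i] = ((written + i) / 1000 % 2 == 0) ? 5.0 : 0.0;
                        }
                        for (int i = 0; i < n; i++) {
                            out.writeDouble(block[i]);
                        }
                        written += n;
                        // The block is overwritten in place on the next pass, so
                        // peak memory stays at one block and nothing needs an
                        // explicit "free" between writes.
                    }
                }
            }
        }

    The point of the reused buffer is that the question of "freeing memory between writes" goes away: the same allocation is recycled for every block.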
    @Jeff: I'm generating the waveform here, not reading it. I guess I'm not generating a "waveform" but rather a set of doubles. However, converting that into an actual waveform can come later. 
    Thanks for the replies!
    Attachments:
    Generate_Square_Pulse_With_TDMS_Stream.vi ‏14 KB
    Write_TDMS_File.vi ‏27 KB

  • How do I create my own favorite template for DVD slideshows? I used to be able to select this from a pulldown menu, but cannot now do so. I am directed straight to templates, which take more memory. I have a large slideshow, and need all the space I can get

    First, how do I create my own favorite theme template for DVD slideshows? I used to be able to select this from a pulldown menu, but cannot now do so. I am directed straight to already existing themes, which take more memory. I have a large slideshow and need all the space I can get. I just want to use a picture as my DVD cover, and then insert a slideshow. Also, when I try to burn my 8.5 GB double-sided slideshow, all that burns is the music. It is a large slideshow, a memorial on the life of my now deceased brother. This means a lot to me and to my family, and I am having so much trouble trying to burn it. I have gone into Project View and selected appropriately. The bar shows I have room to burn this DVD, but it does not burn. I have burned so many DVDs in the past, but this one just will not burn. I am so confused at this point. I will say this is the first 8.5 GB disc I have attempted to create and burn. My specs list 7.7 GB or 4.7 GB as operable, but there are no 7.7 GB DVDs, so I had to purchase 8.5 GB. Help? What am I doing wrong? I have spent so much time on this, and just cannot figure it out.

    Final Cut is a separate, higher-end video editor: the pro version of iMovie.
    Give iPhoto a look for creating the slideshow.  It's easy to assemble the photos in an album in iPhoto, put them in the order you want, and then make a slideshow of them.  You can select from various themes and transitions between slides and add music from your iTunes library.
    When you have the slideshow as you want it, use the Export button at the bottom of the iPhoto window and export with Size = Medium or Large.
    Save the resulting Quicktime movie file in your Movies folder.
    Next, open iDVD, choose your theme and drag the QT movie file into the menu window being careful to avoid any drop zones.
    Then follow this workflow to help assure the best quality video DVD:
    Once you have the project as you want it save it as a disk image via the File ➙ Save as Disk Image  menu option. This will separate the encoding process from the burn process. 
    To check the encoding mount the disk image, launch DVD Player and play it.  If it plays OK with DVD Player the encoding is good.
    Then burn to disk with Disk Utility or Toast at the slowest speed available (2x-4x) to assure the best burn quality.  Always use top quality media:  Verbatim, Maxell or Taiyo Yuden DVD-R are the most recommended in these forums.
    The reason I suggest iPhoto is that I find it much easier to use than iMovie (except for the older iMovie 6 HD version).  Personal preferences showing here.

  • Profile Performance and Memory shows very large 'VI Time' value

    When I run the Profile Performance and Memory tool on my project, I get very large numbers for VI Time (and Sub VIs Time and Total Time) for some VIs.  For example 1844674407370752.5.  I have selected only 'Timing statistics' and 'Timing details'.  Sometimes the numbers start with reasonable values, then when updating the display with the snapshot button they might get large and stay large.  Other VI Times remain reasonable.
    LabVIEW 2011 Version 11.0 (32-bit).  Windows 7.
    What gives?
     - les

    les,
    the number indicates some kind of rollover (it is suspiciously close to 2^64 / 10^4, which suggests an unsigned 64-bit counter wrapping below zero)... So, do you have a VI where this happens all the time? Can you share it with us?
    thanks,
    Norbert
    CEO: What exactly is stopping us from doing this?
    Expert: Geometry
    Marketing Manager: Just ignore it.

  • Is something wrong, or why does Firefox use such an extremely large amount of memory? I'm running Firefox 7.0.1 on Windows 7 and memory use is more or less constantly over 1.5 GB, twice as much as in the earlier version 6. Best regards Jonas Walther

    Is something wrong, or why does Firefox use such an extremely large amount of memory?
    I'm running Firefox 7.0.1 on Windows 7 and memory utilization is more or less constantly over 1.5 GB, twice as much as in the earlier version 6.
    Best regards
    Jonas Walther

    Hi musicfan,
    Sorry you are having problems with Firefox. Maybe you should have asked earlier and we could have fixed it.
    Reading your comments I do not see that rolling back to an insecure Firefox 22 will actually help you much. You are probably best using IE, unless you have also damaged that.
    *[[Export bookmarks to Internet Explorer]]
    You should not use old versions they are insecure. Security fixes are publicised and exploitable.
    * [[Install an older version of Firefox]]
    * https://www.mozilla.org/security/known-vulnerabilities/firefox.html
    Most others will not be having such problems. We are now able to say that with confidence because, after developers missed a regression in Firefox 4, telemetry was introduced so that data could be obtained. It may be an idea to turn on your telemetry, if you have not already done so, and decide to stick with Firefox.
    *[[Send performance data to Mozilla to help improve Firefox]]
    Trying safe mode takes seconds. Unfortunately, if you are not willing to do even rudimentary troubleshooting, there is not anything we can do to help you.
    *[[Troubleshoot Firefox issues using Safe Mode]]

  • Logic of array and plotting functions?

    Hi,
    I started to build a versatile acquisition program with LabVIEW (with NI cards it seemed to be the best option), and it took quite an effort to adjust my way of thinking while shifting from other programming environments. I am starting to see the logic behind LabVIEW, but I have serious difficulties figuring out how LabVIEW's array and plotting functions work. Here's a simple example that replaces an array subset and plots it, which just doesn't work as I'd expect. I thought this would first initialize a 10-row by 1000-column matrix, in which a certain row would be replaced (with zero padding when the new row is longer than the original). The resulting matrix would then be displayed, each row having its own window in the stacked plot. It seems to do something totally different...
    Attachments:
    koe4.vi 303 KB

    As you noticed, "Replace Array Subset" can only replace array elements that exist. If it's not there, there's nothing to replace.
    What you'll need to do is make the array larger by using the "Insert Into Array" function. Use this function cautiously, though: it requires LabVIEW to dynamically reallocate memory while the program is running, which takes time. If you only do the insert when needed, you should be OK.
    Use the "Array Size" function to get the size of the array to be inserted; then you can choose between the 'Insert' and 'Replace' functions.
    Ed
    Ed Dickens - Certified LabVIEW Architect - DISTek Integration, Inc. - NI Certified Alliance Partner
    Using the Abort button to stop your VI is like using a tree to stop your car. It works, but there may be consequences.

  • How to implement large arrays in dynamic programming

    Hi, there
    I am working on my dynamic programming code in Java. The code works, but it is very slow.
    I suspect the very long run time is due to the large arrays in my code. Typically I have two double arrays with sizes up to 320000, and one array of a class of the same size.
    Each iteration can take 30 minutes, and the whole optimization needs 50-100 iterations to converge.
    Could anyone with expertise help me improve the memory performance in Java?
    That is very important for my research.
    Looking forward to a very quick response.
    Cheers
    Jack from Edinburgh

    An array of 320000 doubles isn't considered a large array. It must be somewhere in your algorithm implementation that slows down your process. Some relevant code snippets could clarify things ...
    kind regards,
    Jos

  • Performance problem when initializing a large array

    I am writing an application in C on a SUN T1 machine running Solaris 10. I compiled my application using cc in Sun Studio 11.
    In my code I allocate a large array on the heap. I then initialize every element in this array. When the array contains up to about 2 million elements, the performance is as I would expect -- increasing run time due to more elements to process, cache misses, and instructions to execute. However, once the array size is on the order of 4 million or more elements, the run time increases dramatically -- a large jump not in line with the other increases due to more elements, instructions, cache misses, etc.
    An interesting aspect is that I experience this problem regardless of element size. The break point in performance happens between 2 and 4 million elements, even if the elements are one byte or 64 bytes.
    Could there be a paging issue or other subtle memory allocation issue happening here?
    Thanks in advance for any help you can give.
    -John

    To save me writing some code to reproduce this odd behaviour, do you have a small testcase that shows it?
    tim

  • How do I output multiple arrays from a case structure to create one larger array

    I currently have a VI with one hardware input that needs to take a measurement, then be moved and take a similar measurement at a different point.  To accomplish this I used a while loop inside a case structure.  The while loop takes the measurement and finds the numbers I need, while the case structure is changed per the new measurement location.  I want to take the data points created in each case and output them into a single table.  I assumed the best way would be to build an array from the data in each case and then build a larger array from those, but I can't get the information out of the case structure without it all arriving in different places.
    thanks for your help
    Attachments:
    Array.vi 30 KB

    Hi Ross,
    attached you will find a solution for your table building problem.
    I would suggest thinking about program design: having the same case content in several cases doesn't make sense. I also would not want my users to press several stop buttons depending on the chosen measurement...
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome
    Attachments:
    Array.vi 45 KB

  • Please help - I cannot create large arrays of images (no more than 108 elements)

    Hello,
    I am working on an application where I acquire 300 high-resolution images (3 MP) and then process each of those frames.
    When I try to put all those frames in one array of images, the system does not let me store more than 108 elements. There is no message like out of memory or memory error; it is just that when I visualize the array of images, the first 108 elements are good frames, and the rest are white frames.
    Ideally I would prefer to process frames as I acquire them, but I am concerned about the processing time remaining for the rest of the tasks.
    As an alternative, I tried to store the frames in two shorter arrays, and I ended up observing that when the first array (say 50 elements) is full and the second array starts (another 50 elements, for example), LabVIEW needs a certain time in the middle (about three seconds between closing the first array and starting to acquire into the second). If I don't wait that time between arrays, both arrays contain the same information. I know this is weird, and I know about the fact that IMAQ images are passed by reference. 
    The most interesting thing is that when I reduce the resolution considerably (say that instead of 3 MP the images are 2 MP), the maximum number of elements in the arrays is exactly the same: 108, which makes me wonder whether it is really a memory constraint. The code is fairly simple; there is no way I am cutting the acquisition off at 108.
    So my question is: how can I put 300 frames into just one array? Or how can I eliminate the time I have to wait between arrays?
    I have heard about memory allocation and so on, but I am not sure how to proceed. I have Windows 7.
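    To show what I mean by the reference problem, here is a rough Java analogy (not IMAQ code, just my mental model; all the names are mine):

        import java.util.ArrayList;
        import java.util.List;

        public class ByReferencePitfall {
            // An IMAQ image wire carries a reference, much like a Java object
            // reference: storing the same reference twice means both arrays
            // show whatever the buffer last contained.
            public static void main(String[] args) {
                int[] frameBuffer = new int[4];  // stands in for one image buffer
                List<int[]> firstArray = new ArrayList<>();
                List<int[]> secondArray = new ArrayList<>();

                frameBuffer[0] = 1;              // "acquire" frame 1
                firstArray.add(frameBuffer);     // stores the reference, not a copy

                frameBuffer[0] = 2;              // "acquire" frame 2 into the SAME buffer
                secondArray.add(frameBuffer);

                // Both print 2: the two arrays share one underlying buffer.
                System.out.println(firstArray.get(0)[0] + " " + secondArray.get(0)[0]);

                // Storing an actual copy avoids the sharing.
                secondArray.set(0, frameBuffer.clone());
            }
        }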
    Thanks in advance,
    Roberto

    It's hard to visualize.  Please post the code - or at least a shell representing your code - so we can see what is happening.  Are you sure you are getting 108 contiguous pictures, or are some acquisitions being skipped?  Is the first picture in the array the first acquisition?  Is the last picture the last acquisition?
    I really don't think it is about memory allocation.  I think it could be about your method of acquiring and processing said acquisitions.
    Bill
    (Mid-Level minion.)
    My support system ensures that I don't look totally incompetent.
    Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.

  • Hard drive on HP notebook PC failed. After replacement, can I upgrade OS and memory?

    I have an HP Pavilion Entertainment Notebook PC, Model dv9617nr (HP pn GS730UA).  An error message on boot-up says "Operating System not found" so my Windows Vista Ultimate 32-bit (upgraded two years ago) will not load. I am assuming this means that my hard drive has failed. How can I verify this assumption? Again, assume I am correct, and I order the recommended replacement (HP pn 590736-001) from SpareParts Warehouse ($142.50 plus shipping). Are there instructions on the internet for doing this replacement? After I do this replacement, I thought I could install Windows 7 I already have and upgrade the memory to 4GB SODIMM from the 1GB SODIMM that was originally used. What problems would I have with Hardware Compatibility using Windows 7? If the memory module I use has the same specs as the HP recommended replacement (HP pn 598861-001), i.e. 800MHz, 200 pin, PC2-6400, SDRAM, but a larger 4GB memory capacity, would I have any compatibility problems using it? Please advise me on these issues. Thanks, Jeff

    Hi, Jeff:
    Let me break down your question...
    First, to confirm your HDD died, go into your BIOS menu. There should be a hard drive and memory diagnostics utility.  Run the HDD diagnostics utility and see what it reports. More than likely the HDD has crashed and is no longer usable.
    You can order a replacement drive or you can get any SATA II hard drive --either 5400 RPM or 7200 RPM.
    You could go to 250 to 500 GB if you choose. Unfortunately right now, HDD prices are unreasonably high due to the flooding in Thailand where many HDD manufacturing plants are located. Some were damaged heavily, causing HDD prices to triple almost overnight.
    This is a drive you can get that I think is better than what you have now: This one is 7,200 RPM. Faster but uses more battery power.
    http://www.newegg.com/Product/Product.aspx?Item=N82E16822136279
    You can get the 5400 RPM version too; it's similar, though not as good as the Caviar Black. This is more along the lines of what you would get from HP.
    http://www.newegg.com/Product/Product.aspx?Item=N82E16822136387
    Read the reviews on both or you are free to get the one from HP.
    There are instructions to install the hard drive and memory on your notebook's support and driver page. Click on Manuals on the right side of the page.
    http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01278446&tmp_task=prodinfoCategory&cc=us&dlc=en...
    Then you want to look at the maintenance and service guide.
    You can upgrade the memory to a maximum of 4 GB using 2 x 2GB of PC2-6400 memory. You can get it from HP or you can get it at the link below. This is the memory I use. As a matter of fact I just ordered this very same memory to install in my sister-in-law's dv6700.
    http://www.newegg.com/Product/Product.aspx?Item=N82E16820148159
    Now, you can install W7 just fine. I installed W7 Home Premium 64 bit on my dv6810us that has pretty much the same hardware as yours and 4 GB of memory (the same as I posted above).
    Here is the catch though. It depends on what version of W7 you have. If you have a W7 upgrade version, you must install a qualifying operating system on the hard drive before you can upgrade it.
    If you have a full version of Windows 7, then you can install it on a blank hard drive.
    If you have the upgrade version, contact HP support and order a set of Vista recovery disks; install Vista and then upgrade to W7.
    You can use the vista drivers from your notebook's support page for any drivers you need that W7 didn't supply.
    One thing: DO NOT flash (update) the BIOS when running Windows 7. You can only flash it using Windows Vista.
    Hope this helps. If you have any other questions, please let us know.
    Paul

  • NIO ByteBuffer and memory-mapped file size limitation

    I have a question/issue regarding ByteBuffer and memory-mapped file size limitations. I recently started using NIO FileChannels and ByteBuffers to store and process buffers of binary data. Until now, the maximum individual ByteBuffer/memory-mapped file size I have needed to process was around 80MB.
    However, I now need to begin processing larger buffers of binary data from a new source. Initial testing with buffer sizes above 100MB results in IOExceptions (java.lang.OutOfMemoryError: Map failed).
    I am using 32-bit Windows XP; 2GB of memory (typically 1.3 to 1.5GB free); Java version 1.6.0_03; with -Xmx set to 1280m. Decreasing the Java heap max size down to 768m does result in the ability to memory-map larger buffers to files, but never bigger than roughly 500MB. However, the application that uses this code contains other components that require the -Xmx option to be set to 1280.
    The following simple code segment executed by itself will produce the IOException for me when executed using -Xmx1280m. If I use -Xmx768m, I can increase the buffer size up to around 300MB, but never to a size that I would think I could map.
    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.util.UUID;

    try {
        String mapFile = "C:/temp/" + UUID.randomUUID().toString() + ".tmp";
        FileChannel rwChan = new RandomAccessFile( mapFile, "rw" ).getChannel();
        ByteBuffer byteBuffer = rwChan.map( FileChannel.MapMode.READ_WRITE,
                0, 100000000 );
        rwChan.close();
    } catch( Exception e ) {
        e.printStackTrace();
    }
    I am hoping that someone can shed some light on the factors that affect the amount of data that may be memory mapped to/in a file at one time. I have investigated this for some time now and based on my understanding of how memory mapped files are supposed to work, I would think that I could map ByteBuffers to files larger than 500MB. I believe that address space plays a role, but I admittedly am no OS address space expert.
    Thanks in advance for any input.
    Regards- KJ

    See the workaround in http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4724038
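    A sketch of the windowed-mapping idea (my own illustration, not necessarily the exact workaround in the bug report; the file name and window size are placeholders):

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        public class WindowedMapping {
            // Map a large file one window at a time instead of as one region,
            // so only windowSize bytes of address space are needed at once.
            public static void main(String[] args) throws Exception {
                final long fileSize = 1000000000L;        // ~1 GB total
                final long windowSize = 64L * 1024 * 1024; // 64 MB windows

                try (RandomAccessFile raf = new RandomAccessFile("C:/temp/big.tmp", "rw");
                     FileChannel ch = raf.getChannel()) {
                    for (long pos = 0; pos < fileSize; pos += windowSize) {
                        long len = Math.min(windowSize, fileSize - pos);
                        MappedByteBuffer window =
                                ch.map(FileChannel.MapMode.READ_WRITE, pos, len);
                        window.put(0, (byte) 1); // process the window here
                        // Each mapping is released only when its buffer is
                        // garbage collected, which is the fragility the bug
                        // report discusses.
                    }
                }
            }
        }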

  • Adding large arrays to a text file as new columns

    I am trying to merge several large text data files into a single file in LabVIEW 8.0.  The files are too large to read in all at once (9-15 million lines each), so I decided I need to read them in as smaller chunks, combine the arrays, and write them to a new file.
    The reason there are three separate data files was for speed and streaming purposes in the project, and the users wanted the raw, unadulterated data written to file before any kind of manipulation took place. 
    My VI:
    1.  Takes a header generated from another VI and writes it to the output file.
    2.  Creates a time column based on sample rate and the total number of data points
    3.  Reads in 3 files that each contain text data (each data point is 9 bytes wide, and there are up to 15 million data points per file).
    4.  Each iteration of the for loop writes a chunk of 10 to 100 thousand points (somewhere in that range seems to be the fastest it will do), formatted with the time column on the left and then the three data columns, until it's done.  I haven't quite figured out how to write the last iteration if there are fewer data points than the chunk size.
    Anyway, the main thing I was looking for was suggestions on how to do this faster.  It takes about a minute per million points on my laptop to do this operation, and though I recognize it is a lot of data to be moving around, this speed is painfully slow.  Any ideas?
    Attachments:
    Merge Fast Data.vi 67 KB

    Thanks for the tip.  I put the constants outside the array and noticed a little improvement in the speed.  I know I could improve the speed by using the binary file VIs, but I need the files as tab-delimited text files to import them into MATLAB for another group to do analysis.  I have not had any luck converting binary files into text files.  Is there an easy way to do that?  I don't know enough about binary file formats to use them.  I looked at the high-speed data logger examples, but they seemed complicated and hard to adapt to what I need to do.  Creating the binary header file seemed like a chore. 
    I am up for more advice on the VI I posted, or suggestions on different ways to convert a binary file to a MATLAB readable text file.
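    For reference, one way the binary-to-text conversion could look is sketched below in Java, assuming the binary file is just a flat stream of big-endian doubles, N columns per row (both assumptions; the real record layout may differ):

        import java.io.BufferedInputStream;
        import java.io.BufferedWriter;
        import java.io.DataInputStream;
        import java.io.EOFException;
        import java.io.FileInputStream;
        import java.io.FileWriter;
        import java.io.IOException;

        public class BinaryToText {
            public static void main(String[] args) throws IOException {
                final int columns = 3; // placeholder column count
                try (DataInputStream in = new DataInputStream(
                         new BufferedInputStream(new FileInputStream("fast_data.bin")));
                     BufferedWriter out = new BufferedWriter(new FileWriter("fast_data.txt"))) {
                    try {
                        while (true) {
                            StringBuilder row = new StringBuilder();
                            for (int c = 0; c < columns; c++) {
                                if (c > 0) row.append('\t'); // tab-delimited for MATLAB
                                row.append(in.readDouble());
                            }
                            out.write(row.toString());
                            out.newLine();
                        }
                    } catch (EOFException done) {
                        // normal end of input
                    }
                }
            }
        }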
    Thanks!

  • How can I read a large file without breaking it up into bits

    Hi,
    How do I read a large file without it being cut into bits that each have their own beginning and ending?
    like
    1.aaa
    2.aaa
    3.aaa
    4....
    10.bbb
    11.bbb
    12.bbb
    13.bbb
    Suppose the file has been read up to line 11, and I want to go back and read at line 3 and then read again at line 10.
    How do I specify the byte position in the large file, since the read function has the signature read(byte b[], int off, int len)
    and the offset only indexes into the array of bytes itself?
    Thanks
    San Htat

    tjacobs01 wrote:
        Peter__Lawrey wrote:
            Try RandomAccessFile.
        Not only do I hate RandomAccessFiles because of their inefficiency and limited use in today's computing world...
    The one dominated by small devices with SSDs? Or the one dominated by large database servers and B-trees?
    tjacobs01 wrote:
        I would also like to hate on the name 'RandomAccessFile'; almost always, there's nothing 'random' about the access.
    I tend to think of the tens of thousands of databases users were found to have created on local drives in one previous employer's audit. Where's the company's mission-critical software? It's in some random Access file.
    tjacobs01 wrote:
        Couldn't someone have come up with a better name, like NonlinearAccessFile? I guess the same goes for RAM too...
    Non-linear would imply access times other than O(n), but typically not constant, whereas RAM is nominally O(1), except that it is highly optimised for consecutive access, as are spinning-disk files, except that RAM is fast in either direction.
    [one of these things is not like the other|http://www.tbray.org/ongoing/When/200x/2008/11/20/2008-Disk-Performance#p-11] (silicon disks are much better at random access than rust disks) and [Machine architecture|http://video.google.com/videoplay?docid=-4714369049736584770] at about 1:40 (RAM is much worse at random access than sequential).
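    For what it's worth, a minimal sketch of the offset-based reads RandomAccessFile gives you (the file name and offsets are made up; note that seek takes a byte offset, so jumping to a particular "line" requires knowing or indexing where that line starts):

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class SeekRead {
            // Read a chunk at an arbitrary byte offset, then jump elsewhere,
            // without consuming the file from the beginning.
            public static void main(String[] args) throws IOException {
                byte[] buf = new byte[4096];
                try (RandomAccessFile raf = new RandomAccessFile("large.dat", "r")) {
                    raf.seek(1024);  // jump to byte 1024 (say, the start of "line 3")
                    int n = raf.read(buf, 0, buf.length);
                    System.out.println("read " + n + " bytes at offset 1024");

                    raf.seek(65536); // jump again (say, the start of "line 10")
                    n = raf.read(buf, 0, buf.length);
                    System.out.println("read " + n + " bytes at offset 65536");
                }
            }
        }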

  • Converting from spreadsheet string to array and then back to spreadsheet string

    My question is: why does the Spreadsheet String to Array function create more data than the original string had when you change the array back into a spreadsheet string? I'm trying to analyze a comma-delimited file using array functions, since my column and row sizes are constant but my data varies; that is my reason for not using string parsing functions, which would get more involved and difficult. However, after I convert the comma-delimited file I read into a 2D array of data and then convert back to a string using Array to Spreadsheet String, I get added columns in the file, which prevents another program from receiving these files. Also, the data I am reading is not all contiguous; it has gaps in some places for empty data. Comparing the new file to the original after it has gone from string to array and back to string again, they look almost identical, except that the file size grew by 400 bytes, and where the original file has empty spaces, the new file has a lot of commas added. Any idea?
    Charles

    The result you get is normal when the spreadsheet string contains rows of uneven length. Since the array rows must all have the same number of elements, nil values are added during the conversion. And of course, the back-to-string conversion keeps those added values in the string, with the associated commas.
    example : 3 x 3 array
    1,2,3
    4
    5,6,7
    is converted into
    1 2 3
    4 0 0
    5 6 7
    then back to
    1,2,3
    4,0,0
    5,6,7
    Chilly Charly    (aka CC)
             E-List Master - Kudos glutton - Press the yellow button on the left...        
