RAID-0 Stripe & Cluster Size

I just ordered two 10k RPM Raptor SATA drives from Newegg; they should arrive shortly. I plan to configure my system with them as RAID-0 for increased performance. I just read the "RAID Setup Guide 865/875 LSR/FIS2R Rev 1.01" by Vango, and it seems that my mobo can be configured as RAID-0 with either the Intel ICH5R controller or the Promise controller.
I will use Promise as my RAID controller, since it seems faster. Now I have another question.
What about stripe size/cluster size? My research turns up too many suggestions, all with very different settings, and I can't decide what to do. Can someone suggest a good setting? The Intel RAID manual suggests a 128 KB stripe for best performance and says nothing about cluster size. Vango posted somewhere that he used a 64 KB stripe, but gave no info on cluster size.
I will be using two 36 GB WD Raptors in RAID-0 as my main and only Windows array (disk) (I will install Windows, apps, and games to it), then use a PATA drive for backups and movie storage. My computer is used mostly for working with Office, creating web pages, playing EverQuest (a big game), and watching video (DivX movies). I use WinXP Pro SP1.
Can someone suggest general stripe/cluster settings that give good performance for this kind of usage? What is the easiest (best) way to change the 4K default cluster size on the array after I get Windows installed to it? Should I bother changing the cluster size at all? I have Partition Magic and other software available to do this, but I don't know the best procedure.
Thanks in Advance

I've always just used the 4K cluster size that Windows creates if you use NTFS. I honestly don't think this makes a big difference. If you want a different size, use PM to format the drive that way before installing XP. I would recommend against converting from one size to another; I did this once and ended up with all my files renamed to DOS 8.3 format (this was NOT good for my 1000+ MP3s).
I use a 64K stripe size as a compromise. My research showed that people were getting the "best scores" using a small stripe size, but this seemed to come at the cost of higher CPU usage, and I'm unconvinced those scores reflect how I actually use my HDDs. They say that if all your files are 128K and bigger, you don't need a smaller stripe size. If you're using the RAID as your XP drive you'll actually have lots of small files, so I would recommend something smaller than 128K. Maybe try 32K?
Let us know how it goes.
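Rather than guessing, the tradeoff is easy to model in a few lines; a toy sketch in Python (pure arithmetic, ignores seeks and controller overhead, assumes reads aligned to stripe boundaries):

    # How many drives a single read touches on a 2-drive RAID-0,
    # for a given stripe size.
    def drives_touched(read_kb, stripe_kb, n_drives=2):
        stripes_spanned = -(-read_kb // stripe_kb)   # ceiling division
        return min(stripes_spanned, n_drives)

    for stripe in (16, 32, 64, 128):
        small = drives_touched(4, stripe)     # e.g. a 4 KB registry read
        big = drives_touched(512, stripe)     # e.g. a 512 KB game asset
        print(f"stripe {stripe:>3} KB: 4 KB read -> {small} drive(s), "
              f"512 KB read -> {big} drive(s)")

The takeaway: a 4 KB OS read lands on one drive no matter what, so a smaller stripe mostly helps mid-size reads, at the cost of more requests per large file.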

Similar Messages

  • Raid 0 (Stripe) for OS X boot disk? Best Performance and block size

    Hi,
    so this is a new thread for an older question I had, and I would like some feedback;
    I have a new Mac Pro with 4 matched 1TB Caviar Black drives. I WILL be doing full Time Machine backups, as well as regular independent full-system backups.
    That being said, I have 4 drives open and am looking for suggestions. I am leaning toward two striped sets (one for the OS and one for 'work space'), the former with a 32K stripe block size, the latter with 64K (it will hold video, audio, scratch, and, yes, games).
    Does this sound alright? Is there an issue with striping the boot drive? Is the block size of 32 (or 64) optimal?
    Thanks!
    Dan

    Hi D3 Shooter, regarding your question,
    D3 Shooter wrote:
    You brought to mind something I did not take into consideration: Time Machine. I really like the simplicity of TM, as it saved me once before. So, could you tell me, for photo files and some video, how much (percentage-wise) does striping improve accessing and filing such files compared to no striping, just using internal drives (7200 rpm/WD/1TB/Caviar)? I have not done striping before and want to weigh this because of the backup storage issues now. Thanks.
    Just give it a try and see if it is worth it for you.
    Striping:
    • just enhances things (reduces access/transfer time) because in practice the access is distributed in parallel across several DDMs (old school, but it works great!). I think for video and file work the advantage is that you can access the whole object sooner (rather than faster).
    • this distribution also reduces a lot of old-style queuing on the device over the path. This was resolved in the late 1980s, so no real rocket science here.
    The issues with striping are few and apply broadly across all the RAID implementations (except JBOD, which of course is not RAID) when compared to a single spindle. The discussions are enormous and plentiful via Google, and experiences and opinions vary widely.
    For the I.T. people, it's the advantage they get for access using a smart disk controller that caches goodies like indexes and such, so that they can sustain a zillion trivial transactions/sec (i.e. banking & internet stuff)... stuff that is of no interest to me.
    For the creative people and the many applications that work with BLOBs (like video, film and remote-sensing objects), getting use of the objects sooner (not faster) is of prime importance for workflow efficiency. If you have this need, then striping stuff across disks is for you!
    TimeMachine.app works fine, as it seems fairly agnostic to what's implemented under the disk file system. My issue with Time Machine is that I don't want it looking after my production stuff, only keeping an eye on my admin I.T.-type stuff such as ~/ and data files.
    As posted on this thread:
    • availability is the major concern with any file system (cloud or RAID or other). RAID with parity and double-parity schemes (RAID 1, 3, 5, 6) and implementations such as RAID 6 + LSF (log-structured file) are all wonderful for the business workflows that need them.
    • timely access in a workflow is another
    • cost benefits are another
    However a *great benefit* for me of *consolidating small storage components under one huge file system is that you don't have to COPY anything around*. This is marvelous, especially when you think you have to move 2 TB of stuff from one place to another. That takes a lot of time with el-cheapo disks that don't have fast interfaces such as SATA, SAS or FC, for example.
    As always, and as has been addressed by others on this thread (Hatter), if you lose a component storage device the whole file system is hosed or severely degraded, unless you spend a lot of money on full ranks of DDMs with hot spares and a very good RAID controller card. Again, it's money.
    Yeah, sure, you can carry some parity RAID implementation across 3 disks, but the storage capacity usage is dreadful. This is why the more complex RAID implementations come in groups of 10+ DDMs... (yep, people can argue... but this is the mainstream).
    My external disk arrays are merely two LUNs (SAS domains) with two file systems implemented on 1TB DDMs - all RAID 0 - no parity (no availability) - I just want speed. I look after my own "availability" with my archive solution. If the operation dies, I start again. I'm happy with that. RAID 5 has write-penalty performance hits (the well-known update-in-place problem), and RAID 6+ is lousy for huge objects but good for I.T., though OK if you lose two disks in a stripe (rank).
    They all have their flaws... and mirroring a RAID 0 (RAID 1/0) seems to be popular with storage vendors because they can sell you more disks, and that's proper business; workflow depends on it.
    However you can achieve this stuff if you change your workflow slightly.
    Other than these, the rest is tech specs and stuff under the covers.
    So do what is right for you and your business.
    I don't like spending money on nasty el-cheapo FW800 LaCie disk enclosures with their junky components and their ilk, having been burned by several corrupted devices and losing TBs of content - this is why I invested in a high-speed LTO4 Ultrium data tape archive solution.
    Sorry for the long post...
    w

  • Optimizing RAID: stripe/chunk size

    I'm trying to figure out how to optimize the RAID chunk/stripe size for our Oracle 8i server. For example, let's say that we have:
    - 4 drives in the RAID stripe set
    - 16 KB Oracle block size
    - DB_FILE_MULTIBLOCK_READ_COUNT = 16
    Now the big question is what the optimal setting is for the chunk/stripe size. As far as I can see, we would have two alternatives:
    - case 1: stripe size = 256 KB
    - case 2: stripe size = 64 KB
    Each multiblock read is 16 KB x 16 = 256 KB. In case 2, every such read would be spread out over all 4 drives. In case 1, each read fits within a single stripe, so we'd be able to isolate a lot of i/o to separate drives, with each drive serving different i/o calls. My guess is that case 1 would work better where there's a lot of random disk i/o.
    Does anyone have any thoughts or experience to share on this topic?
    Thanks,
    Alex Algard
    WhitePages.com
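    A quick back-of-the-envelope check of the two cases (plain Python; assumes reads aligned to stripe boundaries):

        # One Oracle multiblock read = block size x multiblock read count.
        BLOCK_KB, MBRC, N_DRIVES = 16, 16, 4
        read_kb = BLOCK_KB * MBRC            # 256 KB per multiblock read

        for stripe_kb in (64, 256):
            # ceiling of stripes spanned, capped at the number of drives
            spanned = min(-(-read_kb // stripe_kb), N_DRIVES)
            print(f"stripe {stripe_kb:>3} KB: each 256 KB read touches {spanned} drive(s)")

        # 64 KB  -> 4 drives: one read keeps every spindle busy
        # 256 KB -> 1 drive:  concurrent reads can be isolated per drive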

    It does not matter. Do not mix soft-RAID and hard-RAID. One OS I/O operation can read from one disk or from a number of disks. Do not forget about track-to-track seek time.
    Practice is the measure of truth :)
    For example, http://www.fcenter.ru/fc-articles/Technical/20000918/hi-end.gif

  • Does the administrator have to set cluster size for RAID 0+1, 3 and 5 according to the Microsoft tech document?

    Hi everyone,
    I always thank you for providing helpful information in this forum.
    I have a plan to set up hard drives as RAID 0+1, 3 and 5. Should I set the cluster size under each RAID type according to the URL provided by Microsoft TechNet? I mean, I want to set the cluster size to the amount read in one hard disk head access.
    The url is ....
    https://support.microsoft.com/kb/140365
    Thanks

    Hi OlegSmirnovPetrov,
    Additionally, on Windows Server 2008 and later versions, disk partition alignment is enabled by default. On Windows Server 2003 and earlier versions you need to enable it manually.
    For more information, please refer to the following KBs:
    1. Best practices for using dynamic disks on Windows Server 2003-based computers
    http://support.microsoft.com/kb/816307
    2. Disk Partition Alignment (Sector Alignment): Make the Case: Save Hundreds of Thousands of Dollars
    http://blogs.msdn.com/b/jimmymay/archive/2009/05/08/disk-partition-alignment-sector-alignment-make-the-case-with-this-template.aspx
    3. General Hardware/OS/Network Guidelines for a SQL Box
    http://blogs.msdn.com/b/cindygross/archive/2011/03/10/general-hardware-os-network-guidelines-for-a-sql-box.aspx (please refer to the Storage specifications section)
    4. Disk Partition Alignment Best Practices for SQL Server
    http://msdn.microsoft.com/en-us/library/dd758814.aspx
    I’m glad to be of help to you!
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]
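    For reference, the misalignment those articles describe is plain arithmetic, so it is easy to check; a minimal Python sketch (32,256 bytes is the classic pre-Vista default offset):

        # A partition is aligned when its starting offset is an exact
        # multiple of the stripe size and of the cluster size.
        def is_aligned(offset_bytes, unit_bytes):
            return offset_bytes % unit_bytes == 0

        offset = 32_256                       # 63 sectors x 512 bytes
        for unit, name in ((65_536, "64 KB stripe"), (4_096, "4 KB cluster")):
            print(name, "->", "aligned" if is_aligned(offset, unit) else "MISALIGNED")

        # The Windows Server 2008+ default offset of 1 MB passes both checks:
        print(1_048_576 % 65_536 == 0, 1_048_576 % 4_096 == 0)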

  • Best Cluster Size for RAID 0

    Hi all,
    When I get my replacement SATA HDD back I will be creating another RAID 0 array.  My Windows XP cluster size is 4K, but I have a choice when I am creating my RAID array.  All hardware in Sig.
    The system is being used mainly as a gaming machine, but I will be doing other things with it also. Looking for the best balance (slightly in favour of the gaming).
    I heard that the cluster size of the drive should match the cluster size of the RAID array. So if my drive cluster is 4K, should I set my RAID cluster size to 4K, or should I set them higher?
    any information is more than welcome,
    Andrew
    P.S. I did do a search through the forums, but could not find much recent information.

    The "EASIEST" way to change your cluster size is to have a 3rd drive with Win XP on it....here is what you need to do.
    1. Have 3rd drive with Win XP
    2. Go to bios and change boot order to 3rd drive b4 Raid
    3. Once in wondows goto "Disk Management" (as soon as you click on it you will have a window pop up and ask you to select drive and it will also want to know if you want to convert drive to a dynamic disk...I always choose no)
    4. You will see your raid drives as 1 BIG drive..now all you do is right click on the drive and click partion...primary partition...set size...now is where you can choose cluster size...you will have 3 boxes to check off...
    5. NTFS or FAT32...Volume Label....Cluster Size......
    6. After you check all of that off then you click off quick format and Voila...you have done it now do same on rest of drives....
    7. once you have setup all your partitons and selected cluster sizes...shutdown PC unplug 3rd drive...change boot order so CD rom will be a bootable device b4 raid device and you are good to go on a clean install....remember once you get into the window when setting up XP you have choices to format drives again and choose where XP get sinstalled ...choose to leave as is...no need to format again cuz it will default to 4k cluster again.....
    let me know how this goes for ....I have been doing this trick now for a LONG time and I know for a fact that this is a fast and easy way...without using any 3rd party software.....
    my advice to you is try 16/16 then 32/32
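    If you want to sanity-check a cluster size before committing to a format, the space cost is simple arithmetic; a rough Python sketch (the file sizes are made up for illustration):

        # Slack = space wasted because every file occupies whole clusters.
        def slack_bytes(file_sizes, cluster):
            return sum(-size % cluster for size in file_sizes)

        files = [1_500, 48_000, 3_200_000, 700]   # hypothetical file sizes in bytes
        for kb in (4, 16, 32, 64):
            cluster = kb * 1024
            print(f"{kb:>2} KB clusters: {slack_bytes(files, cluster):>7} bytes of slack")

    Bigger clusters mean fewer requests per large file but more slack for the thousands of small files a Windows install creates, which is the whole tradeoff in the 16/16-vs-32/32 advice.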

  • NTFS Cluster size is set at 4k instead of recommended 64k.

    We have found that our partition is not aligned and need to get some
    feedback on a few things.
    Here are our numbers:
    Starting partition offset = 32,256
    Stripe Size = 128k (131,072)
    Cluster size = 4k (4096)
    We are experiencing a high "Avg Queue Length" and high "avg disk usage"
    in Windows Performance Monitor...
    My question is this: how important is the NTFS cluster size at 4K? I know
    the recommendation is 64K, but how badly would a 4K cluster size ALSO
    affect performance, given we know that our partition is misaligned?
    Thanks..
    Ld

    > My question is this: How important is the NTFS cluster size at 4k. I know
    > the recommendation is 64k but how bad would a 4k cluster size
    It's very important from a performance perspective, especially when you're facing a huge database and a large number of disks. 64K is the rule of thumb for the allocation unit (cluster) size for SQL Server, and can be considered the minimum unit size, at least in this case. Imagine if it's cut down to 4K, which is 1/16th: the disk arms then need to do 16 times the work to grab the same amount of data.
    > Starting partition offset = 32,256
    > how bad would a 4k cluster size ALSO affect performance given we know that our partition is mis-aligned?
    Starting offset has a similar impact as allocation unit size does, but usually it only matters on old-fashioned storage devices, as far as I know; I'm not sure, maybe I'm wrong here. The last issue related to that I know of was on an HP EVA 5000 four years ago; since then, the starting offset value is part of the initial optimization performed when the device is installed by the storage vendor, and no manual change from the customer is needed. But to be on the safe side, please do check with your storage vendor to make sure.
    In either case, allocation unit size and starting offset are very difficult to change once they are set and live in a production environment, whether because of the downtime or the transition storage device required.
    Regards,
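    Putting the numbers from this post together makes the problem concrete; a quick sketch in plain Python (arithmetic only, no disk access):

        STRIPE = 128 * 1024      # 131,072 bytes
        OFFSET = 32_256          # 63 x 512-byte sectors, the old default

        # 1) Alignment: a nonzero remainder means clusters straddle
        #    stripe boundaries, so single reads can hit two stripes.
        print("offset % stripe =", OFFSET % STRIPE, "(0 would mean aligned)")

        # 2) Request count: a 1 MB read needs 16x the allocation units at 4K.
        for cluster_kb in (4, 64):
            units = (1024 * 1024) // (cluster_kb * 1024)
            print(f"{cluster_kb:>2} KB clusters: {units} allocation units per 1 MB read")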

  • Software RAID0 for Video Editing+Storage: What Stripe Block Size ?

    Hi
    I'm planning to set up a RAID 0 with 2 x 2TB HDDs to store and edit the videos from my digital camera. Most files will be 5 to 15 GB. In the past I had RAID 0s on RHEL with the (old) default stripe block size of 64 KB. Since the new array will only contain very big files, would it be better to go with a 512 KB stripe block size? 512 is also the default setting now in the GNOME disk utility, which I use for my partitions.
    Another question: does the so-called "RAID lag" exist? I think I've seen occasional stuttering in movies when they're played from a RAID 0/5 without cache enabled in the player (with cache it's fine). Games also seem to occasionally freeze when installed on RAID (I had this problem in the past with Wine games installed on RAID 0; they sometimes froze when loading data, which never occurred on a normal HDD).
    Many thanks in advance for your suggestions
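    For files this large, the chunk size mostly changes per-request overhead rather than parallelism; a quick Python sketch using a 10 GB file as the example (illustrative arithmetic only):

        # Chunk-sized requests needed to stream one file from a 2-drive RAID 0.
        FILE_BYTES = 10 * 1024**3            # a 10 GB video file
        for chunk_kb in (64, 512):
            chunks = FILE_BYTES // (chunk_kb * 1024)
            print(f"{chunk_kb:>3} KB chunks: {chunks:,} requests, {chunks // 2:,} per drive")

    Both settings keep both drives streaming; the 512 KB chunk simply issues one eighth as many requests, which is why the larger default tends to suit multi-gigabyte files.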

    That is a hard question to answer... nothing is best for everyone. However, if I am to generalize, I would put it this way: if you want to cut everything from short promos to Hollywood pictures...
    A high end windows pc (only cause mac pro hasn't been updated in ages)
    Avid Media Composer with Nitrus DX
    Two monitors
    Broadcast monitor
    HD Deck
    pimping 5.1 speakers
    A good mixer
    You are looking at over 70,000 or 80,000, and it could be even more... HD decks run at least 15k.
    If price is not an issue then there you go....
    However this is not realistic for most people, nor the best solution by any means... I run a MacBook Pro with Avid (as primary), Final Cut 7, Final Cut X (for practice; I haven't had to use it for a job yet), and Premiere (just in case).
    I am a Final Cut child who grew up on it and love it; however, everything I have been doing in the last few years is on Avid...
    I have a second monitor...
    I am very portable, and the rest of the gear I usually get wherever I work... I am looking into getting a good broadcast monitor connected with an AJA Thunderbolt interface.
    Like I said, this is a very open question; there is no (BEST), it all depends what you will be doing... If you get Avid (which can do everything, however it is clunky as **** and counterintuitive) but you are only cutting wedding videos and short-format stuff, it would be overkill galore... Just get FCP X in that case... simple, easy, one app...
    Be more specific and you will get clearer answers..

  • Cluster Size issues

    Hi,
    I am running SQL Server 2008 R2 on Windows Server 2008 R2. My databases are residing on a RAID5 configuration.
    Recently I had to replace one of the HDDs in the RAID with a different HDD. The result is that I now have 2 HDDs with a physical and logical sector size of 512 bytes and 1 with 3072 bytes physical and 512 logical.
    Since the rebuild, the databases and SQL have been fine. I could read and write to and from the databases, and backups had no issues either. Today however (2 months after the RAID rebuild) I could no longer access the databases, and backups did not work either. I kept getting this error when trying to detach the database or back it up:
    TITLE: Microsoft SQL Server Management Studio
    Alter failed for Database 'dbname'.  (Microsoft.SqlServer.Smo)
    For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500.0+((KJ_PCU_Main).110617-0038+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Alter+Database&LinkId=20476
    ADDITIONAL INFORMATION:
    An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
    Cannot use file 'D:\dbname.MDF' because it was originally formatted with sector size 512 and is now on a volume with sector size 3072. Move the file to a volume with a sector size that is the same as or smaller than the original sector size.
    Cannot use file 'D:\dblogname_1.LDF' because it was originally formatted with sector size 512 and is now on a volume with sector size 3072. Move the file to a volume with a sector size that is the same as or smaller than the original sector size.
    Database 'dbname' cannot be opened due to inaccessible files or insufficient memory or disk space.  See the SQL Server errorlog for details.
    ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5178)
    For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500&EvtSrc=MSSQLServer&EvtID=5178&LinkId=20476
    BUTTONS:
    OK
    My temporary solution was to move the DB to my C: drive and attach it from there. This is not ideal, as I am losing the redundancy of the RAID.
    Can anybody tell me if it is because of the hard drive with the larger sector size? (This is the only logical explanation I have.) And why would it only happen now?
    I am sorry if this is the wrong Forum for this question

    Apparently it was not until recently that the database spilled over to that new disk. No, I don't know too much about RAIDs.
    But it seems obvious that you need to make sure that all disks in the RAID have the same sector size.
    Erland Sommarskog, SQL Server MVP, [email protected]
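    One way to spot a mismatch like this before it bites is to ask Windows for each volume's sector sizes. A small Python wrapper around the standard fsutil tool (run as administrator; output labels can vary slightly by Windows version):

        import subprocess

        # 'fsutil fsinfo ntfsinfo <drive>' reports the logical and physical
        # sector sizes NTFS sees for that volume.
        def sector_lines(drive="C:"):
            out = subprocess.run(
                ["fsutil", "fsinfo", "ntfsinfo", drive],
                capture_output=True, text=True, check=True).stdout
            return [line.strip() for line in out.splitlines()
                    if "Bytes Per" in line and "Sector" in line]

        for line in sector_lines("D:"):
            print(line)

    Running this against each volume (or checking each disk before it joins the array) would have flagged the 512-vs-3072 mismatch before SQL Server did.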

  • K7N2 Delta ILSR - cluster size

    How exactly would I change my cluster size? Also, if I just wanted to do it for the RAID array - not the OS volume - could I do that without a clean Windows install?

    Quote
    Originally posted by loopyloops
    Quote
    Originally posted by Bonz
    Yea, I have to agree with Raven on this... don't mess with your cluster sizes, because it usually leads to a reload of your system... Windows wants that decided early on in the load if you want to change it for real... I have never seen a benefit myself when I experimented with it... if I dropped it lower than the default 4096 then it drags... you can waste a lot of space cranking it up though; pictures are brutal, but your speed does improve... most of these things are trial-and-error type things, though, so do as you see fit for your setup...
    Bonz
    This has to do with my upcoming raid 0 array.
    I think I'll just do it with a 16K stripe / 4K cluster.

  • How do I retrieve binary cluster data from a file without the presence of the cluster size in the data?

    Hey guys, I'm trying to read a binary data file created by a C++ program that didn't append sizes to the structures that were used when writing out the data. I know the format of the structures and have created a cluster typedef in LabVIEW. However, the Unflatten From String function expects to see additional bytes of data identifying the size of the cluster in the file. This just plain bites! I need to retrieve this data and have it formatted correctly without doing it manually for each and every single element. Please help!
    Message Edited by AndyP123 on 06-04-2008 11:42 AM

    Small update. I have fixed-size arrays in the clusters of data in the file, and I have been using arrays in my typedefs in LabVIEW, defining x number of indexes in the arrays and setting them as the default value under Data Operations. LabVIEW may maintain the default values, but it still treats an array as an unknown-size data type. This is what causes LabVIEW to expect the cluster size to be included in the file contents during an unflatten. I can circumvent this in the simplest of cases by using clusters of the same type of data in LabVIEW to represent a fixed-size array in the file. However, I can't go around using clusters of data to represent fixed-size arrays BECAUSE I have several multi-dimensional arrays of data in the file. To represent those as clusters I would have to add a single value for every element and make sure they line up sequentially according to every dimension of the array. That gets mighty hairy, mighty fast.
    EDIT:  Didn't see that other reply before I went and slapped this in here.  I'll try that trick and let you know how it works.......
    Message Edited by AndyP123 on 06-04-2008 12:11 PM
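    Outside LabVIEW, the same "fixed layout, no embedded sizes" file is normally read by hard-coding the layout. A minimal Python sketch under an assumed record format (an int32, a double, and a fixed int16[4] array - the actual C++ struct will differ):

        import struct

        # '<' = little-endian, no padding; match the C++ compiler's packing.
        RECORD = struct.Struct("<i d 4h")     # int32, double, int16[4] = 20 bytes

        def read_records(path):
            with open(path, "rb") as f:
                data = f.read()
            # Fixed-size records: the count comes from the file length,
            # never from a size field stored in the data itself.
            for offset in range(0, len(data) - RECORD.size + 1, RECORD.size):
                yield RECORD.unpack_from(data, offset)

    The fixed-size-array problem disappears here because the element count lives in the format string, which is the same trick the cluster-of-scalars workaround plays in LabVIEW.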

  • Mirrored RAID:  MediaKit reports block size error

    I am trying to create a 2nd set of backup drives for my photos. I have two new Iomega 2TB drives, which look essentially identical to the drives I'm currently using as my primary backups in a mirrored RAID set.
    I can start the process with freshly erased and reformatted drives (with the default mac format, extended, journaled, unencrypted, not case-sensitive).  And after a minute or three, I see
    "MediaKit reports block size error, usually caused by not being a multiple of 512."
    The RAID options are Mirrored RAID, Mac extended journaled, and options settings are default.
    I see several series of posts with complaints about encrypting RAIDs and disk block sizes, but not about unencrypted errors. I actually started out trying to do this with the 2006 MBP running 10.6.8 and got a different error: "POSIX reports: the operation couldn't be completed. Operation not permitted." I wasn't sure whether the 2TB RAID I already have was set up with the older or newer computer - it was definitely before I put Lion on this one - so I tried this one and now have a different error.
    Any idea what the problem might be? 

    Update: I spent some time on the phone with an Apple support RAID expert, and we couldn't figure out what the error was; we couldn't bypass it by playing with partitions on the drives, or any of another couple of maneuvers that I've already forgotten. He noted that his own searches were showing a lot of mentions of similar problems, but only with Iomega drives, and he was finding the same links I found earlier about problems creating encrypted drives. Now I'm trying to decide if it's worth throwing more good money after bad on a call with Iomega support, and waiting to see if the Iomega forum is at all helpful.

  • Maximum cluster size?

    Hi all,
    I'm using LV 7.1 and am trying to access a function in a DLL.  The function takes a pointer to a data structure that is large and complex.  I have calculated that the structure is just under 15kbytes.
    I have built the structure as a cluster and then attempted to pass the cluster into the call library function node with this parameter set as adapt to type.  I get memory error messages and when analysing the size of the cluster I am producing it appears to be much smaller than required.
    I have also tried creating an array and passing that, but I think that won't work, as it needs to be a fixed-size array, which according to what I've read can only be achieved by changing it to a cluster, and clusters are limited to a size of 256.
    Does anybody have a suggestion of how to overcome this?
    If any more detail is required then I'm happy to supply. 
    Dave.

    John.P wrote:
    Hi Dave,
    You have already received some good advice from the community but I wanted to offer my opinion.
    I am unsure as to why the cluster size will not exceed 45; my only suggestion is to try using a Type Cast node, as this is the suggested method for converting from an array to a cluster with greater than 256 elements.
    If this still does not work then in this case I would recommend that you do use a wrapper DLL, it is more work but due to the complexity of the cluster you are currently trying to create I would suggest this is a far better option.
    Have a look at this KB article about wrapper DLL's.
    Hope this helps,
    John P
    John, I am having a hard time converting an array of greater than 256 elements to a cluster.  I attempted to use the type cast node you suggested and didn't have any luck.  Please see the attached files... I’m sure I’m doing something wrong.  The .txt file has a list of 320 elements.  I want to run the VI so that in the end I have a cluster containing equal number of integer indicators/elements inside.  But more importantly, I don't want to have to build a cluster of 320 elements.  I'd like to just change the number of elements in the .txt file and have the cluster automatically be populated with the correct number of elements and their values.  No more, no less.   One of the good things about the convert array to cluster was that you could tell the converter how many elements to expect and it would automatically populate the cluster with that number of elements (up to 256 elements only).  Can the type cast node do the same thing?  Do you have any advice?  I posted this question with more detail to my application at the link below... no luck so far.  
    http://forums.ni.com/ni/board/message?board.id=170&thread.id=409766&view=by_date_ascending&page=1
    Message Edited by PhilipJoeP on 05-20-2009 06:11 PM
    Attachments:
    cluster_builder.vi ‏9 KB
    config.txt ‏1 KB

  • Array to Cluster, Automate cluster size?

    I often use the Array to Cluster node to quickly change an array of data into a cluster of data that I can then connect to a Waveform Chart. Sometimes the number of plots differs, which results in extra plots (full of zeros) on the graph, or missing plots if the array is larger than the current cluster size. I know I can right-click on the node and set the cluster size (up to 256) manually. I could also use a case structure with as many Array to Cluster nodes as I need, set them individually, and wire an Array Size to the case structure selector, but that's kind of a PITA.
    My question is whether or not anyone knows a way to control the cluster size value programmatically. It seems that if I can right-click it and do it manually, there must be some way to automate it, but I sure can't figure it out. It would be nice if you could simply wire your desired value into an optional input on the node itself. Any ideas will be much appreciated.
    Using LabVIEW: 7.1.1, 8.5.1 & 2013
    Solved!
    Go to Solution.

    I'm under the impression it's impossible.  See this idea for related discussion.
    Tim Elsey
    LabVIEW 2010, 2012
    Certified LabVIEW Architect

  • Do we need to format data and log files with 64k cluster size for sql server 2012?

    Do we need to format data and log files with 64k cluster size for sql server 2012?
    Does this best practice still apply to SQL Server 2012 & 2014?

    Yes. The extent size of SQL Server data files and the max log block size have not changed in the new versions, so the guidance remains the same.
    Microsoft SQL Server Storage Engine PM
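    The arithmetic behind that guidance, for reference (page and extent sizes are fixed in SQL Server):

        # SQL Server reads data files in extents: 8 pages x 8 KB = 64 KB.
        PAGE_KB, PAGES_PER_EXTENT = 8, 8
        extent_kb = PAGE_KB * PAGES_PER_EXTENT
        for cluster_kb in (4, 64):
            print(f"{cluster_kb:>2} KB allocation unit: "
                  f"{extent_kb // cluster_kb} unit(s) per {extent_kb} KB extent")

    With a 64 KB allocation unit, one extent maps to exactly one cluster, so an extent read never straddles allocation units.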

  • Programmatically change cluster size

    I'm facing the following problem: I want to display several plots in a chart. To do this I'm using an Array to Cluster function, because the source is an array with N items. The solution I came up with was not correct, because the cluster has a fixed size and the array doesn't, which results in 9 plots (the default cluster size) when only N of them should be displayed in the chart.
    I included a part of my program, which I hope explains my problem a bit more. This subVI is executed X times and saves N values to an array; after each run the array is converted into a cluster and displayed in a chart.
    So my question is: is it possible to programmatically change the cluster size, or is it possible to create a chart which is updated during runtime with new values?
    Attachments:
    Wavelength AnalyseAndSave.vi ‏208 KB

    I figured it out by myself; I hadn't slept much last night. For those who are interested in my solution, here is what I did: after the array comes out of the for loop I reshape it so that the dimensions are the same as the number of plots I want to have, then I wire the output to a chart, and it works.
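    The same idea in a quick Python analogue (NumPy here; reshaping so one axis is exactly the number of plots, instead of padding to a fixed cluster size):

        import numpy as np

        samples, n_plots = 12, 3                 # hypothetical run
        flat = np.arange(samples, dtype=float)   # data collected by the loop
        # Reshape so the first axis is the plot count: no extra zero-filled
        # traces, and no missing ones.
        per_plot = flat.reshape(n_plots, samples // n_plots)
        print(per_plot)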
