Airdisk: slow read, fast write?

I'm evaluating the option of using an AirDisk as an alternative to my Linux file server. For this I created 2 partitions (Airdisk1 and Airdisk2, both Mac OS Extended (Journaled), GUID partition table) on a USB drive using Disk Utility. Then I attached this drive to my AEBS.
I performed my tests by copying a 1 GB file between my iMac and Airdisk1 over 100 Mb/s Ethernet (so not wireless):
Write the file to Airdisk1: 6.3 MB/s (not Mb/s)
Read the file from Airdisk1: 2.8 MB/s (not Mb/s)
The strange thing is that the read speeds are considerably lower than the write speeds. Also, while the write speed is consistent during the copy, the read speed is very erratic and sometimes even stops for a split second. Can anybody explain to me how this is possible? When the USB drive is attached directly to the iMac, read and write are equal and both very fast (so there is no problem with the iMac's disk).
Also, are these speeds more or less the maximum I can expect?
Is GUID the right choice or is it better to use APM (Apple Partition Map)?
The AEBS is on firmware 7.3.1 and all the latest software updates are installed.
Thanks for any response.

Thanks for your support.
I looked at the post you're referring to, and disabling journaling only improved the write speed. But the write speed is not the problem; the read speed is. And disabling journaling has no influence on reading a drive (as far as I know).
Also, disabling journaling can cause data-integrity problems on the HD in case of a sudden disconnect or power-down. Or am I wrong?
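For anyone who wants to reproduce these numbers outside of a Finder copy, here is a minimal POSIX C sketch that times sequential reads from the mounted AirDisk volume (the file path is a placeholder, and error handling is kept to a minimum):

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

int main(void)
{
    const char *path = "/Volumes/Airdisk1/testfile.bin";   /* placeholder path */
    enum { CHUNK = 1 << 20 };                              /* 1 MiB per read */
    char *buf = malloc(CHUNK);
    FILE *f = fopen(path, "rb");
    struct timeval t0, t1;
    size_t n, total = 0;

    if (f == NULL || buf == NULL)
        return 1;

    gettimeofday(&t0, NULL);
    while ((n = fread(buf, 1, CHUNK, f)) > 0)
        total += n;                                        /* count bytes actually read */
    gettimeofday(&t1, NULL);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%zu bytes in %.1f s = %.1f MB/s\n", total, secs, total / secs / 1e6);

    fclose(f);
    free(buf);
    return 0;
}

The same loop with fwrite gives the write number, so both directions are measured the same way instead of relying on the Finder's progress estimates.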

Similar Messages

  • Airdisk - SLOW read speed, but FAST write speed with Leopard 10.5.1?

    I've been complaining for a while about my random problems with reading files from the AirDisk. My solution has always been to relaunch the Finder and re-mount the AirDisk when it slows down.
    I just noticed the speed cut down, and when looking at Activity Monitor, it's fluctuating up and down and never seems to go past 1.5 to 2 MB/s.
    HOWEVER, what I did just notice is that my write speed is consistently 3-4 MB/s with no fluctuation at all. Interference is not an issue, as I'm on N-only mode at 5 GHz.
    Is there anything I can do aside from waiting until 10.5.2? This issue occurs with every AEBSn firmware. Thanks.

    I experience the same problem (slow read: 2.8 MB/s, fast write: 6.3 MB/s) with my AirDisk over a LAN connection with 10.5.2. I attached the USB drive directly to my iMac and then it's the other way around: fast read, slow write (although both throughputs are considerably higher and normal, so the drive is OK).
    Anyone have suggestions? The AirDisk is formatted as HFS+ (GUID) with journaling on, AirPort firmware 7.3.1, and the most recent updates installed.

  • Slow read and write operations on DAQmx

    I am trying to build a feedback control system using PCI-6052E and PCI-6722 cards, with the control algorithm computed on the computer's CPU. I am trying to reach a sampling rate of 1 kHz. It turns out that the bottleneck of my system is the read and write operations from and to the cards, which consume a lot of processor time.
    Example code (C#) that shows how the reads and writes are implemented is attached. In my tests, reading 1000 samples on 6 channels takes 7.58 s and writing takes 4.69 s. Is there any way to improve the performance?
    The program is running on Windows XP on a 1000 MHz processor.
    Attachments:
    DAQmxPerformanceTest.cs ‏3 KB

    Petteri,
    I don't have the hardware to reproduce this, but I have a few ideas. For analog output, are you creating a task, starting it, and calling write repeatedly, or are you simply calling write? While an AO task will auto-start on write, it will also go through the process of stopping when the write is complete. That means the next time you call write, the task will need to start again. It is much more efficient to explicitly call start on the task once, perform as many writes as required, and stop/clear the task when you are done. The same principle applies to your analog input reads as well.
    I hope this helps,
    Dan
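    Dan's point in code form: the attachment here is C#, but the start-once/write-many pattern is the same in the plain NI-DAQmx C API, which the .NET classes wrap. This is only a sketch; the device and channel names are placeholders and error checking is omitted.

    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle ao = 0;
        float64 sample = 0.0;
        int32 written = 0;
        int i;

        DAQmxCreateTask("", &ao);
        DAQmxCreateAOVoltageChan(ao, "Dev1/ao0", "", -10.0, 10.0, DAQmx_Val_Volts, NULL);

        DAQmxStartTask(ao);                       /* explicit start, once */
        for (i = 0; i < 1000; i++) {
            sample = (i % 2) ? 5.0 : -5.0;
            /* autoStart = 0: the task is already running, so this write does not
               pay for an implicit start/stop cycle on every call */
            DAQmxWriteAnalogF64(ao, 1, 0, 10.0, DAQmx_Val_GroupByChannel,
                                &sample, &written, NULL);
        }
        DAQmxStopTask(ao);                        /* explicit stop, once */
        DAQmxClearTask(ao);
        return 0;
    }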

  • Xbench 1.2 result... slow HD Random write and read

    Hello, I ran Xbench 1.2 on my MBP 2.0/7200 rpm HD and got the following slow read and write results. Do you think this is normal?
    uncached write 0.71 MB/sec [4K blocks]
    uncached read 0.56 MB/sec [4K blocks]
    ???

    I had the same issue, but answered my own question.
    Random
    Uncached Write 9.21 0.97 MB/sec [4K blocks]
    Uncached Write 67.86 21.72 MB/sec [256K blocks]
    Uncached Read 77.23 0.55 MB/sec [4K blocks]
    Uncached Read 98.84 18.34 MB/sec [256K blocks]
    It's the size of the data being read/written, combined with random access, that's causing the apparent issue. Compare the 4K figures with the 256K figures.
    What's happening in the random test is that Xbench is picking data from all over the drive to read or write, so the drive heads are moving around a lot (and you get no benefit from the cache). After all this movement, the drive then only transfers 4K of data (a tiny amount). So the drive is spending most of its time moving heads around rather than transferring data.
    For the 256K reads, the drive is transferring 64 times as much data with each head movement and getting 20 times as much throughput. (It probably takes about 3 times as long to read 256K as it does 4K, depending upon the drive, so you get roughly 64/3 times the data throughput; see the worked numbers below.)
    Regards,
    Steve
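    To make Steve's arithmetic concrete, here is a small sketch that computes the effective throughput from an assumed average access time and sustained media rate (the two constants are rough guesses for a 7200 rpm notebook drive, purely for illustration):

    #include <stdio.h>

    int main(void)
    {
        /* effective rate = block_size / (access_time + block_size / media_rate) */
        const double access_s = 0.014;      /* assumed ~14 ms seek + rotational latency */
        const double media_Bps = 40e6;      /* assumed ~40 MB/s sustained transfer */
        const double blocks[] = { 4096.0, 262144.0 };
        int i;

        for (i = 0; i < 2; i++) {
            double t = access_s + blocks[i] / media_Bps;
            printf("%4.0fK blocks: %5.2f MB/s\n", blocks[i] / 1024, blocks[i] / t / 1e6);
        }
        return 0;
    }

    With those assumptions the 4K case lands near 0.3 MB/s and the 256K case near 13 MB/s: the same order-of-magnitude gap as the Xbench results above, even though the disk never changed.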

  • Windows Server 2012 Storage Spaces Simple RAID 0 VERY SLOW reads, but fast writes with LSI 9207-8e SAS JBOD HBA Controller

    Has anyone else seen Windows Server 2012 Storage Spaces with a Simple RAID 0 (also happens with Mirrored RAID 1 and Parity RAID 5) virtual disk exhibiting extremely slow read speeds of 5 Mb/sec, yet write performance normal at 650 Mb/sec in RAID 0?
    Windows Server 2012 Standard
    Intel i7 CPU and Motherboard
    LSI 9207-8e 6Gb SAS JBOD Controller with latest firmware/BIOS and Windows driver.
    (4) Hitachi 4TB 6Gb SATA Enterprise Hard Disk Drives HUS724040ALE640
    (4) Hitachi 4TB 6Gb SATA Desktop Hard Disk Drives HDS724040ALE640
    Hitachi drives are directly connected to LSI 9207-8e using a 2-meter SAS SFF-8088 to eSATA cable to six-inch eSATA/SATA adapter.
    The Enterprise drives are on LSI's compatibility list.  The Desktop drives are not, but regardless, both drive models are affected by the problem.
    Interestingly, this entire configuration but with two SIIG eSATA 2-Port adapters instead of the LSI 9207-8e works perfectly, with both reads and writes at 670 Mb/sec.
    I thought SAS was going to be a sure bet for expanding beyond the capacity of port-limited eSATA adapters, but after a week of frustration and spending over $5,000.00 on drives, controllers, and cabling, it's time to ask for help!
    Any similar experiences or solutions?

    1) Yes, being slow on either reads or writes is quite a common situation for Storage Spaces. See these references (with some of the solutions, I hope):
    http://social.technet.microsoft.com/Forums/en-US/winserverfiles/thread/a58f8fce-de45-4032-a3ef-f825ee39b96e/
    http://blogs.technet.com/b/askpfeplat/archive/2012/10/10/windows-server-2012-storage-spaces-is-it-for-you-could-be.aspx
    http://social.technet.microsoft.com/Forums/en-US/winserver8gen/thread/64aff15f-2e34-40c6-a873-2e0da5a355d2/
    and this one is my favorite, shedding a lot of light on the issue:
    http://helgeklein.com/blog/2012/03/windows-8-storage-spaces-bugs-and-design-flaws/
    2) Issues with SATA-to-SAS hardware are also very common. See:
    http://social.technet.microsoft.com/Forums/en-US/winserverClustering/thread/5d4f68b7-5fc4-4a3c-8232-a2a68bf3e6d2
    StarWind iSCSI SAN & NAS

  • Fast way to read and write console

    hi guys
    what's the fastest way to read and write from the console?
    For writing I'm using (I think that's the fastest way):
    System.out.println(foo)
    And I have to read the following - all numbers are ints (that means m + n + 1 lines):
    1 * "<n> <m>"
    n * "<a> <b> <c>"
    m * "<a> <b>"
    Currently I'm reading (the second line, for example) with:
    BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
    String[] items = br.readLine().split("[^\\-\\.0-9]+");
    Target mainTarget = new Target(Integer.parseInt(items[0]), Integer.parseInt(items[1]), Integer.parseInt(items[2]));
    for (int i = 1; i < m; i++) {
        items = br.readLine().split("[^\\-\\.0-9]+");
        mainTarget.addTarget(Integer.parseInt(items[0]), Integer.parseInt(items[1]), Integer.parseInt(items[2]));
    }
    But that isn't really fast...
    Do you have any ideas?
    grz faetzminator

    faetzminator wrote:
    right, I have 1 + 1 + 1 = 3 up to 10^6-1 + 10^6-1 + 1 = 2*10^6-1 input lines with max 4999997 integers (and I have to store 2999999 of them)
    ok, if that is the fastest way... (maybe the reading of the input file is just slow; I run e.g. "java Foo < input > output")
    All those br.readLine calls where you never check whether the return is null are dangerous.
    I really don't understand what you are doing. You mentioned System.in at first, and now you're reading from files.
    I also am pretty sure that you are in fact running out of memory.

  • How to increase disk read and write speed after installing a new SSD (2009 MacBook Pro)? Why not as fast as advertised?

    Hi everyone,
    I just installed a Crucial MX100 512 GB SSD into my 2009 MacBook Pro. It's definitely much faster, but the disk read and write speeds are around 200 MB/s for both, versus the 300-500 MB/s the SSD advertised. Any ideas as to why? And is there anything I can do to make it faster? Before I installed it, it was between 80-90 MB/s.
    Specs:
    - currently have about 460 of 511 GB of storage available
    - am using 2GB of memory
    - running on 10.10.2 Yosemite
    Thanks!

    nataliemint wrote:
    Drew, forgive me for being so computer-incompetent, but how would I boot from another OS? And shouldn't I be checking the read speeds on my current OS (Yosemite) anyway, because I want to know how the SSD performs on the OS I use? And finally, what kind of resources would it be using that would slow down my SSD?
    Sorry for all the questions - I'm not a MacBook wiz by any means!
    You could make a clone of your internal OS onto an external disk. Hopefully you already have a backup of some form.
    A clone is a full copy, so you can boot from it. It makes a good backup as well as being useful for testing things like this.
    Carbon Copy Cloner will make one, or you can use Disk Utility to 'restore' your OS from the internal disk to an external one.
    Ideally the external disk is a fast disk with a fast interface like Thunderbolt, FireWire 800, or USB 3. USB 2 can work, but it is slow and may affect the test.
    You connect the clone, hold alt at startup, and select the external disk in the boot manager. When the Mac has finished booting, run the speed tester.
    Maybe this one…
    https://itunes.apple.com/gb/app/blackmagic-disk-speed-test/id425264550
    Test the internal & compare to the previous tests
    A running OS will do the following on its boot disk…
    Write/read cache files from running apps
    Write/read memory to disk if memory is running low
    Index new files if content is changing or being updated
    Copy files for backing up (Time Machine or any other scheduled tasks)
    Networking can also trigger reads and writes on the disk.
    You may not have much activity that affects a disk speed test, but you can't really be sure unless the disk is not being used for other tasks.
    Disk testing is an art & science in itself; see this if you want to get an idea…
    http://macperformanceguide.com/topics/topic-Storage.html
    Simply knowing that it's about twice the speed would be enough to cheer me up

  • Fast, small-byte reads and writes.

    I am trying to communicate with a third-party counter using a 1-byte "start" command followed by reading a 4-byte number. I would like to perform this read/write pair as fast as possible, as I will be using count times on the order of microseconds repeated millions of times. Our first attempt used USB, but the bus latency was way too high at 1 ms; we spent more time waiting than counting. We are currently considering GPIB, since the latency should be only 30 microseconds, but we're concerned that overhead from LabVIEW will prevent us from reading and writing that fast. Another idea we had was to use digital I/O channels to create our own parallel port and bypass the USB bus. Can anyone comment on a good solution for low-latency, small-byte reads and writes from LabVIEW? What type of hardware is recommended, and can LabVIEW support programmatic reads and writes on the order of microseconds? Thanks!

    Theoretically GPIB is capable of very good latencies, as you say on the order of microseconds. But in the real world, using real PCs running real operating systems connected to real instruments, the latencies seem to cluster around 2-5 ms, in my experience. The issue isn't LabVIEW so much as all the stuff through which instrumentation interactions must filter, including background processes, screen updates, and all the rest of the activities that consume the processor's bandwidth. Some instruments allow you to buffer and interpret the next command while the current one is underway and then trigger it, which saves latency. Also keep in mind that your GPIB network will only be as responsive as the least-well-behaved instrument on the bus.
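    One way to find out what your particular stack actually delivers before buying hardware: time a minimal write/read round trip. The sketch below uses the VISA C API (the same layer LabVIEW's VISA VIs call); the resource string and command byte are placeholders, error checking is omitted, and the timing assumes POSIX gettimeofday.

    #include <stdio.h>
    #include <sys/time.h>
    #include <visa.h>

    int main(void)
    {
        ViSession rm, dev;
        ViUInt32 n;
        unsigned char cmd = 0x01, resp[4];
        struct timeval t0, t1;
        int i;

        viOpenDefaultRM(&rm);
        viOpen(rm, "GPIB0::5::INSTR", VI_NULL, VI_NULL, &dev);

        gettimeofday(&t0, NULL);
        for (i = 0; i < 1000; i++) {
            viWrite(dev, &cmd, 1, &n);        /* 1-byte "start" command */
            viRead(dev, resp, 4, &n);         /* 4-byte count readback */
        }
        gettimeofday(&t1, NULL);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("average round trip: %.0f us\n", secs / 1000.0 * 1e6);

        viClose(dev);
        viClose(rm);
        return 0;
    }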

  • Process using Webservice in Reader and Writer becomes slow while connecting processors

    Hi,
    I have a process that reads from a web service and writes to a web service. When I try to modify the code (after stopping the running process) by adding new processors or changing the current connections, the connecting arrows take forever to connect, and sometimes the Director hangs for a long time. There is no such issue in other processes where I read and write from staged data.
    How can I overcome this issue?
    Appreciate the help!
    Regards,
    Ravi

    Hi Nick,
    Thanks for the advice. In a scenario where an upgrade is not an option, is there any workaround for this issue?
    What I've found is that if I strip the next processor of all its connections, it gets connected to the previous processor pretty quickly.
    (EDQ hangs if the processor has other input/output connections and I try to connect it to another processor.)
    P1>---<P2>    <P3 ---------- P2 to P3 connects fast as P3 is standalone
    P1>---<P2>    <P3>---<P4> ---------- In this case,  when I try to connect P2 to P3, it hangs forever.
    I have a complex process, and just to add a few processors in between I cannot re-connect the rest of the processors all over again.
    Regards,
    Ravi

  • Why are my Datasocket OPC reads and writes so slow?

    I am creating an application (in LabVIEW 6.1) that has to interface with 15-30 OPC tags (about half and half read and write) on a KepWare OPC server. It seems like the DSC Module would be overkill, but the DataSocket write VIs seem to slow the system to a halt at about 6 tags. I saw something about using the legacy DataSocket connect VIs, but would I really see a large performance boost from switching?
    In the existing code we have, I have seen that even the legacy VIs (our old code uses the older VIs) sometimes grind the system to a halt or lock it up completely if it loses sight of the OPC server. Has anyone else had problems with this?

    Hi IYUS,
    What OPC Server are you using?  What methods are you using to communicate?  If you are using IAOPC, have you installed the latest patch?
    Also, this post is nearly 2 years old.  You will probably get more visibility (and more help) if you post your questions in a new thread.
    Cheers,
    Spex
    National Instruments
    To the pessimist, the glass is half empty; to the optimist, the glass is half full; to the engineer, the glass is twice as big as it needs to be...

  • Adobe Reader was fast, now it is slow... Help!

    I have been using Adobe Reader for about 2-3 months on my Samsung tablet. It worked well until now. It is slow, and the screen stays black as if it won't turn on; by the time I go to get out of the program and it finally turns on, I have already swiped it off my screen.

    I have a Samsung Tab S. All the app says is Adobe Reader; I don't know what version it is. What has been happening is: I tap the app, it opens to a black screen and stays like that for what seems like forever, but is probably about 20-30 seconds. Then I try to scroll through my PDFs and it stutters. When I find the document and open it, it goes black again. It also scrolls more slowly.

  • Can you improve the speed of my CSV Reader and Writer?

    Hi all, I'm trying to develop a CSV writer and reader. I've done a good job of handling special characters and quoting them, and it also supports multi-line values, but it's incredibly slow.
    Can someone help me make it faster?
    Here is how to use the writer:
    char *stringhe_sorgenti[10] = {0};

    /* out, i, nfile and formato are declared elsewhere */
    out = OpenFile(nfile, VAL_WRITE_ONLY, VAL_TRUNCATE, VAL_ASCII);
    for(i = 0; i < sizeof(stringhe_sorgenti)/sizeof(char*); i++){
        stringhe_sorgenti[i] = (char*)calloc(200, sizeof(char));
    }
    sprintf(stringhe_sorgenti[0], "example1");
    sprintf(stringhe_sorgenti[1], "example2");
    scrivi_riga_csv(out, stringhe_sorgenti, sizeof(stringhe_sorgenti)/sizeof(char*), formato);
    for(i = 0; i < sizeof(stringhe_sorgenti)/sizeof(char*); i++){
        free(stringhe_sorgenti[i]);
    }
    CloseFile(out);
    Here is the writer:

    void scrivi_riga_csv(int file_handle, char *stringa_sorgente[], int numero_stringhe, int formato)
    {
        char delimitatore[2][2] = {{',', '\0'}, {';', '\0'}};
        char stringa_destinazione[1024] = {0};
        int index_destinazione = 0;
        int index_start = 0;
        int index_fine = 0;
        int errore = 0;
        int i = 0;
        size_t lunghezza_stringa = 0;

        for(i = 0; i < numero_stringhe; i++){
            if(i != 0){
                stringa_destinazione[index_destinazione++] = delimitatore[formato][0];
            }
            index_start = 0;
            lunghezza_stringa = strlen(stringa_sorgente[i]);
            // if the source string contains a special character, the field must be quoted
            if( (FindPattern(stringa_sorgente[i], 0, lunghezza_stringa, delimitatore[formato], 0, 0) != -1) // contains the delimiter
             || (FindPattern(stringa_sorgente[i], 0, lunghezza_stringa, "\"", 0, 0) != -1)                  // contains a quote
             || (FindPattern(stringa_sorgente[i], 0, lunghezza_stringa, "\n", 0, 0) != -1) ){               // contains a newline
                // open the quotes at the start
                stringa_destinazione[index_destinazione++] = '"';
                // FindPattern method: more complex but faster
                do{
                    index_fine = FindPattern(stringa_sorgente[i], index_start, lunghezza_stringa - index_start, "\"", 0, 0);
                    if(index_fine != -1){
                        index_fine++;
                        // copy from the start up to and including the quote
                        CopyString(stringa_destinazione, index_destinazione, stringa_sorgente[i], index_start, index_fine - index_start);
                        index_destinazione += index_fine - index_start;
                        // add a second quote after it (RFC 4180 doubling)
                        stringa_destinazione[index_destinazione++] = '"';
                        // update the start position and go around the while again
                        index_start = index_fine;
                    }
                }while(index_fine != -1);
                CopyString(stringa_destinazione, index_destinazione, stringa_sorgente[i], index_start, lunghezza_stringa - index_start);
                index_destinazione += strlen(stringa_sorgente[i]) - index_start;
                // at the end of the field, close the quotes
                stringa_destinazione[index_destinazione++] = '"';
            }
            else{
                // otherwise simply copy it and shift the destination string index
                CopyString(stringa_destinazione, index_destinazione, stringa_sorgente[i], 0, lunghezza_stringa);
                index_destinazione += strlen(stringa_sorgente[i]);
            }
            memset(stringa_sorgente[i], 0, strlen(stringa_sorgente[i]));
        }

        errore = WriteLine(file_handle, stringa_destinazione, strlen(stringa_destinazione));
        if(errore == -1){
            errore = GetFmtIOError();
            MessagePopup("scrivi_riga_csv -> WriteLine", GetFmtIOErrorString(errore));
            return;
        }
    }
    And here is how to read the file:

    char *stringhe_sorgenti[10] = {0};

    for(i = 0; i < sizeof(stringhe_sorgenti)/sizeof(char*); i++){
        stringhe_sorgenti[i] = (char*)calloc(200, sizeof(char));
    }
    out = OpenFile(nomearchivio, VAL_READ_ONLY, VAL_OPEN_AS_IS, VAL_BINARY);
    leggi_riga_csv(out, stringhe_sorgenti, sizeof(stringhe_sorgenti)/sizeof(char*), formato);
    strcpy(intestazione.data, stringhe_sorgenti[1]);
    for(i = 0; i < sizeof(stringhe_sorgenti)/sizeof(char*); i++){
        free(stringhe_sorgenti[i]);
    }
    CloseFile(out);
    And here is the reader:

    void leggi_riga_csv(int file_handle, char *stringa_destinazione[], int numero_stringhe, int formato)
    {
        char delimitatore[2][2] = {{',', '\0'}, {';', '\0'}};
        char stringa_sorgente[1024] = {0};
        int stringa_in_corso = 0;
        int index_inizio_valore = 0;
        int index_doublequote = 0;
        int offset_stringa_destinazione = 0;
        size_t lunghezza_stringa = 0;
        int inquote = 0;
        int errore = 0;
        int i = 0;

        for(i = 0; i < numero_stringhe; i++){
            lunghezza_stringa = strlen(stringa_destinazione[i]);
            memset(stringa_destinazione[i], 0, lunghezza_stringa);
        }

        do{
            memset(&stringa_sorgente, 0, sizeof(stringa_sorgente));
            errore = ReadLine(file_handle, stringa_sorgente, sizeof(stringa_sorgente) - 1);
            // If ReadLine reads no bytes because it has already reached the end of the file, it returns -2.
            // If an I/O error occurs, possibly because of a bad file handle, ReadLine returns -1.
            // You can use GetFmtIOError to get more information about the type of error that occurred.
            // A value of 0 indicates that ReadLine read an empty line.
            if(errore == -1){
                errore = GetFmtIOError();
                MessagePopup("leggi_riga_csv -> ReadLine", GetFmtIOErrorString(errore));
                return;
            }
            else if(errore == -2){
                MessagePopup("leggi_riga_csv -> ReadLine", "already reached the end of the file");
                return;
            }
            else{
                lunghezza_stringa = errore;
                index_inizio_valore = 0;
                // FindPattern method: more complex but faster
                for(i = 0; i <= lunghezza_stringa; i++){
                    if(inquote == 0){
                        // if the first character is a quote, this is a special (quoted) field
                        if(stringa_sorgente[i] == '\"'){
                            inquote = 1;
                            index_inizio_valore = ++i;
                        }
                        else{
                            // otherwise look for the delimiter directly instead of stepping through the for loop
                            i = FindPattern(stringa_sorgente, i, lunghezza_stringa - index_inizio_valore, delimitatore[formato], 0, 0);
                            if(i == -1){
                                // delimiter not found: the row is finished
                                i = lunghezza_stringa;
                                if(stringa_sorgente[i - 1] == '\r'){
                                    i--;
                                }
                            }
                            if(stringa_in_corso < numero_stringhe){
                                CopyString(stringa_destinazione[stringa_in_corso], 0, stringa_sorgente, index_inizio_valore, i - index_inizio_valore);
                                offset_stringa_destinazione = 0;
                                stringa_in_corso++;
                            }
                            if(stringa_sorgente[i] == '\r'){
                                i++;
                            }
                            index_inizio_valore = i + 1;
                        }
                    }
                    if(inquote == 1){
                        // inside quotes: look for the closing quote
                        i = 1 + FindPattern(stringa_sorgente, i, lunghezza_stringa - index_inizio_valore, "\"", 0, 0);
                        if(i == 0){
                            if(stringa_sorgente[lunghezza_stringa - 1] == '\r'){
                                lunghezza_stringa--;
                            }
                            // no closing quote on this line: exit the for loop and read the next line
                            break;
                        }
                        // a doubled quote: skip ahead
                        else if(stringa_sorgente[i] == '\"'){
                            continue;
                        }
                        // !!! do not change the order of these else-ifs !!!
                        // a quote followed by the delimiter, or a quote followed by the terminator
                        // \r = CR = 0x0D = 13
                        // \n = LF = 0x0A = 10
                        // end of line = CR + LF
                        else if( (stringa_sorgente[i] == delimitatore[formato][0])
                              || (stringa_sorgente[i] == '\r')
                              || (stringa_sorgente[i] == '\0') ){
                            // save the value
                            inquote = 0;
                            if(stringa_in_corso < numero_stringhe){
                                CopyString(stringa_destinazione[stringa_in_corso], offset_stringa_destinazione, stringa_sorgente, index_inizio_valore, i - 1 - index_inizio_valore);
                                offset_stringa_destinazione = 0;
                                stringa_in_corso++;
                            }
                            if(stringa_sorgente[i] == '\r'){
                                i++;
                            }
                            index_inizio_valore = i;
                        }
                    }
                }
                // if the quoted field spans a line break, save what we have so far and continue with the next line
                if(inquote){
                    if(stringa_in_corso < numero_stringhe){
                        CopyString(stringa_destinazione[stringa_in_corso], offset_stringa_destinazione, stringa_sorgente, index_inizio_valore, lunghezza_stringa - index_inizio_valore);
                        strcat(stringa_destinazione[stringa_in_corso], "\n");
                        offset_stringa_destinazione += lunghezza_stringa - index_inizio_valore;
                        offset_stringa_destinazione++;
                    }
                }
            }
        }while(inquote == 1);

        // collapse the doubled quotes
        for(i = 0; i < numero_stringhe; i++){
            index_doublequote = 0;
            do{
                lunghezza_stringa = strlen(stringa_destinazione[i]);
                index_doublequote = FindPattern(stringa_destinazione[i], index_doublequote, lunghezza_stringa - index_doublequote, "\"\"", 0, 0); // contains a doubled quote
                if(index_doublequote != -1){
                    index_doublequote++;
                    memmove(stringa_destinazione[i] + index_doublequote, stringa_destinazione[i] + index_doublequote + 1, lunghezza_stringa - index_doublequote);
                }
            }while(index_doublequote != -1);
        }
        return;
    }

    The format is CSV; let me explain better what I'm doing.
    Our client asked us to save acquisition data, with a header description, in an Excel-readable format. I decided to use .CSV and not .TDM because CSV is a simple text file and we have never used .TDM, but I will propose it.
    After some research on the internet I found nothing to handle CSV in CVI except csv_parse, but I found it hard to maintain, so I wrote my own.
    I've posted two examples of how to use my functions for reading and writing, and copied in the functions themselves.
    In the write function I use FindPattern to check whether the string to be written contains any special character; if it does, I have to quote the string to respect the RFC 4180 standard, and if I find a quote I have to double it. After this check I write the line to the file.
    In the read function, which is more complicated, I:
    - check if the first character is a quote;
    - if it's not, copy the string up to the delimiter or to the end of the line;
    - if it is, I have a string with special characters inside, so:
    - I find the first quote in the string. Once found, I check whether it is followed by another quote; that means the original value contained a single quote.
    - If it's not followed by another quote but by a delimiter or a carriage return, the special field is finished.
    - If I don't find a closing quote at all, the quoted field has a carriage return inside and I have to check the next line; before doing that, I save what I have so far into my string.
    After this loop I check every string for doubled quotes and delete one quote of each pair.
    The main problem is speed: I'm acquiring data at 1000 S/s on 8 active channels for 60 seconds, so I have 480,000 values to store, arranged in 60,000 rows and 8 columns. Reading a file like that keeps my PC "locked" for 15 seconds or more.
    I've tried the ArrayToFile function and it's extremely fast, and I can still add a header because that function can start writing from the last position in the file; but FileToArray starts from the beginning, so I cannot read the header correctly. Also, I'm using the European CSV convention with a semicolon as the delimiter, and with ArrayToFile I cannot select the semicolon, only the comma. (A sketch of a faster whole-file read follows below.)
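    Not CVI-specific, but one pattern that usually fixes this class of slowness: read the whole file into memory with a single read and parse it in place, instead of calling ReadLine and FindPattern once per line (60,000 ReadLine calls add up). A minimal standard-C sketch of the idea follows; the file name is a placeholder, and it only handles the unquoted fast path (quoted fields would still need the state machine above). Note that strtok also collapses empty fields, which is a simplification.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("data.csv", "rb");           /* placeholder file name */
        long size;
        char *buf, *p;
        long fields = 0;

        if(f == NULL)
            return 1;
        fseek(f, 0, SEEK_END);
        size = ftell(f);
        fseek(f, 0, SEEK_SET);

        buf = malloc(size + 1);
        if(buf == NULL || fread(buf, 1, size, f) != (size_t)size){
            fclose(f);
            return 1;
        }
        buf[size] = '\0';
        fclose(f);

        /* one pass over the buffer: fields end at ';', rows at CR/LF */
        for(p = strtok(buf, ";\r\n"); p != NULL; p = strtok(NULL, ";\r\n"))
            fields++;                                /* p points at one field's text */

        printf("parsed %ld fields\n", fields);
        free(buf);
        return 0;
    }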

  • dd to wipe a partition - data gets read after write?

    Hi,
    I want to wipe a partition on my HDD. I'm doing this with dd if=/dev/urandom of=/dev/sdxx and get ~10 MB/s (it's a WD Green 2 TB in a USB 2 external case). I tried different bs values (2048, 4096, 1M, 4M, default) but nothing helped.
    Now I saw in conky that most of the time the read and write speeds are identical. So I assume that the data gets read back after writing - is that possible? I also tried it with shred; same read/write behavior, except it's a bit slower, ~8 MB/s. Perhaps this has something to do with USB. Of course I could connect via SATA directly for wiping, but the problem would remain for later, when I want to use it as an external device...
    Any suggestions?
    Thanks in advance.

    Thanks for these replies.
    @ROOKIE Mh, that's not the case, since the data gets read from /dev/urandom.
    @brebs Didn't know that there are settings/drives supporting that. Good to know, but it says "write-read-verify = not supported".
    @teateawhy Already done with urandom, but always good to know that this exists.
    So I'm still curious what's going on. And now I can say that it still happens with a filesystem on that drive while copying files.
    I get read = write = 30 MB/s copying files from the internal SATA-connected disk to the external USB 2.0-connected HDD. So 60 MB/s overall, when I assume it should be doing just the writing...
    Same when using dd with /dev/zero as the source, just slightly faster.
    Finally, the reading speed is ~85 MB/s, which seems to be the USB 2.0 limit, so that works as expected.
    Thanks again and perhaps someone can (help me) solve this.

  • Can't read or write from iPod when it syncs with a big library

    When I try to sync with my 50 GB library in iTunes installed on my iBook G4 (10.4.11), "can't read or write from the iPod" appears in the middle of the synchronization. It starts the job very well, but when it has done only 6 or 8 GB, the process gets slower until that message appears. I have iTunes 7.6 and iPod firmware 1.1. I have tried on Windows and Mac platforms, trying all the combinations (Mac and/or Win formats), and the trouble persists. The device has only two weeks of use and was working perfectly until now. The problem is the same if I do it manually (no automatic sync). I am not sure if the problem is defective hardware; does anybody have any idea or a similar problem? Thanks.

    Thanks a lot for your answer. The idea of an increasing auto-playlist is pretty good. I did it, in 5 GB steps, and in the third step (10 GB done), when it had copied only about one GB, the error message popped up again. Before the message appears, iTunes becomes slower and slower copying each song, until it stops on one song and, after a while, the message appears. Afterwards the system bogs down, almost blocked, and it takes a long time until I can mount the iPod again. Of course, the iPod restarts automatically and the songs copied in the last step are not there. I was watching every moment of the process, avoiding leaving the system to sleep on its own (and also turning off all the energy-saving options). I am now pretty sure that the problem is in the device. Anyway, thank you very much for your valuable ideas.

  • Slow motion / fast motion in Premiere Elements 13

    I read that fast motion (and I think also slow motion) was removed in Premiere Elements 13. Is that correct? And if yes: is there any workaround available?

    rr
    Thanks for the reply.
    Great information. If you have been using Ctrl+R, that is the keyboard shortcut that brings up the Time Stretch dialog.
    Time Stretch (via Ctrl+R) has not been removed from the Premiere Elements 13 Expert workspace.
    Like Premiere Elements 11, Time Stretch is found in the Expert workspace and not in the Quick workspace.
    When I use the keyboard shortcut Ctrl+R, I get the Time Stretch dialog, from which I can create the slow or fast motion effect in the selected clip.
    The following relates to Premiere Elements Timelapse Capture (Stop Motion), which has been removed from Premiere Elements 13 as the specific feature you see in Premiere Elements 11 and 12.
    Adobe Premiere Elements 11 * Capture stop-motion and time-lapse video
    Please let me know if you are OK with the above information.
    Thank you.
    ATR
