Check/Turn on Disk write cache

Dear all,
I read one of the Oracle Metalink notes, and it said we need to turn on the 'disk write cache' for better file-writing throughput. But I am not sure whether my newly installed Solaris 9 system currently has the 'disk write cache' turned on. Please advise:
- Which command can I use to check the current setting?
- How can I turn on the 'disk write cache' if it is currently off?
Thanks
Regards,
Eddie
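
To answer the immediate question: on Solaris the on-disk write cache is usually inspected and toggled with format in expert mode. A minimal sketch, assuming a SCSI disk whose driver exposes the cache menu (it is absent on some drives); run as root and pick your disk from the list:

format -e
# At the format> prompt after selecting the disk:
#   cache        - enter the cache menu
#   write_cache  - enter the write-cache submenu
#   display      - show whether the write cache is currently enabled
#   enable       - turn it on
# Caution: an enabled volatile write cache can lose committed data on
# power failure, which is exactly what Oracle redo logs must not do.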

There are so many things that this could be, particularly when Oracle is brought into play, that I wouldn't really know where to start.
For example:
* Are you using parallel directories? If so, are all of the directories on separate hard drives? Better yet, are they on separate SCSI chains?
* Are you using UFS or VxFS?
* Did you run iostat and vxstat to isolate where the slowness is occurring? (A starting point is sketched after this post.)
* Are you using any kind of software RAID?
* What's the speed of your drives?
* Are your boot disks mirrored? (Yes, boot disk performance can kill an entire server even if the applications and data are not on the boot disk.)
* Do you have inode access time enabled or disabled?
* Have you fine-tuned your kernel settings for better disk utilization?
Unfortunately, disk optimization is hell to diagnose. I know because I just went through weeks tuning the I/O of our production server. It's NOT fun.
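
For the iostat/vxstat item above, a typical starting point on Solaris looks like this (column names vary slightly across releases):

# Extended per-device statistics with logical device names, every 5 s:
iostat -xn 5
# Watch %b (percent busy) and asvc_t (active service time in ms) for
# saturated spindles; on VxVM volumes, vxstat gives the per-volume view.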

Similar Messages

  • How to disable hard disk write caching?

    Is there something similar to this Windows setting? http://support.microsoft.com/kb/259716/en

    With the default mount mechanisms, no. You would have to mount the disk manually, disabling the cache.

  • Disk write cache corruption on an external hard disk

    Recently I purchased an external hard disk to hold all of the pictures of my baby, due to space limitations on my internal laptop drive. Since my Canon camera's 10-megapixel pictures take up a ton of space, this was occurring at an alarming rate. My intention was also to back up to DVD, but I never got to it. This is both a comment and a question.
    Now said baby is getting really good at pulling cords, etc. He managed to do this with the external drive's FireWire cable. As far as I know, the drive was not in the process of actually writing. However, after a few minutes of trying to re-plug the drive, the OS crashed with a grey-screen-of-death hardware error that tells you to push the power button.
    This apparently corrupted the drive's directory. Upon reboot the drive would not mount. Since there were Time Machine backups on the drive too, DiskWarrior was not able to reconstruct the directory in the memory available on the computer: Time Machine stores millions of files, and DiskWarrior apparently wants to keep it all in memory at once. My computer has 2 GB, which you would hope would be enough, but apparently not. So that failed. Other utilities seemed to be able to find the files but not restore the directory. I also took it to the Apple Store; they were not able to recover it.
    I have to say this is extremely distressing, and it is hard to believe that a simple accident like this could cost the loss of thousands of pictures. I did send it to a rescue company, but that was expensive, and I think Apple needs to do something about this situation.
    On Windows there is a way to disable the write cache for external drives. This is not available on Mac OS X. It would prevent this rather common occurrence of a plug accidentally becoming disengaged while the drive is not in the process of writing. That only reduces the odds, but I still think Apple, in order to become clearly superior, needs a better solution.
    I know Apple has experimented with ZFS. Would this not eliminate the possibility of this kind of disaster? Is it in the Snow Leopard desktop? I know they see it as a technology for business customers, but if it can eliminate this kind of thing, then I think it is extremely useful to their non-server customers too. Perhaps there are other ways of dealing with this issue, but ZFS is designed to deal with exactly these issues; HFS+ is extremely fragile in the face of disk corruption.
    I know my situation is not that uncommon, and this is not the first time this has happened to me with an external drive. You would think I had learned. I hate to think of how many other people have had the same thing happen to them.

    Disabling the write cache would not help with the problem you are having. The write cache in question is on the hard drive itself and operates independently of the operating system. Nothing can protect you from disconnecting an external drive improperly while it is being written to; and since the damage you describe comes from directory corruption, not file corruption, disabling a write cache would not have made any difference.
    For the future the magic word is: Backups.

  • Is there a way to turn off using RAM as an intermediary when transferring files between 2 internal HDDs? (write cache is off on all drives)

    Right forum? First time here; none of the options seemed perfect, so I guess this applies to 'setup.' I have tried to describe verbosely what happens from the start of a file transfer to the completion of writing the file to the 2nd hard drive.
    Win 7 64-bit Home - i7 2600K 3.4 GHz - 8 GB of RAM - 2 HDDs, 1 SSD
    I have this problem when transferring files between 2 internal hard drives. One is unhealthy (slow write speeds, but I am not looking for advice to replace it, because it serves its unimportant purpose), so the unhealthy drive drops into PIO mode - that's acceptable.
    However, a little over 1 GB of any file transfer is cached in RAM during the transfer. After the file-transfer window closes, indicating the transfer is "complete" (HA!), it still has 1 GB to write from RAM, which takes about 30 minutes. This would not be a problem if it did not also earmark an additional 5 GB of RAM (never in use), leaving 1 GB or less 'free' for programs. This needlessly makes my PC sluggish - more so than a typical file transfer between 2 healthy drives. I have Windows write caching turned off on all drives, so this is a different setting that I can't figure out nor find after 2 hours of Google searches.
    Info from Task Manager and Resource Monitor:
    Idle estimates: total 8175, cached 532, available 7218, free 6771, and the graph in Task Manager shows about 1 GB of memory in use.
    At the start of a file transfer: 8175, 2661, 6070, 4511 free, and ~2 GB of RAM used in the graph. No problems, yet.
    However, as the transfer goes on, the free RAM value drops to less than 1 GB (1 GB normal + 1 GB temporary transfer cache + 5 GB of unused earmarked space = ~7 GB), the cached value increases, but the amount of used RAM remains relatively unchanged (2 GB during the transfer, slowly dropping to idle values as the remaining bits are written to the 2nd hard drive). The free value is even slower to return to idle norms after the transfer completes writing data to the 2nd hard drive from RAM. So it's earmarking an additional 5 GB of RAM that is completely unused during this process. *****This is my problem*****
    Is there any way to turn this function off or limit its maximum size? In addition to sluggishness, it poses a risk to data integrity from system errors/power loss, and it's difficult to discern the actual time of transfer completion, which makes it difficult to know when it's safe to shut down my PC (any data left in RAM is lost, and the file transfer is ruined - as of now I have to use Resmon and look through what's writing to disk 2; it is easy to forget about it when the transfer window closed 20-30 minutes ago and the file is still being written to the 2nd disk).
    Any solution would be nice, and a little extra info, like whether it can be applied to only 1 hard drive, would be excellent.

    Thanks for the reply.
    (Although I have an undergrad degree in computers, it's been 15 years and my vocab is terrible, so I will try my best. Keep an open mind; it's not my occupation, so I rarely have to communicate ideas regarding PCs.)
    It operates the same way regardless of whether the write-cache option is enabled. It's not using the 5 GB as a read/write buffer - it's merely bloating standby memory during the transfer process, at a rate similar to the write speed of the destination (in my situation).
    At this point I don't expect a solution. I've tried to look through lists of known memory leaks, but I don't have the vocabulary to be 100% certain this problem is not documented. As of now it can't affect many people - NASes on low-bandwidth networks, USB-attached storage, etc. Do bugs get forwarded from these forums? Below I outline the consistent and repeatable nature of this, not only on my PC but on others' PCs as well (from a 2012 forum post I found).
    I've been testing and paying a little more attention, and I can describe it better:
    Just the Facts
    Resmon memory info:
    "In Use" stays consistent at ~1 GB (the idle amount, and roughly the same when nothing else is running during a file transfer).
    "Modified" holds the file-transfer data (metadata?), which remains consistent at a little over 1 GB (minor fluctuations as it works as a buffer). After the file-transfer window closes, "Modified" slowly diminishes as it finishes lazy writing (I believe that's the term). I forget the idle amount, but after the transfer this is only 58 MB.
    "Standby": as the transfer starts, it immediately rises to ~2 GB. I'm sure this initial jump is normal. However, with a large enough transfer it will bloat to well over 5 GB, increasing at a consistent rate during the entire transfer process. This is the crux of the matter.
    "Free" will drop as far as 35-50 MB over time.
    As the transfer starts, "Standby" increases by an immediate chunk, then at a slow rate throughout the entire transfer process (~1 MB/s). Once metadata is no longer being written to RAM, the "Modified" RAM slowly diminishes (at ~500 KB/s, matching Resmon disk activity for that file write) as it finishes lazy writing. After the file is 100% written to the destination drive, "Standby" remains a bloated figure long after.
    A 1.4 GB transfer had filled 3677 MB of "Standby" by the time writing finished and modified RAM cleared. After 20 minutes, it was still bloated at 3696 MB. After 30-40 minutes it was down to 2300 MB - about what it jumps to immediately when a transfer starts - and it now remains at this level until I reboot.
    I notice the "Standby" memory is available to programs, but they do not operate well. E.g., a 480p trailer on IMDB.com will stop and go every 2-3 seconds (the stream buffers fine/fast) - this is during the worst-case scenario of 35-50 MB of "Free" RAM. My PC isn't and never was the latest and greatest, but choppy video never happens, even with 1 or 2 resource hogs running (video processing/encoding, games, etc.).
    Conjecture Below
    I think it's a problem when one device is significantly slower at writing than the source device - this is the factor that I share with others having this problem. When data is written to modified RAM and then sent to the destination, standby memory is expanded until it completely or nearly fills all available RAM - if the transfer size is large enough relative to how slow the write speed of the destination device is. Otherwise it fills up as much as the file size/write speed allows. The term "memory leak" is used below; it may not technically be one, but it's an apt description in layman's terms.
    I saw a similar post on these forums (link at the end). My problem is repeatable and consistent with others' reports. I wasn't sure if I should revive it with a reply; some of these online message boards (maybe not this one) are extremely picky and sensitive, lol - the world will end if an old thread revives, even for a good reason.
    I can answer some of the ancillary issues. One person (Friday, September 21, 2012 8:33 PM) mentions not being able to shut down - I assume he means being stuck on the shutdown screen. This is because lazy writing has not completed: his NAS write speed is significantly slower than reading from the source, and the last bits of data left in RAM still need to be written to the destination. Shutdown will stall as long as needed until the data finishes writing, to prevent data loss.
    Another person (Monday, September 24, 2012 6:31 PM) mentions the rate of the leak, but the rate is more likely a function of the read speed of the source relative to the write speed of the destination, which explains why my standby expands at closer to a 1:1 ratio compared to his 1:100 (he said 10 MB per 1000 MB).
    We all have the exact same results/behaviour, just slightly different rates of bloating/leaking. As a file is written from RAM to the destination, standby increases during that time - not a problem if read and write speeds are roughly equal (unless you're transferring terabytes, in which case I bet the problem could still rear its head). When writing lags, standby RAM gets the opportunity to bloat, with no maximum except the amount of installed RAM. The slower the write speed, the worse the problem.
    The reply on Wednesday, September 26, 2012 3:04 AM has before and after pictures of exactly what I described in "Just the Facts" - specifically the Resmon image showing the Memory tab.
    The KB2647452 hotfix seems to do some weird things relative to this problem. In the posts that mention they've applied it, after the file transfer completes, it looks like the "standby" bloat has become "in use" bloat. See the info from Tuesday, October 09, 2012 10:36 PM - bobmtl applies the patch in an earlier post; compare the images from earlier posts to his post on this date. It seems like a worse problem. Also, his process list makes it very unlikely the processes add up to the ~4 GB shown in the color-coded bar. Where are the extra GBs coming from? Likely the same culprit that filled up "standby" memory for me and others. Relative to this problem, the patch seems merely to recategorize the bloat - it just changes the title it falls under.
    Link:
    https://social.technet.microsoft.com/Forums/windows/en-US/955b600f-976d-4c09-86dc-2ff19e726baf/memory-leak-in-windows-7-64bit-when-writing-to-a-network-shared-disk?forum=w7itpronetworking

  • Can't Enable write cache on disk with event ID 34

    One of our clients has a performance issue similar to the one described in "Performance issue on TS as VM - Resolution with screenshots" (www.chicagotech.net/remoteissues/tsperformance1.htm). After monitoring it, we found the problem is the write cache. I checked all the other VMs, and "Enable advanced performance" is unchecked on them.
    However, when we follow the resolution to enable the write cache, it doesn't take, and this event is logged:
    Event Type: Warning
    Event Source: Disk
    Event Category: None
    Event ID: 34
    Description:
    The driver disabled the write cache on device \Device\Harddisk0\DR0.
    How to fix it?

    Hi,
    By default, write caching is disabled on a disk that contains the Active Directory database (Ntds.dit) or the Active Directory log files. This enhances the reliability of the Active Directory files.
    So if this is a domain controller, try moving the Active Directory database and log files off the disk on which you need to enable write caching.

  • EventId: 32, disk - The driver detected that the device \Device\Harddisk0\DR0 has its write cache enabled

    Virtualized 2012 R2 Domain Controller running on Hyper-V. Every boot logs the following error three times:
    EventId: 32, disk - The driver detected that the device \Device\Harddisk0\DR0 has its write cache enabled.
    Data corruption may occur.
    I have checked, and the virtual SCSI disks all have their write cache enabled. Is this correct, or do I need to take some action to prevent these events?

    Hi,
    This is normal, and you can't change it. The event tells AD that write caching is enabled, and AD then uses a different method to store its information safely.
    http://www.hyper-v.nu/archives/hvredevoort/2013/07/keeping-your-virtual-active-directory-domain-controllers-safe/
    More information there.
    Thanks,
    Denis

  • Write caching

    When using Windows, whether or not to enable write caching for an external hard drive can be important. Is write caching a consideration for external hard drives when running OS X?
    If so, where is the setting selected? I can't find anything about write caching in Disk Utility.
    Thanks.

    Try checking the "advanced performance" checkbox (or something like that - I've got a Russian Vista, not an English one) in the HDD's properties in Device Manager.

  • Write cache not utilized for a LUN

    I am running into a test result that I can't explain - please help. The test runs 5 threads of file creation at 5 GB each.
    This is on a 7420; testing is done within the same pool, which uses a pair of mirrored log devices at 76 GB each.
    One LDOM has 64 GB of memory and a 1 TB LUN; there the "Write Cache Enabled" checkbox hardly makes any difference, and I can see that the log disks are being used. Is that because the pool was created with the logs?
    The other LDOM has 96 GB of memory and a 3.5 TB LUN. When "Write Cache Enabled" is checked, I clearly see the log disks are hardly used; the other disks get a lot more action than the log disks. But when "Write Cache Enabled" is not checked, I see the log disks being used - they have more activity on them. However, the test results show that I get faster throughput without the log disks being used: logzilla idle, but faster; logzilla used, but slower. What am I missing here?
    I attached a picture to show what I saw; the highlighted device is one of the log disks.
    Thanks for your help!

    Dear Arun,
    You wrote
    =============
    Cache is first in, first out. If you execute too many queries, then the cache might force out the first query for want of memory. Check the cache immediately after the query is run once; you should also check the cache settings for the query in RSRT to see if the cache is enabled for the query. Ideally the cache option should be "Main memory cache with swapping" in RSRT.
    ==============
    I executed my query in question as the first query, but no cache was created. The options for the cache mode were 'H Query reads data upon navigation' and 'main memory without swapping', along with a tick in the 'Query cache added to Delta' checkbox.
    Just to check, I went on removing formulas, calculated key figures, and free characteristics from the query one by one, and in the end the cache did get created for the same query. But I still could not pinpoint the exact reason for the query not producing a cache in its original form.
    It only makes me think that there may be a particular scenario or peculiarity pertaining to the design of a query that prevents its cache from being generated.
    regards,
    Kulmohan

  • Slow disk writes VM 3.1

    I know I must have something configured wrong but I can't figure it out.
    I have tested disk write access on my VM servers in V2.2 and V3.1, using these commands:
    hdparm -tT /dev/mapper/<lun>
    and
    dd if=/dev/zero of=output.img bs=8k count=256k
    The output is very close between the V2.2 and V3.1 servers, so that's good.
    But when I execute those commands on the actual VMs themselves within the 2.2 and 3.1 environments, I get different results.
    The hdparm numbers are still equivalent between the two.
    But the dd command shows my old V2.2 VMs getting around 650 MB/s, while my new V3.1 VMs get between 8 and 30 MB/s. Does anyone have an idea off the top of their head why this would be?
    Oh, I just tried one more thing... most of my V3.1 VMs were converted from V2.2 via template import. I just tried the same commands on a couple of new VMs I made in V3.1 (from OL6-64bit templates) and they returned 360mb/s and 750mb/s. So I wonder what's wrong with the VMs that I converted from VM2.2?
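
    One quick thing to check on the converted images is whether the import changed their allocation: compare each disk image's apparent size with the blocks actually allocated, since a conversion that silently un-sparsed or fragmented an image can change its write behaviour. A sketch; the repository path below is purely illustrative:

    # Apparent file size vs. space actually allocated on disk:
    ls -lh /OVS/running_pool/myvm/System.img
    du -h /OVS/running_pool/myvm/System.img
    # A large gap means the image is sparse; matching numbers mean it is
    # fully allocated.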

    Hi there,
    We're having exactly the same problem as you appear to have experienced: very slow write I/O in guest VMs (domU) but fast I/O on the host on the same iSCSI file system.
    Write I/O inside the guest is between 3 and 20 MB/s (dd bs=2048k count=512), whereas on the host it's 95 MB/s - which is hitting the practical limits of our GigE iSCSI SAN.
    I've checked inside the guests and can see no sign in dmesg of write caches being disabled. Curiously, we hit this same wall on both a RHEL 6.3 and a Solaris 10U10 guest.
    I've tried both sparse and non-sparse files, and the performance is the same. Read performance is fine, but this write bottleneck is a showstopper. I would appreciate any assistance while I await a response from Oracle (a direct-I/O variant of the dd test is sketched after the setup details below).
    Setup:
    - Oracle VM 3.1.1 update 485
    - Sun Fire X4150 Server
    - Sun Storagetek 2510 iSCSI SAN
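
    As mentioned above, one way to take the guest page cache out of the measurement is to rerun dd with direct or synchronous variants. These are GNU dd flags, so they apply to the RHEL guest only; the Solaris guest would need a different mechanism:

    # Bypass the guest page cache so the rate reflects the real I/O path:
    dd if=/dev/zero of=testfile bs=2048k count=512 oflag=direct
    # Or keep the cache but force a flush before dd reports its figure:
    dd if=/dev/zero of=testfile bs=2048k count=512 conv=fsync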

  • Disk Writes and Contious Input

    I'm having a problem with my MacBook Pro. For the last two days, the hard drive has been in continuous use. The drive is not going at full speed, but it clicks away continuously while I'm using the computer, no matter what is running (or not running). I've checked in Activity Monitor, and there seems to be data being continuously written to the drive at a rate of between 40 and 60 KB/s. I recognize most of the top processes, and nothing seems to be taking an unwarranted amount of CPU cycles. Is there any way to track down which processes could be writing to disk? Or, if that's not possible, can I find out what data is being written, or where it is going?
    Any help would be much appreciated.
    Sincerely,
    Tim
    MacBook Pro Core 2 Duo   Mac OS X (10.4.8)  

    No, Spotlight was not indexing at the time, although I hadn't thought of that.
    I have come across the solution by happenstance. I thought I would go through and quit, one at a time, the processes I was absolutely sure were not system related. By chance, the first one I picked turned out to be the culprit: when "TabletDriver" was quit, all disk writes ceased. This was the newly updated driver for a Wacom tablet, and it seems to have been causing all the problems. I know neither why nor where it was writing data to the disk, but this seems to have cleared up the issue. Thank you for the helpful suggestion.
    Sincerely,
    Tim
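
    For anyone who finds this thread later: OS X ships with fs_usage, which attributes filesystem activity to processes and would point at something like the tablet driver directly. Run as root; these flags are from the 10.4-era man page:

    # Stream filesystem-related calls system-wide, in wide format:
    sudo fs_usage -w -f filesys
    # Or restrict the trace to a single suspect process by name:
    sudo fs_usage -w -f filesys TabletDriver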

  • Can't enable HDD write caching

    I recently upgraded my PC. I am using Vista x64 on an MSI P35 mobo with a Seagate SATA II 250 GB drive and a 36 GB Raptor (SATA I). It has the newest 1.8 BIOS and the newest Intel AHCI drivers.
    The problem is this: I can't enable write caching on either hard drive in Device Manager. The checkbox is there, but below it it says something like "This device does not support write caching."
    Now, I am not dumb, and the Raptor used to have write caching enabled on my old nForce4 mobo under Vista. I have AHCI enabled in the mobo BIOS too. Does anybody know what is causing this?
    Also, does anybody know of any good utilities that will tell me which features are enabled (NCQ, SATA 3.0, etc.) without requiring an install? I just like to execute utilities and get results without installing stuff (like CPU-Z and GPU-Z).

    Hi,
    By default, write caching is disabled on a disk that contains the Active Directory database (Ntds.dit) or the Active Directory log files. This enhances the reliability of the Active Directory files.
    So if this is a domain controller, try moving the Active Directory database and log files off the disk on which you need to enable write caching.

  • Disk write failure

    When I go to update the songs on my iPod, it does 2 or 3 and then says "disk write failure: disk could not be read or written to."
    Does anyone know what's going on?

    http://docs.info.apple.com/article.html?artnum=301267
    When you sync your iPod with iTunes, iTunes may display an error message that reads:
    "Attempting to copy to the disk <iPod name> failed. The disk could not be read from or written to."
    When you update or restore your iPod, you may see this error:
    "Firmware update failure. Disk write error."
    These errors can happen with any iPod.
    These errors can occur anytime iTunes can't read information from or write information to the iPod. Here are some things that can cause this.
    Outdated operating system software
    Make sure you have the latest updates for your operating system, which may include improvements for device connections. For example, many USB and FireWire improvements have been included in Windows Service Packs.
    Computer needs updates
    Make sure you have the latest updates available for your specific computer model (or components for home-built PCs). These are usually available for download on the support website for the maker of the PC (or component). Many USB updates are listed as "Intel chipset" or just "chipset" updates on PC manufacturer's support and download websites.
    Software interference
    Some software can interfere with iTunes, making it unable to write files to your iPod. For example, antivirus software that scans all files on connected disks can cause one of these errors. Think about what software you have installed, and try disabling any add-ons that might be interfering with iTunes. Check your suspected software's documentation or contact the software maker if you need assistance with disabling the application.
    Damaged files
    If one of your music files or photos is damaged, iTunes may display one of these errors when transferring that file to the iPod. If you identify a file that is causing the error, try deleting that file and reimporting it from a backup file or from the original source. You may be able to repair files by repairing the disk (see the solutions in the next section).
    Damaged disk structure
    These errors can also appear if the format of your computer's hard drive or your iPod disk is damaged.
    Windows users, search the Help system in Windows for chkdsk to get more information on checking and repairing the disk structure.
    To repair an iPod disk—Restore the iPod or iPod shuffle using the latest version of iPod Updater. Warning: Be sure to back up your data before restoring an iPod. The restore process cannot be undone. All of your songs and files will be deleted.
    Corrupt iPod photo Cache
    If you're getting the error when transferring photos to an iPod photo, try deleting the iPod photo Cache and then starting the photo sync again.
    Lost connection
    Make sure that the connections from your computer to the iPod are snug and do not wiggle or come loose during transfers. For example, if you use the wrong size dock for your iPod, it can put strain on the connectors and cause a bad connection.
    Hardware failure or non-compliant hardware can cause these errors. This could be an issue with iPod hardware or with the cable or dock you're using, but more often it's an issue with the USB or FireWire card or interface in your computer. Some USB and FireWire interfaces just don't work very well. If you isolate the issue to the USB or FireWire interface in your computer, you may want to try a different port, get the computer serviced, or replace the card or interface with a better one.

  • Should I Put Disk/Media Cache Folder in Primary SSD with OS and Apps?

    Hello, I have a SATA3 SSD with the OS and apps on it, and 2 SATA2 HDDs, one for footage/projects and the other for renders/archive.
    I believe putting the disk/media cache on the SSD would boost performance because of its speed, but I have heard it can shorten the lifespan of the disk, although I'm not sure by how much.
    I guess I could purchase another SSD dedicated to the disk cache, but I'd rather not if putting it on the same drive as the OS and apps won't be too much of a problem.

    Go ahead and put your caches on the same SSD as your OS and applications.
    Most of the concerns about repeated writes causing SSDs to fail are from earlier versions of the technology. Reliability is greatly improved now. Also, there's nothing much lost if your SSD fails and it only has your applications, OS, and caches on it, since you can reinstall your OS and applications rather easily.
    You never want to have your irreplaceable assets (like the only copy of your footage) on a disk that you are afraid of failing.

  • Query performance problem - events 2505-read cache and 2510-write cache

    Hi,
    I am experiencing severe performance problems with a query, specifically with events 2505 (Read Cache) and 2510 (Write Cache), which went up to 11,000 seconds on some executions. Data Manager (400 s), OLAP data selection (90 s), and OLAP user exit (250 s) are the other events with noticeable times. All other events are very quick.
    The query settings (RSRT) are:
    persistent cache across each app server -> cluster table,
    update cache in delta process is checked -> group on InfoProvider type,
    use cache despite virtual characteristics/key figures is checked (one InfoCube has 1 virtual key figure, which should have a static result for a day).
    => Do you know how I can get more details than what's in 0TCT_C02 to break down the time in the read and write cache events, or do you have any recommendation?
    I have checked, and no data loads were in progress on the InfoProviders, and no master data loads (change run) either. Overall system performance was acceptable for other queries.
    Thanks

    Hi,
    Looks like you're using BDB, not BDB JE, and this is the BDB JE forum. Could you please repost here?:
    Berkeley DB
    Thanks,
    mark

  • Atomic disk writes using Java, is it possible??

    Hi, I need to do atomic disk writes using Java. Is that supported/possible? And if so, how is it done?
    By an atomic disk write I mean that if I write to a file, either all the information is written or none at all - never just some of the data (in case of a program crash or other failure of the computer).
    /Fredrik

    You can store all the information in an object, and when you are sure you have everything you need, write all the data to the file in one step.
    But there is never a 100% guarantee; you can check after the writing process whether the size is correct, and similar things.
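
    A more robust approach than checking sizes afterwards is the write-then-rename pattern: write everything to a temporary file on the same filesystem, force it to disk, then rename it over the target, because the rename step itself is atomic. A shell sketch of the idea (file names are illustrative; in Java the rename corresponds to java.nio.file.Files.move with StandardCopyOption.ATOMIC_MOVE, and the flush to FileChannel.force):

    # Write the new content to a temp file on the same filesystem...
    tmp=$(mktemp /data/.out.XXXXXX)
    printf '%s' "$payload" > "$tmp"
    # ...push it to stable storage, then atomically swap it into place.
    sync
    mv "$tmp" /data/out.txt
    # Readers now see either the complete old file or the complete new one.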
