Severe data corruption with ext4

Hello, I don't know where else to ask (except, perhaps, the LKML), so I decided to post my problem here.
I am running an up-to-date Arch installation with the stock kernel ("2.6.28-ARCH"), using ext4 on LUKS on LVM on RAID-1; LUKS was set up as described here, and RAID/LVM was set up following this howto.
The system had been running for several days without any problems; this morning I found some messages in /var/log/errors.log, stating:
Mar 19 08:42:43 bakunin kernel: BUG: scheduling while atomic: install-info/27020/0x00000002
Mar 19 08:42:48 bakunin kernel: BUG: scheduling while atomic: kjournald2/2451/0x00000002
with the second line repeating 10 more times.
Looking into /var/log/kernel.log, I found the following message:
Mar 19 08:42:43 bakunin kernel: EXT4-fs error (device dm-13): ext4_mb_generate_buddy: EXT4-fs: group 0: 16470 blocks in bitmap, 4354 in gd
Mar 19 08:42:43 bakunin kernel:
Mar 19 08:42:43 bakunin kernel: BUG: scheduling while atomic: install-info/27020/0x00000002
Mar 19 08:42:43 bakunin kernel: Modules linked in: it87 hwmon_vid isofs zlib_inflate ext3 jbd loop ipv6 arc4 ecb snd_seq_oss nvidia(P) ath5k snd_seq_midi_event snd_seq snd_seq_device agpgart mac80211 snd_hda_intel led_class snd_pcm_oss snd_mixer_oss i2c_core k8temp snd_hwdep pcspkr snd_pcm snd_timer snd snd_page_alloc cfg80211 shpchp pci_hotplug r8169 mii sg soundcore wmi evdev thermal processor fan button battery ac rtc_cmos rtc_core rtc_lib ext4 mbcache jbd2 crc16 aes_i586 aes_generic xts gf128mul dm_crypt dm_mod raid1 md_mod sd_mod usbhid hid ahci pata_amd pata_acpi ata_generic libata scsi_mod ohci_hcd ehci_hcd usbcore [last unloaded: i2c_dev]
Mar 19 08:42:43 bakunin kernel: Pid: 27020, comm: install-info Tainted: P 2.6.28-ARCH #1
Mar 19 08:42:43 bakunin kernel: Call Trace:
Mar 19 08:42:43 bakunin kernel: [<c0336dc7>] schedule+0x4a7/0x970
Mar 19 08:42:43 bakunin kernel: [<f8bb844d>] dm_table_unplug_all+0x3d/0x90 [dm_mod]
Mar 19 08:42:43 bakunin kernel: [<c03396d5>] _spin_unlock+0x5/0x20
Mar 19 08:42:43 bakunin kernel: [<c02790df>] vt_console_print+0x23f/0x330
Mar 19 08:42:43 bakunin kernel: [<c014cde1>] getnstimeofday+0x51/0x110
Mar 19 08:42:43 bakunin kernel: [<c03372e1>] io_schedule+0x51/0x90
Mar 19 08:42:43 bakunin kernel: [<c01be315>] sync_buffer+0x35/0x40
Mar 19 08:42:43 bakunin kernel: [<c0337797>] __wait_on_bit+0x47/0x70
Mar 19 08:42:43 bakunin kernel: [<c01be2e0>] sync_buffer+0x0/0x40
Mar 19 08:42:43 bakunin kernel: [<c01be2e0>] sync_buffer+0x0/0x40
Mar 19 08:42:43 bakunin kernel: [<c0337873>] out_of_line_wait_on_bit+0xb3/0xd0
Mar 19 08:42:43 bakunin kernel: [<c0144dd0>] wake_bit_function+0x0/0x70
Mar 19 08:42:43 bakunin kernel: [<c01be28e>] __wait_on_buffer+0x1e/0x30
Mar 19 08:42:43 bakunin kernel: [<c01be8af>] sync_dirty_buffer+0x7f/0xc0
Mar 19 08:42:43 bakunin kernel: [<f941e227>] ext4_commit_super+0x77/0xd0 [ext4]
Mar 19 08:42:43 bakunin kernel: [<f9421fd7>] ext4_handle_error+0x47/0xb0 [ext4]
Mar 19 08:42:43 bakunin kernel: [<c03365f2>] printk+0x17/0x1d
Mar 19 08:42:43 bakunin kernel: [<f9422104>] ext4_error+0x54/0x60 [ext4]
Mar 19 08:42:43 bakunin kernel: [<f942c8cb>] ext4_mb_init_cache+0x94b/0xb40 [ext4]
Mar 19 08:42:43 bakunin kernel: [<c0173997>] find_or_create_page+0x57/0x90
Mar 19 08:42:43 bakunin kernel: [<f942cd6d>] ext4_mb_load_buddy+0x2ad/0x2e0 [ext4]
Mar 19 08:42:43 bakunin kernel: [<f942d0da>] ext4_mb_free_blocks+0x33a/0x760 [ext4]
Mar 19 08:42:43 bakunin kernel: [<c0172b8c>] find_get_page+0x2c/0xd0
Mar 19 08:42:43 bakunin kernel: [<f9412622>] ext4_mark_iloc_dirty+0x412/0x520 [ext4]
Mar 19 08:42:43 bakunin kernel: [<f940e66b>] ext4_free_blocks+0x5b/0xf0 [ext4]
Mar 19 08:42:43 bakunin kernel: [<f9426dc7>] ext4_ext_truncate+0x807/0x960 [ext4]
Mar 19 08:42:43 bakunin kernel: [<f9418177>] ext4_truncate+0x147/0x690 [ext4]
Mar 19 08:42:43 bakunin kernel: [<c017c7c1>] truncate_inode_pages_range+0x191/0x360
Mar 19 08:42:43 bakunin kernel: [<c0183eda>] unmap_mapping_range+0xda/0x280
Mar 19 08:42:43 bakunin kernel: [<c0184266>] vmtruncate+0xc6/0x190
Mar 19 08:42:43 bakunin kernel: [<c01b0c53>] inode_setattr+0x53/0x180
Mar 19 08:42:43 bakunin kernel: [<f9413961>] ext4_setattr+0x241/0x350 [ext4]
Mar 19 08:42:43 bakunin kernel: [<c01a5453>] do_lookup+0x93/0x1e0
Mar 19 08:42:43 bakunin kernel: [<c01b0ec9>] notify_change+0x149/0x340
Mar 19 08:42:43 bakunin kernel: [<c0337e73>] __mutex_lock_slowpath+0x1d3/0x250
Mar 19 08:42:43 bakunin kernel: [<c019c5cc>] do_truncate+0x6c/0x90
Mar 19 08:42:43 bakunin kernel: [<c03396d5>] _spin_unlock+0x5/0x20
Mar 19 08:42:43 bakunin kernel: [<c01aeaee>] __d_lookup+0x10e/0x150
Mar 19 08:42:43 bakunin kernel: [<c01a6a1f>] may_open+0x1bf/0x250
Mar 19 08:42:43 bakunin kernel: [<c01a8ec5>] do_filp_open+0x185/0x860
Mar 19 08:42:43 bakunin kernel: [<c018ec1b>] free_pages_and_swap_cache+0x7b/0xa0
Mar 19 08:42:43 bakunin kernel: [<c03396d5>] _spin_unlock+0x5/0x20
Mar 19 08:42:43 bakunin kernel: [<c019b641>] do_sys_open+0x61/0xf0
Mar 19 08:42:43 bakunin kernel: [<c019b74c>] sys_open+0x2c/0x40
Mar 19 08:42:43 bakunin kernel: [<c0103f0f>] sysenter_do_call+0x12/0x2f
followed by multiple instances of:
Mar 19 08:42:48 bakunin kernel: EXT4-fs error (device dm-13): mb_free_blocks: double-free of inode 0's block 11457(bit 11457 in group 0)
Mar 19 08:42:48 bakunin kernel:
Mar 19 08:42:48 bakunin kernel: BUG: scheduling while atomic: kjournald2/2451/0x00000002
Mar 19 08:42:48 bakunin kernel: Modules linked in: it87 hwmon_vid isofs zlib_inflate ext3 jbd loop ipv6 arc4 ecb snd_seq_oss nvidia(P) ath5k snd_seq_midi_event snd_seq snd_seq_device agpgart mac80211 snd_hda_intel led_class snd_pcm_oss snd_mixer_oss i2c_core k8temp snd_hwdep pcspkr snd_pcm snd_timer snd snd_page_alloc cfg80211 shpchp pci_hotplug r8169 mii sg soundcore wmi evdev thermal processor fan button battery ac rtc_cmos rtc_core rtc_lib ext4 mbcache jbd2 crc16 aes_i586 aes_generic xts gf128mul dm_crypt dm_mod raid1 md_mod sd_mod usbhid hid ahci pata_amd pata_acpi ata_generic libata scsi_mod ohci_hcd ehci_hcd usbcore [last unloaded: i2c_dev]
Mar 19 08:42:48 bakunin kernel: Pid: 2451, comm: kjournald2 Tainted: P 2.6.28-ARCH #1
Mar 19 08:42:48 bakunin kernel: Call Trace:
Mar 19 08:42:48 bakunin kernel: [<c0336dc7>] schedule+0x4a7/0x970
Mar 19 08:42:48 bakunin kernel: [<f8bb844d>] dm_table_unplug_all+0x3d/0x90 [dm_mod]
Mar 19 08:42:48 bakunin kernel: [<c0144d9b>] autoremove_wake_function+0x1b/0x50
Mar 19 08:42:48 bakunin kernel: [<c03396d5>] _spin_unlock+0x5/0x20
Mar 19 08:42:48 bakunin kernel: [<c02790df>] vt_console_print+0x23f/0x330
Mar 19 08:42:48 bakunin kernel: [<c014cde1>] getnstimeofday+0x51/0x110
Mar 19 08:42:48 bakunin kernel: [<c03372e1>] io_schedule+0x51/0x90
Mar 19 08:42:48 bakunin kernel: [<c01be315>] sync_buffer+0x35/0x40
Mar 19 08:42:48 bakunin kernel: [<c0337797>] __wait_on_bit+0x47/0x70
Mar 19 08:42:48 bakunin kernel: [<c01be2e0>] sync_buffer+0x0/0x40
Mar 19 08:42:48 bakunin kernel: [<c01be2e0>] sync_buffer+0x0/0x40
Mar 19 08:42:48 bakunin kernel: [<c0337873>] out_of_line_wait_on_bit+0xb3/0xd0
Mar 19 08:42:48 bakunin kernel: [<c0144dd0>] wake_bit_function+0x0/0x70
Mar 19 08:42:48 bakunin kernel: [<c01be28e>] __wait_on_buffer+0x1e/0x30
Mar 19 08:42:48 bakunin kernel: [<c01be8af>] sync_dirty_buffer+0x7f/0xc0
Mar 19 08:42:48 bakunin kernel: [<f941e227>] ext4_commit_super+0x77/0xd0 [ext4]
Mar 19 08:42:48 bakunin kernel: [<f9421fd7>] ext4_handle_error+0x47/0xb0 [ext4]
Mar 19 08:42:48 bakunin kernel: [<c03365f2>] printk+0x17/0x1d
Mar 19 08:42:48 bakunin kernel: [<f9422104>] ext4_error+0x54/0x60 [ext4]
Mar 19 08:42:48 bakunin kernel: [<f942b40c>] mb_free_blocks+0x1dc/0x440 [ext4]
Mar 19 08:42:48 bakunin kernel: [<f943022e>] release_blocks_on_commit+0xde/0x250 [ext4]
Mar 19 08:42:48 bakunin kernel: [<f93b038c>] jbd2_journal_commit_transaction+0xf3c/0x1180 [jbd2]
Mar 19 08:42:48 bakunin kernel: [<f93b3dac>] kjournald2+0xac/0x1f0 [jbd2]
Mar 19 08:42:48 bakunin kernel: [<c0144d80>] autoremove_wake_function+0x0/0x50
Mar 19 08:42:48 bakunin kernel: [<f93b3d00>] kjournald2+0x0/0x1f0 [jbd2]
Mar 19 08:42:48 bakunin kernel: [<c0144a89>] kthread+0x39/0x70
Mar 19 08:42:48 bakunin kernel: [<c0144a50>] kthread+0x0/0x70
Mar 19 08:42:48 bakunin kernel: [<c0104d9f>] kernel_thread_helper+0x7/0x18
Using "dmsetup ls", I figured that dm-13 was /usr; so I fsck'd it.
fsck revealed hundreds of errors, which I let "fsck -y" fix automatically.
Now there are more than 250 files and directories in /usr/lost+found.
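For reference, the repair sequence was roughly the following (the mapper path is a guess based on my LVM naming; adjust it to your volume group and LV names):
# unmount the filesystem first; a mounted filesystem must not be fsck'd
umount /usr
# force a full check and let fsck fix everything it finds
fsck.ext4 -f -y /dev/mapper/vg0-usr
# remount and see how much landed in lost+found
mount /usr
ls /usr/lost+found | wc -l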
[Annotation: When I use "dmsetup ls" now, after a reboot, /usr is associated with minor number 14, while 13 (the number mentioned in the error messages above) is associated with /var. I don't know whether I misread the number/volume in the first place, or whether the enumeration changes after a reboot. However, /var is hosting a clean filesystem.]
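A less ambiguous way to resolve a dm-N device than reading "dmsetup ls" by eye is to print the name-to-minor mapping explicitly; a short sketch (name, major and minor are standard dmsetup column fields):
# print name plus major:minor for every device-mapper device;
# the row with minor 13 is the volume the kernel log calls dm-13
dmsetup info -c -o name,major,minor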
An extended SMART self-test of my hard drives reported no errors. The hardware seems to be okay.
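For completeness, the self-test was along these lines; /dev/sda and /dev/sdb stand in for the actual RAID-1 members:
# start the extended (long) self-test on both mirror members
smartctl -t long /dev/sda
smartctl -t long /dev/sdb
# once the test has finished, review the self-test results and the error log
smartctl -l selftest /dev/sda
smartctl -a /dev/sda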
The evening before the errors occurred, I ran lm_sensors' "sensors-detect" script, which loaded a couple of modules and such. I don't know whether there might be a connection.
Also, every night about 5 minutes past midnight, I get something like this in my logs:
Mar 19 00:04:51 bakunin kernel: init_special_inode: bogus i_mode (336)
I guess this is caused by some process run by crond, but I have not yet figured out which one spawns this message.
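All I have to go on is the timestamp, so my plan is something like this (the exact log file name depends on the local syslog configuration):
# the message appears around 00:05, so list what cron runs daily
ls -l /etc/cron.daily/
# and look for cron activity logged around that time of night
grep -i cron /var/log/messages.log | grep ' 00:0'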
So, any ideas?
Last edited by Edmond (2009-03-19 23:54:17)

Just to keep you informed: I've posted on the LKML/ext4 ML, Ted Ts'o answered:
Ted Ts'o wrote:
> Mar 19 08:42:43 bakunin kernel: BUG: scheduling while atomic:
> install-info/27020/0x00000002
This was caused by the call to ext4_error(); the "scheduling while atomic" BUG error was fixed in 2.6.29-rc1:
commit 5d1b1b3f492f8696ea18950a454a141381b0f926
Author: Aneesh Kumar K.V <[email protected]>
Date:   Mon Jan 5 22:19:52 2009 -0500
   ext4: fix BUG when calling ext4_error with locked block group
   The mballoc code likes to call ext4_error while it is holding locked
   block groups.  This can causes a scheduling in atomic context BUG.  We
   can't just unlock the block group and relock it after/if ext4_error
   returns since that might result in race conditions in the case where
   the filesystem is set to continue after finding errors.
   Signed-off-by: Aneesh Kumar K.V <[email protected]>
   Signed-off-by: "Theodore Ts'o" <[email protected]>
It's going to be moderately painful to backport this to 2.6.28 and
2.6.27, but we can look into it.
Last edited by Edmond (2009-03-23 11:53:59)

Similar Messages

  • Data corruption with SSD after hard reset

    Hi, I'm using a Mac Mini for a car product and I MUST shut it down each time the hard way, i.e. by cutting the power.
    I am perfectly aware that this is definitely NOT the recommended way to shut down OSX because it might lead to data corruption but at the moment there are NO other options. So please don't simply suggest to shut it down the "good way", i.e. via software, because it simply isn't an option.
    Now, in the past I did lots of ON/OFF tests with conventional drives and had no problems. Recently I moved to SSD drives and it looks like I get more frequent boot problems (apple logo with the wheel spinning forever). Using DiskWarrior in these cases fixes the problem and repairs the folder structure of the drive. After that the drive boots again.
    Given the constraints for the application, i.e. shutdown==powercut, is there any way I can ensure better data integrity? If I disable write caching would that help? Any other trick I could do to make the system more resilient? And finally, are actually SSD more prone to crashes of this kind or was that just a coincidence?
    Thanks a lot for the help. I hope some of you have experienced this problem in a similar situation (and found a good solution)
    cheers
    Emanuele

    There are OSX compatible UPSs that send status information to the computer via USB. This enables the computer to do a normal shutdown before the UPS battery runs out. Here's an example of the system preference that will show when a compatible UPS is connected:
    <http://www.cyberpowersystems.com/support/faqs/faqOSXUPS.html>
    Do you think disabling write caching would help at all?
    It might, but the problem may be entirely in the SSD. Do you have journaling enabled on the SSD? That could make a big difference.
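    For what it's worth, journaling on an HFS+ volume can be checked and enabled from Terminal; the volume name below is a placeholder:
    # show whether the filesystem is "Journaled HFS+"
    diskutil info "/Volumes/Macintosh HD" | grep -i journal
    # enable journaling on the volume if it is not already on
    sudo diskutil enableJournal "/Volumes/Macintosh HD"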
    Finally, I'm using OCZ Solid 2. Do you have any comment on that or do you recommend something different? I doubt that SLC drives would help in this matter.
    You would have to contact the SSD makers and ask them how well they handle power failures. It should be possible to make them no worse than a normal hard drive, but there may be some compromises they have to make for performance reasons.

  • Preventing data corruption with TM and a TC over a wireless connection?

    Long story short: I was using TM and a TC via wireless connection to do the usual ~/ backups. My last "manual" backup to a USB hard drive was on June 1.
    I decided to wipe the partition to remove a lot of built-up garbage, etc., and reinstall Snow Leopard -- which I did, followed by the 10.6.4 combo update, the usual other patches, and the like. Still reinstalling a thing or two. But when I tried to use Time Machine to restore the things I wanted to keep (about 100 GB of data in all), it was sporadically corrupted and useless. Handfuls of MP3 files that wouldn't play, a good number of PDFs that wouldn't open and were unrecognizable in Preview.app, that sort of thing.
    I went back to the June 1 data set and have noticed no issues thus far, but that also means that everything new from June 1 to mid-July was lost.
    I would like to still be able to use TM+TC wirelessly, but have it be reliable, so what I'm wondering is this: how can I ensure data integrity when wirelessly backing up my home directory? What can I do to make sure corruption issues like this don't happen again?
    I guess the obvious one is to not stop backups in progress by closing the lid or such a thing, but... what else?
    Thanks.

    I would like to still be able to use TM+TC wirelessly, but have it be reliable, so what I'm wondering is this: how can I ensure data integrity when wirelessly backing up my home directory? What can I do to make sure corruption issues like this don't happen again?
    I don't mean to be negative here, it's just that with so many wireless interference issues with other networks, cordless phones, security systems, other nearby electronics, etc...the list goes on....I find that it is getting difficult to use the words "wireless" and "reliable" in the same sentence.
    If you consider that a business, with important files to back up constantly, would never consider the use of wireless for this function, it might help place things in perspective.
    If you are using Time Machine, I think it goes without saying that your first master backup should be using ethernet because the entire computer is being backed up on this pass. Not only will the backup proceed much more quickly, but it will be much more reliable as well. There are no wireless interference issues when you use ethernet.
    If you don't need to back up each hour, as Time Machine does, you might want to look at Time Machine Editor to allow you to set up a backup schedule whenever you like. Maybe once a day at 2 or 3 AM, when things around the neighborhood on wireless systems have quieted down and the cordless phones are not in use. Or, if you are on the computer quite a bit, maybe two or three times a day. This will minimize the chances of a corrupted backup due to wireless interference.
    And a good wireless connection is a must. The "bars" at the top of your computer screen are there basically for show and they tell you nothing about damaging noise that may be present on your network. Post back if you want more info on evaluating your wireless signal quality, which is a combination of the best signal with the lowest noise.
    I guess the bottom line is this...the more you can use ethernet (maybe think of your important data as if it were your business files), the better things will be.
    Message was edited by: Bob Timmons

  • Data corruption with Airport Disk file sharing

    There is another long thread on the Airport Disk, and I am not sure if this is the same problem. My situation is that my file server died recently, and I am looking for its replacement. Airport Disk with my new base station appears ideal because I have a 300GB disk from my dead file server and a USB disk enclosure. I connected the disk to my iMac and formatted it to HFS. While still connected to my iMac, I copied over my iTunes library. This was the second time trying out the airport disk, so I created /Shared and /Users/username to work with file sharing with accounts. I then ejected the disk and connected to my airport base station (gigE, 7.2.1 firmware). My first test was to run a recursive diff between the two iTunes libraries. There were some files that differed, so I ran the diff again and a different set of files were identified as differing! I disconnected all users and moved the disk back to my iMac. I ran the diff again, and there were no differing files!
    I haven't yet had time to investigate more, and I am wondering if others have seen similar symptoms.
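    In case anyone wants to reproduce this, a checksum comparison is a less ambiguous test than re-running diff, since it pins down which files read back differently; the paths below are placeholders for the local and AirPort copies:
    # hash every file in both copies of the library, sorted for comparison
    cd "/Volumes/Local/iTunes" && find . -type f -exec md5 {} \; | sort > /tmp/local.md5
    cd "/Volumes/AirportDisk/iTunes" && find . -type f -exec md5 {} \; | sort > /tmp/remote.md5
    # any differing line is a file whose contents differ between the copies
    diff /tmp/local.md5 /tmp/remote.md5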
    Dan

    I have the same issue.
    Plus, in my case, two disks appear on my desktop, even though I have no partition. One is named the same as my disk, the other is named the same as the user account for the disk on the Airport Extreme.

  • [SOLVED] EXT4 Data Corruption Bug Linux 3.6.2 & 3.6.3

    Be careful
    EXT4 Data Corruption Bug Hits Stable Linux Kernels
    http://www.phoronix.com/scan.php?page=n … px=MTIxNDQ
    https://lkml.org/lkml/2012/10/23/779
    Edited: Removed the [ALERT]  label.
    Last edited by ontobelli (2012-11-01 04:58:43)

    headkase wrote:
    Even though it is a severe bug the chances of it happening to you are low.  You have to unmount and immediately remount an EXT4 partition twice in a row for it to happen.  On a normally operating system that is not a normal thing to happen.  Just wait on your desktop for 5 minutes before rebooting again.
    Arch, as a general rule, tends to stick as close to upstream as possible.  I'm sure the devs are very competent people but a quick hack or branch revert has the possibility of introducing issues of its own.  With the chance of the bug occurring low on a normally operating system I think it is better to wait for a fix from upstream.
    Well, maybe I am a bit overstressed about this, since with this computer I have quite a lot of trouble that I cannot find solutions to. Also, a kernel panic after a reboot this morning - probably not related to this - got me in a bad mood.
    Anyway.

  • NForce 4 TCQ Data Corruption Issue? with raptors

    just a warning in case it is not board specific
    i found this
    It appears there might be a data corruption problem when enabling TCQ on the nForce 4 chipset, at least with the Asus A8N-SLI Deluxe. Check out this forum post for more details. I've yet to try TCQ, as all of our Western Digital Raptors are in a server. You can learn about TCQ in this PDF from Western Digital. Anyway, you may want to take some caution when enabling TCQ and make sure you do have a backup. Here is a post from sadtherobot on the alleged issue.
    No, I have not been able to get TCQ working reliably with the current nForce4 drivers; they are the worst nvidia drivers I've seen in a long time. Every time I've enabled TCQ in either the 32-bit or 64-bit versions of Windows, I get massive corruption of files. As for TCQ on the nForce3 chipset, it does not seem to work correctly either. I've tested it on a few computers using different drives; my main test was on one of my computers, an SN95G5 with a 74GB Raptor. Data corruption does not occur on the nForce3 boards; instead it drops transfer rates to much lower than they should be. For instance, using the 32-bit version of Windows and the nForce 5.10 drivers with TCQ enabled, the computer takes at least 3 times longer to start up, and nvidia's speed test shows the sustained transfer rate at about 5MB/s (down from 70). Using the 64-bit version of Windows and the 6.25 drivers, there is a speed decrease, but it is not nearly as bad as the 32-bit loss of speed; the nvidia speed test shows about 9MB/s (instead of 71).
    Whether these transfer rates are correct or not isn't really the issue (I highly doubt it's going at 5-9MB/s); the problem is that there is a noticeable decrease in speed, and their own synthetic tests show very poor results, which means that something is indeed wrong with the TCQ support. I think the only reason the 64-bit loss wasn't as bad is because those drivers actually support read caching, unlike the massively out-dated 32-bit drivers.
    Let's see if we can't figure this one out guys
    http://www.amdzone.com/modules.php?op=modload&name=News&file=article&sid=2112&mode=thread&order=0&thold=0

    Thanks for the smart advice! 
    I know that if I go out of "specifications" something could go wrong... but I have bought this MB exactly for this! I would have bought something cheaper if I had gone with a 200MHz FSB... am I wrong? 
    I only wanted to know if there was any issue and if I could bypass it....

  • M500/M5x0 QUEUED-TRIM data corruption alert (mostly for Linux users)

    The M550 (all released firmware) and the M500 (all released firmware up to MU04) can cause data corruption when QUEUED TRIM is used. Since Crucial is not urging everyone to update to MU05 and is taking its time with the M550 update, I assume that Windows does not issue QUEUED TRIM by default, and therefore does not trigger the issue (yet). I have no idea about Intel RST enhanced Windows drivers, MacOS or FreeBSD. Chances are they cannot trigger the bug, but you might want to check with the vendors.
    EDIT: The M500 MU05 firmware still has the QUEUED TRIM data-killer bug; there are no safe firmware versions.
    EDIT: This is a problem on several outdated versions of the Linux kernel, in the 3.12, 3.13, 3.14 and 3.17 branches. Linux releases older than 3.12 will NOT trigger the bug. Recent releases of the 3.12, 3.13, 3.14 and 3.17 branches have a blacklist in place and will NOT trigger the bug. The 3.15 and 3.16 kernels also have the blacklist, and won't trigger the firmware bug.
    Dangerous for use in kernels: 3.12 (before 3.12.29); 3.13 (before 3.13.7); 3.14 (before 3.14.20); 3.17 (before 3.17.1; a regression in the blacklist, fixed in 3.17.1).
    Safe kernels: anything before 3.12; 3.12.29 and later; 3.13.7 and later; 3.14.20 and later; 3.15 (all); 3.16 (all); 3.17.1 and later.
    Bug workaround for any kernel version (tanks performance down to a crawl on most workloads): disable NCQ on the kernel command line by adding the libata.force=noncq parameter in the bootloader. The "uname -r" command will tell you the Linux kernel release you're running.
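    On a GRUB 2 system, applying the workaround might look like this (paths are the common defaults; Debian/Ubuntu users would run update-grub instead of grub-mkconfig):
    # check the running kernel release against the lists above
    uname -r
    # add the workaround to the kernel command line in /etc/default/grub, e.g.
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet libata.force=noncq"
    # then regenerate the GRUB configuration and reboot
    sudo grub-mkconfig -o /boot/grub/grub.cfg
    sudo reboot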

    bogdan wrote:
    Thanks for the clarification.
    uname -r: 3.14.5-031405-generic
    After the first reboot I noticed the following entries:
    [ 25.818233] ata1: log page 10h reported inactive tag 0
    [ 25.818242] ata1.00: exception Emask 0x1 SAct 0x50000000 SErr 0x0 action 0x0
    [ 25.818244] ata1.00: irq_stat 0x40000008
    [ 25.818247] ata1.00: failed command: READ FPDMA QUEUED
    [ 25.818252] ata1.00: cmd 60/60:e0:78:d4:15/00:00:09:00:00/40 tag 28 ncq 49152 in
    [ 25.818252] res 40/00:f4:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
    [ 25.818254] ata1.00: status: { DRDY }
    [ 25.818256] ata1.00: failed command: SEND FPDMA QUEUED
    [ 25.818260] ata1.00: cmd 64/01:f0:00:00:00/00:00:00:00:00/a0 tag 30 ncq 512 out
    [ 25.818260] res 40/00:f4:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
    [ 25.818262] ata1.00: status: { DRDY }
    [ 25.818490] ata1.00: supports DRM functions and may not be fully accessible
    [ 25.824747] ata1.00: supports DRM functions and may not be fully accessible
    [ 25.830741] ata1.00: configured for UDMA/133
    [ 25.830754] ata1.00: device reported invalid CHS sector 0
    [ 25.830779] sd 0:0:0:0: [sda]
    [ 25.830781] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [ 25.830783] sd 0:0:0:0: [sda]
    [ 25.830784] Sense Key : Aborted Command [current] [descriptor]
    [ 25.830787] Descriptor sense data with sense descriptors (in hex):
    [ 25.830788] 72 0b 00 00 00 00 00 0c 00 0a 80 00 00 00 00 00
    [ 25.830795] 00 00 00 00
    [ 25.830798] sd 0:0:0:0: [sda]
    [ 25.830800] Add. Sense: No additional sense information
    [ 25.830802] sd 0:0:0:0: [sda] CDB:
    [ 25.830804] Write same(16): 93 08 00 00 00 00 02 93 68 38 00 00 00 08 00 00
    [ 25.830812] end_request: I/O error, dev sda, sector 43214904
    [ 25.830827] ata1: EH complete
    [ 25.831278] EXT4-fs (sda1): discard request in group:164 block:27655 count:1 failed with -5
    Then I created large files (1GB, 15GB and 30GB), deleted them, and issued the fstrim command:
    sudo fstrim -v /
    /: 106956468224 bytes were trimmed
    Right now I have no more errors in the log file except those recorded after the first reboot, so I guess I should wait for data corruption?

    It should show up much faster if you have the filesystem mounted with the "discard" option, which enables online discard mode. My best guess is that corruption won't trigger on just any write; likely you want to have pending writes inside the SSD, and maybe cause a trim near them, or something else like that. Running the "mount" command as root should show you the mount options ("sudo mount" might do it in default Ubuntu). "sudo mount -o discard,remount /" might enable it if it isn't already there. Or try "sudo su -" to go root, and issue the commands without "sudo". Now, where the corruption, should it happen, will end up going, I don't know.
    EDIT: Doing a lot of filesystem work might help, as well. Maybe running bonnie++ (warning: will do a lot of writes), or several concurrent file creation/removal workloads. Doing it either in online discard mode, or running fstrim concurrently, should do it, I guess.
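    A condensed version of that test sequence might look like this; the sizes and mount point are taken from the post above, and the remount assumes the filesystem supports the discard option:
    # enable online discard, or alternatively run fstrim alongside the writes
    sudo mount -o discard,remount /
    # create and delete large files to queue up writes and the trims behind them
    for size in 1 15 30; do
        dd if=/dev/zero of=/testfile bs=1M count=$(( size * 1024 ))
        rm /testfile
    done
    sudo fstrim -v /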

  • How can I minimise data corruption?

    Mac OS X is great, and one of the greatest things it has achieved is an environment so stable that it almost never crashes.
    However, for me the next BIG problem with using computers is data corruption. Data corruption problems bug my life as much now as system freezes/crashes did 5 years ago.
    For some reason, it often seems to be preferences files that become corrupt. I don't know why, or whether other files are becoming corrupt too and I've just not discovered it yet. Sometimes I wonder whether it's because of all the junk I install, or the length of time since my last clean format. However, my recent purchase of a MacBook, which had all of its preference files become corrupt within a couple of months, goes against those theories. My MacBook has minimal software installed and is generally kept quite simple.
    Obviously backing up is an important strategy, but that leads to a whole load of decisions like, how often to backup, do you keep incremental backups, do you restore absolutely everything when you discover 1 or 2 corrupt files (how do you know if others have become corrupt?).
    Correct shutting down is something I always do - unless something prevents me from doing so, like power cuts. I've also often had a problem with the screen remaining blank after the MacBook has slept or had the screensaver on. On occasion I've had to hold down the power button to shut it down and get it going again.
    I've looked into uninterruptible power supplies. Unfortunately, the ideal setup, with an additional battery to provide a few hours of power, is very expensive. Also, shouldn't the MacBook be immune to problems caused by power fluctuations because of its battery? I certainly did get a system crash recently when there was a power cut, but at the time I just wondered if it was due to the wireless router going off.
    .mac and idisk seem to cause their share of problems. Again, I'm not sure if these are the cause or a consequence of the problems. I have iDisk syncing switched on, and on a few occasions it's vanished and I've had to reboot to get things back to normal. Recently there have been warnings of clashes between .mac and the local idisk.
    FileVault is another possible cause of problems. I've read people advising against its use. However, if someone is willing to steal my MacBook, I don't want that sort of person having access to anything, whether it's Address Book contacts, calendars, Word documents or anything financial. OK, people suggest creating an encrypted disk image, but that doesn't solve the problem of preventing people accessing Address Book or iCal.
    What I'd really like to know is, what are the main causes of data corruption. If I can identify the causes I might be better prepared for trying to prevent it. For example, if 99% of data corruption is due to power fluctuation then I might accept that I need to spend the money on a UPS.
    Once identifying the possible causes, what can be done to prevent them. Would a RAID 1 configuration protect against data corruption, or is it only good in cases of catastrophic drive failure? I've just purchased a 500GB external Firewire 800 drive, which raises the option of creating a RAID 1 with my 2 built in drives.
    Sorry for so many questions, but I just really need to get this sorted. Since moving from OS 9 to OS X this has regularly been my biggest cause of troubles.

    Hi, bilbo_baggins.
    You wrote: "What I'd really like to know is, what are the main causes of data corruption..."You've already identified them, but you seem reluctant to implement the procedures needed to mitigate or avoid those causes that can be mitigated or avoided, in particular:• Power outages or power problems.
    • Improper shutdowns.
    • Hardware problems or failures, e.g. hard drive failures, bad sectors, bad RAM, etc.
    • Bad programming.I can understand your position since:• Not everything one needs to run their computer comes in the box: additional money must be spent.
    • The solutions often seem more complex to implement than they really are. One needs some guidance, which again it does not come in the box, and few books address preventing problems before they occur.Here's my advice:
    1. Implementing a comprehensive Backup and Recovery Solution and using it regularly is essential to assure against data loss in the event of a hard drive failure or other problems. For advice on the backup and recovery solution I employ, see my "Backup and Recovery" FAQ. Using a personal computer without backup and recovery is like driving without auto insurance. Likewise, without a Backup and Recovery solution, you are accepting the risk of potentially losing all of your data at some point.
    2. Perform the little bit of required, regular maintenance: see my "Maintaining Mac OS X" FAQ. It covers my advice on "regular maintenance" and dispels some common "maintenance myths."
    3. If you use a desktop Mac, you need an Uninterruptible Power Supply: power outages and other power problems —surges, spikes, brownouts, etc. — can not only cause data corruption but damage your hardware. I have complete advice on selecting a UPS in the "Protecting Against Power Problems" chapter in my book. Don't simply walk into a store and buy the first UPS recommended by a clerk: the UPS needs to be configured and sized to match your computer setup. You don't need hours of battery run time: 10-15 minutes is sufficient to save your work and perform a proper shutdown, or for a modern UPS to perform an automatic shutdown if your computer is running in your absence.
    4. If you regularly "solve" problems by performing a hard restart (pressing and holding the power button or, on Macs so equipped, the restart button), then go back to work without first troubleshooting the cause of the problem, you risk letting a small problem compound into a larger problem. At a minimum, after a hard restart you should:
    • Run the procedure specified in my "Resolving Disk, Permission, and Cache Corruption" FAQ.
    • Then troubleshoot the cause of the problem that led to the hard restart.
    My book also has an entire chapter on methods for troubleshooting "Freezes and Hangs."
    5. Likewise, hoping that installing a Mac OS X update will fix a problem, or simply reinstalling one without first checking for other problems, can make a bad problem worse. Before installing software updates, you may wish to consider the advice in my "Installing Software Updates" FAQ. Taking the steps therein before installing an update often helps avert problems and gives you a fallback position in case trouble arises.
    6. FileVault does not corrupt data, but it, like any hard drive or disk image, doesn't respond well to the causes cited above. This is why it is essential to regularly back up your encrypted Home folder using a comprehensive Backup and Recovery solution. FileVault is an "all your eggs in one basket" solution: if bad sectors develop on the hard drive in the area occupied by your encrypted Home folder, you could lose all the data therein without a backup.
    7. RAID: IMO, unless one is running a high-volume transaction server with a 99.999% ("Five Nines") availability requirement, RAID is overkill. For example, unless you're running a bank, a brokerage, or a major e-commerce site, you're probably spending sums of time and money with RAID that could be applied elsewhere.
    RAID is high on the "geek chic" scale, low on the "average user" practicality scale, and high on the "complex to troubleshoot" scale when problems arise. The average user is better served by implementing a comprehensive Backup and Recovery solution and using it regularly.
    8. I don't use .Mac — and hence, don't use an iDisk — so I can't advise you there. However, I suspect that if you're having problems with these, and the causes are unrelated to issues of Apple Server availability, then I'd suspect they are related to the other issues cited above.
    9. You can't completely avoid problems caused by bad programming, but you can minimize the risk by not installing every bit of shareware, freeware, or beta code you read about just to "try it out." Stick to reliable, proven applications — shareware or freeware offerings that are highly rated on sites like MacUpdate and VersionTracker — as well as commercial software from major software houses. Likewise, a Backup and Recovery solution can help here.
    10. Personal computers today are not much more advanced than automobiles were in the 1920's and '30s: to drive back then, you had to be part mechanic as well as driver. Cars today still require regular maintenance. It's the same with personal computers today: you need to be prepared for troubleshooting them (mechanic) as well as using them (driver). Computer hardware can fail, just as autos can break down, usually at the worst possible moment.
    People whose homes or offices have several Macs, a network, and the other peripherals normally associated with such setups — printers, scanners, etc. — are running their own data centers but don't know it. Educating yourself is helpful: my "Learning About Mac OS X" FAQ has a number of resources that you will find helpful including books, online training, and more. My book focuses exclusively on troubleshooting, with a heavy emphasis on preventing problems before they occur and being prepared for them should they arise.
    Good luck!
    Dr. Smoke
    Author: Troubleshooting Mac® OS X
    Note: The information provided in the link(s) above is freely available. However, because I own The X Lab™, a commercial Web site to which some of these links point, the Apple Discussions Terms of Use require I include the following disclosure statement with this post:
    I may receive some form of compensation, financial or otherwise, from my recommendation or link.

  • Random page item data corruption

    I designed an APEX page for a service request processing application with several date page items. Our database is Oracle 10g. The application is in production and the users have reported that data in some date page items becomes corrupted randomly. For example, the user enters '07/04/2011' in the date page item "Planned Start Date" among other values and submits the page. The date somehow winds up as '07/04/0011', that is, year 11. (A corrupt date increases the risk that these transactions will not be processed in a timely manner and will negatively impact the level of customer service the user wishes to provide.)
    I have identified three possible areas that may hold the key to underlying cause: 1) page item edit mask, 2) page process(es), 3) DB column constraints. Inasmuch as this is a random problem and has affected a handful of rows it may not be possible to trace the logic as the pages are processed. Has anyone encountered this problem and can suggest a possible remedy? For the sake of clarity I placed the setups for the affected page item below.
    Selected setups for P14_PLANNED_START_DATE, all unlisted attributes are blank or null:
    Settings - Value Required: No; Format Mask: MM/DD/YYYY; Show: on icon click; Show Other Months: No; Navigation List for: None
    Source - Source Used: Always, replacing any existing value in session state; Source Type: Database Column; Maintain Session State: Per session; Source Value or Expression: PLANNED_START_DATE
    Default - Default Value: to_char(sysdate, 'MM/DD/YYYY'); Default Value Type: PL/SQL Expression
    Sample page processes are:
    Validation - Name: P14_PLANNED_START_DATE not null; Type: Item specified is not null; Validation expression: P14_PLANNED_START_DATE; Always execute: No
    Conditions – Condition Type: SQL Expression; Expression 1: :REQUEST IN ('SAVE', 'CREATE')
    Validation – Name: P14_PLANNED_STARTEND_OKAY; Type: Function Returning Boolean:
    Validation Expression:
    IF SIGN(to_date(to_char(to_date(:P14_PLANNED_START_DATE,'MM/DD/YYYY'),'YYYYMMDD-') || :P14_PLANNED_END_TIME,'YYYYMMDD-HH:MI AM')
    - to_date(to_char(to_date(:P14_PLANNED_START_DATE,'MM/DD/YYYY'),'YYYYMMDD-') || :P14_PLANNED_START_TIME,'YYYYMMDD-HH:MI AM')) > 0
    THEN RETURN TRUE;
    ELSE
    RETURN FALSE;
    END IF;
    Conditions – Condition Type: PL/SQL Function body returning Boolean
    Expression 1:
    IF :REQUEST IN ('SAVE','CREATE') THEN
    IF (:P14_PLANNED_START_TIME IS NULL OR :P14_PLANNED_END_TIME IS NULL)
    THEN RETURN FALSE;
    ELSE RETURN TRUE;
    END IF;
    ELSE RETURN FALSE;
    END IF;
    Database: The attributes for the database column are
    Column Name: PLANNED_START_DATE
    Data Type: DATE
    Nullable: No

    Ligon,
    You're right to think this is pretty laborious stuff. A co-worker wanted to do the same, to make sure users didn't lose a change when clicking Cancel. I suggested he look at calculating the query checksum before and after, which he tried. But it got very cumbersome very fast and he ended up dropping the idea. He's fairly new with Apex, but he's also a quick study, so it's not like he's a novice coder.
    I don't have his implementation details anymore to even share with you.
    Sorry I couldn't be more help.
    Good luck,
    Stew

  • Data Corruption and Recovery Planning

    I have two questions with regards to data file integrity and recovery as my office tries to plan out our FCS use. These are both hypothetical scenarios and also something people may already have experience with.
    1) Has anyone experienced data corruption within FCS and the assets stored on the server? Is anyone using checksum to manage and review file integrity and what is the best process to do so? If someone has faced issues with corruption in files, what are your responses to correct the issue?
    -(we are using a drobo and backing up the catalog, but is this enough protection for the data 5+ years down the line? What are your suggestions for additional protection of files, specifically against file corruption issues, as even the server drives where the files are uploaded to on the drobo can eventually face file corruption?)
    2) Has anyone used FCS to catalogue assets and decided that they wish to use another program and if so, what was/is your method of exporting the files and metadata to the other program? Was it successful? How time consuming was the process and did you have to redevelop the metadata fields in the other program, or was the mapping process simple? Is an intermediary program useful to transfer files, and what program would that be?
    - Thank you in advance

    1- None of my dozens of FCSvr clients have ever had data corruption, it's a non-issue for FCSvr.  It is an issue for hard drives and digital files in general, and is not a FCSvr issue.
    2- Full asset and catalog backups on a daily basis are necessary if you're using this for a business.  There are several solutions that work with FCSvr.
    3- If you want to move assets to another database, you can export them from the FCSvr client app.  Or just copy the files from the Device they're stored on directly.  FCSvr saves metadata in the catalog database, so only what is stored natively in each asset's file format will be readable by a different file server solution.  None of my clients have left FCSvr; all are happy with it.

  • Javax.crypto.BadPaddingException: pad block corrupted with 1.4

    I'm getting a javax.crypto.BadPaddingException: pad block corrupted Exception while working on converting our existing java jdk 1.2 to java 1.4. Any suggestions would be great. Here are the specifics:
    We have a web application that been running for several years under java jdk 1.2 & jce_1_2.jar. Within the application we are exchanging data (XML) with a customer using the following encryption scheme:
    1) We create a one time DESede key through the KeyGenerator class passing in ("DESede", "BC")
    2) We encrypt the data with this one time key using ("DESede/ECB/PKCS5Padding", "BC")
    3) This one time key is then encrypted using ("RSA/ECB/PKCS1Padding", "BC") and customer's public key
    4) We create a signature with our private key, which they have the public key for.
    This is the process/API that we had to use for their APIs, and it worked fine under 1.2, with "ABA" as the provider. Now moving to 1.4, I'm using BouncyCastle as the provider.
    Other differences: the keystore in 1.2 was defined as "JCEKS" with provider "SunJCE"; under 1.4 I changed them to "JKS" and "SUN". I would get bad header exceptions from the keystore until I changed it. I don't think it's the BouncyCastle, since I was able to download the 1.2 version of BC and get the existing app to work, and I also got the 1.4 version of BC to work under the existing 1.2 application.
    So something seems to be different with the algorithms/padding, but I can't seem to find it. I tried the following: "RSA", "RSA/ECB", "RSA//PKCS1Padding", "NoPadding"; I also changed the DESede algorithm with no luck. All I know is that it's failing on the decryption of the one-time key.
    Thanks

    http://forum.java.sun.com/forum.jsp?forum=60 is probably a better place to post this.

  • Data corruption on MSI K7N2 Delta ILSR

    Hi all,
    Just got my new MSI K7N2 Delta ILSR up and running.
    Got some issues with the memory running at 200mhz while the Barton core of my Athlon Xp 2600+ was running 166mhz, this had to be 1:1 (thx forum  )
    While running the default bios settings with cpu/mem at 166/166 fsb everything is stable as hell. I have been running Prime95 for 2 days now and using the system for dvd ripping/encoding, UT2004 gaming etc.
    Then the overclocking began   I'm using the standard AMD cooler that came with my boxed Xp2600+ so I didn't expect an FSB of 200mhz.
    I stopped at an FSB of 190mhz...been crunching Prime95 for 18 hours now and again very stable.
    Now comes the problem : Sometimes I get crc errors when installing a game or decrypting a dvd!! But how can this be? Prime95 is stable at this 190fsb overclock.
    When using an FSB of 166mhz no data corruption or crc errors.
    What can this be? It's not the cables because they're all the same as in my old setup which was running (ahum...walking) stable. PCI/AGP speeds are locked when overclocking...
    Only thing I can think of is the power supply. I've read in a warning post from Bas that the Q-Tec 550w psu I have is crap. So my guess is PSU, but maybe it's something else I'm forgetting...Any suggestions??
    The stupid thing is when I'm not overclocking I don't have the data corruption problems, but I'm still using the same crappy Q-Tec 550w PSU.
    Help!  
    gr,
    Remco

    Hmmm, that's curious. On the MSI K7N2 Delta ILSR product page Kingston memory is recommended  
    Ok, did a Memtest86 v1.11 pass at cpu/dimm 166/166 - no problems whatsoever. I think using Prime95 would have revealed memory problems...but will do another pass with cpu/dimm at 190/190.
    Can overclocking affect the Promise Raid Controller?? It should be locked like the PCI/AGP but who knows...
    Well, I still have to replace my PSU but I'm not 100% sure that this is the problem. What PSU would you recommend? ...and I'm also thinking about replacing the standard CPU cooler, any advice is welcome  
    gr,
    Remco

  • How to bind several thousand rows of data (more than 7300) to a datagrid

    Hello,
    I am trying to bind a large amount of data to a datagrid. I created an ObservableCollection, stocked data into this collection, and then bound the collection to my datagrid. In the xaml file I declared my DataContext. When I did this, Visual Studio and every application on my computer locked up. I couldn't do anything; the only thing I could do was close the session and restart.
    I need help.
    Here is some sample:
    code cs ViewModel:
    using System;
    using System.Collections.Generic;
    using System.Text;
    using GestionDeContrats_Offres_ClientsGUI.VueModele;
    using System.Collections.ObjectModel;
    using System.Windows.Data;
    using System.ComponentModel;
    using GestionDeContrats_Offres_Clients.GestionOffres;
    using GestionDeContrats_Offres_Clients.GestionContrats;
    using System.Windows.Input;
    using GestionDeContrats_Offres_Clients.GestionModele;
    using GestionDeContrats_Offres_ClientsGUI.crm;
    using System.Data;
    namespace GestionDeContrats_Offres_ClientsGUI.VueModele
    {
        /// <summary>
        /// View model for the contract list.
        /// </summary>
        public class GestionDeContratVueModele : VueModeleBase
        {
            private readonly ObservableCollection<ContratVueModele> contrats;
            private readonly PagingCollectionView pagingView;
            private GestionDeContrat gestiondecontrat;

            /// <summary>
            /// Constructor of the GestionDeContratVueModele class.
            /// </summary>
            public GestionDeContratVueModele()
            {
                try
                {
                    this.gestiondecontrat = new GestionDeContrat();
                    this.contrats = new ObservableCollection<ContratVueModele>();
                    this.contrats.Clear();
                    foreach (contract contrat in this.gestiondecontrat.ListeDeContrat())
                    {
                        this.contrats.Add(new ContratVueModele()
                        {
                            NOMDUCONTRAT = contrat.title,
                            DATEDEDEBUT = contrat.activeon.Value,
                            DATEDEFIN = contrat.expireson.Value,
                            LESTATUT = contrat.statecode.formattedvalue,
                            LESTATUTAVANT = contrat.access_etatavant.name
                        });
                    }
                    this.pagingView = new PagingCollectionView(this.contrats, 3);
                    if (this.pagingView == null)
                        throw new NullReferenceException("pagingView");
                    this.currentpage = this.pagingView.CurrentPage;
                    this.pagingView.CurrentChanged += new EventHandler(pagingView_CurrentChanged);
                }
                catch (System.Web.Services.Protocols.SoapException soapEx)
                {
                    soapEx.Detail.OuterXml.ToString();
                }
            }

            void pagingView_CurrentChanged(object sender, EventArgs e)
            {
                OnPropertyChanged("SelectedContrat");
                Dispose();
                //throw new NotImplementedException();
            }

            /// <summary>
            /// Property exposing the view model of the contract list.
            /// </summary>
            public ObservableCollection<ContratVueModele> Lescontrats
            {
                get { return this.contrats; }
            }
        }
    }
           code xaml:
           <UserControl x:Class="GestionDeContrats_Offres_ClientsGUI.VueModele.UserControlGestionContrat"
                 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                 xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                 xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                 mc:Ignorable="d"
                 x:Name="GestionContrat"
                 xmlns:local="clr-namespace:GestionDeContrats_Offres_ClientsGUI.VueModele"
                 d:DesignHeight="300"  >
        <UserControl.DataContext>
            <local:GestionDeContratVueModele  />
        </UserControl.DataContext>
        <Grid>
            <Grid.RowDefinitions>
                <RowDefinition Height="30"/>
                <RowDefinition Height="40"/>
                <RowDefinition />
                <RowDefinition Height="40"/>
            </Grid.RowDefinitions>
            <Grid>
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="320"/>
                    <ColumnDefinition Width="40" />
                </Grid.ColumnDefinitions>
                <TextBox Name="searchtexbox" Grid.Column="0"/>
                <Image Grid.Column="1" Source="/GestionDeContrats_Offres_ClientsGUI;component/Images/16_find.gif" />
            </Grid>
            <ToolBar Grid.Row="1" Name="toolbarcontrat">
                <Button    Name="btNewContrat"  Click="btNewContrat_Click">
                    <Grid>
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition/>
                            <ColumnDefinition/>
                        </Grid.ColumnDefinitions>
                        <Image Source="/GestionDeContrats_Offres_ClientsGUI;component/Images/plusvert.jpg" />
                        <Label Content="Nouveau" Grid.Column="1"/>
                    </Grid>
                </Button>
                <Button    Name="btCopierContrat" >
                    <Grid>
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition/>
                            <ColumnDefinition/>
                        </Grid.ColumnDefinitions>
                        <Image Source="/GestionDeContrats_Offres_ClientsGUI;component/Images/editcopy.png" />
                        <Label Content="Copier" Grid.Column="1"/>
                    </Grid>
                </Button>
                <Button    Name="btSupprimerContrat" >
                    <Grid>
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition/>
                            <ColumnDefinition/>
                        </Grid.ColumnDefinitions>
                        <Image Source="/GestionDeContrats_Offres_ClientsGUI;component/Images/delgreen16.jpg" />
                        <Label Content="Supprimer" Grid.Column="1"/>
                    </Grid>
                </Button>
                <Button    Name="btModifierContrat" >
                    <Grid>
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition/>
                            <ColumnDefinition/>
                        </Grid.ColumnDefinitions>
                        <Image Source="/GestionDeContrats_Offres_ClientsGUI;component/Images/ico_18_4207.gif" />
                        <Label Content="Modifier" Grid.Column="1"/>
                    </Grid>
                </Button>
            </ToolBar>
            <DataGrid Name="listViewContrat" Grid.Row="2" ItemsSource="{Binding Path=Lescontrats, Mode=OneWay}"  IsSynchronizedWithCurrentItem="True" AutoGenerateColumns="False"
    CanUserReorderColumns="True" CanUserResizeColumns="True" CanUserSortColumns="True" CanUserAddRows="True" CanUserDeleteRows="True">
                <DataGrid.Columns>
                    <DataGridTemplateColumn Header="Nom du contrat" >
                        <DataGridTemplateColumn.CellTemplate>
                            <DataTemplate>
                                <TextBlock Text="{Binding Path=NOMDUCONTRAT, Mode=OneWay}"/>
                            </DataTemplate>
                        </DataGridTemplateColumn.CellTemplate>
                    </DataGridTemplateColumn>
                    <DataGridTemplateColumn Header="Date de début" >
                        <DataGridTemplateColumn.CellTemplate>
                            <DataTemplate>
                                <TextBlock Text="{Binding Path=DATEDEDEBUT, Mode=OneWay}"/>
                            </DataTemplate>
                        </DataGridTemplateColumn.CellTemplate>
                    </DataGridTemplateColumn>
                    <DataGridTemplateColumn Header="Date de fin"  >
                        <DataGridTemplateColumn.CellTemplate>
                            <DataTemplate>
                                <TextBlock Text="{Binding Path=DATEDEFIN, Mode=OneWay}"/>
                            </DataTemplate>
                        </DataGridTemplateColumn.CellTemplate>
                    </DataGridTemplateColumn>
                    <DataGridTemplateColumn Header="Statut" >
                        <DataGridTemplateColumn.CellTemplate>
                            <DataTemplate>
                                <TextBlock Text="{Binding Path=LESTATUT,Mode=OneWay}"/>
                            </DataTemplate>
                        </DataGridTemplateColumn.CellTemplate>
                    </DataGridTemplateColumn>
                    <DataGridTemplateColumn Header="Statut avant" >
                        <DataGridTemplateColumn.CellTemplate>
                            <DataTemplate>
                                <TextBlock Text=""/>
                            </DataTemplate>
                        </DataGridTemplateColumn.CellTemplate>
                    </DataGridTemplateColumn>
                </DataGrid.Columns>
            </DataGrid>
            <StackPanel Grid.Row="3" Orientation="Horizontal">
                <Label Margin="2" Content=""/>
                <Button Content="Suivant" Name="btNext" Margin="2" />
                <Button Content="Précédent" Name="btPrevious" Margin="2"  />
            </StackPanel>
        </Grid>
    </UserControl>
     I include this UserControl in MainWindow.xaml.
    Thanks 
    Regards      

    I think what darnold was trying to say....
    Those very clever people who come up with insights on human behaviour have studied how many records a user can work with effectively.
    It turns out that they can't see thousands of records at once.
    Their advice is that one presents a maximum of 200-300 records at a time.
    What with maybe 40 fitting on a screen at a time. Few people like spending 20 minutes scrolling through a stack of data they're not interested in to find the one record they're after.
    Personally, I would use a treeview, set of combos or some such so the user can select what subset they are interested in and present just that.
    Anyhow.
    If you have a viewmodel which exposes an observable collection<t> as a public property you can bind the itemssource of a datagrid to that.
    Although UI controls have thread affinity, the objects in such a collection do not.
    That means you can use another thread to go and get your data and add it to the observable collection; because the collection is observable, the view is notified as records are added and will show them. The UI will remain responsive.
    You can do this using LINQ's Skip and Take to read a couple of hundred records at a time, add those, pause, and repeat for the next couple of hundred records.
    Please don't forget to upvote posts which you like and mark those which answer your question.
    My latest Technet article - Dynamic XAML

  • Is there a way to name the .indd files created by a data merge with values in the CSV file

    I have a data merge document that is set to single Record per Document page and 1 page per document.  In my CSV file I have a field for Part Number.  If I run a data merge with 10 records I end up with 10 .indd files.  I need a way so that the resulting .indd files are named the same value that is in the Part Number field.

    Loic has provided a link for my original piece, and I've written up some follow-up pieces for indesignsecrets.com:
    http://indesignsecrets.com/data-merging-individual-records-separate-pdfs.php
    http://indesignsecrets.com/data-merging-individual-records-separate-pdfs-part-2-scripting.php
    So there are several ways to get uniquely named PDFs from an InDesign Data Merge. However, none of these 3 articles will truly answer your question of how to get unique InDesign filenames using the database. I can see a practical purpose for this: merging business cards directly to PDFs is great, but it's practically guaranteed that while on proof there will be alts to the business cards that need to be done outside the merge, meaning specific records need to be exported to InDesign files for further manipulation.

  • Data Transfer Process (several data packages due to a huge amount of data)

    Hi,
    a)
    I've been uploading data from ERP via PSA, ODS and InfoCube.
    Due to the huge amount of data in ERP, BI splits this data into two data packages.
    When processing this data into the ODS, the system deletes a few datasets.
    This is not done in step "Filter" but in "Transformation".
    General Question: How can this be?
    b)
    As described in a), data is split by BI into two data packages due to the amount of data.
    To avoid this behaviour I entered a few more selection criteria within the InfoPackage.
    As a result I uploaded the data several times, each time with different selection criteria in the InfoPackage.
    Finally I have the same data in ODS as in a), but this time without having data deleted in step "Transformation".
    Question: What is the general behaviour of BI when it splits data into several data packages?
    BR,
    Thorsten

    Hi All,
    Thanks a million for your help.
    My conclusion from your answers are the following.
    a) Since the ODS is Standard, within the transformation no datasets are deleted; they are aggregated.
    b) Uploading a huge amount of datasets is possible in two ways:
       b1) with selection criteria in the InfoPackage and several uploads
       b2) without selection criteria in the InfoPackage, and therefore an automatic split of the datasets into data packages
    c) both ways should have the same result within the ODS
    Ok. Thanks for that.
    So far I have only checked the data within the PSA. In the PSA, the number of datasets is not equal for variants b1 and b2.
    Guess this is normal technical behaviour of BI.
    I am fine when results in ODS are the same for b1 and b2.
    Have a nice day.
    BR,
    Thorsten
