[Solved] Disappointing ext4 fsck performance

Hi all,
Yesterday I converted my main partition to ext4, using the tune2fs + fsck method, and changed the partition type to ext4 in fstab. I thought one of the main points of ext4 was a faster fsck, but in my case the contrary is happening. I don't have any precise figures, but I would say an fsck takes about twice as long as before, on the order of several minutes for a 170GB partition with about 50GB of stuff on it.
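For reference, the conversion sequence I mean is the usual one from the wiki, something like this, with the device name being just an example (run from a live CD if it's the root filesystem):
umount /dev/sda3
tune2fs -O extents,uninit_bg,dir_index /dev/sda3
e2fsck -fp /dev/sda3
The e2fsck pass is required after enabling uninit_bg, since the group checksums need rebuilding.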
Any insight? Is this normal or am I missing something?
Last edited by lardon (2009-01-22 13:28:25)

lardon wrote:
skottish wrote:I'm not sure if this question got asked yet. Did you convert your data over to use the new device mapping? Simply converting the hard disc isn't enough. I don't know if that's a factor, but I do know that fsck is way faster for me now.
mmm, no I didn't, since e4defrag is not available yet with the standard kernel. Well, I guess that settles my problem then! Thanks for the answer.
Sorry I didn't respond sooner. I used the CK defrag script found here:
http://ck.kolivas.org/apps/defrag/defrag-0.06/defrag
which was found in post #62 here:
http://bbs.archlinux.org/viewtopic.php?id=61602
There's been some discussion around about whether defragging is a good idea or not, but I didn't see anyone fully explain why it wouldn't be. This is where I heard it from:
http://bbs.archlinux.org/viewtopic.php?id=63228
On my system, fsck used to take something like 4 minutes on my data discs with ext3. It now takes about 20 seconds with ext4. That includes recreating the lost+found folder, which for some reason seems to take a bit of time.
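A side note for anyone doing the same conversion: tune2fs only enables extents for files written after the switch; everything already on the disc keeps its old ext3-style block map, which is what the defrag pass addresses. A quick, hedged way to spot-check a file (the path is just an example) is filefrag from e2fsprogs:
filefrag -v /home/user/some-large-file.iso
Heavily fragmented pre-conversion files report many extents, while files rewritten after the conversion are usually contiguous.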

Similar Messages

  • [Solved -sorta] systemd-fsck []: fsck: /sbin/fsck.ext4: execute failed

    Greetings.
    I am getting the following on boot:
    Starting Version 218
    A password is required to access the MyStorage volume:
    Enter passphrase for /dev/sda3
    /dev/mapper/MyStorage-rootvol: clean, 210151/983040 files, 2284613/3932160 block
    [ 78.084720] systemd-fsck [280]: fsck: /sbin/fsck.ext4: execute failed: Exec format error
    [ 78.085215] systemd-fsck [287]: fsck: /sbin/fsck.ext2: execute failed: Exec format error
    I then end up at a login prompt, but if I try to log in, I get “Login incorrect”. Sometimes getty will stop and restart on tty1, and then I am returned to the login prompt.
    This came about after upgrading with Pacman (which included “upgraded e2fsprogs (1.42.12-1 -> 1.42.12-2)”) a few days ago. Pacman completed successfully, but on reboot the system froze, forcing a hard reset.
    I've booted to a USB and run fsck on the boot partition (the only ext2 partition). Ditto on the root and home volumes. All fine. I've also mounted all three and can access the data.
    I would have thought it was something to do with the e2fsprogs upgrade, but it obviously scanned the root volume fine, and I haven't been able to find any similar reports online.
    I've searched online for ideas and I've also searched for logs which might give me some indication of what the cause is but at this point, I've reached my limits.
    I'd just nuke the data and start again but I really want to understand what happened here.
    Any thoughts on what caused this or suggestions on how to proceed?
    Thank you.
    Stephen
    Last edited by FixedWing (2015-03-16 01:40:20)

    Head_on_a_Stick wrote:
    https://wiki.archlinux.org/index.php/Pa … ond_repair
    However, it may be simplest to just re-install in your case -- it depends whether you want to use the troubleshooting & repairing as a learning process or if you just want your system up & running again ASAP...
    All fixed and working just like nothing happened.
    I did use the advice at the referred link plus a few others on archlinux.org and elsewhere. Yes, an absolutely wonderful learning experience!
    I manually reinstalled e2fsprogs. That got Pacman working again and I was able to boot into the system. Then I used Pacman to reinstall e2fsprogs properly, plus the seven other packages which had also been installed during the same Pacman session despite being corrupted.
    What I really don't get is how Pacman could accept a package of 0 bytes and install it. How could such a package possibly pass the integrity check? When I reinstalled the packages, Pacman of course refused to install the corrupt packages in the cache and deleted them. So why didn't that happen initially? I can only think that a corrupt file caused that process to terminate prematurely, and that Pacman wasn't robust enough to detect this, so it simply continued on, skipping the checks and installing the corrupt packages. Just to be sure it wasn't a corrupt file in Pacman itself, I also forced a reinstall of that package as well. I've upgraded packages since without issue, so I have to assume that whatever the issue was is now gone.
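    For anyone hitting something similar, a hedged sketch of checks that might have caught this earlier (paths are pacman's defaults):
    find /var/cache/pacman/pkg -name '*.pkg.tar*' -size 0
    lists any zero-byte packages sitting in the cache, and
    pacman -Qkk e2fsprogs
    verifies an installed package's files against its recorded metadata before resorting to a manual reinstall.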
    Anyway, thanx for the help!
    Stephen

  • [SOLVED] badblocks ext4

    Overview of issue.
    On boot when a filesystem is being fsck, errors appear:
    [ 21.638174] sd 2:0:0:0: [sdc] Unhandled sense code
    [ 21.638180] sd 2:0:0:0: [sdc] Result: hostbyte=0x10 driverbyte=0x08
    [ 21.638186] sd 2:0:0:0: [sdc] Sense Key : 0x3 [current]
    [ 21.638192] sd 2:0:0:0: [sdc] ASC=0x11 ASCQ=0x0
    [ 21.638197] sd 2:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 05 c0 00 67 00 00 40 00
    [ 21.638210] end_request: critical target error, dev sdc, sector 96469095
    [ 21.638289] Buffer I/O error on device sdc1, logical block 96469032
    [ 21.638351] Buffer I/O error on device sdc1, logical block 96469033
    [ 21.638410] Buffer I/O error on device sdc1, logical block 96469034
    [ 21.638468] Buffer I/O error on device sdc1, logical block 96469035
    [ 21.638525] Buffer I/O error on device sdc1, logical block 96469036
    [ 21.638581] Buffer I/O error on device sdc1, logical block 96469037
    [ 21.638639] Buffer I/O error on device sdc1, logical block 96469038
    [ 21.638696] Buffer I/O error on device sdc1, logical block 96469039
    [ 21.638762] Buffer I/O error on device sdc1, logical block 96469040
    [ 21.638820] Buffer I/O error on device sdc1, logical block 96469041
    [ 23.551857] sd 2:0:0:0: [sdc] Unhandled sense code
    [ 23.551863] sd 2:0:0:0: [sdc] Result: hostbyte=0x10 driverbyte=0x08
    [ 23.551868] sd 2:0:0:0: [sdc] Sense Key : 0x3 [current]
    [ 23.551874] sd 2:0:0:0: [sdc] ASC=0x11 ASCQ=0x0
    [ 23.551878] sd 2:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 05 c0 00 a7 00 00 80 00
    [ 23.551890] end_request: critical target error, dev sdc, sector 96469159
    [ 24.591795] sd 2:0:0:0: [sdc] Unhandled sense code
    [ 24.591801] sd 2:0:0:0: [sdc] Result: hostbyte=0x10 driverbyte=0x08
    [ 24.591806] sd 2:0:0:0: [sdc] Sense Key : 0x3 [current]
    [ 24.591811] sd 2:0:0:0: [sdc] ASC=0x11 ASCQ=0x0
    [ 24.591816] sd 2:0:0:0: [sdc] CDB: cdb[0]=0x28: 28 00 05 c0 00 8f 00 00 08 00
    [ 24.591828] end_request: critical target error, dev sdc, sector 96469135
    I'm not familiar with that, but since I can mount and read and write data, my spider sense tells me it's probably bad blocks rather than a catastrophic mechanical failure.
    Disk /dev/sdc: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0001132a
    Device Boot Start End Blocks Id System
    /dev/sdc1 63 312581807 156290872+ 83 Linux
    Two general goals:
    1. Locate which files are on bad blocks.
    2. Mark bad blocks so they cannot be used again.
    I have never done either before.
    Plan of attack for finding damaged files,
    The wiki has this fine article: https://wiki.archlinux.org/index.php/Fi … iven_Block
    which describes the process for JFS (everyone uses JFS). I am unfortunately part of the minority who uses ext4. Not to worry, the article gives ideas on how to approach this:
    find bad blocks using: badblocks -v -b 4096 /dev/sdc1
    use debugfs's icheck to find which inodes correspond to the bad blocks
    use the find utility to map inodes to paths
    Marking bad blocks
    I need help with this.
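    A hedged sketch of how the full sequence might look with the e2fsprogs tools, run with the filesystem unmounted; the output file name and the specific block/inode numbers are placeholders:
    badblocks -v -b 4096 /dev/sdc1 > bad-blocks.txt
    debugfs -R "icheck 12058636 12058637" /dev/sdc1    # example block numbers taken from bad-blocks.txt
    debugfs -R "ncheck 1234567" /dev/sdc1              # example inode number taken from the icheck output
    e2fsck -l bad-blocks.txt /dev/sdc1                 # records the listed blocks in the bad block inode
    The e2fsck -l step should cover the marking: blocks recorded in the bad block inode are never allocated again. And ncheck maps inodes straight to pathnames, which avoids running find -inum across the whole filesystem.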
    Last edited by fsckd (2011-11-10 12:45:42)

    I'm not sure. When I tried -d usbcypress it hung and kill -9 took a while to take effect.
    Here's the output from hdparm -I,
    /dev/sdb:
    SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    ATA device, with non-removable media
    Standards:
    Likely used: 1
    Configuration:
    Logical max current
    cylinders 0 0
    heads 0 0
    sectors/track 0 0
    Logical/Physical Sector size: 512 bytes
    device size with M = 1024*1024: 0 MBytes
    device size with M = 1000*1000: 0 MBytes
    cache/buffer size = unknown
    Capabilities:
    IORDY not likely
    Cannot perform double-word IO
    R/W multiple sector transfer: not supported
    DMA: not supported
    PIO: pio0

  • [SOLVED] How to fsck+badblocks with systemd?

    Before systemd I could boot into single mode, then "mount / -o remount,ro" and then "fsck.ext4 -c ...". Now I can't, since it says / is in use and I can't remount it read-only. What do I do?
    Last edited by Butcher (2012-10-22 09:54:22)

    Gusar wrote: Two similar targets? Interesting. I just found Fedora's sysvinit-to-systemd cheat sheet - adding emergency gets you the emergency target. All the others I mentioned get you the rescue target. Now I'll have to go figure out what the difference between those two targets is.
    [Unit]
    Description=Emergency Mode
    Documentation=man:systemd.special(7)
    Requires=emergency.service
    After=emergency.service
    AllowIsolate=yes
    emergency.target only launches emergency.service, which is a root shell (sulogin) on your primary console. No udev, no journald, no nothing.
    [Unit]
    Description=Rescue Mode
    Documentation=man:systemd.special(7)
    Requires=sysinit.target rescue.service
    After=sysinit.target rescue.service
    AllowIsolate=yes
    [Install]
    Alias=kbrequest.target
    This launches sysinit.target, thus udev and journal are running and file systems are mounted. Then it launches rescue.service, which is almost identical to emergency.service.
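    So for the original question, a hedged recipe (the device name is an example): append systemd.unit=emergency.target to the kernel command line at the bootloader. In the emergency shell nothing else is running yet, and / should still be read-only if 'ro' is on the command line, so
    mount -o remount,ro /      # only needed if / somehow came up read-write
    fsck.ext4 -c /dev/sda2     # -c runs a read-only badblocks scan first
    should go through.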
    Last edited by brain0 (2012-10-19 15:21:47)

  • [solved] Check EXT4 partition automatically

    Hello everyone,
    I have two EXT4 partitions on my HD (sda2 and sda4):
    $ fdisk -l
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 3264 26218048+ 7 HPFS/NTFS
    /dev/sda2 3265 6528 26218080 83 Linux
    /dev/sda3 6529 6590 498015 82 Linux swap / Solaris
    /dev/sda4 6591 19457 103354177+ 83 Linux
    Occasionally the sda2 partition (the root partition) is checked automatically on boot, but sda4 is never checked. In dmesg this appears:
    EXT4-fs (sda4): warning: maximal mount count reached, running e2fsck is recommended
    On the wiki (http://wiki.archlinux.org/index.php/Fsck) it is written:
    The Arch Linux boot process conveniently takes care of the Fsck procedure for you and will periodically check ALL relevant partitions on your hard drive automatically in a scheduled manner.
    Can I turn automatic checking on for the sda4 partition too? Or do I have to do this manually?
    My fstab:
    UUID=324658b1-0c1a-494b-bc6e-93ca1d138b1b swap swap defaults 0 0
    UUID=fc37e80c-fe03-4932-b9af-c8388190f19c / ext4 defaults 0 1
    UUID=b19fe1b6-aee6-4bda-b17f-b76674016525 /media/Dados ext4 defaults,noatime,nodiratime 0 0
    /dev/sda1 /media/Windows ntfs-3g defaults 0 0
    Thanks in advance.
    Last edited by alessandro_ufms (2010-04-07 20:11:42)

    Have a look at the <pass> definition in fstab: http://wiki.archlinux.org/index.php/Fstab
    File systems with a <pass> value of 0 will not be checked by the fsck utility.
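    In this fstab that means the /media/Dados line: changing its last field from 0 to 2 (the root filesystem keeps 1, all other filesystems get 2) makes fsck check it after the root partition. For example:
    UUID=b19fe1b6-aee6-4bda-b17f-b76674016525 /media/Dados ext4 defaults,noatime,nodiratime 0 2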

  • [SOLVED] is the fsck hook needed if the systemd hook is present?

    Hiya. The Wiki isn't really clear about this. There are some instructions about building a systemd-based init image here, but nothing about fsck.
    On the silent boot page there is an entry about fsck, more specifically about letting systemd check the filesystem instead of the fsck hook, which involves copying configuration files.
    I understand both entries, but after migrating my init image to systemd, I keep getting systemd-fsck messages on boot, and now I have a couple of questions:
    Is my disk being fscked twice?
    Is the fsck hook needed anymore? What would happen if I remove it?
    If I want to turn off the output, can I follow the instructions on the Wiki?
    What is the general recommended approach with systemd and fsck?
    I skimmed through this thread, but I must say I didn't become any more educated on this topic.
    Thanks.
    Last edited by DoctorJellyface (2015-04-16 14:59:29)

    I just had a quick look at how these things fit together. Short answer: if you want the root filesystem to be checked in the initrd (and therefore before mounting), you do need the fsck hook.
    Long answer:
    systemd-fsck just uses fsck.* to check the disks (see the source code, line 288ff.). If the appropriate fsck.* tool does not exist, systemd-fsck just quits. The reason for this is clear: fsck tools are made by filesystem developers, not by systemd developers.
    The fsck hook is the only one that installs these binaries (fsck.*) into the initrd (see grep -r 'fsck\.' /usr/lib/initcpio). That means: no fsck hook --> no filesystem check in the initrd.
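    For reference, a hedged example of what that means in practice: a systemd-based HOOKS line in /etc/mkinitcpio.conf that keeps filesystem checking would look something like this (the other hooks depend on your setup),
    HOOKS="base systemd autodetect modconf block filesystems keyboard fsck"
    followed by regenerating the image with mkinitcpio -p linux.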
    Now to your questions:
    DoctorJellyface wrote:Is my disk being fscked twice?
    No. The root filesystem is checked in the initrd if the appropriate fsck.* tool is available. All other filesystems are checked after leaving the initrd.
    DoctorJellyface wrote:Is the fsck hook needed anymore? What would happen if I remove it?
    It is needed if you want the root filesystem to be checked in the initrd, and therefore before being mounted. If you remove it, the root filesystem might be checked after leaving the initrd while mounted read-only. But I did not test that and don't know if Arch actually does it this way. You may quickly test this yourself.
    DoctorJellyface wrote:If I want to turn off the output, can I follow the instructions on the Wiki?
    If you use systemd in the initrd you would probably have to edit the files directly in /usr/lib/systemd/system, because mkinitcpio copies the files from there. They will be overwritten on every systemd update, so that is not a very nice approach. For all output after leaving the initrd you should be able to follow the steps in the wiki (copy to /etc...).
    DoctorJellyface wrote:What is the general recommended approach with systemd and fsck?
    My recommendation would be: Keep the fsck hook, because it is the only way to check the filesystem before actually mounting it. It works nicely with the systemd hook.

  • [SOLVED] [QEMU] Improve graphics performance/resolution

    Hi,
    I use qemu's KVM feature and successfully installed Windows 7 in it.
    Everything works quite well except that graphics performance is very poor.
    When I move the cursor it starts flickering ('lagging'), and I can't set a resolution fitting my laptop screen's native 1366x768. (So I'm currently using 1024x768 in the VM.)
    I start qemu with -vga vmware.
    command I use for starting qemu:
    qemu-system-x86_64 -enable-kvm -machine type=pc,accel=kvm -cpu host -vga vmware -full-screen -k de -usb -usbdevice tablet -m 3072 /path/to/image.qcow2
    Is there a way to fix this?
    Last edited by kanikuleet (2013-04-15 10:49:36)

    Finally got it working now; performance is surprisingly good!
    I used to have Windows 7 Home Premium, then I discovered you cannot remote-connect to Home Premium as this edition lacks that feature.
    So I got Ultimate 64-bit and it works.
    I start the qemu vm with:
    qemu-system-x86_64 -enable-kvm -machine type=pc,accel=kvm -cpu host -nographic -k de -usb -m 4096 -net nic -net user,hostfwd=tcp::3389-:3389 /home/kanikuly/Pictures/ISO/Win7Ult64Bit.qcow2
    and rdesktop with:
    rdesktop -g 100% -E -D -K -P -f -z -x b -r sound:off -a 32 -u *** -p *** localhost
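    A note for anyone copying this: the -net user,hostfwd=tcp::3389-:3389 part forwards TCP port 3389 on the host to the guest's RDP port, which is what lets rdesktop reach the VM at localhost. If 3389 is already taken on the host, something like hostfwd=tcp::13389-:3389 together with rdesktop ... localhost:13389 should work as well.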

  • How to solve the report generating performance issue in BI server

    -> I have used only one pivot in my report query, and the report query depends entirely on one fact table.
    -> In the report I am showing data in more than 150 columns.
    -> I have used report functions for 3 fields in the report. All three report functions just add two column fields together into one column field.
    -> The report query on its own runs in 2.40 seconds. When I use it to generate a report, it takes more than 8 minutes to run. Why this delay? Will you explain?
    Edited by: 873091 on Jul 18, 2011 3:50 AM

    Hello Dude,
    So from your post I understand that there is a report that basically takes 8 minutes to retrieve the data, but when you take the logical SQL from the cache and run it against the database, it takes less than 20 seconds to retrieve the same data?
    This should never happen. There might be a delay of 20-40 seconds beyond the time taken against the database, to load the page elements etc. Is this happening for only this particular report or for all reports? Are you running the query against the same instance's database or a different one?
    Try rebooting all the BI services and running the report again; if the issue still exists, enable caching and try.
    Assign points if helpful.
    Thanks,
    -Amith.

  • [SOLVED] Broken ext4, recoverable?

    I was trying to rearrange my partitions using gparted so that the free space was in a usable place.
    I shrank my second partition and clicked apply. It finished successfully.
    I then tried to expand my third partition downwards. It said it would take about 90 minutes, so I went to bed.
    When I came back, the screen showed it had stopped in some kind of boot state. I rebooted and found the keyboard didn't work in Xorg. Two cold boots later, it magically worked again.
    The expanded partition had not mounted, so I loaded gparted up again and got "unable to detect filesystem" on that partition.
    emyr@emyr-desktop:~$ sudo mke2fs -n /dev/sdb3
    mke2fs 1.41.14 (22-Dec-2010)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    23707648 inodes, 94823304 blocks
    4741165 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=0
    2894 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968
    emyr@emyr-desktop:~$
    emyr@emyr-desktop:~$ sudo fdisk -l
    [sudo] password for emyr:
    Disk /dev/sda: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x24bf24be
    Device Boot Start End Blocks Id System
    /dev/sda1 1 5992 48130708+ 7 HPFS/NTFS
    /dev/sda2 5993 15722 78156225 7 HPFS/NTFS
    Disk /dev/sdb: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xe5f3ed43
    Device Boot Start End Blocks Id System
    /dev/sdb1 * 1 7012 56319952+ 83 Linux
    Partition 1 does not end on cylinder boundary.
    /dev/sdb2 7208 13582 51200588+ 83 Linux
    /dev/sdb3 13582 60801 379293216+ 83 Linux
    /dev/sdb4 7012 7207 1570243+ 82 Linux swap / Solaris
    Partition table entries are not in disk order
    Disk /dev/sdc: 250.1 GB, 250059350016 bytes
    255 heads, 63 sectors/track, 30401 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000b717b
    Device Boot Start End Blocks Id System
    /dev/sdc1 1 6527 52426752 6 FAT16
    /dev/sdc2 6528 30401 191767905 0 Empty
    emyr@emyr-desktop:~$
    I googled a bit and read about backup superblocks, so I tried this:
    emyr@emyr-desktop:~$ sudo e2fsck -b 163840 /dev/sdb3
    e2fsck 1.41.14 (22-Dec-2010)
    e2fsck: Bad magic number in super-block while trying to open /dev/sdb3
    The superblock could not be read or does not describe a correct ext2
    filesystem. If the device is valid and it really contains an ext2
    filesystem (and not swap or ufs or something else), then the superblock
    is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
    emyr@emyr-desktop:~$
    What should be my next step?
    Last edited by Emyr (2011-07-23 00:36:16)
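
    A side note on the failed attempt above: the -b 8193 hint in e2fsck's message assumes a 1K block size, whereas the mke2fs -n output shows 4K blocks with backups at 32768, 98304, and so on. On a 4K filesystem it can help to pass the block size explicitly, for example:
    e2fsck -B 4096 -b 32768 /dev/sdb3
    Though if gparted moved the start of the partition, no backup superblock will line up until the partition boundaries are restored.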

    Managed to recover the lost partition, and simultaneously lose my / partition. Oops. Attempt 2 after doing a partial backup got it all back.
    Now to buy a very large new drive so I can make my partitions ridiculously large!

  • PERFORM ... ON ROLLBACK

    Hi all,
    I have some code in a user exit at the end of TO creation. I also write some Application Log messages there. But when I throw an A message in this user exit, the Applog is empty after the transaction, because the A message causes a rollback. I had the idea to solve this with a PERFORM ... ON COMMIT routine, but this form routine is never executed. Does anyone have an idea what I have done wrong with this statement?
    Thanks in advance
    Steffen

    I solved the problem in another way:
    I write all my Application Log messages with a function module that I call with DESTINATION 'NONE',
    so all the Application Log handling runs in a separate program context.
    After the A message is thrown in the user exit and the rollback is performed, the Application Log is on the database anyway.
    here a short sample:
    Create the Applog (in a custom RFC-enabled function module wrapping BAL_LOG_CREATE):
      CALL FUNCTION 'Z_BAL_LOG_CREATE' destination 'NONE'
        EXPORTING
          i_s_log                       = s_log_hdr
        IMPORTING
          E_LOG_HANDLE                  = log_handle
        EXCEPTIONS
          MISSING_LOG_HEADER            = 1
          LOG_HEADER_INCONSISTENT       = 2
          ERROR                         = 3
          OTHERS                        = 4.
    Add a message to the Applog (in a custom RFC-enabled function module wrapping BAL_LOG_MSG_ADD):
      CALL FUNCTION 'Z_BAL_MSG_ADD' destination 'NONE'
        EXPORTING
          i_log_handle              = log_handle
          i_s_msg                   = s_bal_msg
        EXCEPTIONS
          MISSING_LOG_HANDLE        = 1
          MISSING_LOG_MESSAGE       = 2
          LOG_NOT_FOUND             = 3
          MSG_INCONSISTENT          = 4
          LOG_IS_FULL               = 5
          ERROR                     = 6
          OTHERS                    = 7.
    Save the Applog to the database (in a custom RFC-enabled function module wrapping BAL_DB_SAVE):
      CALL FUNCTION 'Z_BAL_DB_SAVE_SINGLE' destination 'NONE'
        EXPORTING
          i_log_handle             = log_handle
        EXCEPTIONS
          MISSING_LOG_HANDLE       = 1
          LOG_NOT_FOUND            = 2
          SAVE_NOT_ALLOWED         = 3
          NUMBERING_ERROR          = 4
          ERROR                    = 5
          OTHERS                   = 6.
    Throw the A message:
    MESSAGE axxx(yyy).
    The transaction performs a rollback, but the Applog is written to the database anyway.
    Regards, Steffen
    Message was edited by: Steffen Engert

  • DB Performance problem

    Hi Friends,
    We are experiencing a performance problem with our Oracle applications/database.
    I run the OEM and I got the following report charts:
    http://farm3.static.flickr.com/2447/3613769336_1b142c9dd.jpg?v=0
    http://farm4.static.flickr.com/3411/3612950303_1f83a9f20.jpg?v=0
    http://farm4.static.flickr.com/3411/3612950303_1f83a9f20.jpg?v=0
    Are there any clues that these charts can give regarding the performance problem?
    What other charts in OEM can help solve, or give assistance with, the performance problem?
    Thanks a lot in advance

    ytterp2009 wrote:
    Hi Charles,
    This is the output of:
    SELECT
    SUBSTR(NAME,1,30) NAME,
    SUBSTR(VALUE,1,40) VALUE
    FROM
    V$PARAMETER
    ORDER BY
    UPPER(NAME);
    (snip)
    Are there parameters that need tuning?
    Thanks
    Thanks for posting the output of the SQL statement. The output answers several potential questions (note to other readers: shift the values in the SQL statement's output down by one row).
    Parameters which I found to be interesting:
    control_files                 C:\ORACLE\PRODUCT\10.2.0\ORADATA\BQDB1\C
    cpu_count                     2
    db_block_buffers              995,648 = 8,156,348,416 bytes = 7.6 GB
    db_block_size                 8192
    db_cache_advice               on
    db_file_multiblock_read_count 16
    hash_area_size                131,072
    log_buffer                    7,024,640
    open_cursors                  300
    pga_aggregate_target          2.68435E+12 = 2,684,350,000,000 = 2,500 GB
    processes                     950
    sessions                      1,200
    session_cached_cursors        20
    shared_pool_size              570,425,344
    sga_max_size                  8,749,318,144
    sga_target                    0
    sort_area_retained_size       0
    sort_area_size                65536
    use_indirect_data_buffers     TRUE
    workarea_size_policy          AUTO
    From the above, the server is running on Windows, and based on the value for use_indirect_data_buffers it is running a 32-bit version of Windows, using a windowing technique to access memory (database buffer cache only) beyond the 4GB upper limit for 32-bit applications. By default, 32-bit Windows limits each process to a maximum of 2GB of memory utilization. This 2GB limit may be raised to 3GB through a change in the Windows configuration, but a certain amount of the lower 4GB region (specifically in the upper 2GB of that region) must be used for the windowing technique to access the upper memory (the default might be 1GB of memory, but verify with Metalink).
    By default on Windows, each session connecting to the database requires 1MB of server memory for the initial connection (this may be decreased, see Metalink), and with SESSIONS set at 1,200, 1.2GB of the lower 2GB (or 3GB) memory region would be consumed just to let the sessions connect, before any processing is performed by the sessions.
    The shared pool is potentially consuming another 544MB (0.531GB) of the lower 2GB (or 3GB) memory region, and the log buffer is consuming another 6.7MB of memory.
    Just with the combination of the memory required per thread for each session, the memory for the shared pool, and the memory for the log buffer, the server is very close to the 2GB memory limit before the clients have performed any real work.
    Note that the workarea_size_policy is set to AUTO, so as long as that parameter is not adjusted at the session level, the sort_area_size and sort_area_retained_size have no impact. However, the 2,500 GB specification (very likely an error) for the pga_aggregate_target is definitely a problem as the memory must come from the lower 2GB (or 3GB) memory region.
    If I recall correctly, a couple years ago Dell performed testing with 32 bit servers using memory windowing to utilize memory above the 4GB limit. Their tests found that the server must have roughly 12GB of memory to match (or possibly exceed) the performance of a server with just 4GB of memory which was not using memory windowing. Enabling memory windowing and placing the database buffer cache into the memory above the 4GB limit has a negative performance impact - Dell found that once 12GB of memory was available in the server, performance recovered to the point that it was just as good as if the server had only 4GB of memory. You might reconsider whether or not to continue using the memory above the 4GB limit.
    db_file_multiblock_read_count is set to 16 - on Oracle 10.2.0.1 and above this parameter should be left unset, allowing Oracle to automatically configure the parameter (it will likely be set to achieve 1MB multi-block reads with a value of 128).
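    (If pga_aggregate_target was actually intended to be 2,500 MB rather than the 2,500 GB currently configured, a hedged sketch of the correction, where the target value is an assumption, would be:
    ALTER SYSTEM SET pga_aggregate_target = 2500M SCOPE=SPFILE;
    followed by an instance restart, or SCOPE=BOTH to apply it immediately, since this parameter is dynamically modifiable.)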
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • XVR-1000 performance questions

    Hi there,
    I recently upgraded my SB1K from an Elite3D m6 to an XVR-1000 and I am a little disappointed by the performance... Quite often, when using an OGL application and doing some dynamic viewing (rotation, ...), the graphics stop as if the card was playing catch-up... This seems very odd... I'm running Solaris 10, Java Desktop under Xserver; would XSun be any different?

    NdRosario wrote: Have you applied the latest OS patches and OBP?
    Hi there, thanks for your reply. OBP is at the latest. Patches, well, there are a bunch of them queued in the Sun Update Manager; not sure which ones you would specifically refer to...
    NdRosario wrote: Also, you may need to update your hard disks' firmware.
    Not sure I follow this one...
    NdRosario wrote: How much memory do you have? What is the size of your swap?
    RAM is 4GB, swap is 3.5GB...
    NdRosario wrote: Could you show your "prtdiag -v"? Since you are running Solaris 10, it would be a good idea to use the DTrace facility to zero in on your problem.
    I'll work on the output and DTrace... In the meantime, the problem I'm seeing is that the XVR-1000 seems to be playing catch-up with OGL CAD systems while antialiased lines are displayed, but only under Java Desktop; it flies normally under CDE...
    Thanks again for your reply.

  • MBP 2011/MC724 performance decreasing after firmware updating

    On the third day after buying it, performance suddenly degraded. No additional software was installed (except for Adobe Flash Player).
    Perhaps the problem was caused by a firmware update; I installed it from the standard updates. A system restore from DVD (with a full HD wipe) did not solve the problem.
    With the current performance I can't even watch videos on YouTube.
    Who else is having the same problem with an Apple MBP 2011?
    What do I do with it? How do I fix the problem?
    Thank you in advance for your help.
    Sasha.

    YouTube, Safari, showing Spaces and so on: heavy freezes. Even during the first boot (after recovery from the DVD), when music plays there are freezes and crackling sound.

  • Performance enhancement for parallel loops

    Hi,
    I have a performance problem with the following nested loops. Please help me solve it to improve the performance of the report, urgently.
    LOOP AT xt_git_ekpo INTO lv_wa_ekpo.
    lv_wa_final-afnam     = lv_wa_ekpo-afnam.
    LOOP at xt_git_ekkn into lv_wa_ekkn  where ebeln = lv_wa_ekpo-ebeln
    and ebelp = lv_wa_ekpo-ebelp.
    lv_wa_final-meins     = lv_wa_ekpo-meins.
    READ TABLE xt_git_ekko INTO lv_wa_ekko
                                    WITH KEY ebeln = lv_wa_ekpo-ebeln
                                    BINARY SEARCH.
          IF sy-subrc IS INITIAL.
            lv_wa_final-ebeln     = lv_wa_ekko-ebeln.
            lv_wa_final-ebelp     = lv_wa_ekpo-ebelp.
            lv_wa_final-txz01     = lv_wa_ekpo-txz01.
            lv_wa_final-aedat     = lv_wa_ekko-aedat.
    READ TABLE xt_git_lfa1 INTO lv_wa_lfa1
                                          WITH KEY lifnr = lv_wa_ekko-lifnr
                                          BINARY SEARCH.
            IF sy-subrc IS INITIAL.
              lv_wa_final-lifnr = lv_wa_lfa1-lifnr.
              lv_wa_final-name1 = lv_wa_lfa1-name1.
            ENDIF.
         LOOP AT xt_git_ekbe INTO lv_wa_ekbe WHERE   ebeln      =   lv_wa_ekpo-ebeln
    AND  ebelp = lv_wa_ekpo-ebelp.
    Waiting for a quick reply.

    Hi,
    you can use a SORTED TABLE instead of a STANDARD TABLE:
    DATA: xt_git_ekkn TYPE SORTED TABLE OF EKKN WITH NON-UNIQUE KEY EBELN EBELP,
              xt_git_ekbe  TYPE SORTED TABLE OF EKBE WITH NON-UNIQUE KEY EBELN EBELP.
    LOOP AT xt_git_ekpo INTO lv_wa_ekpo.
        lv_wa_final-afnam = lv_wa_ekpo-afnam.
        LOOP at xt_git_ekkn into lv_wa_ekkn where ebeln = lv_wa_ekpo-ebeln
                                                                  and ebelp = lv_wa_ekpo-ebelp.
            lv_wa_final-meins = lv_wa_ekpo-meins.
            READ TABLE xt_git_ekko INTO lv_wa_ekko WITH KEY ebeln = lv_wa_ekpo-ebeln
                                                                                    BINARY SEARCH.
            IF sy-subrc IS INITIAL.
              lv_wa_final-ebeln = lv_wa_ekko-ebeln.
              lv_wa_final-ebelp = lv_wa_ekpo-ebelp.
              lv_wa_final-txz01 = lv_wa_ekpo-txz01.
              lv_wa_final-aedat = lv_wa_ekko-aedat.
              READ TABLE xt_git_lfa1 INTO lv_wa_lfa1 WITH KEY lifnr = lv_wa_ekko-lifnr
                                                                                    BINARY SEARCH.
              IF sy-subrc IS INITIAL.
                 lv_wa_final-lifnr = lv_wa_lfa1-lifnr.
                 lv_wa_final-name1 = lv_wa_lfa1-name1.
             ENDIF.
             LOOP AT xt_git_ekbe INTO lv_wa_ekbe WHERE ebeln = lv_wa_ekpo-ebeln
                                                                             AND ebelp = lv_wa_ekpo-ebelp.
    Anyway, you should consider loading into the internal table only the records of the current document; in this case you need to move the SELECT inside the loop:
    SORT  xt_git_ekpo by EBELN EBELP.
    LOOP AT xt_git_ekpo INTO lv_wa_ekpo.
        lv_wa_final-afnam = lv_wa_ekpo-afnam.
       IF lv_wa_ekkn-EBELN <>  lv_wa_ekpo-EBELN.
         SELECT * FROM EKKN INTO TABLE xt_git_ekkn WHERE EBELN = lv_wa_ekpo-EBELN.
         SELECT * FROM EKBE INTO TABLE xt_git_ekbe WHERE EBELN = lv_wa_ekpo-EBELN.
       ENDIF.
        LOOP at xt_git_ekkn into lv_wa_ekkn where ebelp = lv_wa_ekpo-ebelp.
            lv_wa_final-meins = lv_wa_ekpo-meins.
            READ TABLE xt_git_ekko INTO lv_wa_ekko WITH KEY ebeln = lv_wa_ekpo-ebeln
                                                                                    BINARY SEARCH.
            IF sy-subrc IS INITIAL.
              lv_wa_final-ebeln = lv_wa_ekko-ebeln.
              lv_wa_final-ebelp = lv_wa_ekpo-ebelp.
              lv_wa_final-txz01 = lv_wa_ekpo-txz01.
              lv_wa_final-aedat = lv_wa_ekko-aedat.
              READ TABLE xt_git_lfa1 INTO lv_wa_lfa1 WITH KEY lifnr = lv_wa_ekko-lifnr
                                                                                    BINARY SEARCH.
              IF sy-subrc IS INITIAL.
                 lv_wa_final-lifnr = lv_wa_lfa1-lifnr.
                 lv_wa_final-name1 = lv_wa_lfa1-name1.
             ENDIF.
             LOOP AT xt_git_ekbe INTO lv_wa_ekbe WHERE ebelp = lv_wa_ekpo-ebelp.
    In my experience (with a very large number of records) this second solution was faster than the first one:
    - Using the first solution (load all data into internal tables and use sorted tables): my job took 2-3 days.
    - Using the second solution: my job took 1 hour.
    Max

  • Migration of huge data from norm tables to denorm tables for performance

    We are planning to move the normalized (NORM) tables to denormalized (DENORM) tables in an Oracle DB for a client, for performance reasons. Any idea on the design/approach we can use to migrate this huge data set (2 billion records / 5TB of data) in a window of 5 to 10 hrs (or less than that if possible)?
    We have developed SQL that is one single query which contains multiple instances of the same table and lots of joins. Will that be helpful?
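    (For what it's worth, the usual starting point for bulk moves of this size is a direct-path, parallel CREATE TABLE ... AS SELECT rather than row-by-row processing; a hedged sketch with placeholder table and column names:
    CREATE TABLE orders_denorm NOLOGGING PARALLEL 8 AS
    SELECT /*+ PARALLEL(8) */ o.order_id, o.order_date, ol.line_no, ol.amount
    FROM orders o JOIN order_lines ol ON ol.order_id = o.order_id;
    Whether that fits at all depends on the points raised in the reply below.)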

    Jonathan Lewis wrote:
    Lother wrote:
    We are planning to move the normalized (NORM) tables to denormalized (DENORM) tables in an Oracle DB for a client, for performance reasons. Any idea on the design/approach we can use to migrate this huge data set (2 billion records / 5TB of data) in a window of 5 to 10 hrs (or less than that if possible)?
    We have developed SQL that is one single query which contains multiple instances of the same table and lots of joins. Will that be helpful?
    Unfortunately, the fact that you have to ask these questions of the forum tells us that you don't have the skill to determine whether or not the exercise is needed at all. How have you proved that denormalisation is necessary (or even sufficient) to solve the client's performance problems if you have no idea how to develop a mechanism to restructure the data efficiently?
    Regards
    Jonathan Lewis
    Your brutal honesty is certainly correct. Another thing that concerns me is that it's possible he's planning on denormalizing tables that are normalized for a reason. What good is a system that responds like a data warehouse but has questionable data integrity? I didn't even know where to begin with asking that question, though.
