Slow HDD performance in XP SP2 with MS-6380

Hi all,
I have an HDD throughput problem in XP:
First, I ran HDD testing utilities from a boot CD and got:
35 MB/s (75 MB/s burst)
After a clean, fresh Windows XP (SP2 integrated) install, I got:
(with hd_speed and HD Tach for Windows, the first applications run after the install)
34 MB/s, 34 MB/s burst (!?!)
After installing the sound and modem drivers, antivirus and firewall... and a few reboots, I got:
(with the same testing apps)
20 MB/s, 20 MB/s burst (!?!)
Specs:
MSI MS-6380 KT3 (without RAID), VIA KT333 chipset
(all IDE-related settings in the BIOS are ON)
AMD Athlon XP 1800+
1 x 512 MB RAM @ 333 MHz
Maxtor 60 GB ATA133 HDD
Onboard VIA sound
GeForce 5200XT AGP (running @ 4x, the board's max AGP rate)
IEEE 1394 PCI card
Windows XP SP2
DMA enabled
I also work with an NT4 PC, and there I get:
(with the same testing apps)
30 MB/s, 67 MB/s burst
Specs:
Motherboard with i815E chipset
Intel IDE controller
Onboard graphics
Intel Pentium III 933
2 x 256 MB RAM @ 133 MHz
Seagate 20 GB ATA100
Windows NT4 SP6, DMA enabled
Can somebody tell me what's wrong with this picture?
Will the VIA 4in1 drivers do any good? How about miniport IDE drivers?
Shouldn't I be getting more out of the 'better' system???
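For what it's worth, since the boot-CD numbers were fine, the slowdown most likely lives on the Windows side (a driver issue, or XP's habit of quietly dropping an IDE channel from UDMA towards PIO after repeated transfer errors) rather than in the drive itself. As a quick cross-check outside Windows, a Linux-based boot CD with hdparm can show which transfer mode the drive negotiates and what the raw throughput is; this is only a minimal sketch, and /dev/hda assumes the drive is master on the first PATA channel:

  hdparm -i /dev/hda     # the active UDMA mode is starred, e.g. *udma6 for an ATA133 drive
  hdparm -tT /dev/hda    # cached (interface) vs. buffered (from the platters) read timings

If the live CD still reports ~35 MB/s while Windows reports 20, the hardware and cabling are fine, and the Windows IDE driver (4in1 vs. the default Microsoft driver) is the place to dig.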

Hello again, and thanks for your replies.
Well, I don't think XP's Service Pack 2 is the cause of it.
I was having the same problem with a previous WinXP installation (SP1a, I guess).
And yes, I defrag everything...
The interesting thing is that SOMETIMES I get the 35-37 MB/s with George Breese's hd_speed, and at other times I get 20-21 MB/s...
I don't know if it has something to do with some services within SP2 (I got some good info at Black Viper's).
I'll tweak those a little more and see what I can come up with.
I'll post results, if possible.
Btw, I have 2 partitions in XP:
'drive' C: with 5 GB (NTFS, 8 bytes/cluster) for the OS and apps
'drive' D: with 55 GB (NTFS, 8 bytes/cluster) for media
See ya!

Similar Messages

  • Extremely slow HDD performance on Acer 7551

    Has anyone else experienced slow performance on their laptop (I wouldn't think this is specific to the Acer, but I suppose it's possible) where it takes at least 2 seconds or more for a command to appear to be executed? The login prompt is an obvious first example for me (via the command prompt, not a DM): after I enter my username the Password prompt won't appear for about 2-3 seconds, and then even after the password is entered the bash prompt won't appear for another 2-3 seconds. The HDD light is active/blinking during these pauses too, so it's like it's seeking for something on the other end of the drive.
    It's a standard laptop HDD speed (5400 RPM), but I wouldn't think this would make the hiccups occur. I never noticed this before in other installations such as Fedora and Ubuntu.
    Is it possible that it's my partition scheme? It is as follows:
    sda1 --> /boot
    sda2 --> /
    sda3 --> /var
    sda5 --> /swap (logical)
    sda6 --> /home (logical)
    sda7 --> /tmp (logical)
    I have also tried the standard recommended:
    sda1 --> /root
    sda2 --> /var
    sda3 --> /swap
    sda5 --> /home (logical)
    and still the issue remains.  Is it really just my HDD?

    t3kka wrote: Reading around on some other threads I found regarding a slow login to tty, I noticed some folks state that a large root partition could be the problem. My HDD is 500 GB, with the root, var, swap, and home partitions being ~80 GB, ~20 GB, ~6 GB, and the remainder (respectively). Maybe having a large root partition, along with the location of the home partition, makes the HDD head travel a much further distance, which physically translates into a delay in login and other actions. With such a large HDD, what would you recommend for partition schemes?
    I have a 640 GB 5400 rpm drive, and I've installed Arch with only a huge root partition on it in the past and had no speed issues (currently I have a 40 GB root, a ~595 GB home, and a 5 GB swap).
    Last edited by bwat47 (2012-06-24 20:56:03)
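    For what it's worth, the head-travel theory is easy to measure directly: sequential throughput genuinely drops towards the inner tracks of a drive, so comparing a read near the start of the disk with one far into it shows how much position matters. A minimal sketch, assuming the disk is /dev/sda and that the offset lands somewhere inside the /home area (adjust skip to taste, and read from the raw device so no filesystem is touched):

        # read 256 MB from the very start of the disk, bypassing the page cache
        dd if=/dev/sda of=/dev/null bs=1M count=256 iflag=direct
        # read 256 MB from roughly 400 GB into the disk
        dd if=/dev/sda of=/dev/null bs=1M count=256 skip=409600 iflag=direct

    A drop of 30-50% between the two runs is normal for a spinning disk; multi-second stalls are not, and would point at the drive or its power management rather than the partition layout.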

  • Slow hdd performance

    OK, I did some research and still can't figure out why the write/read speed is so slow in practice.
    hdparm -tT /dev/sda
    /dev/sda:
    Timing cached reads: 1012 MB in 2.00 seconds = 506.11 MB/sec
    Timing buffered disk reads: 336 MB in 3.01 seconds = 111.58 MB/sec
    With dd I get ~10 MB/s (dd if=/dev/zero of=/test bs=100M count=10)
    Copying internally from mc: 5-6 MB/s
    I did multiple tests.
    What is going on here?
    hdparm -I /dev/sda
    /dev/sda:
    ATA device, with non-removable media
    Model Number: ST31000520AS
    Serial Number: 5VX0WRYD
    Firmware Revision: CC32
    Transport: Serial
    Standards:
    Used: unknown (minor revision code 0x0029)
    Supported: 8 7 6 5
    Likely used: 8
    Configuration:
    Logical max current
    cylinders 16383 16383
    heads 16 16
    sectors/track 63 63
    CHS current addressable sectors: 16514064
    LBA user addressable sectors: 268435455
    LBA48 user addressable sectors: 1953525168
    Logical/Physical Sector size: 512 bytes
    device size with M = 1024*1024: 953869 MBytes
    device size with M = 1000*1000: 1000204 MBytes (1000 GB)
    cache/buffer size = unknown
    Nominal Media Rotation Rate: 5900
    Capabilities:
    LBA, IORDY(can be disabled)
    Queue depth: 32
    Standby timer values: spec'd by Standard, no device specific minimum
    R/W multiple sector transfer: Max = 16 Current = 16
    Advanced power management level: 192
    Recommended acoustic management value: 254, current value: 254
    DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
    Cycle time: min=120ns recommended=120ns
    PIO: pio0 pio1 pio2 pio3 pio4
    Cycle time: no flow control=120ns IORDY flow control=120ns
    Commands/features:
    Enabled Supported:
    * SMART feature set
    Security Mode feature set
    * Power Management feature set
    * Write cache
    * Look-ahead
    * Host Protected Area feature set
    * WRITE_BUFFER command
    * READ_BUFFER command
    * DOWNLOAD_MICROCODE
    * Advanced Power Management feature set
    Power-Up In Standby feature set
    SET_FEATURES required to spinup after power up
    SET_MAX security extension
    * Automatic Acoustic Management feature set
    * 48-bit Address feature set
    * Device Configuration Overlay feature set
    * Mandatory FLUSH_CACHE
    * FLUSH_CACHE_EXT
    * SMART error logging
    * SMART self-test
    * General Purpose Logging feature set
    * WRITE_{DMA|MULTIPLE}_FUA_EXT
    * 64-bit World wide name
    Write-Read-Verify feature set
    * WRITE_UNCORRECTABLE_EXT command
    * {READ,WRITE}_DMA_EXT_GPL commands
    * Segmented DOWNLOAD_MICROCODE
    * Gen1 signaling speed (1.5Gb/s)
    * Native Command Queueing (NCQ)
    * Phy event counters
    Device-initiated interface power management
    * Software settings preservation
    * SMART Command Transport (SCT) feature set
    * SCT Long Sector Access (AC1)
    * SCT LBA Segment Access (AC2)
    * SCT Error Recovery Control (AC3)
    * SCT Features Control (AC4)
    * SCT Data Tables (AC5)
    unknown 206[12] (vendor specific)
    Security:
    Master password revision code = 65534
    supported
    not enabled
    not locked
    frozen
    not expired: security count
    supported: enhanced erase
    188min for SECURITY ERASE UNIT. 188min for ENHANCED SECURITY ERASE UNIT.
    Logical Unit WWN Device Identifier: 5000c500250e4711
    NAA : 5
    IEEE OUI : 000c50
    Unique ID : 0250e4711
    Checksum: correct
    smartctl -a /dev/sda
    smartctl 5.42 2011-10-20 r3458 [i686-linux-3.2.13-1-ARCH] (local build)
    Copyright (C) 2002-11 by Bruce Allen, [url]http://smartmontools.sourceforge.net[/url]
    === START OF INFORMATION SECTION ===
    Model Family: Seagate Barracuda LP
    Device Model: ST31000520AS
    Serial Number: 5VX0WRYD
    LU WWN Device Id: 5 000c50 0250e4711
    Firmware Version: CC32
    User Capacity: 1,000,204,886,016 bytes [1,00 TB]
    Sector Size: 512 bytes logical/physical
    Device is: In smartctl database [for details use: -P show]
    ATA Version is: 8
    ATA Standard is: ATA-8-ACS revision 4
    Local Time is: Fri Mar 30 00:45:54 2012 CEST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    General SMART Values:
    Offline data collection status: (0x00) Offline data collection activity
    was never started.
    Auto Offline Data Collection: Disabled.
    Self-test execution status: ( 0) The previous self-test routine completed
    without error or no self-test has ever
    been run.
    Total time to complete Offline
    data collection: ( 623) seconds.
    Offline data collection
    capabilities: (0x73) SMART execute Offline immediate.
    Auto Offline data collection on/off support.
    Suspend Offline collection upon new
    command.
    No Offline surface scan supported.
    Self-test supported.
    Conveyance Self-test supported.
    Selective Self-test supported.
    SMART capabilities: (0x0003) Saves SMART data before entering
    power-saving mode.
    Supports SMART auto save timer.
    Error logging capability: (0x01) Error logging supported.
    General Purpose Logging supported.
    Short self-test routine
    recommended polling time: ( 1) minutes.
    Extended self-test routine
    recommended polling time: ( 219) minutes.
    Conveyance self-test routine
    recommended polling time: ( 2) minutes.
    SCT capabilities: (0x103f) SCT Status supported.
    SCT Error Recovery Control supported.
    SCT Feature Control supported.
    SCT Data Table supported.
    SMART Attributes Data Structure revision number: 10
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
    1 Raw_Read_Error_Rate 0x000f 119 099 006 Pre-fail Always - 216824470
    3 Spin_Up_Time 0x0003 096 096 000 Pre-fail Always - 0
    4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 526
    5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 3
    7 Seek_Error_Rate 0x000f 077 060 030 Pre-fail Always - 58496816
    9 Power_On_Hours 0x0032 084 084 000 Old_age Always - 14481
    10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
    12 Power_Cycle_Count 0x0032 100 037 020 Old_age Always - 265
    183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0
    184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
    187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0
    188 Command_Timeout 0x0032 100 098 000 Old_age Always - 8590065682
    189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0
    190 Airflow_Temperature_Cel 0x0022 070 054 045 Old_age Always - 30 (Min/Max 18/31)
    194 Temperature_Celsius 0x0022 030 046 000 Old_age Always - 30 (0 16 0 0 0)
    195 Hardware_ECC_Recovered 0x001a 039 021 000 Old_age Always - 216824470
    197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
    198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
    199 UDMA_CRC_Error_Count 0x003e 200 199 000 Old_age Always - 9
    240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 152454159153810
    241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 3670022033
    242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 448914133
    SMART Error Log Version: 1
    No Errors Logged
    SMART Self-test log structure revision number 1
    Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
    # 1 Short offline Completed without error 00% 14007 -
    # 2 Extended offline Interrupted (host reset) 00% 13651 -
    # 3 Extended offline Aborted by host 90% 8256 -
    # 4 Short offline Completed without error 00% 310 -
    SMART Selective self-test log data structure revision number 1
    SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
    1 0 0 Not_testing
    2 0 0 Not_testing
    3 0 0 Not_testing
    4 0 0 Not_testing
    5 0 0 Not_testing
    Selective self-test flags (0x0):
    After scanning selected spans, do NOT read-scan remainder of disk.
    If Selective self-test is pending on power-up, resume after 0 minute delay.
    cat /etc/fstab
    # <file system> <dir> <type> <options> <dump> <pass>
    devpts /dev/pts devpts defaults 0 0
    shm /dev/shm tmpfs defaults 0 0
    /dev/sr0 /mnt/cd iso9660 ro,user,noauto,unhide 0 0
    #/dev/dvd /media/dvd auto ro,user,noauto,unhide 0 0
    #/dev/fd0 /media/fl auto user,noauto 0 0
    UUID=934647db-14d1-4641-820c-e42c0f63c0e5 /boot ext2 defaults 0 2
    /dev/mapper/lvm-home /home ext4 defaults,noatime 0 2
    /dev/mapper/lvm-root / ext4 defaults,noatime 0 1
    /dev/mapper/lvm-swap swap swap defaults 0 0
    tmpfs /tmp tmpfs noatime,nodev,nosuid,noexec,size=1536M 0 0
    /dev/mapper/data /mnt/data ext4 acl,user_xattr,noatime 0 2
    dmesg | grep ata
    [ 0.000000] BIOS-e820: 000000001f5d7000 - 000000001f5d9000 (ACPI data)
    [ 0.000000] BIOS-e820: 000000001f600000 - 000000001f608000 (ACPI data)
    [ 0.000000] Memory: 481496k/515072k available (3766k kernel code, 31608k reserved, 1441k data, 524k init, 0k highmem)
    [ 0.000000] .data : 0xc04ad9d5 - 0xc06160c0 (1441 kB)
    [ 14.174799] Write protecting the kernel read-only data: 1112k
    [ 14.661110] libata version 3.00 loaded.
    [ 14.684788] ata1: SATA max UDMA/133 abar m1024@0xffe40000 port 0xffe40100 irq 45
    [ 14.684809] ata2: DUMMY
    [ 14.684826] ata3: SATA max UDMA/133 abar m1024@0xffe40000 port 0xffe40200 irq 45
    [ 14.684842] ata4: DUMMY
    [ 14.685022] ata_piix 0000:00:1f.1: version 2.13
    [ 14.685081] ata_piix 0000:00:1f.1: PCI INT A -> GSI 18 (level, low) -> IRQ 18
    [ 14.685243] ata_piix 0000:00:1f.1: setting latency timer to 64
    [ 14.699846] scsi4 : ata_piix
    [ 14.705173] scsi5 : ata_piix
    [ 14.715618] ata5: PATA max UDMA/100 cmd 0x1f0 ctl 0x3f6 bmdma 0xf100 irq 14
    [ 14.715642] ata6: PATA max UDMA/100 cmd 0x170 ctl 0x376 bmdma 0xf108 irq 15
    [ 15.006827] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
    [ 15.008016] ata1.00: ATA-8: ST31000520AS, CC32, max UDMA/133
    [ 15.008034] ata1.00: 1953525168 sectors, multi 16: LBA48 NCQ (depth 31/32)
    [ 15.009380] ata1.00: configured for UDMA/133
    [ 15.013441] ata3: SATA link down (SStatus 0 SControl 300)
    [ 40.720777] EXT4-fs (dm-1): mounted filesystem with ordered data mode. Opts: (null)
    [ 59.246412] EXT4-fs (dm-3): mounted filesystem with ordered data mode. Opts: (null)
    [ 59.339759] EXT4-fs (dm-4): mounted filesystem with ordered data mode. Opts: acl,user_xattr
    EDIT: Replaced quote with code - ngoonee
    EDIT: Ok, sorry - miskoala
    Last edited by miskoala (2012-03-30 08:18:09)

    I booted this drive from another computer (ASRock M3A770DE mobo) and it works fine; speed with dd was ~100 MB/s.
    I also updated the BIOS on my first computer with the Intel D945GSEJT (AHCI mode is selected); still the same transfers with dd (~10 MB/s).
    I have no idea what to do with it now.
    EDIT:
    More tests with another HDD (Samsung HD502IJ), dd on a mounted NTFS partition:
    Intel D945GSEJT booted from Archlinux iso:
    hdparm: ~75 MB/s
    dd: ~10 MB/s
    Intel D945GSEJT booted from Archlinux iso, HDD connected to PCI SATA controller (Promise 378):
    hdparm: ~60 MB/s
    dd: ~10 MB/s
    Asrock M3A770DE booted from Archlinux iso:
    hdparm: ~87 MB/s
    dd: ~76 MB/s
    Asrock M3A770DE booted from Archlinux iso, HDD connected to PCI SATA controller (Promise 378):
    hdparm: ~81MB/s
    dd: ~76 MB/s
    What is broken on D945GSEJT?
    Last edited by miskoala (2012-03-30 13:47:23)
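    For what it's worth, the pattern above (healthy hdparm reads but dd writes collapsing only on the D945GSEJT) suggests separating the page cache, the drive's write cache, and the controller itself before blaming the board. A minimal diagnostic sketch, assuming the disk is /dev/sda and /mnt/data is a writable mount (both taken from the output above):

        hdparm -W /dev/sda                 # is the drive's write cache enabled?
        # time a write including the final flush to disk
        dd if=/dev/zero of=/mnt/data/testfile bs=1M count=500 conv=fdatasync
        # time a write that bypasses the page cache entirely
        dd if=/dev/zero of=/mnt/data/testfile bs=1M count=500 oflag=direct
        dmesg | grep -iE 'ata[0-9].*(exception|slow|limit)'   # any link resets or speed downgrades?
        lspci -nn | grep -iE 'sata|ide'                        # which controller/driver owns the port

    If direct and fdatasync writes are both stuck around 10 MB/s while another board manages 70-100 MB/s with the same drive, a chipset/AHCI driver or BIOS issue on the D945GSEJT is a likelier culprit than the drive.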

  • T520 - 42435gg / Sound stutter and slow Graphic performance with Intel Rapid Storage AHCI Driver

    Hi everybody,
    I have serious Problems with my 42435gg
    Any time I install the Intel Storage AHCI driver (I've tried plenty of different versions), which is suggested by System Update, I experience horrible sound stutter and slow graphics performance in Windows 7 64-bit.
    The funny thing in this case: if the external eSATA port is in use, the problems do not occur. If the port is unused again, the stutter begins immediately.
    The only thing I can do is use the internal Windows storage driver, with which I am not able to use my DVD recorder, for example.
    The device was sent to Lenovo for hardware testing with no result. It was sent back without any repair.
    Anybody experience on this?
    Kind regards,
    Daniel

    Did you try the 11.5 RST beta? Load up DPClat and see if DPC conditions are favorable.
    What are you using to check graphics performance?
    W520: i7-2720QM, Q2000M at 1080/688/1376, 21GB RAM, 500GB + 750GB HDD, FHD screen
    X61T: L7500, 3GB RAM, 500GB HDD, XGA screen, Ultrabase
    Y3P: 5Y70, 8GB RAM, 256GB SSD, QHD+ screen

  • So disappointed with the slow Lion performance & don't know what to do

    I am an Apple fan and I'm expressing my sincere disappointment! I guess those people who gave Lion 4-5 stars either installed it on their brilliant MacBook Pros or on those huge iMacs, or haven't used it before voting! I have a 2 GB RAM MacBook and after installing Lion it's gone terrible. Even TextEdit, which is a relatively very light program, is not functioning many times!
    I care about Apple and I think if they don't do something ASAP it's going to turn into their version of Vista and change the OS game's balance.
    I'm thinking of going back to Snow Leopard; that was way faster. One funny thing is that the App Store doesn't let me review the product, saying you should have bought it from the App Store. Well, of course I bought it from the App Store and you have it in your records, so why don't you let me save others from this disaster? It takes a lot of my time now to go back to Snow Leopard. Honestly, with all due respect, and not in an angry mood: Lion is bad! And it makes your computer run much slower. DISAPPOINTED

    Apple says 2 GB of RAM is the minimum needed for Lion. If you have an older machine and just the minimum RAM, Lion will be slower than SL. I worked with a friend who had a MacBook model with minimum RAM. When he upgraded the RAM to 4 GB his machine was about 40% faster.
    Memory is relatively inexpensive these days, so you might consider buying some to get to 4 or 8 GB.
    Jay
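    If you want to see how tight memory actually is before spending on more, a couple of quick Terminal checks show the installed RAM and how hard the system is paging. A minimal sketch (the commands are standard on OS X, though the exact vm_stat wording varies a little between releases):

        sysctl hw.memsize        # installed RAM, in bytes
        vm_stat                  # steadily climbing "Pageouts" means real memory pressure
        ls -lh /private/var/vm/  # how many swap files are currently in use, and how big

    On a 2 GB machine running Lion, constant pageouts during ordinary use are exactly what makes everything feel slow, and more RAM is usually the cheapest fix.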

  • SSD + slower HDD

    Despite the abundant discussions about good hard drive configuration for Lightroom (v 4), I am still struggling to understand what to choose when buying my new desktop PC without blowing my budget.
    Assume that processor, RAM, motherboard and all that are up to par, what should I go with in terms of HDD/SSD (under a somewhat limited budget)?
    I want a *quiet* system, so using three roaring 10,000 rpm HDDs is not an option. If anything, I would like to use low-power, low-noise HDDs (5400 rpm), which could be combined with an SSD. Is this a stupid setup, with the HDDs being a bottleneck? For example, is a 128 GB SSD (for the Windows 7/64 OS, apps, and the LR cache) + a 1 TB 5400 rpm HDD (for the image files) a good trade-off between performance and noise?
    Jalamaar

    I believe the impact of a green HDD is negligible.

  • Slow JMS Performance

              Hi,
              I'm using WL7.1 SP2 for a JMS based application. The database is SQLServer 2000,
              and the JMS queues are persistent on this DB.
              A client outside the Weblogic Server connects and fires lots of JMS messages to
              the application. These messages are consumed by a pool of MDB's.
              I don't think it's performing as well as it could.
              For example, if I un-deploy my application, and fire 80,000 JMS messages at the
              server, it takes around 10 minutes to persist these messages to the queues. All
              good so far.
              However, with the application deployed, the same number of messages take around
              3 hours. The messages are consumed and processed as fast as they can be persisted
              to the JMS queue. It's almost as if the messages are being synchronously put on
              the queue, i.e. control is not returned to the client until the message is consumed
              and processed. Unless I'm misunderstanding this, as soon as the message is successfully
              persisted to the JMS queue, control should be passed back to the client.
              What I'd expect to happen is that the client would take around the same time to post
              the messages to the JMS Queue, i.e. about 10 minutes - and the Pending Message
              count to increase on the WL Server. Once all 80,000 messages were on the queue,
              the Pending Message count would gradually go down as the messages were processed.
              If I'm misunderstanding the way this works, please let me know. If you need any
              further info, please ask and I'll post the answers.
              Thanks,
              Richard Kenyon
              EDS - UK
              

    No problem.
              Performance is one of my hot-buttons, so I can't help
              noting that when it comes to queueing, or even asynchronous
              invokes in general, slowing down the requester is often
              goodness. It helps put "back-pressure" on the requesting
              application, and so helps prevent a permanently growing
              request backlog that the slower request handling
              applications could never dig their way out of.
              Tom
              P.S. An updated version of the JMS performance
              guide will be released in the next few weeks. It has
              corrections, adds information about 8.1 features, and
              expands information in a number of areas. I'll post
              a link in the newsgroup when it comes out.
              Richard Kenyon wrote:
              > Thanks Tom,
              >
              > I suspected (and hoped!) that this would be the answer. I just needed
              > clarification that I wasn't being stupid :-)
              >
              > I'll have a look at the Performance guide you mentioned.
              >
              > Thanks,
              > Richard Kenyon
              > EDS - UK
              >
              > "Tom Barnes" <[email protected]> wrote in message
              > news:[email protected]...
              >
              >>When the application is active, it is forcing the server to do
              >>other work as well. In effect you are sharing a limited
              >>resource. Transactions (if applicable), your
              >>application code, your application receives, etc.
              >>In effect, the sender is slowed down because it
              >>is competing with the MDB application for resources.
              >>I suggest you read the JMS Performance
              >>Guide white-paper on dev2dev.bea.com.
              >>
              >>Tom
              >>

  • Apex report performance is very poor with apex_item.checkbox row selector.

    Hi,
    I'm working on a report that includes some functionality to be able to select multiple records for further processing.
    The report is based on a view that contains a couple of hundred thousand records.
    When I make a selection from this view in SQL*Plus, the performance is acceptable, but the APEX report based on the same view performs very poorly.
    I've noticed that when I omit the apex_item.checkbox from my report query, performance is on par with SQL*Plus (a factor of 10 or so quicker).
    Explain plan appears to be the same with or without checkbox function in the select.
    My query is:
    select apex_item.checkbox(1,tan_id) Select ,
    brt_id
    , tan_id
    , message_id
    , conversation_id
    , action
    , to_acn_code
    , information
    , brt_created
    , tan_created
    from (SELECT brt.id brt_id, -- view query
    MAX (TAN.id) tan_id,
    brt.message_id,
    brt.conversation_id,
    brt.action,
    TAN.to_acn_code,
    TAN.information,
    brt.created brt_created,
    TAN.created tan_created
    FROM (SELECT brt_id, id, to_acn_code, information, created
    FROM xxcjib_transactions
    WHERE tan_type = 'DELIVER' AND status = 'FINISHED') TAN,
    xxcjib_berichten brt
    WHERE brt.id = TAN.brt_id
    GROUP BY brt.id,
    brt.message_id,
    brt.conversation_id,
    brt.action,
    TAN.to_acn_code,
    TAN.information,
    brt.created,
    TAN.created)
    What could be the reason for the poor performance of the APEX report?
    And is there another way to select multiple report records without the apex_item.checkbox function?
    I'm using APEX 3.2 on an Oracle 10g database.
    Thanks,
    Niels Ingen Housz
    Edited by: user11986529 on 19-mrt-2010 4:06

    Thanks for your reply.
    Unfortunately, changing the pagination doesn't make much of a difference in this case.
    Without the checkbox the query takes 2 seconds.
    With checkbox it takes well over 30 seconds.
    The second report region on this page based on another view seems to perform reasonably well with or without the checkbox.
    It has about the same number of records but with a different view query.
    There are also a couple of filter items in the where clause of the report queries (the same for both reports), based on date and acn_code, and both reports have a select-list item displayed in their regions based on a simple LOV. These filter items don't seem to influence the performance.
    I have also recreated the report on a separate page without any other page items or where clause, and the same thing occurs.
    With the checkbox it's very, very slow (more like 20 times slower).
    Without it, the report performs well.
    And another thing: when I run the page with debug on, I don't see the actual report query:
    0.08: show report
    0.08: determine column headings
    0.08: activate sort
    0.08: parse query as: APEX_CMA_ONT
    0.09: print column headings
    0.09: rows loop: 30 row(s)
    and then the region is displayed.
    I am using database links in the views, b.t.w.
    Edited by: user11986529 on 19-mrt-2010 7:11

  • App-V 5.0 SP2 with Mandatory Profiles

    Hi,
    We are having some issues with App-V 5.0 SP2 with HFX2 and HFX4 on our Windows 7 x64 Clients
    We use mandatory profiles and delete the local profile on logoff,
    HFX2 Issue:
    The AppData\Local\Microsoft\AppV\Client\VFS\<GUID> is only created once, subsequent logins don't have the folder structure created for the package VFS and all folders have read/write access for the user, when they should follow the base OS folder permissions
    We have found that a registry entry is created in HKLM\Software\Microsoft\AppV\MAV\Configuration\Packages\<GUID>\UserConfigEx\<USER-GUID>
    If we manually delete this registry key, then App-V will re-create the VFS folders in local app data.
    App-V seems to be setting keys saying it has created those folders and expects them to be there on the next login. Is there a way to tell App-V that we are using mandatory profiles and to re-create the folders?
    HFX4 Issue:
    Hotfix 4 is a bigger problem in that a registry entry for <USER-GUID> is created in HKLM\SOFTWARE\Microsoft\AppV\Client\Virtualzation\LocalVFSSecuredUser
    The first time appv is used for that user, it works, all subsequent times appv packages fail to load unless that registry string is manually deleted.
    Is there any way to fix App-V 5 to work with mandatory profiles which are deleted on logoff?
    Thanks!

    Hi Nicke,
    The registry keys are located in HKLM, which causes permission problems as our users are all standard users.
    The HFX2 issue sort of works: the UserConfigEx key is created with the logged-on user having read/write permissions, so as long as that user successfully runs a logoff script, the key is deleted. If the desktop crashes without running the logoff script and a new user logs in, they don't have the rights to delete the subkeys, and all future App-V local-appdata VFS permissions are broken on that client.
    The HFX4 issue registry keys are all write-protected from standard users.
    I'm not 100% sure these are the only areas causing problems (they are just the ones I've found!), or that just deleting the registry entries completely fixes the problem. I didn't go down the path of deleting HKLM keys as a user, as it seemed a bit of a brute-force hack to me, which I was trying to avoid :)
    If I give users rights to delete HKLM\Software\Microsoft\AppV\MAV\Configuration\Packages subkeys
    then I think they could break published applications if the <GUID> folders are somehow deleted?
    I was actually pretty surprised to find out that App-V 5.0 stores permanent information about users in the HKLM hive; I would have thought all of this should live in the HKCU hive!
    I've not been able to find any documentation on how App-V uses HKLM keys to keep track of processes it has already performed on a user and doesn't want to do again.
    Thanks!

  • Sql query slowness due to rank and columns with null values:

        
    Sql query slowness due to rank and columns with null values:
    I have the following table in the database, with around 10 million records:
    Declaration:
    create table PropertyOwners (
    [Key] int not null primary key,
    PropertyKey int not null,
    BoughtDate DateTime,
    OwnerKey int null,
    GroupKey int null
    )
    go
    [Key] is primary key and combination of PropertyKey, BoughtDate, OwnerKey and GroupKey is unique.
    With the following index:
    CREATE NONCLUSTERED INDEX [IX_PropertyOwners] ON [dbo].[PropertyOwners]
    (
    [PropertyKey] ASC,
    [BoughtDate] DESC,
    [OwnerKey] DESC,
    [GroupKey] DESC
    )
    go
    Description of the case:
    For a single BoughtDate one property can belong to multiple owners or a single group; for a single record there can be either OwnerKey or GroupKey but not both, so one of them will be null for each record. I am trying to retrieve the data from the table using the
    following query for the OwnerKey. If there are rows for the same property for both owners and a group at the same time, then the rows having OwnerKey will be preferred; that is why I am using "OwnerKey desc" in the Rank function.
    declare @ownerKey int = 40000   
    select PropertyKey, BoughtDate, OwnerKey, GroupKey   
    from (    
    select PropertyKey, BoughtDate, OwnerKey, GroupKey,       
    RANK() over (partition by PropertyKey order by BoughtDate desc, OwnerKey desc, GroupKey desc) as [Rank]   
    from PropertyOwners   
    ) as result   
    where result.[Rank]=1 and result.[OwnerKey]=@ownerKey
    It is taking 2-3 seconds to get the records, which is too slow; it takes a similar time when I try to get the records using the GroupKey. But when I tried to get the records for the PropertyKey with the same query, it executes in 10 milliseconds.
    Maybe the slowness is because OwnerKey/GroupKey in the table can be null and SQL Server is unable to index them effectively. I have also tried to use an indexed view to pre-rank them, but I can't use it in my query, as the Rank function is not supported in indexed
    views.
    Please note this table is updated once a day, and I am using SQL Server 2008 R2. Any help will be greatly appreciated.

    create table #result (PropertyKey int not null, BoughtDate datetime, OwnerKey int null, GroupKey int null, [Rank] int not null)
    create index idx ON #result(OwnerKey, [Rank])
    insert into #result(PropertyKey, BoughtDate, OwnerKey, GroupKey, [Rank])
    select PropertyKey, BoughtDate, OwnerKey, GroupKey,
    RANK() over (partition by PropertyKey order by BoughtDate desc, OwnerKey desc, GroupKey desc) as [Rank]
    from PropertyOwners
    go
    declare @ownerKey int = 1
    select PropertyKey, BoughtDate, OwnerKey, GroupKey
    from #result as result
    where result.[Rank]=1
    and result.[OwnerKey]=@ownerKey
    go
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Firefox 4 is extremely slow to perform any function on my laptop compared to previous versions. Why? And how do I change this!

    Firefox 4 is extremely slow to perform any function on my laptop compared to previous versions. Why? And how do I change this!

    Firefox 21 and Firefox 22 running on Windows 7 have been reported to take a long time to "wake up" from sleep. I realize hibernation is different than sleep, but... this is the closest match for your description.
    Some users have reported that this problem is resolved in Firefox 23 (currently in beta). Others have had inconsistent luck with minimizing Firefox before letting Windows sleep and other measures.
    Please check out this (very long) thread for more information: [https://support.mozilla.org/questions/961898 browser freezes after resuming from sleep]
    Or jump to the part about Firefox 23: https://support.mozilla.org/questions/961898?page=3#answer-457321
    The fact that this just started recently suggests perhaps it is related to another program or update, but I don't think anyone has confirmed the exact interaction that causes the problem.

  • Hi guys, can you please explain how to perform automatic system scan with CleanApp - it looks like it wants me to manually choose files to delete and I just want an automatic scan (like cleanmymac does)

    Hi guys, can you please explain how to perform automatic system scan with CleanApp - it looks like it wants me to manually choose files to delete and I just want an automatic scan (like cleanmymac does)

    Slowness... First, try using Disk Utility to do a Disk Repair, as shown in this link, while booted up on your install disk.
    You could have some directory corruption. Let us know what errors Disk Utility reports and if DU was able to repair them. Disk Utility's Disk Repair is not perfect and may not find or repair all directory issues. A stronger utility may be required to finish the job.
    After that, Repair Permissions. No need to report permissions errors... we all get them.
    Here's Freeing up Disk Space.
    DALE
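    If you prefer the Terminal over the Disk Utility application, roughly the same checks can be run from the command line. This is only a sketch for these older OS X releases (a live verify of the boot volume is fine, but an actual repair still has to be done while booted from the install disc or another volume):

        diskutil verifyVolume /        # check the directory structure of the startup volume
        diskutil repairPermissions /   # repair permissions on the startup volume
        df -h /                        # quick look at how full the startup volume is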

  • Jdbc thin driver bulk binding slow insertion performance problem

    Hello All,
    We have a third-party application reporting slow insertion performance. I traced the session and found that most of the elapsed time for one insert execution is "SQL*Net more data from client"; it appears bulk binding is being used here because one execution has 200 rows inserted. I am wondering whether this has something to do with their JDBC thin driver (version 10.1.0.2) and our database version 9.2.0.5. Do you have any similar experience with this? What other possible directions should I explore?
    Here is the trace report from the 10046 event; I hid the table name for privacy reasons.
    Besides, I tested bulk binding in PL/SQL to insert 200 rows in one execution, and there was no problem at all. The network folks confirm that the network should not be an issue either; ping time from the app server to the DB server is sub-millisecond and they are in the same data center.
    INSERT INTO ...
    values
    (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17,
    :18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32,
    :33, :34, :35, :36, :37, :38, :39, :40, :41, :42, :43, :44, :45)
    call       count    cpu   elapsed   disk   query   current   rows
    Parse          1    0.00      0.00      0       0         0      0
    Execute        1    0.02     14.29      1      94      2565    200
    Fetch          0    0.00      0.00      0       0         0      0
    total          2    0.02     14.29      1      94      2565    200
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 25
    Elapsed times include waiting on following events:
    Event waited on                            Times Waited   Max. Wait   Total Waited
    ----------------------------------------   ------------   ---------   ------------
    SQL*Net more data from client                        28        6.38          14.19
    db file sequential read                               1        0.02           0.02
    SQL*Net message to client                             1        0.00           0.00
    SQL*Net message from client                           1        0.00           0.00
    ********************************************************************************

    I have exactly the same problem. I tried to find out what is going on and changed several JDBC drivers on AIX, but no hope. I also ran the process on my laptop, which produced better and faster performance.
    Therefore I made a special (not practical) solution by creating flat files and defining the data as an external table; Oracle reads the data in those files as if it were data inside a table. This gave me very fast insertion into the database, but I am still looking for an answer to your question here. Using Oracle on an AIX machine is a normal business practice followed by a lot of companies, and there must be a solution for this.

  • Continued SLOW online performance after two Archive & Installs

    Hi.
    I have been living with slow online performance – pages slow to load, movies very slow to load, movies don't play back smoothly, constant buffering – for some 3-4 months before trying to do something about it... (this timeline coincides with a potential Security Update 2008-006 issue discussed here: http://discussions.apple.com/thread.jspa?threadID=1730909&tstart=0 , but I don't have enough savvy to know if that's the problem.)
    Equivalent slow performance in both Safari and Firefox... I've compared the same sites I visit on other peoples workstations, PC and Mac, various ISPs – all of their pages and movies light up and play instantly.
    Tried to research and do as much as I could before coming here, but I don't have great diagnostic skills... here's what I've got so far:
    By late November, I managed to eliminate the ISP (Earthlink) as a cause of the problem... however, due to their direction to change DNS numbers, I can no longer automatically connect online upon booting, but must repeatedly connect via Internet Connect (separate issue?)... via these forums, I've reset the DNS to 208.67.220.220, 208.67.222.222.
    Neither hard drive is even half full.
    I did an initial Disk and Permissions Repair prior to first A&I... they needed repair then, but subsequent verifications check out clean.
    After the first A&I (using the original OS X 10.4 install disk), I downloaded a complete full 10.4.11 update, including all current Security Updates... performance wound up much the same as before.
    After the second A&I, and before any updating, I tested online performance – still slow... this time I downloaded only the Mac OS X 10.4.11 Combined Update from November 07, looking to avoid the 2008-006 Security Update... general browsing speed is slightly improved, but any site with a movie or rich graphics behaves like dial-up, as before... Mail is just ok – not fast, but not problematic.
    Otherwise, the Mac works well enough, but it's never been a real speed-burner, imo... mainstream graphics apps (Adobe CS2, Quark 6)... the only atypical thing I might have is a Wacom graphics tablet (Intuos 3), but it's always worked fine... no games or brand-X playtime software.
    The only other thing I'd be suspicious of is Network Settings, after getting the runaround at Earthlink... but the fact that I can get online at all might rule that out.
    I've been at this for days, and am out of ideas... little help?
    Thanks.
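    Since part of this story involved switching DNS servers, it is worth ruling out name resolution as the slow step before digging further into network settings. A minimal sketch using dig, which ships with Mac OS X (the domain is only an example; the OpenDNS address is one of those mentioned above):

        dig www.apple.com @208.67.222.222 | grep 'Query time'   # lookup latency via OpenDNS
        dig www.apple.com | grep 'Query time'                   # latency via the currently configured resolver
        ping -c 5 www.apple.com                                 # basic round-trip times to the site itself

    Query times in the tens of milliseconds are fine; if they run into the hundreds, or the two resolvers differ wildly, DNS is at least part of the sluggish page loads.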

    OK, network settings is the scary stuff that I DO NOT understand...
    BDAqua wrote:
    Make a New location in Network > Location > New, try it without PPPoE, just Using DHCP under the TCP/IP tab > IPv4 setting. You can always switch back if it doesn't work.
    I found the PPPoE subpane.
    Made the New location, it seemed to work briefly, then didn't, then I fumbled my way back to prior settings... I'm willing to try it again, BUT...
    What scared me was that after setting New Location, the browser window opened to something called *Internet Configurator*, never seen this before... it asked for my email and password... is this normal?... I've got major privacy and security concerns and DO NOT want to put that password out there if I don't have to.
    Please advise before I do this... thanks.
