Local vs NFS disk access

We're buying a new platform for our Oracle server. It will have:
(4) 1.3 GHz 64-bit CPUs (Madison)
48 GB memory
(2) 36 GB drives - for the OS and swap
(2) 146 GB drives - for general work space
The question is where to put the disks that will hold the database proper. There will be only one Oracle server associated with the data on these disks. This is not a distributed DB application of any kind; it is a single-server app with sole access to its data disks.
Our IS group is proposing NAS (network-attached storage), such as a NetApp filer, which would provide NFS access via its own OS. Access to the filer would be over one or more copper gigabit connections. The network itself would be shared by other users in a large engineering group.
The alternative is to mount the disks directly onto the server itself... no network.
My concern is that the networked approach will be noticeably slower than the local mount, especially since network loads can be unpredictable and heavy at times in the engineering group at large. But I have no experience or data to support the fear that it'll be ~that~ much slower.
The activity on our DB is...
DB loading is done by C/SQL programs loading large volumes of data, all day long.
DB reads are also done by the DB loaders (as part of their normal operation), but most importantly, end users query during normal work hours, with many moderately sized queries involving several table joins and retrieval of perhaps a few hundred records.
Does anyone have experience with the performance differences between locally mounted RAID arrays and networked disks?
Does anyone have any before/after stats that may shed some light on what we can expect in the way of performance degradation?
P.S. I had previously posted this message in "Database General" but expect that it should have been posted here instead.
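For what it's worth, if the NAS option wins, the NFS mount options matter a great deal for Oracle. Here is a sketch of an fstab entry with options commonly recommended for Oracle data files over NFSv3 - the filer name, export path, and mount point are hypothetical, so check your filer vendor's and Oracle's documentation for your platform:

```
# /etc/fstab sketch - illustrative options for Oracle data files over NFSv3;
# "filer01" and the paths are made-up placeholders.
filer01:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0  0 0
```

hard and actimeo=0 are about correctness and cache coherency for database files; rsize/wsize mainly affect throughput.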

Just out of morbid curiosity: if you get info on those FireWire drives, do they have the "Ignore Permissions..." box checked?

Similar Messages

  • Full (Level 0) backup to local/NFS disk

    I am new to SAP's BR*Tools, and need advice in configuring it to use RMAN.  
    1. My backups are to be written to a staging area on our SAN.
    2. What specific parameters do I need to configure in the initSID.sap file to achieve a Full (Level 0) backup to local/NFS disk - preferably using RMAN?
    3. I have successfully performed a Full (Level 0) backup without RMAN to local disk.  Also, I have successfully done an incremental (Level 1) backup using BRTools with RMAN.  I want to get a BRTools + RMAN based Full (Level 0) backup to local/NFS disk.
    --VJ

    You need to set values for the following parameters:
    tape_address
    archive_copy_dir
    tape_size
    backup_mode
    backup_dev_type
    backup_type
    backup_root_dir
    volume_archive
    volume_backup
    tape_pos_cmd
    Cheers
    Shaji
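    As a starting point, here is an illustrative initSID.sap fragment for a disk backup. The parameter names are the ones listed above, but the values are assumptions for this sketch and the path is hypothetical - check the BR*Tools documentation for your release, particularly for the RMAN-specific settings:

```
# initSID.sap sketch - illustrative values only
backup_mode = all                  # whole database
backup_type = online               # online (level 0) backup
backup_dev_type = disk             # back up to local/NFS disk
backup_root_dir = /backup/staging  # hypothetical SAN staging area
tape_size = 100G                   # still evaluated even for disk backups
```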

  • Windows 2008 R2 Hyper-V has really slow VMs disk access

    Wondering if anyone can help? I have noticed, in a couple of Windows Server 2008 R2 Hyper-V scenarios, that the VM guests are often very slow. Disk access is especially slow.
    In one situation there were only a couple of guest VMs, and they were running off mirrored IDE HDDs.  Performance improved significantly when I used two mirrored SSDs instead.
    My other main situation is where the VHDs are stored on HP Lefthand P4300, over 2-nodes with 8 SATA hdds in each, using iSCSI.
    So I ran a few tests with robocopy, copying 4-6 GB of files (mostly large) or whole VHDs (40 GB or more), using the HP 4300 SATA, a newer HP 4330 SAS, a THECUS NAS with SATA HDDs, an SSD in a THECUS NAS, and local mirrored SAS HDDs. All used iSCSI except the local SAS HDDs.
    You can see from the results that guests run considerably slower than the host. Any ideas?
    Sample data (although not always under the same background load conditions):
    From within Guest direct iSCSI to same (from HP 4330): 34MB/s
    From other Guest using VHD (from HP4300) to direct iSCSI (from HP4330): 14MB/s
    From within Guest using VHD (from HP4330) to same: 9MB/s
    From within Guest using VHD (from HP4300) to VHD (from HP4330): 11MB/s
    From within Guest using VHD (from HP4330) to VHD (from HP4330): 19MB/s
    From within Guest using VHD (from THECUS SSD) to same VHD (from THECUS SSD): 18MB/s
    From within Guest using VHD (from HP4300) to VHD (from THECUS SSD): 13MB/s
    From within Host from HP4300 iSCSI to HP4330 iSCSI: 12MB/s
    From within Host using VHD (from THECUS SSD) to same VHD (from THECUS SSD): 40MB/s
    From within Host from HP4330 iSCSI to same HP4330 iSCSI: 232MB/s & 132MB/s
    From within Host from HP4330 iSCSI to HP4300 iSCSI: 40MB/s & 57MB/s
    From within Host from HP4300 iSCSI to same HP4300 iSCSI: 26MB/s & 47MB/s
    From within Host from HP4300 iSCSI to Guest VHD (from THECUS SATA): 15MB/s

    Hi aucj,
    I would suggest using a host-level iSCSI connection and then building the VM on that iSCSI disk.
    Please add more vCPUs to that VM, shut down the other VMs, and try the copy again (within that running VM, from one folder to another).
    Is the HP 4330's iSCSI connection 1 Gbps?
    If so, a single iSCSI connection should max out at about 100 MB/s.
    Best Regards
    Elton Ji
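    The 100 MB/s figure in the reply follows directly from the link math. A quick sketch of that arithmetic (the 0.8 efficiency derating is an assumption, not a measured value):

```shell
# Rule-of-thumb ceiling of a 1 Gbps iSCSI path, and the time to copy a
# 40 GB VHD at that rate. 1 Gbps = 1000/8 MB/s, derated for protocol overhead.
awk 'BEGIN {
  rate = 1.0 * 1000 / 8 * 0.8          # MB/s on one 1 Gbps link
  mins = 40 * 1000 / rate / 60         # 40 GB VHD copy time in minutes
  printf "%.0f MB/s, 40 GB VHD in ~%.0f min\n", rate, mins
}'
```

    The guest-side results of 9-19 MB/s are far below this wire ceiling, which points at the virtual storage stack rather than the network.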

  • [SOLVED] Long time with excessive disk access before system reboot.

    I'd be grateful for some help here. It's my first go at Arch Linux, having used Xubuntu for several years. It may be that I'm missing something obvious, in which case I'd be happy if someone could point me in the right direction.
    Problem: When I do a system restart by issuing
    $ systemctl reboot
    I get the following output
    Sending SIGTERM to remaining processes...
    Sending SIGKILL to remaining processes...
    Unmounting file systems.
    Unmounted /sys/kernel/debug.
    Unmounted /dev/hugepages.
    Unmounted /dev/mqueue.
    Not all file systems unmounted, 1 left.
    Disabling swaps.
    Detaching loop devices.
    Detaching DM devices.
    Unmounting file systems.
    Not all file systems unmounted, 1 left.
    Cannot finalize remaining filesystems and devices, giving up.
    Successfully changed into root pivot.
    Unmounting all devices.
    Detaching loop devices.
    Diassembling stacked devices.
    mdadm: stopped /dev/md126
    [ 1654.867177] Restarting system.
    However, after the last line is printed, the system does not reboot immediately but hangs for about 2 minutes with heavy disk activity. I can't say if it is read or write or both, but the LED of my HDD is lit constantly. When this activity stops, the machine reboots.
    $ systemctl poweroff
    works as expected, i.e. shuts down immediately without excessive disk access.
    I see this behaviour both with the installed Arch system and when I run the live installation/recovery CD. It is also the same if I boot into the busybox rescue shell and then restart the machine from there. It also does not seem to matter whether any partition on the disk is mounted or not; the behaviour is always the same, with 2 min. of heavy activity before reboot.
    System setup:
    Sony Vaio VPZ13. Intel Core i5 M460, 4GB ram, 2x64GB SSD in RAID0 configuration via bios setting (a.k.a. fake raid), partitioned like:
    windows boot
    windows system
    linux swap
    linux "/"
    linux "/home"
    So it's a dual boot setup with Windows 7.
    The raid array is assembled by mdadm, and I have mdadm_udev among my mkinitcpio.conf hooks (after blocks but before filesystems).
    Snip from journalctl log showing actions when reboot has been issued:
    jan 18 12:24:23 wione systemd[1]: Stopping Sound Card.
    jan 18 12:24:23 wione systemd[1]: Stopped target Sound Card.
    jan 18 12:24:23 wione systemd[1]: Stopping Bluetooth.
    jan 18 12:24:23 wione systemd[1]: Stopped target Bluetooth.
    jan 18 12:24:23 wione systemd[1]: Stopping Graphical Interface.
    jan 18 12:24:23 wione systemd[1]: Stopped target Graphical Interface.
    jan 18 12:24:23 wione systemd[1]: Stopping Multi-User.
    jan 18 12:24:23 wione systemd[1]: Stopped target Multi-User.
    jan 18 12:24:23 wione systemd[1]: Stopping Login Prompts.
    jan 18 12:24:23 wione systemd[1]: Stopped target Login Prompts.
    jan 18 12:24:23 wione systemd[1]: Stopping Getty on tty1...
    jan 18 12:24:23 wione systemd[1]: Stopping Login Service...
    jan 18 12:24:23 wione login[333]: pam_unix(login:session): session closed for user root
    jan 18 12:24:23 wione login[333]: pam_systemd(login:session): Failed to connect to system bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
    jan 18 12:24:23 wione systemd[1]: Stopped D-Bus System Message Bus.
    jan 18 12:24:23 wione systemd[1]: Stopped Getty on tty1.
    jan 18 12:24:23 wione systemd[1]: Stopping Permit User Sessions...
    jan 18 12:24:23 wione systemd[1]: Stopped Permit User Sessions.
    jan 18 12:24:23 wione systemd[1]: Stopped Login Service.
    jan 18 12:24:23 wione systemd[1]: Stopping Basic System.
    jan 18 12:24:23 wione systemd[1]: Stopped target Basic System.
    jan 18 12:24:23 wione systemd[1]: Stopping Dispatch Password Requests to Console Directory Watch.
    jan 18 12:24:23 wione systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
    jan 18 12:24:23 wione systemd[1]: Stopping Daily Cleanup of Temporary Directories.
    jan 18 12:24:23 wione systemd[1]: Stopped Daily Cleanup of Temporary Directories.
    jan 18 12:24:23 wione systemd[1]: Stopping Sockets.
    jan 18 12:24:23 wione systemd[1]: Stopped target Sockets.
    jan 18 12:24:23 wione systemd[1]: Stopping D-Bus System Message Bus Socket.
    jan 18 12:24:23 wione systemd[1]: Closed D-Bus System Message Bus Socket.
    jan 18 12:24:23 wione systemd[1]: Stopping System Initialization.
    jan 18 12:24:23 wione systemd[1]: Stopped Setup Virtual Console.
    jan 18 12:24:23 wione systemd[1]: Unmounting Temporary Directory...
    jan 18 12:24:23 wione systemd[1]: Unmounted Temporary Directory.
    jan 18 12:24:23 wione systemd[1]: Unmounted /home.
    jan 18 12:24:23 wione systemd[1]: Starting Unmount All Filesystems.
    jan 18 12:24:23 wione systemd[1]: Reached target Unmount All Filesystems.
    jan 18 12:24:23 wione systemd[1]: Stopping Local File Systems (Pre).
    jan 18 12:24:23 wione systemd[1]: Stopped target Local File Systems (Pre).
    jan 18 12:24:23 wione systemd[1]: Stopping Remount Root and Kernel File Systems...
    jan 18 12:24:23 wione systemd[1]: Stopped Remount Root and Kernel File Systems.
    jan 18 12:24:23 wione systemd[1]: Starting Shutdown.
    jan 18 12:24:23 wione systemd[1]: Reached target Shutdown.
    jan 18 12:24:23 wione systemd[1]: Starting Save Random Seed...
    jan 18 12:24:23 wione systemd[1]: Starting Update UTMP about System Shutdown...
    jan 18 12:24:23 wione systemd[1]: Started Save Random Seed.
    jan 18 12:24:23 wione systemd[1]: Started Update UTMP about System Shutdown.
    jan 18 12:24:23 wione systemd[1]: Starting Final Step.
    jan 18 12:24:23 wione systemd[1]: Reached target Final Step.
    jan 18 12:24:23 wione systemd[1]: Starting Reboot...
    jan 18 12:24:23 wione systemd[1]: Shutting down.
    jan 18 12:24:23 wione systemd-journal[189]: Journal stopped
    -- Reboot --
    Since I have used Xubuntu without hassle for several years, I first thought the problem may be related to systemd reboot and something in my system setup. But I have tried the Fedora 17 live CD and rebooting there works as expected. So, since it works in one systemd distro, it should work with Arch as well.
    Then I thought that it maybe had something to do with the raid-array, something along the lines of
    https://bugzilla.redhat.com/show_bug.cgi?id=752593
    https://bugzilla.redhat.com/show_bug.cgi?id=879327
    But then I found the shutdown hook for mkinitcpio, and now I see that the array is stopped and disassembled. So that's not the problem either (or that's my guess, at least).
    Unfortunately I'm out of ideas. Any help would be appreciated.
    Last edited by wingbrant (2013-02-02 22:20:20)

    It turned out that the magic word for me was "reboot=pci" on the kernel command line. With that option set it works like a charm. The machine reboots nice and clean.
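    To make that survive kernel and config updates, reboot=pci belongs on the persistent kernel command line. A minimal sketch assuming GRUB (Arch supports other bootloaders too); the demo below edits a sample copy rather than the real /etc/default/grub, where you would follow up with grub-mkconfig -o /boot/grub/grub.cfg:

```shell
# Demo: append reboot=pci to GRUB_CMDLINE_LINUX_DEFAULT in a sample file.
# On a real system, edit /etc/default/grub and regenerate grub.cfg afterwards.
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"\n' > /tmp/grub.demo
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 reboot=pci"/' /tmp/grub.demo
cat /tmp/grub.demo
```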

  • Shared NFS Disk causing high Wait CPU on dd command

    Hi,
    we have mounted the partition /OVS on an NFS shared disk, and we are running a virtual machine from this shared disk.
    With the following command in the virtual machine, we measure the time needed to write a file to the NFS disk with block size = 1M. We also tried different block sizes.
    time dd if=/dev/zero of=/tmp/1GB.test bs=1M count=1024
    1024+0 records in
    1024+0 records out
    real     0m4.085s
    user     0m0.000s
    If we check the virtual machine's CPU usage during this process, Wait CPU time increases sharply, regardless of block size:
    http://img69.imageshack.us/i/highwaitcpu.png/
    On the other hand, executing the same command on a local disk, Wait CPU is really low and normal.
    Is this an NFS issue? Is this the right way to work? Do we have a wrong virtual machine configuration?
    Thanks in advance,
    Marc
    Edited by: Marc Caubet on Nov 23, 2009 8:20 AM

    Hi Herb,
    thanks for your fast reply. In theory, elapsed time using NFS should be lower than using the local disk, since the raw throughput is higher.
    I took some samples using NFS and got the following numbers:
    0.00user 3.75system 0:12.46elapsed 30%CPU (0avgtext+0avgdata 0maxresident)k
    1920inputs+2097160outputs (0major+159minor)pagefaults 0swaps
    0.00user 4.14system 0:08.83elapsed 46%CPU (0avgtext+0avgdata 0maxresident)k
    112inputs+2097152outputs (0major+159minor)pagefaults 0swaps
    0.00user 4.24system 0:04.46elapsed 94%CPU (0avgtext+0avgdata 0maxresident)k
    8inputs+2097152outputs (0major+159minor)pagefaults 0swaps
    0.00user 4.29system 2:06.02elapsed 3%CPU (0avgtext+0avgdata 0maxresident)k
    88inputs+2097160outputs (0major+160minor)pagefaults 0swaps
    0.00user 4.08system 0:04.14elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+2097160outputs (0major+159minor)pagefaults 0swaps
    0.00user 4.12system 0:04.43elapsed 92%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+2097152outputs (0major+160minor)pagefaults 0swaps
    0.00user 4.25system 0:04.34elapsed 97%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+2097152outputs (0major+160minor)pagefaults 0swaps
    0.00user 4.29system 3:44.51elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+2097160outputs (0major+160minor)pagefaults 0swaps
    0.00user 4.10system 0:04.18elapsed 97%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+2097160outputs (0major+159minor)pagefaults 0swaps
    0.00user 3.96system 0:04.00elapsed 98%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+2097152outputs (0major+159minor)pagefaults 0swaps
    0.00user 4.06system 0:04.16elapsed 97%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+2097152outputs (0major+159minor)pagefaults 0swaps
    0.00user 4.06system 0:04.15elapsed 97%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+2097152outputs (0major+160minor)pagefaults 0swaps
    Some examples with local disk:
    0.00user 3.52system 0:07.46elapsed 47%CPU (0avgtext+0avgdata 0maxresident)k
    96inputs+2097160outputs (0major+159minor)pagefaults 0swaps
    0.00user 3.77system 0:04.51elapsed 83%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+2097160outputs (0major+160minor)pagefaults 0swaps
    0.00user 3.85system 0:08.65elapsed 44%CPU (0avgtext+0avgdata 0maxresident)k
    104inputs+2097160outputs (0major+160minor)pagefaults 0swaps
    0.00user 3.80system 0:04.73elapsed 80%CPU (0avgtext+0avgdata 0maxresident)k
    8inputs+2097160outputs (0major+160minor)pagefaults 0swaps
    0.00user 3.90system 0:04.79elapsed 81%CPU (0avgtext+0avgdata 0maxresident)k
    16inputs+2097160outputs (0major+159minor)pagefaults 0swaps
    0.00user 3.91system 0:04.37elapsed 89%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+2097160outputs (0major+159minor)pagefaults 0swaps
    0.00user 4.03system 0:19.57elapsed 20%CPU (0avgtext+0avgdata 0maxresident)k
    160inputs+2097160outputs (0major+160minor)pagefaults 0swaps
    0.00user 3.44system 0:05.53elapsed 62%CPU (0avgtext+0avgdata 0maxresident)k
    0inputs+2097152outputs (0major+160minor)pagefaults 0swaps
    0.00user 3.44system 0:05.20elapsed 66%CPU (0avgtext+0avgdata 0maxresident)k
    8inputs+2097160outputs (0major+159minor)pagefaults 0swaps
    0.00user 3.43system 0:07.04elapsed 48%CPU (0avgtext+0avgdata 0maxresident)k
    120inputs+2097160outputs (0major+159minor)pagefaults 0swaps
    0.00user 3.42system 0:06.34elapsed 53%CPU (0avgtext+0avgdata 0maxresident)k
    56inputs+2097160outputs (0major+159minor)pagefaults 0swaps
    0.01user 3.59system 0:13.35elapsed 26%CPU (0avgtext+0avgdata 0maxresident)k
    160inputs+2097160outputs (1major+175minor)pagefaults 0swaps
    The elapsed time in the NFS case is sometimes really huge. That should mean network problems, am I right? Maybe we should increase bandwidth. Currently we have a 1 Gbps network connection shared among 10 hypervisors.
    Edited by: Marc Caubet on Nov 24, 2009 6:38 AM
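    One likely cause of the wild variance above (4 s vs. 3:44 for the same 1 GB write) is that dd without a flush completes into the page cache, and the actual write-out happens afterwards. A sketch of a more comparable test follows; the 64 MB size and /tmp path are just examples:

```shell
# conv=fdatasync makes dd wait until the data has actually been written out,
# so the reported time covers the real transfer, not just the page cache.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=64 conv=fdatasync
```

    On an NFS mount with sync semantics the run-to-run spread should shrink, since writes are pushed out as they happen.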

  • Firefox is using large amounts of CPU time and disk access, and I need to know how to shut down most of this so I can actually use the browser.

    Firefox is a very busy piece of software. It's using large amounts of CPU time and disk access. It puts my usage at low priority, so I have to wait for some time to be able to use my pointer or keyboard. I don't know what it uses all that CPU and disk access time for, but it's of no use to me. It often takes off with massive use of resources when I'm not doing anything, and I may not have use of my pointer for several minutes. How can I shut down most of this so I can use the browser to get my work done. I just want to use the web site access part of the software, and drop all the extra. I don't want Firefox to be able to recover after a crash. I just want to browse with a minimum of interference from Firefox. I would think that this is the most commonly asked question.

    Firefox consumes a lot of CPU resources
    * https://support.mozilla.com/en-US/kb/Firefox%20consumes%20a%20lot%20of%20CPU%20resources
    High memory usage
    * https://support.mozilla.com/en-US/kb/High%20memory%20usage
    Check these and tell us if it's working.

  • Time Capsule disk access SLOW!   2TB - Dual band

    I have a Time Capsule (2 TB, dual band, firmware 7.4.2).
    It's been working like a dream, backing up 3 Macs for the last 8 months.
    I also use it for connecting two or three other disks via USB and making those accessible over the LAN, and there is also about 100 GB of data on the Time Capsule disk itself that I use as a dump for files so I can get to them from anywhere on my LAN.
    Lately I started to notice EXTREMELY slow disk access, even when I am connected via Ethernet (1000Base-T), and equally when I am connected via Wi-Fi (n).
    For example, copying over a 2 MB photo can take minutes when it should take seconds.
    I've been trying to figure out what's going on here for a while, so I checked the sizes of the backup files, i.e. the sparse images that Time Machine creates.
    One of them was nearly 1 TB, another was around 500 GB, and the third was around 120 GB.
    That plus my data meant there was only about 300 GB free on the 2 TB disk.
    I have a feeling the slowness could be down to extreme fragmentation of the drive... ??
    So I decided to dump the backup files, copy off my other data, and do an erase of the disk and start again... but get this: I can't even delete the 1 TB file! I have been trying for about 3 days now and it just will not delete. I get the Deleting message box and it just sits there, literally ALL night...
    I know it isn't just hanging on the Mac (10.6.4) because deleting the other two images also took a long time (but not this long!).
    I can't just wipe the disk now because I can't get my other data off either. I have tried copying it over the network and onto a USB disk connected directly to the Time Capsule, but 3 days later I'm still copying files...
    To put some context into this, I copied over 1.8 GB today and it took about 4 hours...
    Has anyone had this problem before?? What on earth is going on?? Is it down to fragmentation, or something else?? The TC had been working fine up until a few weeks ago, when I started to notice the slow speeds.
    Any ideas?
    thanks
    Adam

    If anyone ever has this issue...
    It must have been EXTREME fragmentation. I bought another 2 TB external disk, connected it to the TC, and did a complete archive, which took around 24 hours... then I wiped the TC and put everything back... it now works fine again...

  • Error "NOTICE: [0] disk access failed" during guest domain network booting

    Hi,
    Could you please tell me what is the problem with my configuration?
    I created guest domain on my T1000 server.
    As a disk I used disk from disk array: /dev/dsk/c0t18d0
    I added disk using commands:
    # ldm add-vdsdev /dev/dsk/c0t18d0 vol1@primary-vds0
    # ldm add-vdisk vdisk1 vol1@primary-vds0 myldom1
    # ldm set-variable auto-boot\?=false myldom1
    # ldm set-variable boot-device=/virtual-devices@100/channel-devices@200/disk@0 myldom1
    Then I logged in to the guest domain and booted from the network to install the OS from a JumpStart server:
    {0} ok boot net - install
    Boot device: /virtual-devices@100/channel-devices@200/network@0 File and args: - install
    Requesting Internet Address for 0:14:4f:f9:78:19
    SunOS Release 5.10 Version Generic_137137-09 64-bit
    Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Configuring devices.
    NOTICE: [0] disk access failed.
    Checking rules.ok file...
    Using begin script: install_begin
    Using finish script: patch_finish
    Executing SolStart preinstall phase...
    Executing begin script "install_begin"...
    Begin script install_begin execution completed.
    ERROR: No disks found
    - Check to make sure disks are cabled and powered up
    Solaris installation program exited.
    Configuration:
    [root@gt1000a /]# ldm list-bindings
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    primary active -n-cv- SP 4 2G 0.5% 2h 23m
    MAC
    00:14:4f:9f:71:4e
    HOSTID
    0x849f714e
    VCPU
    VID PID UTIL STRAND
    0 0 5.3% 100%
    1 1 0.5% 100%
    2 2 0.5% 100%
    3 3 0.4% 100%
    MAU
    ID CPUSET
    0 (0, 1, 2, 3)
    MEMORY
    RA PA SIZE
    0x8000000 0x8000000 2G
    VARIABLES
    keyboard-layout=US-English
    IO
    DEVICE PSEUDONYM OPTIONS
    pci@780 bus_a
    pci@7c0 bus_b
    VCC
    NAME PORT-RANGE
    primary-vcc0 5000-5100
    CLIENT PORT
    myldom1@primary-vcc0 5000
    VSW
    NAME MAC NET-DEV DEVICE DEFAULT-VLAN-ID PVID VID MODE
    primary-vsw0 00:14:4f:fa:ca:94 bge0 switch@0 1 1
    PEER MAC PVID VID
    vnet0@myldom1 00:14:4f:f9:78:19 1
    VDS
    NAME VOLUME OPTIONS MPGROUP DEVICE
    primary-vds0 vol1 /dev/dsk/c0t18d0
    CLIENT VOLUME
    vdisk1@myldom1 vol1
    VCONS
    NAME SERVICE PORT
    SP
    NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
    myldom1 active -n---- 5000 12 2G 0.1% 2h 18m
    MAC
    00:14:4f:f9:e7:ae
    HOSTID
    0x84f9e7ae
    VCPU
    VID PID UTIL STRAND
    0 4 0.5% 100%
    1 5 0.0% 100%
    2 6 0.0% 100%
    3 7 0.0% 100%
    4 8 0.0% 100%
    5 9 0.0% 100%
    6 10 0.0% 100%
    7 11 0.0% 100%
    8 12 0.0% 100%
    9 13 0.0% 100%
    10 14 0.0% 100%
    11 15 0.0% 100%
    MEMORY
    RA PA SIZE
    0x8000000 0x88000000 2G
    VARIABLES
    auto-boot?=false
    boot-device=/virtual-devices@100/channel-devices@200/disk@0
    NETWORK
    NAME SERVICE DEVICE MAC MODE PVID VID
    vnet0 primary-vsw0@primary network@0 00:14:4f:f9:78:19 1
    PEER MAC MODE PVID VID
    primary-vsw0@primary 00:14:4f:fa:ca:94 1
    DISK
    NAME VOLUME TOUT DEVICE SERVER MPGROUP
    vdisk1 vol1@primary-vds0 disk@0 primary
    VCONS
    NAME SERVICE PORT
    myldom1 primary-vcc0@primary 5000
    [root@gt1000a /]#
    Kind regards,
    Daniel

    Issue solved.
    There was a wrong disk name:
    primary-vds0 vol1 /dev/dsk/c0t18d0
    I changed it to c0t18d0s2 (the whole-disk slice) and successfully installed the OS from JumpStart.

  • Disk Access

    I am wondering about disk access. I have a Toshiba 505-890 running Win 7.
    I notice that I am getting a disk access about every second, as per the disk access light coming on.
    On my Win XP machine I don't get a light unless the computer is doing something, for the most part. It's definitely not accessing every second.
    I was wondering if this behavior is endemic to Win 7 or due to some Toshiba program I have not removed.
    If I do a clean install of Win 7, will I still have this problem?

    This program is not giving me much info on what is doing the disk access.
    I have attached a pic of what it comes up with.
    Attachments:
    Capture.JPG.txt (157 KB)

  • USB Disk access "Extremely" Slow

    I am unable to use my network disk, essentially because the disk read/write times are SO SLOW... I am better off running backups by connecting the disk directly to the computer. Does anyone else have this problem?
    It took 5 minutes to read my Music directory; connected directly via the same cable, it takes seconds.
    What gives?
    Any help would be appreciated.

    I got my AE and Seagate FreeAgent Pro hard drive yesterday, and I have the exact same problem.
    If I hook up the drive to the computer directly, it is very, very fast.
    If I hook it up to the AE, it takes about 2 hours to move a couple of files totaling 2 GB over ETHERNET!
    If I transfer the same files over the same network between 2 computers, the speed is much, much faster.
    Is this normal or some nasty firmware bug? It almost feels like the AE is using USB 1.0 for some reason.
    You guys are funny. How about checking out the specs of USB 2.0 and 802.11g/n?
    Go here:
    http://en.wikipedia.org/wiki/802.11g#802.11g
    and here:
    http://en.wikipedia.org/wiki/USB_2.0
    Firewire vs. USB tests are here:
    http://www.g4tv.com/techtvvault/features/39129/USB20_Versus_FireWirepg3.html
    802.11g: released June 2003; 2.4 GHz; typical data rate 19 Mbit/s; max 54 Mbit/s; indoor range ~35 meters.
    802.11n: released mid-2008; 5 GHz and/or 2.4 GHz; typical data rate 74 Mbit/s; max 248 Mbit/s (2 streams); indoor range ~70 meters.
    Bottom line: 802.11n typical is 74Mbps, 802.11g typical is 19Mbps. USB 2.0 typical is 50Mbps.
    On average, you should expect your speed to drop by over 50% when connecting the USB drive to the base station over an 802.11g link. If your device is 802.11n, then you should see pretty much similar performance.
    Now, if you increase the distance from your base station, or put walls in between, the performance of 802.11g goes down dramatically, easily all the way to 2-3 Mbit/s. Not so for 802.11n, which behaves much better at longer distances.
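    The Mbit vs. MB distinction does a lot of work in these numbers. A quick conversion of the typical rates quoted above, plus a 2 GB transfer estimate at each (arithmetic only; the rates are the ones in the reply):

```shell
# Convert typical link rates from Mbit/s to MB/s (divide by 8) and estimate
# how long a 2 GB (2000 MB) transfer takes at each rate.
awk 'BEGIN {
  split("802.11g 19 802.11n 74 USB2.0 50", f)
  for (i = 1; i <= 6; i += 2)
    printf "%s: %.1f MB/s, 2 GB in %.0f min\n", f[i], f[i+1]/8, 2000/(f[i+1]/8)/60
}'
```

    For comparison, the reported 2 GB in 2 hours works out to roughly 2 Mbit/s effective - right at the degraded-802.11g floor mentioned at the end.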

  • Slow hard disk access --- still wonky

    Hi,
    Recently I've noticed that hard disk access has become very laggy, to the point where it's driving me crazy.
    For example, if I want to tab-complete through my directories, I have to wait a few seconds each time. Similarly with saving files in vim, or just using Firefox, which seems to suffer frequent hangs while the disk is spinning.
    I tried downgrading the kernel to 3.4.something, to no avail (it definitely used to work just fine with the old kernels). I've also tried adding "commit=60" to my fstab to reduce journalling access.
    I ran bonnie++ and the following results came back:
    Version 1.03e ------Sequential Output------ --Sequential Input- --Random-
    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
    mattdell 7672M 102163 88 106118 6 38973 3 83155 67 124734 4 207.5 0
    ------Sequential Create------ --------Random Create--------
    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
    files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
    16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    mattdell,7672M,102163,88,106118,6,38973,3,83155,67,124734,4,207.5,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
    which (by comparison with other results I've seen online) seems to indicate there's nothing particularly wrong with the disk (it's a Toshiba 7200 rpm, I think).
    So I'm at a bit of a loss what to do next. I've attached my dmesg output. If anyone has any suggestions, that would be awesome.
    Dmesg output: http://pastebin.com/kJcbZVBT
    Thanks,
    Matt
    Last edited by yourealwaysbe (2012-11-11 14:41:24)

    Arf -- I noticed Firefox was still laggy last night, and on a (second or third) reboot this morning, things are back to being laggy even without having run Firefox...
    I'm not sure where to look, but here's the output of mount, if that will be of any use:
    proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
    sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
    dev on /dev type devtmpfs (rw,nosuid,relatime,size=1962404k,nr_inodes=490601,mode=755)
    run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
    /dev/sda3 on / type ext4 (rw,relatime,data=ordered)
    securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
    tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
    devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
    tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,mode=755)
    cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
    cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
    cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
    cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
    cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
    cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
    cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
    cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
    systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=27,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
    hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
    mqueue on /dev/mqueue type mqueue (rw,relatime)
    binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
    debugfs on /sys/kernel/debug type debugfs (rw,relatime)
    tmpfs on /tmp type tmpfs (rw,nosuid,nodev,relatime)
    /dev/sda1 on /boot type ext2 (rw,relatime)
    /dev/sda4 on /home type ext4 (rw,relatime,data=ordered)
    All suggestions appreciated.
    edit: also, nothing untoward reported by top (I don't think):
    top - 15:52:02 up 16 min, 0 users, load average: 0.36, 0.41, 0.30
    Tasks: 104 total, 2 running, 102 sleeping, 0 stopped, 0 zombie
    %Cpu(s): 0.1 us, 0.2 sy, 0.0 ni, 99.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
    KiB Mem: 3930516 total, 779240 used, 3151276 free, 51616 buffers
    KiB Swap: 2626620 total, 0 used, 2626620 free, 285284 cached
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    1 root 20 0 32552 3432 1924 S 0.0 0.1 0:00.51 systemd
    2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
    3 root 20 0 0 0 0 S 0.0 0.0 0:00.04 ksoftirqd/0
    4 root 20 0 0 0 0 S 0.0 0.0 0:00.14 kworker/0:0
    5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
    7 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/u:0H
    8 root rt 0 0 0 0 S 0.0 0.0 0:00.00 migration/0
    9 root rt 0 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
    10 root rt 0 0 0 0 S 0.0 0.0 0:00.00 migration/1
    The lag tends to occur when first tabbing into a directory.  On the second
    tab-through things seem to be fast -- I guess that's cached somewhere.
    editedit: i also tried switching back to initscripts, with no improvement, so i guess systemd is off the hook for the remaining problems
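    The cold/warm effect described above (slow first access, fast afterwards) can be reproduced with a quick sketch: traverse a directory tree twice and compare timings. This uses a throwaway temp tree, so no paths from the original post are assumed.

```python
# Sketch: the second traversal of a directory tree is served from the
# kernel's dentry/inode caches and is typically faster than the first.
import os
import tempfile
import time

def walk_time(path):
    """Return the wall-clock time of one full os.walk() traversal."""
    t0 = time.perf_counter()
    for _root, _dirs, _files in os.walk(path):
        pass
    return time.perf_counter() - t0

with tempfile.TemporaryDirectory() as top:
    # build a small tree: 50 directories, one file each
    for i in range(50):
        d = os.path.join(top, f"dir{i:02d}")
        os.mkdir(d)
        open(os.path.join(d, "f.txt"), "w").close()
    cold = walk_time(top)   # first pass: may hit the disk
    warm = walk_time(top)   # second pass: served from cache
    print(f"cold={cold:.6f}s warm={warm:.6f}s")
```

    On a freshly created temp tree both passes may already be warm, so treat the numbers as illustrative; on a real spinning disk after boot the difference is usually pronounced.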
    Last edited by yourealwaysbe (2012-11-11 15:13:57)

  • Extremely slow disk access iMac5.1

    Hello, my daughter is having problems with her computer. She has an iMac5,1 (Intel) running 10.4.11.
    In the process of trying to determine the problem, I installed Xbench on her computer. The numbers for the disk access category are dismal; all the others (although low) appear normal.
    The disk values are:
    Sequential                         0.03
        Uncached Write            0.01        0.01 MB/sec [4K blocks]
        Uncached Write            0.17        0.10 MB/sec [256K blocks]
        Uncached Read            60.40       17.68 MB/sec [4K blocks]
        Uncached Read            86.73       43.59 MB/sec [256K blocks]
    Random                             1.35
        Uncached Write           15.89        0.01 MB/sec [4K blocks]
        Uncached Write            0.35        0.10 MB/sec [256K blocks]
        Uncached Read            63.58       17.68 MB/sec [4K blocks]
        Uncached Read            99.67       43.59 MB/sec [256K blocks]
    Apparently sequential access is much worse than random access, yet Apple's utilities as well as Onyx claim that the disk is fine, while the computer is nearly unusable. What else can I try? I ran fsck on the disk, and I also repaired permissions. I am running out of ideas.
    BTW, her disk is a WDC WD160JS-40TGB0. Out of 148 GB, about 110 are used and 40 are free.
    Any Ideas? Any help is greatly appreciated.
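    Some back-of-the-envelope arithmetic on the Xbench figures above shows why the machine feels unusable: at the measured sequential write rates, even modest writes take minutes.

```python
# Rough arithmetic on the reported sequential write rates.
rates_mb_per_s = {
    "sequential 4K write": 0.01,
    "sequential 256K write": 0.10,
}
for name, rate in rates_mb_per_s.items():
    seconds = 100 / rate  # time to write just 100 MB at this rate
    print(f"{name}: 100 MB takes {seconds:,.0f} s ({seconds / 60:.0f} min)")
```

    A healthy drive of that era writes sequentially at tens of MB/sec (compare the reply below), so figures this low point at the drive or its controller rather than at software.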

    How much RAM does the iMac have and how many Applications are running concurrently or set to open at login?
    40GB should be ample free space for virtual memory, but it could definitely get bogged down if there is too little RAM for the user's needs.
    Sometimes one will inadvertently set a bunch of stuff to automatically open at login. Go to: Apple > System Preferences > Accounts > Login Items, remove any unnecessary applications set to open automatically at login, and then restart the computer.
    Then again, it could be that the HD is just suffering from age, if it has been thrashed by continual page outs due to a lack of system memory. For comparison, my early-2006 iMac4,1 with 2GB of RAM, which I upgraded (for space reasons) to a WD 320GB hard drive in 2009, is getting:
    Disk Test                        55.08
    Sequential                       47.67
        Uncached Write          177.90      109.23 MB/sec [4K blocks]
        Uncached Write          175.61       99.36 MB/sec [256K blocks]
        Uncached Read            14.73        4.31 MB/sec [4K blocks]
        Uncached Read           211.81      106.45 MB/sec [256K blocks]
    Random                           65.23
        Uncached Write           25.42        2.69 MB/sec [4K blocks]
        Uncached Write          182.78       58.51 MB/sec [256K blocks]
        Uncached Read            93.31        0.66 MB/sec [4K blocks]
        Uncached Read           172.38       31.99 MB/sec [256K blocks]
    Dennis

  • Fast User Switching-- Disk Access Error

    I have created a Standard User Account with no Parental Controls. I have not had any problems whatsoever logging in to iVisit (video conferencing) as an Admin, but when I log into the Standard account, and try to log into iVisit (using the same log in information as my Admin account, and this is what iVisit suggests I can do) I get the following error message when I click the log in button:
    Disk access error. Check free space and write permissions.
    I've tried logging out of my Admin account first, then logging into the Standard account before I try iVisit, but that didn't work. I've opened up the Standard account as much as possible (I think) to remove any restrictions, and even selected the option to allow user to administer this computer, but that didn't work either.
    Another piece to this puzzle, is that iVisit Help suggested that I install iVisit under the Standard User, just to be safe. But during the install process, the installer quits, and I get an error message that says it can't continue.
    Does anybody know what's going on? I really appreciate any help! Thanks
    Powerbook G4 1.33 Mhz   Mac OS X (10.4.3)   External OWC, 512 RAM, iSight

    I don't know anything about iVisit, but it sounds like it may not be behaving exactly as an OS X installer should — placing files in spots that only admins can access. I would try this as a next step:
    1. Uninstall iVisit.
    2. Make your standard account an admin.
    3. Try to install iVisit through the newly admin'd standard account. Hopefully this avoids the error/crash.
    4. Confirm that iVisit works through this account.
    5. Switch it back to a standard account and test again.
    I don't know if this will fix your problem, but it's worth a shot and may help narrow down the cause of the problem.

  • Disk Access Failed while Installing Solaris Container.

    I have set up 5 guest domains together with the Control Domain.
    $ ldm list
    Name State Flags Cons VCPU Memory Util Uptime
    primary active -t-cv SP 4 4G 0.8% 2d 2h 15m
    secondary active -t--v 5000 4 2G 0.5% 3h 5m
    dmz active -t--- 5001 8 2G 0.0% 46m
    sunray inactive -----
    application inactive -----
    identity active -t--- 5002 4 4G 0.1% 1h 2m
    In each of the guest domains, I plan to install a number of Solaris Containers to run different applications. While installing one Solaris Container (zoneadm -z <zone name> install) in the dmz domain, I start the installation/configuration (either zoneadm -z <zone name> install or zlogin -C <zone name>) of another Solaris Container in the identity domain. Everything starts OK until, half way through, I get the following error message:
    Jun 13 18:33:16 dmz vdc: NOTICE: [1] disk access failed.
    The installation of the Solaris Container in dmz halts as soon as I see the message above. Nothing will work except a force stop in the Control Domain using ldm stop -f dmz and a restart.
    What happens to the Solaris Container installation/configuration in the identity domain? It either continues the process without error or fails with the same error message shortly after. For example:
    Jun 13 18:33:24 identity vdc: NOTICE: [1] disk access failed.
    The vdisks for each guest domain are set up by following the steps in the section "Using ZFS Over a Virtual Disk" in the LDoms 1.0 Administration Guide. Each guest domain is booted from a disk image in the Control Domain and has two 9GB data vdisks (mapped to two physical 9GB disks on an A5200 disk array) running ZFS striping. Each guest domain's vdisks are serviced by a separate vdisk server.
    Any idea?

    I am not sure if multiple virtual disk servers are officially supported under current LDoms release.
    I had tried using only one virtual disk server, and the same problem occurred. I thought maybe the vds was not able to keep up with the virtual disk I/O; that's why I set up multiple vds.

  • PerfMon reporting dramatic disk access time increase on Oracle startup

    Hi,
    My Oracle 10g (10.2.0.4) database is hosted on a Windows 2003 server.
    The datafiles are stored on a RAID1 disk array, on a dedicated partition: currently 30 GB free out of 180, which should not be a concern unless I'm wrong, because the datafiles were created as 10 GB files with no autogrowth. I add a new datafile whenever I need more room for my tables (alert at 80% used).
    For the past two days I have experienced a dramatic performance loss:
    The EM console reports nothing special (no alarms related to storage) apart from the need for more paged memory.
    I issue a reorg when the segmentation advisor suggests it.
    My optimizer statistics are calculated by the default scheduled job.
    The weird thing I noticed is that as soon as I start the database, there's a huge increase in disk activity even though no query at all is submitted to the database.
    PerfMon reports Current Disk Queue Length > 1000 and disk access time > 3000 ms
    CPU activity is 2% on the 4-CPU server.
    I have plenty of spare memory (currently 3 GB used out of 16).
    This is only a dev server for ETL processes, it has very few concurrent connections.
    Any suggestions welcome.
    AWR report is available here
    http://min.us/mqnXQhd5Z
    Edited by: user10799939 on 22 Mar 2012 09:30
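    The fixed-size-datafile policy described in the post (add a new 10 GB file when a tablespace passes 80% used) can be sketched as a simple check. The sizes below are illustrative, not taken from the actual instance.

```python
# Sketch of the 80%-used alert rule for fixed-size datafiles.
ALERT_PCT = 80  # add a new fixed-size datafile past this fill level

def needs_new_datafile(used_gb, total_gb, alert_pct=ALERT_PCT):
    """True when the tablespace has crossed the alert threshold."""
    return used_gb / total_gb * 100 >= alert_pct

print(needs_new_datafile(150, 180))  # 83% used -> time to add a file
print(needs_new_datafile(100, 180))  # 56% used -> still fine
```

    With autoextend off, this kind of external alerting is what keeps loads from failing on a full tablespace; the trade-off is that growth is an explicit, monitored step.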

    Cache Sizes
    ~~~~~~~~~~~                       Begin        End
                   Buffer Cache:     1,296M     1,296M  Std Block Size:         8K
               Shared Pool Size:       160M       160M      Log Buffer:    14,364K
    Load Profile
    ~~~~~~~~~~~~                            Per Second       Per Transaction
                      Redo size:            460,955.72 ;         2,477,358.63
                  Logical reads:              3,392.16 ;            18,230.80
                  Block changes:              6,451.93 ;            34,675.22
                 Physical reads:                  2.92 ;                15.67
                Physical writes:                394.52 ;             2,120.28
                     User calls:                  1.69 ;                 9.08
                         Parses:                  3.31 ;                17.81
                    Hard parses:                  0.17 ;                 0.90
                          Sorts:                  1.32 ;                 7.09
                         Logons:                  0.06 ;                 0.31
                       Executes:                  7.01 ;                37.68
                   Transactions:                  0.19
      % Blocks changed per Read:  190.20 ;   Recursive Call %:    96.23
    Rollback per transaction %:    0.30 ;      Rows per Sort:    14.41
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:   99.98 ;      Redo NoWait %:   99.86
                Buffer  Hit   %:   99.92 ;   In-memory Sort %:  100.00
                Library Hit   %:   96.30 ;       Soft Parse %:   94.96
             Execute to Parse %:   52.74 ;        Latch Hit %:   99.07
    Parse CPU to Parse Elapsd %:    0.35 ;    % Non-Parse CPU:   99.30
    Shared Pool Statistics        Begin    End
                 Memory Usage %:   75.48 ;  75.51
        % SQL with executions>1:   79.92 ;  85.03
      % Memory for SQL w/exec>1:   77.07 ;  70.09
    Top 5 Timed Events                                         Avg %Total
    ~~~~~~~~~~~~~~~~~~                                        wait   Call
    Event                                 Waits    Time (s)   (ms)   Time Wait Class
    db file sequential read               9,052      17,688   1954   51.3 ;  User I/O
    log file switch (checkpoint in        5,303       4,649    877   13.5 Configurat
    log file switch completion            4,245       4,023    948   11.7 Configurat
    wait for a undo record               32,393       3,531    109   10.3 ;     Other
    db file parallel write               18,771       3,437    183   10.0 System I/O
    I haven't seen this much wait on average. For example, 877 ms for "log file switch" is over the threshold, and so are other wait events.
    Time Model Statistics                DB/Inst: MDMPRJ/MDMPRJ  Snaps: 2840-2841
    -> Total time in database user-calls (DB Time): 34446.5s
    -> Statistics including the word "background" measure background process
       time, and so do not contribute to the DB time statistic
    -> Ordered by % or DB time desc, Statistic name
    Statistic Name                                       Time (s) % of DB Time
    sql execute elapsed time                              4,008.5 ;        11.6
    parse time elapsed                                      352.9 ;         1.0
    hard parse elapsed time                                 352.7 ;         1.0
    PL/SQL compilation elapsed time                         120.1 ;          .3
    DB CPU                                                   61.8 ;          .2
    failed parse elapsed time                                21.3 ;          .1
    PL/SQL execution elapsed time                             8.0 ;          .0
    connection management call elapsed time                   0.0 ;          .0
    hard parse (sharing criteria) elapsed time                0.0 ;          .0
    repeated bind elapsed time                                0.0 ;          .0
    hard parse (bind mismatch) elapsed time                   0.0 ;          .0
    DB time                                              34,446.5 ;         N/A
    background elapsed time                              14,889.7 ;         N/A
    background cpu time                                      39.0 ;         N/A
    Wait Class                            DB/Inst: MDMPRJ/MDMPRJ  Snaps: 2840-2841
    -> s  - second
    -> cs - centisecond -     100th of a second
    -> ms - millisecond -    1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc
                                                                      Avg
                                           %Time       Total Wait    wait     Waits
    Wait Class                      Waits  -outs         Time (s)    (ms)      /txn
    User I/O                       10,515     .1           17,785    1691      15.8
    Configuration                  10,186   79.5 ;           8,865     870      15.3
    System I/O                     27,619     .0            8,774     318      41.6
    Other                          57,768   98.3 ;           6,915     120      87.0
    Commit                          2,634   88.6 ;           2,481     942       4.0
    Concurrency                     2,847   75.4 ;           2,240     787       4.3
    Application                       219    2.3 ;              23     105       0.3
    Network                         4,790     .0                0       0       7.2
              -------------------------------------------------------------
    Again, there is a very high wait on User I/O.
    Wait Events                          DB/Inst: MDMPRJ/MDMPRJ  Snaps: 2840-2841
    -> s  - second
    -> cs - centisecond -     100th of a second
    -> ms - millisecond -    1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc (idle events last)
                                                                       Avg
                                                 %Time  Total Wait    wait     Waits
    Event                                 Waits  -outs    Time (s)    (ms)      /txn
    db file sequential read               9,052     .0      17,688    1954      13.6
    log file switch (checkpoint           5,303   78.0 ;      4,649     877       8.0
    log file switch completion            4,245   89.2 ;      4,023     948       6.4
    wait for a undo record               32,393   99.8 ;      3,531     109      48.8
    db file parallel write               18,771     .0       3,437     183      28.3
    wait for stopper event to be         24,203   99.8 ;      2,634     109      36.5
    log file sync                         2,634   88.6 ;      2,481     942       4.0
    control file sequential read          7,356     .0       2,431     330      11.1
    buffer busy waits                     2,513   83.1 ;      2,173     865       3.8
    log file parallel write                 520     .0       1,566    3012       0.8
    control file parallel write             840     .0       1,334    1588       1.3
    rdbms ipc reply                         172   91.3 ;        330    1916       0.3
    enq: CF - contention                    309   23.0 ;        268     867       0.5
    log buffer space                        638   28.5 ;        192     301       1.0
    enq: PS - contention                     52   23.1 ;         71    1362       0.1
    db file scattered read                  113     .0          67     590       0.2
    os thread startup                        76   77.6 ;         63     834       0.1
    reliable message                         57   78.9 ;         50     878       0.1
    enq: RO - fast object reuse              22   22.7 ;         23    1038       0.0
    latch free                              537     .0          16      30       0.8
    Streams AQ: qmn coordinator               3  100.0          15    5005       0.0
    This one is overstepping the threshold too.
    Background Wait Events               DB/Inst: MDMPRJ/MDMPRJ  Snaps: 2840-2841
    -> ordered by wait time desc, waits desc (idle events last)
                                                                       Avg
                                                 %Time  Total Wait    wait     Waits
    Event                                 Waits  -outs    Time (s)    (ms)      /txn
    db file parallel write               18,772     .0       3,437     183      28.3
    events in waitclass Other            24,367   99.5 ;      3,010     124      36.7
    control file sequential read          6,654     .0       2,333     351      10.0
    log file parallel write                 520     .0       1,566    3012       0.8
    control file parallel write             840     .0       1,334    1588       1.3
    buffer busy waits                       899   94.2 ;        884     984       1.4
    log file switch (checkpoint             206   82.0 ;        185     898       0.3
    os thread startup                        76   77.6 ;         63     834       0.1
    log file switch completion               46   93.5 ;         45     982       0.1
    log buffer space                        158   31.0 ;         12      77       0.2
    db file sequential read                  62     .0           7     111       0.1
    db file scattered read                   20     .0           6     318       0.0
    direct path read                        660     .0           5       7       1.0
    log file sequential read                 66     .0           4      65       0.1
    log file single write                    66     .0           1      16       0.1
    enq: RO - fast object reuse               2     .0           0      38       0.0
    latch: cache buffers chains               3     .0           0       6       0.0
    direct path write                       660     .0          -5      -8       1.0
    rdbms ipc message                     9,052   87.5 ;     21,399    2364      13.6
    pmon timer                            1,318   90.4 ;      3,562    2703       2.0
    Streams AQ: qmn coordinator             633   97.6 ;      3,546    5602       1.0
    Streams AQ: waiting for time             77   61.0 ;      3,449   44795       0.1
    PX Deq: Join ACK                         21     .0           0       0       0.0
    Again the averages are overshooting.
    Tablespace IO Stats                  DB/Inst: MDMPRJ/MDMPRJ  Snaps: 2840-2841
    -> ordered by IOs (Reads + Writes) desc
    Tablespace
                     Av      Av     Av                       Av     Buffer Av Buf
             Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
    UNDOTBS1
               914       0 ######     1.0 ;   1,368,515      383      2,534  863.2
    MDMREF_INDICES
             6,918       2 ######     1.0 ;      11,086        3          0    0.0
    SYSAUX
               626       0 ######     1.1 ;       1,804        1          0    0.0
    SYSTEM
               850       0 ######     1.7 ;         296        0          0    0.0
    MDMREF_DATA
               293       0  712.3 ;    1.0 ;         274        0          0    0.0
    MDMPRJ_ODS
               198       0   72.1 ;    1.0 ;         198        0          0    0.0
    FEU_VERT
                33       0   61.5 ;    1.0 ;          33        0          0    0.0
    USERS
                33       0   31.5 ;    1.0 ;          33        0          0    0.0
              -------------------------------------------------------------
    Now have a serious look at Av Rd(ms). For some tablespaces the value cannot even fit in the column width; that is why it shows ######.
    According to Oracle's recommendation, Av Rd(ms) shouldn't be greater than 20; if it goes over 20, it is considered an issue with the I/O subsystem. As seen above, in your case it is far beyond that.
    Now a question from my side:
    Have you made any configuration changes?
    I would suggest you revert any such changes ASAP and contact your storage admins...
    Hope this helps
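    As a sketch, here is the >20 ms rule of thumb applied to the tablespace figures quoted above (values transcribed from the report; rows shown as '######' are off-scale and treated as failing):

```python
# Apply the Av Rd(ms) <= 20 rule of thumb to the reported tablespaces.
THRESHOLD_MS = 20  # Oracle's usual guideline for average read time

av_rd_ms = {  # tablespace -> average read time in ms (None = '######')
    "UNDOTBS1": None, "MDMREF_INDICES": None, "SYSAUX": None,
    "SYSTEM": None, "MDMREF_DATA": 712.3, "MDMPRJ_ODS": 72.1,
    "FEU_VERT": 61.5, "USERS": 31.5,
}
for name, ms in av_rd_ms.items():
    slow = ms is None or ms > THRESHOLD_MS
    print(f"{name:15s} {'I/O problem' if slow else 'ok'}")
```

    Every tablespace fails the check, which is consistent with a storage-level fault rather than a single hot segment.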
