Hyper-V Virtual Storage Device - Read and Write Operations/Sec

Hi,
Everything is running Windows Server 2012 R2.
SCOM does not gather any data for the following performance counters:
Hyper-V Virtual Storage Device - Write Operations/Sec
Hyper-V Virtual Storage Device - Read Operations/Sec
Hyper-V Management Pack Extensions 2012 / 2012 R2 are imported.
If I add the counters in PerfMon, it works.
Any idea why? I've tried in several environments, DEV, QA and PROD, but it does not work anywhere.
Thanks!

To monitor Hyper-V Virtual Storage Device - Read and Write Operations/Sec, you can refer to the link below:
http://www.aidanfinn.com/?p=15386
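If you want to rule out the counters themselves, you can also sample them from code on the host and compare with what SCOM collects. A minimal C# sketch, assuming it runs directly on the Hyper-V host (category and counter names exactly as they appear in PerfMon; instance names are discovered at run time):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CounterCheck
    {
        static void Main()
        {
            var category = new PerformanceCounterCategory("Hyper-V Virtual Storage Device");
            foreach (string instance in category.GetInstanceNames())
            {
                using (var read = new PerformanceCounter(
                    "Hyper-V Virtual Storage Device", "Read Operations/Sec", instance))
                using (var write = new PerformanceCounter(
                    "Hyper-V Virtual Storage Device", "Write Operations/Sec", instance))
                {
                    // Rate counters need two samples; the first call just primes them.
                    read.NextValue();
                    write.NextValue();
                    Thread.Sleep(1000);
                    Console.WriteLine("{0}: read={1:F1}/s write={2:F1}/s",
                        instance, read.NextValue(), write.NextValue());
                }
            }
        }
    }

If this prints values but SCOM still collects nothing, the problem is on the SCOM side (rule configuration or agent) rather than with the counters themselves.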

Similar Messages

  • What does this actually mean and how do I fix it: Alert: Error Count Monitor Resolution state Object Hyper-V Virtual Storage Device Has a value

    We are getting this alert on a fair few of our VMs with VHDXs and Dynamic VHDs. Everything seems OK but I am not sure what this actually means and what I need to do to resolve the issue. How do I reset the error count if that is what is required? Thanks
    in advance.
    Alert: Error Count Monitor
    Source: MyVm01
    Path: MyHost.MyDomain.local;MyHost.MyDomain.local;FE71577B-A2E2-45C0-B757-2FBCEC9311DE
    Last modified by: System
    Last modified time: 2/9/2013 2:08:48 PM
    Alert description: Instance c:-clusterstorage-volume1-MyVm01-virtual hard disks-MyVm01-DATA02.vhdx
                Object Hyper-V Virtual Storage Device
                Counter Error Count
                Has a value 9
                At time 2013-02-09T14:08:48.0000000+00:00
    Darren

But I am getting this alert from SCOM, and SCOM gives me no information about the alert to help me work out what to do - I thought the point of SCOM was to let you know about problems and how to resolve them. :)
    The alert is coming from the Error Count Monitor that is part of the Hyper-V Management Pack Extensions (v 4.0.0.0)
I have tried looking in the event logs on the host and there don't seem to be any storage-related errors there. I am trying to establish whether this is a false positive, why it is happening, and whether it is safe to override and ignore.
    There is nothing on the Product Knowledge tab and nothing on the Alert Context other than what I have already mentioned (see below).
    Thanks for responding.
    Time Sampled: 09/02/2013 14:08:48
    Object Name: Hyper-V Virtual Storage Device
    Counter Name: Error Count
    Instance Name: c:-clusterstorage-volume1-myvm-virtual hard disks-MyVM-DATA02.vhdx
    Value: 9
    Darren

  • Slow read and write operations on DAQmx

    I am trying to build a feedback control system using PCI-6052E and PCI-6722 cards, so that the computation of the control algorithm is performed on the computer's CPU. I am trying to reach a 1 kHz sampling rate. It turns out that the bottleneck of my system is the read and write operations from and to the cards, which consume a lot of processor time.
    Example code (C#) showing how the reads and writes are implemented is attached. In my tests, reading 1000 samples on 6 channels takes 7.58 s and writing takes 4.69 s. Is there any way to improve the performance?
    The program is running on Windows XP on a 1000 MHz processor.
    Attachments:
    DAQmxPerformanceTest.cs ‏3 KB

    Petteri,
    I don't have the hardware to reproduce this, but I have a few ideas. For analog output, are you creating a task, starting it, and calling write repeatedly, or are you simply calling write? While an AO task will auto-start on write, it will also go through the process of stopping when the write is complete, which means that the next time you call write, the task will need to start again. It will be much more efficient if you explicitly call start on the task once, perform as many writes as required, and stop/clear the task when you are done. The same principle applies to your analog input reads as well.
    I hope this helps,
    Dan
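    To make Dan's point concrete, here is a rough sketch of the start-once pattern using the NI-DAQmx .NET API (the device/channel name "Dev1/ao0" and the computed sine output are placeholders, not from the original post):

    using System;
    using NationalInstruments.DAQmx;

    class AOExample
    {
        static void Main()
        {
            using (Task aoTask = new Task())
            {
                aoTask.AOChannels.CreateVoltageChannel("Dev1/ao0", "",
                    -10.0, 10.0, AOVoltageUnits.Volts);
                AnalogSingleChannelWriter writer =
                    new AnalogSingleChannelWriter(aoTask.Stream);

                aoTask.Start();  // start once, outside the loop
                for (int i = 0; i < 1000; i++)
                {
                    // autoStart: false -- the task is already running, so the
                    // write does not stop and restart it on every iteration.
                    writer.WriteSingleSample(false, Math.Sin(i / 100.0));
                }
                aoTask.Stop();
            }
        }
    }

    Without the explicit Start, each WriteSingleSample(true, ...) call implicitly starts and then stops the task, which is where the time goes.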

  • File read and write operations

    How do I use file read and write operations?
    Can anyone give a simple program?

    Check these:
    http://www.tutorialspoint.com/cplusplus/cpp_files_streams.htm
    http://www.cplusplus.com/doc/tutorial/files/
    And with MFC:
    http://www.functionx.com/visualc/fileprocessing/serialization.htm
    https://msdn.microsoft.com/en-us/library/6337eske.aspx
    http://www.informit.com/library/content.aspx?b=Visual_C_PlusPlus&seqNum=90
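    The links above cover C++ and MFC. If a minimal runnable example of the basic idea helps, here is one in C# (System.IO); the file name is arbitrary:

    using System;
    using System.IO;

    class FileDemo
    {
        static void Main()
        {
            File.WriteAllText("demo.txt", "hello file\n");   // create or overwrite
            Console.WriteLine(File.ReadAllText("demo.txt")); // read it back

            using (StreamWriter w = File.AppendText("demo.txt"))
                w.WriteLine("another line");                 // append a line

            foreach (string line in File.ReadLines("demo.txt"))
                Console.WriteLine(line);                     // read line by line
        }
    }

    The C++ stream classes in the first two links (ifstream/ofstream) follow the same open, read/write, close pattern.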

  • Are the read and write operations atomic for an array in a local variable?

    Hi,
    I would like to know: when you access an array in a local variable, is it an atomic operation?
    Thanks,
    Mat

    Thanks for the comments. I agree with you. However, in my case, race conditions and synchronization are not issues. Therefore, the only thing that matters to me is that the write and read operations on the array must be atomic. I know that I can implement that with an LV2-style global, but I want to avoid it if possible.
    If writing and reading to an array are atomic operations then I can simply use local or global variables.
    All I need to know is: Is reading or writing an array in a local variable an atomic operation?
    Thanks,
    Mat

  • Observing ORA-00028 during read and write operations

    In the TopLink connection pool, if a connection becomes stale, does TopLink try to reconnect or does it throw an exception? Under what circumstances could we see this error?
    Thanks,
    Exception [TOPLINK-4002] (Oracle TopLink - 11g Release 1 (11.1.1.3.0) (Build 100323)): oracle.toplink.exceptions.DatabaseException
    Internal Exception: java.sql.SQLException: ORA-00028: your session has been killed
    Error Code: 28
    Call: UPDATE IdcUser SET lastLogout = ?, version = ?, modifiedDate = ? WHERE ((id = ?) AND (version = ?))
    bind => [2011-01-27 02:52:17.0, 91, 2011-01-27 02:52:17.447, 1024077, 90]
    Query: UpdateObjectQuery(User Kislay)
    at oracle.toplink.exceptions.DatabaseException.sqlException(DatabaseException.java:305)
    at oracle.toplink.internal.databaseaccess.DatabaseAccessor.processExceptionForCommError(DatabaseAccessor.java:1328)
    at oracle.toplink.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:722)
    at oracle.toplink.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:790)
    at oracle.toplink.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:524)
    at oracle.toplink.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:467)
    at oracle.toplink.internal.sessions.AbstractSession.executeCall(AbstractSession.java:800)
    at oracle.toplink.internal.queryframework.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:193)
    at oracle.toplink.internal.queryframework.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:179)
    at oracle.toplink.internal.queryframework.DatasourceCallQueryMechanism.updateObject(DatasourceCallQueryMechanism.java:670)
    at oracle.toplink.internal.queryframework.StatementQueryMechanism.updateObject(StatementQueryMechanism.java:421)
    at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.updateObjectForWriteWithChangeSet(DatabaseQueryMechanism.java:1131)
    at oracle.toplink.queryframework.UpdateObjectQuery.executeCommitWithChangeSet(UpdateObjectQuery.java:69)
    at oracle.toplink.internal.queryframework.DatabaseQueryMechanism.executeWriteWithChangeSet(DatabaseQueryMechanism.java:279)
    at oracle.toplink.queryframework.WriteObjectQuery.executeDatabaseQuery(WriteObjectQuery.java:47)
    at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:674)
    at oracle.toplink.queryframework.DatabaseQuery.executeInUnitOfWork(DatabaseQuery.java:597)
    at oracle.toplink.queryframework.ObjectLevelModifyQuery.executeInUnitOfWorkObjectLevelModifyQuery(ObjectLevelModifyQuery.java:103)
    at oracle.toplink.queryframework.ObjectLevelModifyQuery.executeInUnitOfWork(ObjectLevelModifyQuery.java:75)
    at oracle.toplink.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2753)
    at oracle.toplink.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1079)
    at oracle.toplink.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1063)
    at oracle.toplink.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1022)
    at oracle.toplink.internal.sessions.CommitManager.commitChangedObjectsForClassWithChangeSet(CommitManager.java:288)
    at oracle.toplink.internal.sessions.CommitManager.commitAllObjectsWithChangeSet(CommitManager.java:167)
    at oracle.toplink.internal.sessions.AbstractSession.writeAllObjectsWithChangeSet(AbstractSession.java:3459)
    at oracle.toplink.internal.sessions.UnitOfWorkImpl.commitToDatabase(UnitOfWorkImpl.java:1327)
    at oracle.toplink.internal.sessions.UnitOfWorkImpl.commitToDatabaseWithChangeSet(UnitOfWorkImpl.java:1423)
    at oracle.toplink.internal.sessions.UnitOfWorkImpl.commitRootUnitOfWork(UnitOfWorkImpl.java:1169)
    at oracle.toplink.internal.sessions.UnitOfWorkImpl.commit(UnitOfWorkImpl.java:941)
    at com.integral.session.ejb.UserServiceManager.setUserLogoutTime(UserServiceManager.java:1199)
    at com.integral.session.ejb.UserServiceManager$WebUser.invalidateAndRemoveSession(UserServiceManager.java:812)
    at com.integral.session.ejb.UserServiceManager$WebUser.invalidateAndRemoveSession(UserServiceManager.java:745)
    at com.integral.session.ejb.UserServiceManager.removeUserSession(UserServiceManager.java:257)
    at com.integral.apps.session.IdcHttpSessionListener.sessionDestroyed(IdcHttpSessionListener.java:83)
    at org.apache.catalina.session.StandardSession.expire(StandardSession.java:687)
    at org.apache.catalina.session.StandardSession.isValid(StandardSession.java:579)
    at org.apache.catalina.session.ManagerBase.processExpires(ManagerBase.java:678)
    at org.apache.catalina.session.ManagerBase.backgroundProcess(ManagerBase.java:663)
    at org.apache.catalina.core.ContainerBase.backgroundProcess(ContainerBase.java:1284)
    at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1569)
    at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1578)
    at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1578)
    at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1558)
    at java.lang.Thread.run(Thread.java:595)
    Caused by: java.sql.SQLException: ORA-00028: your session has been killed
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:125)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:316)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:282)
    at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:639)
    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:185)
    at oracle.jdbc.driver.T4CPreparedStatement.execute_for_rows(T4CPreparedStatement.java:633)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1086)
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:2984)
    at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3057)
    at oracle.toplink.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:715)

    The error seems to indicate that the connection has been killed by the server, perhaps due to a timeout.
    In the latest EclipseLink version shipped with TopLink, dead connections in the EclipseLink connection pool should be automatically reconnected.
    Your error indicates you are not using the latest EclipseLink version, but the obsolete toplink.jar shipped with 11g.
    To get dead connection handling you may need to upgrade to using the eclipselink.jar.
    James : http://www.eclipselink.org

  • Upgrading a 3-node Hyper-V cluster's storage for £10k and getting the most bang for our money.

    Hi all, looking for some discussion and advice on a few questions I have regarding storage for our next cluster upgrade cycle.
    Our current system for a bit of background:
    3x clustered Hyper-V servers running Server 2008 R2 (72GB RAM, dual CPU etc...)
    1x Dell MD3220i iSCSI with dual 1Gb connections to each server (24x 146GB 15k SAS drives in RAID 10) - Tier 1 storage
    1x Dell MD1200 expansion array with 12x 2TB 7.2k drives in RAID 10 - Tier 2 storage, large VMs, files etc...
    ~25 VMs running all manner of workloads: SQL, Exchange, WSUS, Linux web servers etc...
    1x DPM 2012 SP1 Backup server with its own storage.
    Reasons for upgrading:
    Storage throughput is becoming an issue, as we only get around 125MB/s over the dual 1Gb iSCSI connections to each physical server (we have tried everything under the sun to improve bandwidth, but I suspect the MD3220i RAID is the bottleneck here).
    Backup times for VMs (once every night) are now in the 5-6 hour range.
    Storage performance suffers during backups and large file synchronisations (DPM).
    Tier 1 storage is running out of capacity and we would like to build in more IOPS for future expansion.
    Tier 2 storage is massively underused (6TB of 12TB RAID 10 space).
    Migrating to 10Gb server links.
    Total budget for the upgrade is in the region of £10k, so I have to make sure we get absolutely the most bang for our buck.
    Current Plan:
    Upgrade the cluster to Server 2012 R2
    Install a dual-port 10Gb NIC team in each server and virtualise cluster, live migration, VM and management traffic (with QoS of course)
    Purchase a new JBOD SAS array and leverage the new Storage Spaces and SSD caching/tiering capabilities. Use our existing 2TB drives for capacity and purchase sufficient SSDs to replace the 15k SAS disks.
    On to the questions:
    Is it supported to use Storage Spaces directly connected to a Hyper-V cluster? I have seen that for our setup we are on the verge of requiring a separate SOFS for storage, but the extra costs and complexity are out of our reach (RDMA, extra 10Gb NICs etc...).
    When using a storage space in a cluster, I have seen various articles suggesting that each CSV will be active/passive within the cluster, causing redirected IO for all cluster nodes not currently active?
    If CSVs are active/passive, is it suggested that you should have a CSV for each node in your cluster? How in production do you balance VMs across 3 CSVs without manually moving them to keep 1/3 of the load on each CSV? Ideally I would like just a single active/active CSV for all VMs to sit on (ease of management etc...).
    If the CSV is active/active, am I correct in assuming that DPM will back up VMs without causing any redirected IO?
    Will DPM backups of VMs be incremental in terms of data transferred from the cluster to the backup server?
    Thanks in advance to anyone who can be bothered to read through all that and help me out! I'm sure there are more questions I've forgotten but those will certainly get us started.
    Also lastly, does anyone else have a better suggestion for how we should proceed?
    Thanks

    1) You can of course use a direct SAS connection with a 3-node cluster (or 4-node, 5-node, etc.). It would also be much faster than running with an additional SoFS layer: with SAS fed directly to your Hyper-V cluster nodes, all reads and writes are local, travelling down the SAS fabric; with a SoFS layer added, you have the same amount of I/O targeting SAS plus Ethernet, with its huge latency compared to SAS, sitting between the requestor and your data on the SAS spindles, and the I/Os get wrapped into SMB-over-TCP-over-IP-over-Ethernet requests at the hypervisor-SoFS boundary. The reason SoFS is recommended is cost: the final SoFS-based solution is cheaper, as SAS-only is a pain to scale beyond basic 2-node configs. Instead of getting SAS switches, adding redundant SAS controllers to every hypervisor node and/or looking for expensive multi-port SAS JBODs, you have a pair (at least) of SoFS boxes doing a file-level proxy in front of a SAS-controlled back end. So you compromise performance in favour of cost. See:
    http://davidzi.com/windows-server-2012/hyper-v-and-scale-out-file-cluster-home-lab-design/
    The interconnect diagram used in this design would actually scale beyond 2 hosts, but you would have to get a SAS switch (actually at least two of them for redundancy, as you don't want any component to become a single point of failure, do you?).
    2) With 2012 R2, all I/O from the multiple hypervisor nodes goes through the storage fabric (in your case, SAS); only metadata updates go through the coordinator node over Ethernet. Redirected I/O is used in two cases only: a) there is no SAS connectivity from a hypervisor node (but Ethernet connectivity is still present), or b) broken-by-implementation backup software keeps accessing the CSV through the snapshot mechanism for too long. In a nutshell: you'll be fine :) See for reference:
    http://www.petri.co.il/redirected-io-windows-server-2012r2-cluster-shared-volumes.htm
    http://www.aidanfinn.com/?p=12844
    3) These are independent things. CSV is not active/passive (see 2), so with the interconnect design you'll be using there is virtually no point in having one CSV per hypervisor. There are cases where you would still do this. For example, if you had both all-flash and combined spindle/flash LUNs and you knew for sure that you wanted some VMs to sit on flash and others (not so I/O hungry) to stay on "spinning rust". Another case is a many-node cluster: there, multiple nodes fight for a single LUN and a lot of time is wasted resolving SCSI reservation conflicts (ODX has no reservation offload like VAAI has, so even where ODX is present it is not going to help). Again, this is where SoFS "helps": the intermediate proxy level turns block I/O into file I/O, so SCSI reservation conflicts are triggered between the two SoFS nodes only, instead of every node in the hypervisor cluster. One more good example is when you have a mix of local I/O (SAS) and Ethernet with Virtual SAN products. A Virtual SAN runs directly as part of the hypervisor and emulates a high-performance SAN using cheap DAS. To increase performance it DOES make sense there to create the concept of a "local LUN" (and thus a "local CSV"), as reads targeting that LUN/CSV are passed down the local storage stack instead of hitting the wire (Ethernet) and going to partner hypervisor nodes to fetch the VM data. See:
    http://www.starwindsoftware.com/starwind-native-san-on-two-physical-servers
    http://www.starwindsoftware.com/sw-configuring-ha-shared-storage-on-scale-out-file-servers
    (basically feeding DAS to Hyper-V and SoFS to avoid expensive SAS JBODs and SAS spindles). This is the same thing VMware is doing with their VSAN on vSphere. But again, that is NOT your case, so it DOES NOT make sense to keep many CSVs with only 3 nodes present or SoFS possibly used.
    4) DPM is going to put your cluster into redirected mode for a very short period of time at most; Microsoft says NEVER. See:
    http://technet.microsoft.com/en-us/library/hh758090.aspx
    Direct and Redirect I/O
    Each Hyper-V host has a direct path (direct I/O) to the CSV storage Logical Unit Number (LUN). However, in Windows Server 2008 R2 there are a couple of limitations:
    For some actions, including DPM backup, the CSV coordinator takes control of the volume and uses redirected instead of direct I/O. With redirection, storage operations are no longer through a host’s direct SAN connection, but are instead routed
    through the CSV coordinator. This has a direct impact on performance.
    CSV backup is serialized, so that only one virtual machine on a CSV is backed up at a time.
    In Windows Server 2012, these limitations were removed:
    Redirection is no longer used. 
    CSV backup is now parallel and not serialized.
    5) Yes, VSS and CBT would be used, so the transferred data would be incremental after the first initial "seed" backup. See:
    http://technet.microsoft.com/en-us/library/ff399619.aspx
    http://itsalllegit.wordpress.com/2013/08/05/dpm-2012-sp1-manually-copy-large-volume-to-secondary-dpm-server/
    I'd also look at some other options. There are a few good discussions you may want to read. See:
    http://arstechnica.com/civis/viewtopic.php?f=10&t=1209963
    http://community.spiceworks.com/topic/316868-server-2012-2-node-cluster-without-san
    Good luck :)
    StarWind iSCSI SAN & NAS

  • Problem with AIR reading and writing a file in an SMB shared directory

    hi, everyone.
    I want to access a file in an SMB shared directory and perform read and write operations on it. How should I do this?
    Thanks!

    You can't access any OS facility or execute arbitrary commands from AIR.
    So the best solution is to mount the Samba directory BEFORE running your AIR application; you could create a script that mounts the Samba share (and asks for the password) and then runs your AIR application.
    see
    http://www.mikechambers.com/blog/2008/01/17/commandproxy-net-air-integration-proof-of-concept/
    for a more complex solution.

  • Multithreaded problem in read and write thread

    This is a producer-consumer problem in a multi-threaded environment.
    Assume that I have multiple consumers (multiple read threads) and a single producer (one write thread).
    I have a common data structure (say, an int variable) being read and written.
    Writes to the data structure happen only occasionally (say, every 2 seconds) but reads happen continuously.
    Since the read operation is continuous and done by multiple threads, making the read method synchronized adds overhead (a read by one thread should not block the other read threads). But whenever the write thread writes, read operations should not be allowed.
    Any ideas how to achieve this?

    If all you're doing is reading an int, then just use regular Java synchronization. You'll actually get a performance hit if you're doing simple read operations, as stated in the ReadWriteLock documentation:
    Whether or not a read-write lock will improve performance over the use of a mutual exclusion lock depends on the frequency that the data is read compared to being modified, the duration of the read and write operations, and the contention for the data - that is, the number of threads that will try to read or write the data at the same time. For example, a collection that is initially populated with data and thereafter infrequently modified, while being frequently searched (such as a directory of some kind) is an ideal candidate for the use of a read-write lock. However, if updates become frequent then the data spends most of its time being exclusively locked and there is little, if any increase in concurrency. Further, if the read operations are too short the overhead of the read-write lock implementation (which is inherently more complex than a mutual exclusion lock) can dominate the execution cost, particularly as many read-write lock implementations still serialize all threads through a small section of code. Ultimately, only profiling and measurement will establish whether the use of a read-write lock is suitable for your application.
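    To make "regular synchronization" concrete: for a single shared int with one writer and many readers, a plain mutual-exclusion lock is enough and cheap. A minimal sketch (in C#; the shape is identical with Java's synchronized blocks):

    using System;

    class SharedValue
    {
        private readonly object gate = new object();
        private int value;

        // Called continuously by the many reader threads.
        public int Read()
        {
            lock (gate) { return value; }
        }

        // Called roughly every 2 seconds by the single writer thread.
        public void Write(int v)
        {
            lock (gate) { value = v; }
        }
    }

    The critical sections are so short that readers rarely block each other in practice, which is exactly why the documentation quoted above warns that a read-write lock often costs more than it saves in this situation.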

  • Alert: Logical disk transfer (reads and writes) latency is too high Resolution state

    Hi
    We are getting the following errors for 2 of my virtual servers, and the alert fires continuously. My setup is a Windows 2008 R2 SP1 2-node Hyper-V cluster hosting 7 guest OSes, and I am facing this problem with two of them. Since this alert started, my backups have been running slowly.
    Alert: Logical disk transfer (reads and writes) latency  is too high
    Source: E:
    Path: Servername.domain.com
    Last modified by: System
    Last modified time: 4/23/2013 4:15:47 PM Alert description: The threshold for the Logical Disk\Avg. Disk sec/Transfer performance counter has been exceeded.
    Alert view link: "http://server/OperationsManager?DisplayMode=Pivot&AlertID=%7bca891ba3-e9f2-421f-9994-7b4d6e867b33%7d"
    Notification subscription ID generating this message: {F71E01AF-0BE6-8377-7BE5-5CB6F5C037A1}
    Regards
    Mahesh

    Hi,
    Please see if following helps
    Disk transfer (reads and writes) latency is too high
    The threshold for the Logical Disk\Avg. Disk sec/Transfer performance counter has been exceeded
    If they are of no help, try asking this question in the Operations Manager - General forum, since these alerts are generated by SCOM.
    Regards, Santosh

  • In iCal and Notifications on a notebook, is it possible to expand the notes window to make it easier to read and write notes?

    In iCal and Notifications on a notebook, is it possible to expand the notes window to make it easier to read and write notes? In the past, I have used Outlook calendar and tasks and I was able to expand the windows which allowed me to put a great amount of details into either the notes section in events and tasks. It would be great to be able to do this in iCal and Notifications as well. I am using a Macbook Pro with OS X 10.8. Thank you very much for assistance with this.

    Hi,
    Try Spaces for a virtual desktop.
    http://www.ehow.com/how_2189851_use-spaces-mac-os-x.html
    Carolyn

  • Modbus Ethernet read and write to a Eurotherm 6180XIO Modbus server using LV8.2 shared variables

    I am having EXTREME difficulty trying to establish communications with a Modbus device using LV8.2 shared variables.  The device is a Eurotherm 6180XIO Datalogger configured as a Modbus master.  The PC and a cFP-1804 are slaves.  All IP addresses are set correctly.  This approach using shared variables would seem simple, but I can't find any examples or proper guidance on how to get it working.  I am trying to avoid having to mess around with TCP/IP, OPC, or any other old-fashioned method.
    I have read many threads on related topics but none directly apply to this situation.  I have created a library containing a Modbus I/O server and shared variables bound to read and write holding registers.  I have followed all recommended tips for creating such variables but I can neither read nor write data.  All data types are U16 due to Modbus protocol limitations.  I have also applied the LV x10 factor in the most significant digit of the register offset (6 digits instead of 5).
    I have a cFP-1804 on the same network which reads into the datalogger OK.  The registers I use are 31000 (for CH0 on module 0, 31002 for CH1, etc) and the data can be read as FLOAT32.  I have updated the firmware on the 1804 to the latest level.  I cannot even get shared variables to read SGL values.  Using registers 301001 for CH0 and 301002 for CH1 I can only read U16 values, and not a 2-word SGL.
    Third party Modbus simulation software is able to write to and read from registers very easily, but not LabVIEW.
    Some questions are:
    - do I use a Modbus master or slave as an I/O server in the library as a target for binding the shared variables?
    - is there some other weird translation in register offsets between LabVIEW and traditional Modbus?
    - is this actually possible using shared variables or am I wasting my time?

    Sending the whole 60-character string using a string or array would be the most efficient.  I have tried both methods, and these only cause the datalogger to flag a message log but no text is displayed.
    For a string variable, I have used the following binding "My Computer\Modbus Test.lvlib\ModbusServer6180\442305", where ModbusServer6180 is a Modbus I/O server configured with the logger IP address, and 42304 is the register offset at the start of the text block in the logger.  I need to write to 30 consecutive registers starting with this one.  I am not using buffering and have not enabled single writer.
    Can anyone confirm whether this method should work in 8.2?
    Does the string need a special termination character?

  • How to read and write data from json file from windows phone7 app

    Hi
    I am developing a wp7 app for use by students. My question:
    how can I write code to read and write a json/text file on wp7?
    I am using the Windows 7 OS and VS 2010 Edition.
    This is my code below:
    xaml:
    <Grid>
        <TextBlock Height="45" HorizontalAlignment="Left" Margin="7,18,0,550" Name="textBlock1" Text="Full Name: " />
        <TextBox Width="350" Height="70" HorizontalAlignment="Left" Margin="108,1,0,0" Name="txtName" Text="Enter your full name" VerticalAlignment="Top" />
        <TextBlock Height="45" HorizontalAlignment="Left" Margin="6,75,0,0" Name="textBlock2" Text="Contact No: " VerticalAlignment="Top" />
        <TextBox Width="350" Height="70" HorizontalAlignment="Left" Margin="108,61,0,480" Name="txtContact" Text="Enter your contact number" MaxLength="10" />
        <Button Content="Register" Height="72" HorizontalAlignment="Left" Margin="10,330,0,0" Name="btnRegister" VerticalAlignment="Top" Width="190" Click="btnRegister_Click" />
    </Grid>
    xaml.cs:
    private void btnRegister_Click(object sender, RoutedEventArgs e)
    {
        string name, contact;
        name = txtName.Text;
        contact = txtContact.Text;
        try
        {
            if (name != "" && contact != "")
            {
                string msg = name + " " + contact;
                MessageBox.Show(msg);
                Student stud = new Student
                {
                    Name = name,
                    Contact = contact,
                };
                string jsonString = JsonConvert.SerializeObject(stud);
                MessageBox.Show(jsonString);
            }
            else
            {
                // The overload taking MessageBoxButton also needs a caption.
                MessageBox.Show("Input Proper Information", "Error", MessageBoxButton.OK);
            }
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
    }
    I have downloaded Newtonsoft.Json version 5.0.8.
    So I am able to convert the input data into json format, but how can I write this data to, and read it back from, a json/text file?
    How can I do this?
    Thank you in advance, and please reply soon.

    We don't have many samples left for Windows Phone 7 + Azure; the closest one to what you want to do is probably:
    Using Local Storage with OData on Windows Phone To Reduce Network Bandwidth
    This sample uses the local database feature ('LINQ to SQL', available to Windows Phone 7.1 and 8.0 Silverlight applications) instead of simple file storage, but even if you choose to stick with simple file storage I believe you should be able to adapt the networking-related portions of the sample to your particular application.
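    If you do stick with simple file storage, WP7 Silverlight apps read and write files through isolated storage rather than ordinary file paths. A minimal sketch of saving and loading your serialized string (the file name "students.json" is arbitrary):

    using System.IO;
    using System.IO.IsolatedStorage;

    public static class JsonFileStore
    {
        // Isolated storage is the per-application sandbox on Windows Phone.
        public static void Save(string jsonString)
        {
            using (var store = IsolatedStorageFile.GetUserStoreForApplication())
            using (var stream = store.OpenFile("students.json", FileMode.Create))
            using (var writer = new StreamWriter(stream))
            {
                writer.Write(jsonString);
            }
        }

        public static string Load()
        {
            using (var store = IsolatedStorageFile.GetUserStoreForApplication())
            {
                if (!store.FileExists("students.json"))
                    return null;
                using (var stream = store.OpenFile("students.json", FileMode.Open))
                using (var reader = new StreamReader(stream))
                {
                    return reader.ReadToEnd();
                }
            }
        }
    }

    On the read side, JsonConvert.DeserializeObject<Student>(JsonFileStore.Load()) turns the stored text back into a Student object.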
    Eric Fleck, Windows Store and Windows Phone Developer Support.

  • Read and Write data in iphone app

    I want to build a simple offline mobile app which reads and writes or stores info to a file: xml, json, txt or whatever.
    What is the best way to do this?
    I am currently learning jQuery and PHP, so I don't want to have to learn a million other things that do one thing but not another.
    Can this be achieved without Objective-C? Can I do this with jQuery and PhoneGap?
    Thanks

    Do a search for 'HTML5 Storage' and you'll find a bunch of resources that will help. Here are a few:
    http://www.w3schools.com/HTML/html5_webstorage.asp
    http://diveintohtml5.info/storage.html
    http://htmlpad.wordpress.com/2010/03/10/html-5-data-storage-javascript-api-on-ipad-and-iphone/

  • How to increase disk read and write speed after installing new SSD (2009 Macbook Pro)? Why not as fast as advertised?

    Hi everyone,
    I just installed a Crucial MX100 512 GB SSD into my 2009 Macbook Pro. It's definitely much faster, but the read and write disk speed is around 200 MB/s for both, versus the 300-500 MB/s that the SSD advertised. Any ideas as to why? And is there anything I can do to make it faster? Before I installed it, it was between 80-90 MB/s.
    Specs:
    - currently have about 460 of 511 GB of storage available
    - am using 2GB of memory
    - running on 10.10.2 Yosemite
    Thanks!

    nataliemint wrote:
    Drew, forgive me for being so computer-incompetent but how would I boot from another OS? And shouldn't I be checking the read speeds on my current OS (Yosemite) anyways because I want to know how the SSD is performing on the OS I use? And finally, what kind of resources would it be using that would be slowing down my SSD?
    Sorry for all the questions - I'm not a Macbook wiz by any means!
    You could make a clone of your internal OS onto an external disk. Hopefully you already have a backup of some form
    A clone is a full copy, so you can boot from it. It makes a good backup as well as being useful to test things like this.
    Carbon Copy Cloner will make one or you can use Disk Utility to 'restore' your OS from the internal disk to an external one.
    Ideally the external disk is a fast disk with a fast 'interface' like Thunderbolt, Firewire 800 or USB3. USB2 can work, but it is slow and may affect the test.
    You connect the clone, hold alt at startup & select the external disk in the 'boot manager'. When the Mac is finished booting run the speed tester.
    Maybe this one…
    https://itunes.apple.com/gb/app/blackmagic-disk-speed-test/id425264550
    Test the internal & compare to the previous tests
    A running OS will do the following on its boot disk…
    Write/ read cache files from running apps
    Write/ read memory to disk if memory is running low
    Index new files if content is changing or being updated
    Copy files for backing up (Time Machine or any other scheduled tasks)
    Networking can also trigger read/ write on the disk too.
    You may not have much activity that affects a disk speed test, but you can't really be sure unless that disk is not being used for other tasks.
    Disk testing is an art & science in itself, see this if you want to get an idea …
    http://macperformanceguide.com/topics/topic-Storage.html
    Simply knowing that it's about twice the speed would be enough to cheer me up
