Extremely bad read/write latency on iSCSI datastore

Hello,
I have a single host in my test lab with very bad read/write latency to an iSCSI datastore. All of my hosts have 1Gb Ethernet, and the other hosts in the lab are not having this issue. What can I check to help isolate the problem? Are there any steps I can take to optimize performance?
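A reasonable first pass is to separate network latency from array latency. Below is a sketch from the ESXi shell; the vmk interface name and target IP are examples, and older 4.x builds of vmkping may lack the -I switch:

    # 1. Rule out the network path: ping the iSCSI target through the
    #    VMkernel port bound to storage traffic.
    vmkping -I vmk1 192.168.10.20
    # Only if jumbo frames are supposed to be enabled end to end:
    vmkping -I vmk1 -s 8972 -d 192.168.10.20

    # 2. Watch live latency in esxtop: press 'u' for the disk-device view
    #    (or 'd' for adapters) and compare these columns:
    #      DAVG/cmd - latency from the array/fabric
    #      KAVG/cmd - latency added by the ESXi kernel (queuing)
    #      GAVG/cmd - total latency the guest sees (DAVG + KAVG)
    esxtop

High DAVG with low KAVG points at the array or the path to it; high KAVG with modest DAVG points at queuing on the host itself.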

I'm struggling with exactly the same problem, but on ESXi 4.1.
It seems that ZFS inflates the I/O. When you check disk activity you can see that the underlying ZFS thrashes the disks, while the same workload shows only modest activity within NTFS.
I just cannot figure out how to cope with it.
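If your datastore sits on a ZFS-backed target, you can watch the inflation directly on the storage box while a test copy runs; a sketch ("tank" is an example pool name):

    # Per-vdev throughput every 5 seconds; compare what the guest writes
    # with what the pool actually does at the disk level.
    zpool iostat -v tank 5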

Similar Messages

  • Formatted on XP - EXTREMELY SLOW read/write on the Mac!!!!

    Man, this iPod is wack!! I'm using the iPod mini in Disk Mode to transfer bits (updaters, since my XP machine is NEVER connected to the net) back and forth between my PowerBook and my XP Intel-based machine. Obviously I set up and formatted the iPod mini on the XP machine.
    Observation: file read/write times are unacceptably SLOW on both the Mac and the PC. Did Apple deliberately make this thing unusable in this situation? It transfers as if it were connected to a USB 1.1 port, but it is NOT, on any of my machines. I formatted my USB flash drive (SanDisk Cruzer Titanium) on the same XP machine, using the same ports, and its read/write times are at least 5 times faster than the mini's!!!
    When I format and set up the iPod mini on the Mac and then install MacDrive on the PC, astonishingly the read/write times increase significantly on BOTH the Mac and the PC. As I suggested, I think there is something drastically wrong with Apple's PC iPod setup; there is NO way my other devices formatted on the PC have such abysmal performance. Man, I click on a folder in the Finder of the PC-formatted iPod and it takes about 3 seconds until the files inside are displayed... this is also the case on the PC.
    Anyone else experienced this?
    I was going to buy another iPod specifically for transferring data between my Macs and my colleagues' PCs, but there is no way I will with such severely degraded read/write performance.
    Best

    Apparently this is not a single isolated case.
    Scott Aronian posts:
    "On Mac, FAT32 Format Slower Than HFS"
    My 60GB iPod came in Windows FAT32 format, which I liked since I could use it in disk mode on both Mac & Windows. However, it seemed slow to start playing video and when switching between video files.
    I did some timing tests, and on the Mac FAT32 was much slower than HFS.
    FAT32
    Start playing movie = 12s
    Stop movie and restart at same location = between 12-30s
    HFS
    Start playing movie = 5s
    Stop movie and restart at same location = about 6s
    If you have a Mac and your new 5G iPod is formatted for Windows, you may want to use the iPod Updater to restore and make the drive HFS.
    There are NO such performance problems with USB flash drives formatted to FAT32 on PCs; I have tested many, and they perform similarly on BOTH platforms.
    Something fishy going on here.

  • Windows Server 2012 - Hyper-V - iSCSI SAN - All Hyper-V guests stop responding and extensive disk read/write

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V with a 2-node cluster connected to an iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM, 2 teamed NICs dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. - This is the primary host, and normally all VMs run on this host.
    HP ProLiant G5, 20 GB RAM, 1 NIC dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. - This is the secondary host and is intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts, and the scheduled ShadowCopy (previous versions of files) is switched off.
    iSCSI SAN:
    QNAP TS-869 Pro NAS, 8 Intel SSDSA2CW160G3 160 GB SSDs in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both the network cards of the hosts that are dedicated to storage and the storage itself are connected to the same switch, and nothing else is connected to this switch.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 file server, 1 application server.
    1 Windows Server 2008 Standard Exchange server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We applied the most recent updates to the hosts, VMs and iSCSI SAN about 3 weeks ago with no change in our problem, and we continually update the setup.
    Normal operation
    Normally this setup works just fine, and we see no real difference in startup, file copy and processing speed in LoB applications compared to a single host with 2 10,000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally we see speeds up to 400 Mbit/s of combined read/write, for instance during a file repair.
    Our Problem
    Our problem is that, for some reason, all of the VMs stop responding or respond very slowly: you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    Symptoms (i.e. this happens, or does not happen, at the same time)
    If we look at Resource Monitor on the host, we often see an extensive read from a VHDX of one of the VMs (40-60 MByte/s) and a combined write to many files in \HarddiskVolume5\System Volume Information\{<someguid and no file extension>}.
    The combined network speed to the iSCSI SAN is about 500-600 Mbit/s.
    When this happens it is usually during and after a VSS ShadowCopy backup, but it has also happened during hours when no backup should be running (i.e. during the daytime, when the backup finished hours ago according to the log files). The writes to the backup file created on an external hard drive are not that extensive, however, and this does not seem to happen during all backups (we have checked manually a few times, but it is hard to say, since this error does not seem to leave any traces in Event Viewer).
    We cannot find any indication that the VMs themselves detect any problem, and we see no increase in errors (for example, storage-related errors) in the event log inside the VMs.
    The QNAP uses about 50% processing power on all cores.
    We see no dropped packets on the switch.
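    Writes into System Volume Information alongside heavy VHDX reads usually mean a shadow copy is being created or its diff area is being reclaimed. A quick check with the in-box tool, run in an elevated prompt on the host while the stall is happening (a sketch, not a definitive diagnosis):

        rem List any shadow copies present or being created right now:
        vssadmin list shadows
        rem Show where the diff areas live and their size limits:
        vssadmin list shadowstorage
        rem Confirm the VSS writers (incl. the Hyper-V writer) are healthy:
        vssadmin list writers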
    Unable to recreate the problem / find definitive trigger
    We have not succeeded in recreating the problem manually by, for instance, running chkdsk or defrag in VMs and hosts, copying and removing large files to VMs, or running CPU- and disk-intensive operations inside a VM (for instance, scanning and repairing a database file).
    Questions
    Why do all the VMs stop responding, and why are there such intensive reads/writes to the iSCSI SAN?
    Could it be anything in our setup that cannot handle all the read/write requests? For instance the iSCSI SAN, the hosts, etc.?
    What can we do about this? Should we use Multipath I/O instead of NIC teaming to the SAN, limit bandwidth to the SAN, etc.?

    Hi,
    > All VMs are using dynamic disks (as recommended by Microsoft).
    If this is a testing environment, it’s okay, but if this is a production environment, it’s not recommended. Fixed VHDs are recommended for production instead of dynamically expanding or differencing VHDs.
    Hyper-V: Dynamic virtual hard disks are not recommended for virtual machines that run server workloads in a production environment
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx
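    If you do move to fixed disks, the in-box Convert-VHD cmdlet handles the conversion; a sketch with hypothetical paths (shut the VM down first, and note the output is a full-size copy, so check free space on the volume):

        # Hypothetical paths; requires the Hyper-V PowerShell module.
        Convert-VHD -Path 'C:\ClusterStorage\Volume1\vm1.vhdx' `
                    -DestinationPath 'C:\ClusterStorage\Volume1\vm1-fixed.vhdx' `
                    -VHDType Fixed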
    > This is the primary host and normally all VMs run on this host.
    According to your posting, we know that you have Cluster Shared Volumes in the Hyper-V cluster, but why not distribute your VMs across the two Hyper-V hosts?
    Use Cluster Shared Volumes in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/en-us/library/jj612868.aspx
    > 2 teamed NIC dedicated to iSCSI storage.
    Use Microsoft Multipath I/O (MPIO) to manage multiple paths to iSCSI storage. Microsoft does not support teaming on network adapters that are used to connect to iSCSI-based storage devices. (At least it was not supported up to Windows Server 2008 R2; and although Windows Server 2012 has a built-in network teaming feature, I have not found an article declaring that Windows Server 2012 network teaming supports iSCSI connections.)
    Understanding Requirements for Failover Clusters
    http://technet.microsoft.com/en-us/library/cc771404.aspx
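    On Windows Server 2012 the switch from teaming to MPIO can be scripted with the in-box cmdlets; a sketch (the portal address is an example - verify each step against your SAN's documentation, and note the MSDSM claim may need a reboot):

        # Install Multipath I/O and let MSDSM claim iSCSI devices:
        Install-WindowsFeature Multipath-IO
        Enable-MSDSMAutomaticClaim -BusType iSCSI
        # After un-teaming, log in to the target (once per iSCSI NIC):
        New-IscsiTargetPortal -TargetPortalAddress 192.168.100.10
        Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
        # Round-robin load balancing across the claimed paths:
        Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR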
    > I have seen that using MPIO suggests using different subnets; is this a requirement for using MPIO,
    > or is this just a way to make sure that you do not run out of IP addresses?
    What I found is: if possible, isolate the iSCSI and data networks that reside on the same switch infrastructure through the use of VLANs and separate subnets. Redundant network paths from the server to the storage system via MPIO will maximize availability and performance. You can of course put these two NICs in separate subnets, but I don’t think it is necessary.
    > Why should it be better to not have dedicated wireing for iSCSI and Management?
    It is recommended that the iSCSI SAN network be separated (logically or physically) from the data network workloads. This ‘best practice’ network configuration optimizes performance and reliability.
    Check and adjust the cluster configuration accordingly, monitor it, and give us feedback for further troubleshooting.
    For more information please refer to following MS articles:
    Volume Shadow Copy Service
    http://technet.microsoft.com/en-us/library/ee923636(WS.10).aspx
    Support for Multipath I/O (MPIO)
    http://technet.microsoft.com/en-us/library/cc770294.aspx
    Deployments and Tests in an iSCSI SAN
    http://technet.microsoft.com/en-US/library/bb649502(v=SQL.90).aspx
    Hope this helps!
    Lawrence
    TechNet Community Support

  • Alert: Logical disk transfer (reads and writes) latency is too high

    Hi
    We are getting the following error for 2 of my virtual servers, and we are getting this alert continuously. My setup is a Windows 2008 R2 SP1 2-node Hyper-V cluster hosting 7 guest OSes, and I am facing this problem with two of them. Since this alert started, my backup has been running slowly.
    Alert: Logical disk transfer (reads and writes) latency  is too high
    Source: E:
    Path: Servername.domain.com
    Last modified by: System
    Last modified time: 4/23/2013 4:15:47 PM
    Alert description: The threshold for the Logical Disk\Avg. Disk sec/Transfer performance counter has been exceeded.
    Alert view link: "http://server/OperationsManager?DisplayMode=Pivot&AlertID=%7bca891ba3-e9f2-421f-9994-7b4d6e867b33%7d"
    Notification subscription ID generating this message: {F71E01AF-0BE6-8377-7BE5-5CB6F5C037A1}
    Regards
    Mahesh

    Hi,
    Please see if the following articles help:
    Disk transfer (reads and writes) latency is too high
    The threshold for the Logical Disk\Avg. Disk sec/Transfer performance counter has been exceeded
    If they are of no help, try asking this question in the Operations Manager - General forum, since these alerts are generated by SCOM.
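    To confirm whether the alert reflects real storage latency rather than a noisy threshold, you can also sample the underlying counter directly on the affected server; a sketch (the drive letter is taken from the alert above):

        # 12 samples, 5 seconds apart; sustained values above roughly
        # 0.025 s (25 ms) usually indicate a genuine storage bottleneck.
        Get-Counter -Counter '\LogicalDisk(E:)\Avg. Disk sec/Transfer' -SampleInterval 5 -MaxSamples 12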
    Regards, Santosh
    I do not represent the organisation I work for, all the opinions expressed here are my own.
    This posting is provided AS IS with no warranties or guarantees and confers no rights.

  • Read/Write speeds with Airport Extreme and USB Hard Drives

    Anybody know how fast the read/write speeds are with a USB hard drive and the Airport Extreme Base Station?
    My Setup
    I have a 17" MBP (2010) and a 13" MBP (2011) - no SSDs
    Both MBPs generally connect via 802.11n 5 GHz with a very strong signal
    Gigabit connection on a Windows Desktop
    I have a variety of external drives (7200, 5400, Drobo) at varying capacity (320GB, 2TB, 5.6TB)
    Goals
    Backup: Time machine
    Backup: File copies of pictures and home videos (greater than 1GB files)
    Backup: Crashplan
    File sharing: Aperture libraries--not sure if that is possible or practical. That is something that I need to research further...but on the off chance people have experience with it.
    Thanks!

    ishjen wrote:
    Anybody know how fast the read/write speeds are with a USB hard drive and the Airport Extreme Base Station?
    Welcome to Apple's discussion groups.
    Apple's AirPort base stations aren't known for fast file serving, but for most purposes they're fast enough.
    I can't comment specifically on your Aperture plan, but with some software it's important to avoid simultaneous access by more than one user.

  • How do you create default Read/Write Permissions for more than 1 user?

    My wife and I share an iMac, but use separate User accounts for separate mail accounts, etc.
    However, we have a business where we both need to have access to the same files and both have Read/Write permissions on when one of us creates a new file/folder.
    By default new files and folders grant Read/Write to the creator of the new file/folder, and read-only to the Group "Staff" in our own accounts or "Wheel" in the /Users/Public/ folder, and read-only to Everyone.
    We are both administrators on the machine, and I know we can manually override the settings for a particular file/folder by changing the permissions, but I would like to set things up so that Read/Write permissions are assigned to both of us in the folder that holds our business files.
    It is only the 2 of us on the machine; we trust each other and need complete access to the many files we share. I have archiving programs running so I can get back old versions if we need them, so I'm not worried about us overwriting a file with bad info. I'm more concerned about us having out-of-date duplicates in our respective user accounts.
    Here is what I have tried so far:
    1. I tried to just set the permissions of the containing folder, with us both having Read/Write permissions, and applied that to all enclosed items.
    RESULT -> This did nothing for newly created files or folders; they still had the default permissions of Read/Write for the creating user, Read for the default group, Read for Everyone.
    2. I tried using Sandbox ( http://www.mikey-san.net/sandbox/ ) to set the inheritance of the folder using the methods laid out at http://forums.macosxhints.com/showthread.php?t=93742
    RESULT -> Still, this did nothing for newly created files or folders; they still had the default permissions of Read/Write for the creating user, Read for the default group, Read for Everyone.
    3. I set the umask to 002 ( http://support.apple.com/kb/HT2202 ) so that new files and folders have default permissions that give the default group Read/Write access. Unfortunately, this changes the default for the entire computer, not just a given folder.
    I then had to add my wife's user account to the "Staff" group, because for some reason her account was not included in it. I think this is because her account was ported onto the computer when we upgraded, whereas mine was created new. I read something about that somewhere, but don't recall where now. I discovered which groups we were each in by using the Terminal and typing "groups username", where username was the user I was checking on.
    I added my wife to the "Staff" group, and both of us to the "Wheel" group using the procedures I found at
    http://discussions.apple.com/thread.jspa?messageID=8765421&#8765421
    RESULT -> I could create a new file using TextEdit and save it anywhere in my account, and it would have the permissions: My Username - Read/Write, "Staff" or "Wheel" (depending on where I saved it) - Read/Write, Everyone - Read Only, as expected from the default umask.
    I could then switch over to my wife's account, open the file, edit it, and save it, but then the permissions changed to: Her Username - Read/Write, (unknown) - Read/Write, Everyone - Read Only.
    And when I switch back to my account, I can open the file, but I can't save my edits.
    I'm at my wits' end with this, and I can't believe it is impossible to create a common folder that we can both put files in and have Read/Write permissions on, like a true shared folder. Anyone who has used Windows knows what you can do with the Shared folder in that operating system, i.e. anyone with access can do anything with those files.
    So if anyone can provide me some insight on how to accomplish what I really want to do here, and help me remove the things it seems like I have screwed up, I'd greatly appreciate it.
    I tried to give as detailed a description of the problem and what I have done as possible without being too long-winded, but if you need to know anything else to help me, please ask; I certainly won't be offended!
    Thanks In Advance!
    Steve

    Thanks again, V.K., for your assistance and especially for the very prompt responses.
    I was unaware that I could create a volume on the HD non-destructively using disk utility. This may then turn out to be the better solution after all, but I will have to free up space on this HD and try that.
    Also, I was obviously unaware of the special treatment of file creation by TextEdit. I have been using this to test my various settings, and so the inheritance of ACLs has probably been working properly, I just have been testing it incorrectly. URGH!
    I created a file from Word in my wife's account, and it properly inherited the permissions of the company folder: barara - Custom, steve - Custom, barara - Read/Write, admin - Read Only, Everyone - Read Only
    I tried doing the chmod commands on $TMPDIR for both of us from each of our accounts, but I still have the same behavior for TextEdit files though.
    I changed the group on your shared folder to admin from wheel as you instructed with chgrp. I had already changed the umask to 002, and I just changed it back to 022 because it didn't seem to help. But now I know my testing was faulty. I will leave it this way though because I don't think it will be necessary to have it set to 002.
    I do apparently still have a problem though, probably as a result of all the things I have tried to get this work while I was testing incorrectly with TextEdit.
    I have just discovered that the "unknown user" only appears when I create a file from my wife's account. It happens with any file or folder I create in her account, and it exists for very old files and folders that were migrated from the old computer, i.e. new and old files and folders have permissions: barara - Read/Write, unknown user - Read Only, Everyone - Read Only.
    Apparently the unknown user gets the default permissions of a group, as the umask is currently set to 022 and unknown user now gets Read Only permissions on new items, but when I had umask set to 002, the unknown user got Read/Write permissions on new items.
    I realize this is now taking this thread in a different direction, but perhaps you know what might be the cause of this and how to correct or at least know where to point me to get the answer.
    Also, do you happen to know how to remove users from groups? I added myself and my wife to the Wheel group because that kept showing up as the default group for folders in /Users/Shared
    Thanks for your help on this, I just don't know how else one can learn these little "gotchas" without assistance from people like you!
    Steve
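    For readers hitting the same problem: the inherited-ACL approach discussed above can be set up with one Terminal command. A sketch, assuming the shared folder is /Users/Shared/Business and both accounts are in the staff group (adjust the path, group and permission list to taste - see man chmod for the full list of ACL permissions):

        # -R adds the entry to everything already in the folder;
        # file_inherit/directory_inherit make new items created there
        # pick it up automatically, regardless of the umask.
        sudo chmod -R +a "group:staff allow read,write,append,delete,readattr,writeattr,file_inherit,directory_inherit" /Users/Shared/Business

    You can confirm the entry took effect with ls -le on the folder.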

  • Extremely long "reading" times in media browser

    We are running CC2014 on 3 edit systems with a shared 10Gbit, 36TB Synology NAS. Each workstation has approximately 12TB of local RAID storage as well. We shoot P2 primarily, and I have a little over 17TB archived on the Synology. Lately, we have had extremely long "reading" times in Media Browser on all the machines. I don't quite understand how Premiere is indexing and displaying, as sometimes the "reading" time is relatively quick, i.e. under 10 seconds, and other times it can take 3-5 minutes to display clips within a directory.
    My Directory Structure is:
    Media
         P2 footage
              (P2 folders/ie. cards are all saved with their shoot dates here)
              Our total archive of P2 footage is around 70 cards.
    It appears that when Media Browser is pointed at a P2 card folder, it indexes the entire P2 footage directory (i.e. all 70 cards) rather than just the individual folder containing one card's footage. Is this the case?
    Would I get better read speeds in Media Browser if I organized my P2 footage into more directories, i.e. by year and quarter, or by year and month?
    I really need to know how the indexing works. Does Premiere cache preview files for each clip, and if so, how long are these previews valid? It seems that when I browse a new P2 card/folder, it has forgotten any preview files that were previously made and cached.
    Any explanation or help would be appreciated.
    We have copied large numbers of cards/folders from the Synology to our local RAIDs to see if the latency was due to the network, but the results are the same when we media browse the local RAID copies.

    We are working on this issue. Could you open a support case and submit your
    project so we can have support see if it is fixed in a later release?
    William Wheeler wrote:
    Is anyone experiencing long deploy times using Eclipse with the Portal/XML Beans facets? It takes approx. 30 mins to deploy our EAR file. The EAR contains:
    3 large schemas projects (using XML Bean Builder facets)
    1 Control util project
    1 Portal Web Project
    1 Ear Project
    3 Util projects
    Are there any validation options that can be turned off, or other actions to speed this up?
    Bill Wheeler

  • USB drive connected to Airport Extreme becomes read-only

    Hello
    I have a USB drive connected to my Airport Extreme 802.11n (2nd generation, firmware v7.6). To access the drive, I've set secure shared disks to "with accounts" and then defined a user account with read/write access. I connect to this drive using two Windows 7 SP1 64-bit computers.
    Reading and writing to the disk is fine. However, after a day or so, I can read from but cannot write to the disk! I then have to restart the USB drive to be able to write to it again. This has been happening for a while.
    Also connected to the Airport Extreme (using Ethernet) is a Lacie NAS drive (which runs a cut-down version of Linux). This works fine: I've never had any problems with it! It too is password protected.
    Please help!
    Thanks
    Praful

    Tried what you suggested last night - no change. What is odd, though, is that I can create a new folder on the external USB HDD (through the hub) and copy a file into it. I can then close the Finder, re-open it and see the folder and file I copied, but I still can't see all the data that is currently stored on the drive... Is this a delete-and-re-copy scenario?
    Thanks!

  • Can't Read/Write to External HDD-Permissions Problem

    I have an external Seagate HDD, formatted as FAT32, connected to an Airport Extreme Base Station. I am unable to read/write to this drive because Finder says I "don't have permissions" to do so.
    Following are details that may be important:
    1. I was able to read/write to this drive under Leopard but lost the ability under Snow Leopard, after the upgrade to SL.
    2. In Airport Utility, I set up (2) accounts for the External HDD, "Dad" with "Read/Write" access and "Mom" with "Read" access only. I have Guest Account privileges set to "Read Only". It has been this way for (2) years.
    3. Under "Get Info" in Finder, the only account listed is "everyone" with "read" permission only (this is same as Guest Acount in Airport Utility). I am unable to add new accounts because I get error message "The account name is not valid" even though I am using the account name that I am logged in under.
    4. I can change the Permissions on the "everyone" account (in Finder) by changing the permissions on the "Guest Account" in Airport Utility. When I do this, "everyone" in Finder now has "read/write" access but my two accounts, "Dad" and "Mom still don't show up in Finder.
    5. I also upgraded the internal HDD to 1TB and used Restore from Time Machine. I had problems with the restore and did a new install of Snow Leopard.
    Any ideas on how to make Finder show the two accounts I have set up in Airport Utility?

    My apologies for bumping this but I still don't have a solution...any help would be greatly appreciated!

  • Is it possible to get a refund if i bought an album with extremely bad sound quality?

    As asked in the title, is it possible to get a refund if I bought an album with extremely bad sound quality?
    I am very unsatisfied with my latest purchase, the album "Never Take Friendship Personal" by Anberlin...
    I can find all the songs in much better quality, even on YouTube...
    I bought two other albums by Anberlin at the same time, on which the sound quality is much, much better and perfectly clear.
    Is it possible to get some sort of refund???

    http://support.apple.com/kb/HT1933
    Regards.

  • Why doesn't Photoshop support read/write of .mpo files?

    I am actually blown away that I cannot find a single Photoshop plugin that reads and/or saves .mpo files. Does somebody know why? And why isn't anyone talking about this format? I find it hard to believe that no one in the entire Photoshop Windows forum has ever wondered about Photoshop's complete lack of interest in supporting .mpo files.
    If nobody knows how to read/write .mpo files directly through Photoshop, perhaps someone could point me to some documentation for writing a plugin for it. It's a really, really simple file format - there's no reason a plugin shouldn't already be available.
    Thanks in advance for your support.
    Jase
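    As a stopgap while Photoshop ignores the format: an .mpo is essentially two (or more) concatenated JPEGs plus MPF metadata, and exiftool can split one from the command line. A sketch - the tag name is from memory, so confirm it with exiftool -s on your file first:

        # Extract the second embedded JPEG (right-eye image):
        exiftool -b -MPImage2 photo.mpo > photo-right.jpg
        # Decoders stop at the first JPEG's end marker, so a renamed copy
        # of the .mpo opens as the left-eye image:
        cp photo.mpo photo-left.jpg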

    Well, wave of the future or not, it's being pushed hard. That is in a way good for us consumers; choice is always good.
    3D TVs and monitors require at least 120 Hz in order to function. What does that mean for people who spend all their time in front of a monitor? It's bringing about the availability of 120 Hz and beyond flat panels.
    Many people are prone to getting migraines due to refresh rates of 60 Hz or less. Most people getting migraines from sitting at work in front of a 60 Hz monitor all day long don't even realize it.
    I happen to be one of those people who get migraines, especially if I am doing artwork for 8-10 hours.
    I just picked up a 120 Hz monitor about 2 weeks ago because I am a techno junkie and love my toys; it is the Asus one that comes with the NVIDIA glasses.
    There is nothing wrong with my other monitor, a 24-inch HP. It's a 1920 x 1200, 5 ms response time, 60 Hz monitor.
    I didn't think I would even notice the difference between a 120 Hz and a 60 Hz monitor.
    Boy oh boy was I ever mistaken about not noticing. Sitting side by side, to me the picture difference is amazing. Now, I have not yet had an opportunity to do a real weekend artwork fest, but that will come with time.
    The 3D features with the glasses on I think are very cool; in games it's extremely noticeable, since you have more control over what you are looking at. I am not sure I would play a game end to end for hours and hours in 3D. I think that would likely make my head explode, but we'll see about that too; if not, I may be running into walls when I am done, since you tend to lose depth perception. Not to mention 3D glasses make it so you see each picture at 60 Hz in one eye and 60 Hz in the other, putting you back to a 60 Hz refresh and migraines.
    Anyway, way off subject, but to sum it up:
    Some good things can come from flavor-of-the-day fads. Granted, some bad sometimes does too... I mean, look at bell bottoms...

  • Incorrect data type when writing to FPGA Read/Write Control

    I have run into a problem this morning that is causing me a substantial headache.  I am programming a CompactRIO chassis running in FPGA mode (not using the scan engine) with LabVIEW 2012.  I am using the FPGA Read/Write Control function to pass data from the RT host to the FPGA target.  The data the RT host is sending comes from a Windows host machine (acting as the UI) and is received by the RT host through a network-published variable.
    The network published shared variable (shared between the RT and Windows system) is a Type Def cluster containing several elements, one of which is a Type Def cluster of fixed point numerics.  The RT system reads this shared variable and breaks out the individual elements to pass along to various controls on the FPGA code's front panel.  The FPGA's front panel contains a type def cluster (the same type def cluster, actually) of fixed point numerics.
    The problem comes in the RT code.  After I read the shared variable, I unbundle the cluster by name, exposing the sub-cluster of fixed point numerics.  I then drop an FPGA Read/Write Control on the RT block diagram and wire up the FPGA reference.  I left-click on the FPGA Read/Write Control and select the cluster of fixed point numerics.  I wire these together and get a coercion dot.  Being a coercion dot hater, I hover over the dot and see that the wire data type is correct (type def cluster of fixed point numerics), but the terminal data type is listed as a cluster containing a Boolean, code integer and source string, also known as an error cluster.  I delete the wire and check the terminal data type on the Read/Write Control, which is now correctly listed as a type def cluster of fixed point numerics.  Rewiring it causes the terminal to revert back to the error cluster.  I delete the wire again and right-click on the terminal to add a control.  Sure enough, a type def cluster of fixed point numerics appears.  Right-clicking and adding an indicator to the unbundle attached to the network shared variable produces the proper result.  So, until they are attached to each other, everything works fine.  When I wire these two nodes together, one spontaneously changes to an error cluster.
    Any thoughts would be appreciated.

    My apologies, I never got back to responding on this.  I regret that now, because I got it to work but never posted how.  I ran into the exact same problem today and returned to this post to read the fix.  It wasn't there, so I had to go through it all over again.
    The manifestation of the problem this time was that I was now reading from the Read/Write FPGA front panel control and writing to a network-published shared variable.  Both of these (the published shared variable and the front panel control) were based on a strict type-defined cluster, just like in the original post.  In this instance it was a completely different cluster in a completely different project, so it was not a one-off thing.
    In addition to getting the coercion dot (one instance becoming an error cluster, recall), LabVIEW would completely explode this time around.  If I saved the VI after changing the type definition (I was adding to the cluster) I would get the following error:
    Compile error.  Report this problem to N.I. Tech Support.  Copy cvt,csrc=0xFF
    LabVIEW would then crash hard and shut down without completing the save.  FYI, I'm running LabVIEW 12.0f3 32-bit.
    If I then reopened the RT code, the same crash would occur immediately, ad nauseam.  The only way to get the RT code to open was to change the type-defined cluster back to the way it was (prior to adding the new element).
    I didn't realize it last time around (what originally prompted this post), but I believe I was adding to a type def cluster when this occurred the first time.
    So, how did I fix it this time around? By that point I had tried many, many different things, so it is possible that something else fixed it.  However, I believe that all I had to do was build the FPGA code that the RT code was referencing.  I didn't even have to deploy it or run it... I just had to build it.  My guess is that the problem is that the FPGA Reference VI (needed to communicate with the FPGA) is configured (in my case) to reference a bit file.  When the development FPGA Main.vi ceases to match the bit file, I think bad things happen.  LabVIEW seems to get confused, because the FPGA Main.vi development code is up and shows the new changes (and hence has the updated type def), but when you ask the RT code to do something substantial (open, save, etc.), it refers to the old bit file that has not yet been updated.  That is probably why the error being thrown was a compile error.
    I'm going to have to do an additional round of changes, so I will test this theory.  Hopefully I will remember to update this post with either a confirmation or a retraction.

  • Internet is extremely bad atm

    Ever since yesterday my internet has been absolutely dire. Yesterday, when I came home from college and tried to connect, I was at 20k+ ping for around 4 hours. It then went down, and after that it was perfect for around 4-5 hours. The next morning it was extremely bad again. Later that afternoon it was perfect for a good few hours, and now it's turned really **bleep** again.
    I have never experienced problems like this before. Before, either the internet was down completely or working perfectly. For now it's working, but my ping is unbearable. I am not downloading or anything; it's definitely the internet.
    I live in the Fife area in Scotland - are there any known problems in this area?

    check your exchange here  http://usertools.plus.net/exchanges/mso.php
    http://usertools.plus.net/exchanges/?
    http://btbusiness.custhelp.com/app/service_status

  • Multi-channel read/write through FP data socket possible?

    Hello,
    I'm trying to talk to a cFP-1808 module from a desktop PC real-time host over Ethernet. I did not find any FieldPoint driver for desktop RT, so I'm forced to use data sockets to communicate with the cFP-1808. It seems that one can only read or write one FP data channel per socket. With a typical latency of tens of milliseconds per such read/write and a couple of hundred channels present, I'm looking at SECONDS per cycle time, which is really slow.
    Is there any workaround for this problem?

    You can also read and write to channels in LabVIEW.  First you have to configure your 1808 in MAX and save the IAK file.  You then load the IAK file in LabVIEW by selecting Browse in the FieldPoint IO Point In (constant or control).
    In the FieldPoint Device Selection dialog window, select the View Configurations tab.
    In this tab, navigate to your IAK file and name it in the text box next to where the full file path is displayed.
    In the Browse FieldPoint tab of the same window you should now be able to see all of your FieldPoint modules and channels.
    Mark
    LabVIEW R&D

  • SYSERROR -9026 BD Bad datapage,write/check count

    Hi,
    we're running MaxDB 7.5.38 on Linux, and we had an immediate shutdown last Friday.
    After checking the knldiag.err we found that an index was leading to a "Bad page - checksums not matching" error, resulting in this immediate shutdown.
    We found the index via "select * from roots where root=xxx" and dropped/recreated it afterwards. After this the database was running again (and is doing so currently).
    But one problem remains: we have not been able to create a backup since this crash. Every backup attempt fails with the message
    2008-06-22 21:00:23  1372 ERR 20004 Data     Bad page - checksums not matching
    2008-06-22 21:00:23  1372 ERR 20005 Data     Bad page - calculated checksum [ 193943944 ] checksum found in page [ 207442492 ]
    2008-06-22 21:00:23  1372 ERR 52015 SAVE     write/check count mismatch 1459521
    2008-06-22 21:00:24  1371 ERR 52012 SAVE     error occured, basis_err 300
    2008-06-22 21:00:24  1371 ERR 51080 SYSERROR -9026 BD Bad datapage,write/check count
    I already recreated the table to which the faulty index belonged, and I also ran a "check database structure extended" for this one table. The result was "checking of table xxxx successfully finished".
    What can we do to create a backup again?
    thanks..::GERD::..
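    One thing worth trying before the next backup attempt: a database-wide consistency check. Unlike the single-table check above, it reads every used permanent page - roughly the same set a data backup touches - so it should stumble over the same bad page and report which object owns it. A sketch, run from your SQL tool (statement as documented in the MaxDB manuals; expect heavy I/O while it runs):

        -- Recalculates the checksum of every permanent data page; a bad
        -- page is reported with its root, which you can map to an object
        -- via "select * from roots where root=xxx" as before.
        CHECK DATA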

    Hello Melanie,
    thanks also for answering.
    I got these two message blocks from knldiag.err, the first one from the crash and the second one from the failed backup:
    db crash------
    2008-06-20 15:47:34 23919 ERR 20013 IOMan    Bad page on data volume 2 blockno 381754
    2008-06-20 15:47:35 23919 ERR 20004 Data     Bad page - checksums not matching
    2008-06-20 15:47:35 23919 ERR 20005 Data     Bad page - calculated checksum [ 196750527 ] checksum found in page [ 207442492 ]
    2008-06-20 15:47:35 23919 ERR 20013 IOMan    Bad page on data volume 2 blockno 381754
    2008-06-20 15:47:37 23919 ERR 20025 IOMan    Bad data page - Requested pageno 1459521 (perm) read pageno 1459521
    2008-06-20 15:47:37 23919 ERR 20020 Data     Bad data page 1459521 belongs to root 426184 which is of filetype 'Index'
    failed backup-------
    2008-06-22 21:00:23  1372 ERR 20004 Data     Bad page - checksums not matching
    2008-06-22 21:00:23  1372 ERR 20005 Data     Bad page - calculated checksum [ 193943944 ] checksum found in page [ 207442492 ]
    2008-06-22 21:00:23  1372 ERR 52015 SAVE     write/check count mismatch 1459521
    2008-06-22 21:00:24  1371 ERR 52012 SAVE     error occured, basis_err 300
    2008-06-22 21:00:24  1371 ERR 51080 SYSERROR -9026 BD Bad datapage,write/check count
    So it seems it's the same page causing the two different problems. This led me to the assumption that we can get rid of the problem by renaming the table with the dropped index and dropping it after the data has been copied to a newly created table.
    I thought if the table is no longer there, the database will no longer use this bad page. But why does the backup want to access this page?
    Do I have to restart the database (I have never done this since the db crash on Friday)?
    ...GERD...
