UDP Parallel Read/Write

I have an application where I am trying to receive and transmit data using UDP on the same port. Ideally, I'd like this functionality separated into separate loops so that sending data is not dependent on receiving data and vice versa. The code I have written works fine on my RT system (cRIO, VxWorks) and is able to execute these tasks in parallel, but it fails on my Windows XP machine. I have attached a highly simplified version to illustrate the issue.
On the Windows machine, the UDP Read VI will terminate execution (even with a timeout of -1) if the UDP Write VI is called, and it returns error 66. On my RT system, the UDP Read VI continues to wait until it receives data (as expected), even if the UDP Write VI is called. Is this expected? Is there any way to get the code to execute the same way on my Windows machine?
Thanks,
Damien
PS. I've tried it on both LabVIEW 8.6.1 and 2011.
Attachments:
UDP Parallel Read Write.vi 14 KB

Hey Guys,
In case any of you are interested, or if anyone else is experiencing the same issue, I finally had an opportunity to dig into this a little deeper. The issue is more basic than just trying to read and write data in parallel using the UDP primitives. See the attached VI and run it with highlight execution.
The attached VI represents a client application which uses "UDP Write" to send a message to a server and "UDP Read" to read a response from that server (it is actually a slightly modified version of an NI example). Now, let's say the server goes down and the client is still trying to communicate. The client should still send out its message using "UDP Write". This operation does not return an error, since UDP is connectionless and doesn't care if anyone is listening. The next function, UDP Read, should wait for the full timeout and then return error 56. However, if you run the code, you will see that it does not wait for the full timeout but rather immediately returns error 66. This is the cause of the bug. I have confirmed that LabVIEW running on a real-time system (VxWorks) and running on Mac OS does not exhibit this behaviour; both function properly, returning error 56 only after waiting for the full timeout period. This bug is what causes my original code to error out even though a -1 is wired to the timeout of the UDP Read (it just happens to be in parallel with the UDP Write). There is something wrong with the UDP Read function on Windows systems. NI support has acknowledged this error on Windows systems and they are filing a corrective action request. I'll post that number once I get it.
(Just for fun, disable the write operation and watch the read function wait for the full timeout.)
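For anyone who wants to see the same behaviour outside LabVIEW, here is a minimal Java sketch of the pattern (not my VI; the port numbers and messages are made up): two loops share one UDP socket, one sending to a server that may be down, one blocking on a receive. On Windows, a send to an unreachable port can come back as an ICMP "port unreachable", which the socket stack reports as a reset on a later receive; that appears to be what LabVIEW surfaces as error 66, while error 56 is the normal timeout.

```java
// Minimal sketch (assumed ports/addresses): one UDP socket shared by a writer loop
// and a reader loop, like the parallel read/write VI. On Windows the receive can
// fail with a "connection reset" after a send to a server that is down.
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpParallelReadWrite {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(61557);          // local port is illustrative
        InetAddress server = InetAddress.getByName("127.0.0.1");    // server that may be down

        // Writer loop: send a request once a second, independent of the reader.
        Thread writer = new Thread(() -> {
            byte[] msg = "hello".getBytes();
            try {
                while (true) {
                    socket.send(new DatagramPacket(msg, msg.length, server, 61556));
                    Thread.sleep(1000);
                }
            } catch (Exception e) {
                System.out.println("write error: " + e);
            }
        });

        // Reader loop: block until a datagram arrives (timeout 0 = wait forever,
        // like wiring -1 to the UDP Read timeout).
        Thread reader = new Thread(() -> {
            byte[] buf = new byte[1024];
            try {
                socket.setSoTimeout(0);
                while (true) {
                    DatagramPacket p = new DatagramPacket(buf, buf.length);
                    socket.receive(p);   // on Windows this can throw after a failed send
                    System.out.println("got " + p.getLength() + " bytes");
                }
            } catch (Exception e) {
                System.out.println("read error: " + e);
            }
        });

        writer.start();
        reader.start();
    }
}
```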
Attachments:
UDP Client_2 0.vi 12 KB

Similar Messages

  • How to avoid db file parallel read for nestloop?

    After upgrading to 11gR2, one job took more than twice as long as it did on 10g and 11gR1 with compatibility set to 10.2.0.
    Same hardware. (See AWR summary below.) My analysis points to the nested loop doing an index range scan on the inner table's index segment,
    and then using db file parallel read to read data from the table segment; for reasons I don't know, the parallel read is very slow.
    Average wait is more than 300 ms. How can I influence the optimizer to choose db file sequential read to fetch data blocks from the inner table by tweaking
    parameters? Thanks. YD
    Begin Snap: 13126 04-Mar-10 04:00:44 60 3.9
    End Snap: 13127 04-Mar-10 05:00:01 60 2.8
    Elapsed: 59.27 (mins)
    DB Time: 916.63 (mins)
    Report Summary
    Cache Sizes
    Begin End
    Buffer Cache: 4,112M 4,112M Std Block Size: 8K
    Shared Pool Size: 336M 336M Log Buffer: 37,808K
    Load Profile
    Per Second Per Transaction Per Exec Per Call
    DB Time(s): 15.5 13.1 0.01 0.01
    DB CPU(s): 3.8 3.2 0.00 0.00
    Redo size: 153,976.4 130,664.3
    Logical reads: 17,019.5 14,442.7
    Block changes: 848.6 720.1
    Physical reads: 4,149.0 3,520.9
    Physical writes: 16.0 13.6
    User calls: 1,544.7 1,310.9
    Parses: 386.2 327.7
    Hard parses: 0.1 0.1
    W/A MB processed: 1.8 1.5
    Logons: 0.0 0.0
    Executes: 1,110.9 942.7
    Rollbacks: 0.2 0.2
    Transactions: 1.2
    Instance Efficiency Percentages (Target 100%)
    Buffer Nowait %: 99.99 Redo NoWait %: 100.00
    Buffer Hit %: 75.62 In-memory Sort %: 100.00
    Library Hit %: 99.99 Soft Parse %: 99.96
    Execute to Parse %: 65.24 Latch Hit %: 99.95
    Parse CPU to Parse Elapsd %: 91.15 % Non-Parse CPU: 99.10
    Shared Pool Statistics
    Begin End
    Memory Usage %: 75.23 74.94
    % SQL with executions>1: 67.02 67.85
    % Memory for SQL w/exec>1: 71.13 72.64
    Top 5 Timed Foreground Events
    Event Waits Time(s) Avg wait (ms) % DB time Wait Class
    db file parallel read 106,008 34,368 324 62.49 User I/O
    DB CPU 13,558 24.65
    db file sequential read 1,474,891 9,468 6 17.21 User I/O
    log file sync 3,751 22 6 0.04 Commit
    SQL*Net message to client 4,170,572 18 0 0.03 Network

    It's not possible to say anything just by looking at the events. You must understand that Statspack and AWR actually aggregate the data and then show the results. There may very well be other areas that also need to be looked at rather than just focusing on one event.
    You have not mentioned any other info about the wait events, like their timings and so on. Please provide that too.
    And if I understood your question correctly, you asked:
    How to avoid these wait events?
    What may be the cause?
    I am afraid that it's not possible to discuss each of these wait events here in complete detail, nor everything to do when you see them. Please read the Performance Tuning guide, which describes these wait events and the corresponding actions.
    Please read and follow this link,
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/instance_tune.htm#i18202
    Aman....

  • User I/O and db file parallel read is high in AWR report

    Hi,
    We have one performance issue during a job execution.
    From the AWR report we identified one query on a table with millions of records that was causing problems, and we then fine-tuned that query by changing its code and using optimizer hints. It is executed in PL/SQL batches. After fine tuning, on the first batch execution (the first 5000 records) the query takes only 5 minutes, but on the consecutive batches it takes more time (more than 30 minutes).
    From the awr report I got the statistics as
    Release : 11.2.0.2.0
    Instance Efficiency Percentages (Target 100%)
    Buffer Nowait %: 100.00 Redo NoWait %: 100.00
    Buffer Hit %: 85.44 In-memory Sort %: 99.98
    Library Hit %: 99.76 Soft Parse %: 99.15
    Execute to Parse %: 88.91 Latch Hit %: 100.00
    Parse CPU to Parse Elapsd %: 87.32 % Non-Parse CPU: 98.65
    The buffer hit % is good. Each batch execution processes a different set of records.
    Top 5 Timed Foreground Events
    Event Waits Time(s) Avg wait (ms) % DB time Wait Class
    db file parallel read 120,485 42,540 353 89.60 User I/O
    DB CPU 3,794 7.99
    db file sequential read 145,074 606 4 1.28 User I/O
    db file scattered read 70,030 556 8 1.17 User I/O
    direct path write temp 12,423 21 2 0.04 User I/O
    So the I/O is our main concern since that query contains one table with millions of records.
    Host CPU (CPUs: 24 Cores: 24 Sockets: 4)
    Load Average Begin Load Average End %User %System %WIO %Idle
    1.40 1.45 0.6 0.3 3.7 99.0
    Load is also normal.
    From the Time model statistics , sql execute elapsed time is 98.27% of db time and only 7.99% is that of DB CPU.
    Memory Statistics
    Begin End
    Host Mem (MB): 64,318.0 64,318.0
    SGA use (MB): 30,720.0 30,720.0
    PGA use (MB): 488.2 497.1
    % Host Mem used for SGA+PGA: 48.52 48.54
    Both sga_max_size and sga_target are 32,212,254,720 bytes (32 GB), and
    pga_aggregate_target is 629,145,600 bytes (600 MB);
    from this it is evident that memory is still available (so simply adding memory is not the answer).
    The sql statistics for that query shows like that
    Elapsed Time (s) Executions Elapsed Time per Exec (s) %Total %CPU %IO SQL Id SQL Text
    44,950.03 55 817.27 94.67 6.99 94.72 79dgmrxh4kv74 SELECT /*+ index(cdr_data cdr_...
    I can't tell whether the problem is on the database side or with the query.
    If the problem is with the query, then how was it executed in 5 minutes for the first batch
    (all the batches have 5000 records each)?
    And how can we reduce the db file parallel read waits?
    Your valuable advice will be greatly appreciated.
    Thanks in advance
    Manoj Kumar N

    "db file parallel read" is likely to be associated with something like index prefetching.
    See:
    http://www.freelists.org/post/oracle-l/RE-Calculating-LIOs,11
    http://aprakash.wordpress.com/2012/05/29/index-range-scan-and-db-file-scattered-read-as-session-wait-event/
    http://jonathanlewis.wordpress.com/2006/12/15/index-operations/
    Tune the SQL.
    Review the execution plan.
    Check whether the statistics are accurate.
    Review whether the index hint (and others that we can't see) is appropriate.

  • FPGA Read/Write Control Issues

    Hello all!  Rather new to using FPGA, but I have an interesting issue that's popping up.
    Currently pulling in raw voltage data from a set of sensors (pressure transducers, load cells, etc.) through a cRIO DAQ. Have the FPGA file set up to pull in that data already and have the main VI and all the sub-VIs working just fine.
    What I'm trying to do is save the raw voltage data (TDMS files) on the lower level and convert and display on the upper level so that I don't have to convert and save (to speed up saving data). So that leaves 3 distinct levels/sections:
    On FPGA that pulls in the raw data
    On FPGA that saves the raw data
    Main VI that does all the controls/conversions/displays etc.
    Number two is where I'm having an issue.  I want to save all the data in parallel so I'm creating a save FPGA for each I/O device (8 Relays to command solenoid valves, 3 Pressure Transducers, 1 Load Cell, 4 Thermocouples).  To do this I want to create a separate VI for each device (not sure if that's a smart thing to do).
    The issue I'm having is when I use the FPGA Read/Write control to read in from the Target and save to the TDMS.  When I only use a single FPGA target reference the lines are broken, but as soon as I switch to two targets, it now works.
    Any reason why it might be doing this?  Any ideas/suggestions at all on how to go about setting this up in general?
    Thanks!

    Hi,
    HySoR wrote:
    The issue I'm having is when I use the FPGA Read/Write control to read in from the Target and save to the TDMS.  When I only use a single FPGA target reference the lines are broken, but as soon as I switch to two targets, it now works.
    Could you take a screenshot and post this part of your code? I'm having trouble understanding what you are describing.
    Craig H. | CLA | Systems Engineer | National Instruments

  • What is the best software programs that I can use to read, write and modify data / files on external HD (NTFS format i.e.  Windows) ?

    Hi guys,
    I’m new to Mac and have a MacBook Pro Lion OS (10.6.8 I think !!!) with Parallels 7 (Windows 7) installed. Can someone please tell me what is the best software program that I can use to read, write and modify data / files on external HD (NTFS format) from the Mac OS ? I heard of Paragon and Tuxera NTFS. Are they free ? Are they good ? Are there any other software programs out there ? I heard that some people have issues with Paragon.
    Thanks.

    Your best bet would be to take the drive to the oldest Windows PC that is compatible with it, grab the files off, right-click and format it exFAT (XP users can download exFAT support from Microsoft), and then put the files back on.
    Macs can read and write all Windows file formats except write to NTFS (and in some cases not even read it), so if you can change the format of the drive to exFAT (all data has to be removed first) then you will have a drive that doesn't require paid third-party NTFS software (a license fee goes to Microsoft) for updates.
    It's also one less hassle to deal with.
    Drives, partitions, formatting w/ Macs + PCs

  • FPGA Read/Write Control Function Issues

    Hello all!  Rather new to using FPGA, but I have an interesting issue that's popping up.
    Currently pulling in raw voltage data from a set of sensors (pressure transducers, load cells, etc.) through a cRIO DAQ. Have the FPGA file set up to pull in that data already and have the main VI and all the sub-VIs working just fine.
    What I'm trying to do is save the raw voltage data (TDMS files) on the lower level and convert and display on the upper level so that I don't have to convert and save (to speed up saving data). So that leaves 3 distinct "levels/sections":
    On FPGA that pulls in the raw data
    On FPGA that saves the raw data
    Main VI that does all the controls/conversions/displays etc.
    Number two is where I'm having an issue.  I want to save all the data in parallel so I'm creating a save FPGA for the I/O devices (8 Relays to command solenoid valves, 3 Pressure Transducers, 1 Load Cell, 4 Thermocouples).
    The issue I'm having is when I use the FPGA Read/Write control to read in from the Target and save to the TDMS.  When I only use a single FPGA target reference the lines are broken, but as soon as I switch to two targets, it now works.
    I've attached a screen cap of the current problem.  The set-up on the bottom (with only one target) doesn't work.  But the second I add more than one target, it works.
    Any reason why it might be doing this?  Any ideas/suggestions at all on how to go about setting this up in general?
    Thanks!
    Attachments:
FPGAError.jpg 35 KB

    HySoR,
    You might check the documentation for the "data" terminal of "TDMS Write" (http://zone.ni.com/reference/en-XX/help/371361H-01/glang/tdms_file_write/). A single DBL element is not accepted, but a 1D DBL array is accepted.
    data is the data to write to the .tdms file. This input accepts the following data types:
    Analog waveform or a 1D array of analog waveforms
    Digital waveform
    Digital table
    Dynamic data
    1D or 2D array of:
    Signed or unsigned integers
    Floating-point numbers
    Timestamps
    Booleans
    Alphanumeric strings that do not contain null characters

  • How do you create default Read/Write Permissions for more than 1 user?

    My wife and I share an iMac, but use separate User accounts for separate mail accounts, etc.
    However, we have a business where we both need to have access to the same files and both have Read/Write permissions on when one of us creates a new file/folder.
    By default new files and folders grant Read/Write to the creator of the new file/folder, and read-only to the Group "Staff" in our own accounts or "Wheel" in the /Users/Public/ folder, and read-only to Everyone.
    We are both administrators on the machine, and I know we can manually override the settings for a particular file/folder by changing the permissions, but I would like to set things up so that Read/Write permissions are assigned to both of us in the folder that holds our business files.
    It is only the 2 of us on the machine, we trust each other, and we need to have complete access to the many files that we share. I have archiving programs running so I can get back old versions if we need them, so I'm not worried about us overwriting a file with bad info. I'm more concerned about us having duplicates that are not up to date in our respective user accounts.
    Here is what I have tried so far:
    1. I tried to just set the permissions of the containing folder with us both having read/write permissions, and applied that to all enclosed items.
    RESULT -> This did nothing for newly created files or folders, they still had the default permissions of Read/Write for the creating User, Read for the default Group, Read for Everyone
    2. I tried using Sandbox ( http://www.mikey-san.net/sandbox/ ) to set the inheritance of the folder using the methods laid out at http://forums.macosxhints.com/showthread.php?t=93742
    RESULT -> Still this did nothing for newly created files or folders, they still had the default permissions of Read/Write for the creating User, Read for the default Group, Read for Everyone
    3. I have set the umask to 002 ( http://support.apple.com/kb/HT2202 ) so that new files and folders have a default permission that gives the default group Read/Write permissions. This unfortunately changes the default for the entire computer, not just a given folder.
    I then had to add my wife's user account to the "Staff" group because for some reason her account was not included in it. I think this is due to the fact that her account was ported into the computer when we upgraded, whereas mine was created new. I read something about that somewhere, but don't recall where now. I discovered what groups we were each in by using the Terminal and typing "groups username", where username was the user I was checking on.
    I added my wife to the "Staff" group, and both of us to the "Wheel" group using the procedures I found at
    http://discussions.apple.com/thread.jspa?messageID=8765421&#8765421
    RESULT -> I could create a new file using TextEdit and save it anywhere in my account and it would have the permissions: My Username - Read/Write, "Staff" or "Wheel" (depending on where I saved it) - Read/Write, Everyone - Read Only, as expected from the default umask.
    I could then switch over to my wife's account, open the file, edit it, and save it, but then the permissions changed to: Her Username - Read/Write, (unknown) - Read/Write, Everyone - Read Only.
    And when I switch back to my account, now I can open the file, but I can't save it with my edits.
    I'm at my wits' end with this, and I can't believe it is impossible to create a common folder that we can both put files into and have Read/Write permissions on, like a true shared folder. Anyone who has used Windows knows what you can do with the Shared folder in that operating system, i.e. anyone with access can do anything with those files.
    So if anyone can provide me some insight on how to accomplish what I really want to do here, and help me get my system back by removing the things it seems like I have screwed up, I would greatly appreciate it.
    I tried to give as detailed a description of the problem and what I have done as possible without being too long-winded, but if you need to know anything else to help me, please ask; I certainly won't be offended!
    Thanks In Advance!
    Steve

    Thanks again, V.K., for your assistance and especially for the very prompt responses.
    I was unaware that I could create a volume on the HD non-destructively using disk utility. This may then turn out to be the better solution after all, but I will have to free up space on this HD and try that.
    Also, I was obviously unaware of the special treatment of file creation by TextEdit. I have been using this to test my various settings, and so the inheritance of ACLs has probably been working properly, I just have been testing it incorrectly. URGH!
    I created a file from Word in my wife's account, and it properly inherited the permissions of the company folder: barara - Custom, steve - Custom, barara - Read/Write, admin - Read Only, Everyone - Read Only
    I tried doing the chmod commands on $TMPDIR for both of us from each of our accounts, but I still have the same behavior for TextEdit files though.
    I changed the group on your shared folder to admin from wheel as you instructed with chgrp. I had already changed the umask to 002, and I just changed it back to 022 because it didn't seem to help. But now I know my testing was faulty. I will leave it this way though because I don't think it will be necessary to have it set to 002.
    I do apparently still have a problem though, probably as a result of all the things I have tried to get this work while I was testing incorrectly with TextEdit.
    I have just discovered that the "unknown user" only appears when I create a file from my wife's account. It happens with any file or folder I create in her account, and it exists for very old files and folders that were migrated from the old computer, i.e. new and old files and folders have permissions: barara - Read/Write, unknown user - Read Only, Everyone - Read Only
    Apparently the unknown user gets the default permissions of a group, as the umask is currently set to 022 and unknown user now gets Read Only permissions on new items, but when I had umask set to 002, the unknown user got Read/Write permissions on new items.
    I realize this is now taking this thread in a different direction, but perhaps you know what might be the cause of this and how to correct or at least know where to point me to get the answer.
    Also, do you happen to know how to remove users from groups? I added myself and my wife to the Wheel group because that kept showing up as the default group for folders in /Users/Shared
    Thanks for your help on this, I just don't know how else one can learn these little "gotchas" without assistance from people like you!
    Steve

  • An alert message pops up upon opening saying could not initiate application security component, and it says to check to see if profile has no read/write restrictions.

    An alert message pops up upon opening saying it could not initiate the application security component, and it says to check that the profile has no read/write restrictions. Then when it opens, all of my saved passwords are gone; I use a master password and it is disabled. When I try to enter a new one it says it can't change the password. I can't even open Yahoo e-mail; it says that my SSL security is down, but when I check, it is enabled. I'm just very confused as to what's going on.
    == This happened ==
    Every time Firefox opened
    == 5/14/2010 ==
    == User Agent ==
    Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1064 Safari/532.5

    See [[Could not initialize the browser security component]]
    Rename (or delete) secmod.db (secmod.db.old) in the [http://kb.mozillazine.org/Profile_folder_-_Firefox Profile Folder] in case there is a problem with the file.

  • System Folder errors after I changed all permissions on HD to read & write

    Hi,
    Two things may have caused problems on my new 2010 iMac (Snow Leopard), and AppleCare is closed, so I would really appreciate some help as I have urgent work.
    1) INCORRECT PERMISSIONS
    I have been stupid. I clicked on Macintosh HD and changed all permissions to read & write because I wanted to be sure I could open and edit all documents on other computers.
    I ran Disk Utility Repair Permissions from the install disc, but I am still getting system error messages, and my HP printer won't work.
    The first message, in Repair Permissions, said: Warning: SUID file System/Library/Cores has been modified and will not be repaired. I have read a support doc on this which says no need to worry but I don't like it and would like to fix this.
    More importantly, my HP printer won't work, displays error beside the document in print dialogue box.
    Deleting the printer and re-adding it didn't work, so I downloaded new drivers and tried to install them, which is when I got the second system error message: System extension System/Library/Extensions/BJUSBLoad.kext was installed improperly and cannot be used. Please try reinstalling it or contact the product's vendor for an update.
    I checked the permissions on the file and they were still wrong despite Repair Permissions, allowing everyone to read & write. So I have now clicked on the entire System folder and changed the permissions to: System read & write, admin read only, everyone read only.
    Will this fix it or do I need to do something else, such as check ownership, to make sure all permissions on the computer are now correct?
    2) MEMORY STICK SHUT DOWN MY IMAC
    Additionally (though I don't think this had anything to do with my problems), I inserted a Sandisk USB memory stick the other day and it immediately shut down the computer. When I inserted it into my Macbook it initially rejected it and gave me a message saying the device wanted too much power so it had ejected it to prevent damage to my computer. When I tried again it was OK. I totally reformatted the stick in case there was something harmful on it, but should I now bin the stick as faulty? Scared to use it again.
    3) IS IT BEST TO REINSTALL ENTIRE SOFTWARE?
    If I do a reinstall of all the software from the install disc, will it wipe out all my data, such as Mail, documents, bookmarks and other apps?
    I would back-up, but if I try and back up files on my external drive it will automatically do a Time Machine back-up and I don't want to do that in case it backs-up all the corrupted files. Otherwise, I wouldn't mind starting again just to be sure all is well.
    Expert advice would be very much appreciated.
    Thank you
    Sarah

    Oh, silly really. I was in a hurry and working on docs that I needed to take to the office and open on another computer there.
    But when I checked the permissions on the doc it said I could read & write but everyone else was read only.
    I thought if I opened it on another machine I'd be stuck with read-only access and not be able to work on it there. I don't think I could change it, so to avoid any future problems like that I thought I would change everything on the machine!
    Yikes. Won't do that again
    Sarah

  • Windows Server 2012 - Hyper-V - iSCSI SAN - All Hyper-V Guests stops responding and extensive disk read/write

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V with a 2 node cluster connected to a iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM, 2 teamed NICs dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. This is the primary host and normally all VMs run on this host.
    HP ProLiant G5, 20 GB RAM, 1 NIC dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. This is the secondary host and is intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts and the scheduled ShadowCopy (previous versions of files) is switched off.
    iSCSI SAN:
    QNAP NAS TS-869 Pro, 8 INTEL SSDSA2CW160G3 160 GB in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both the network cards of the Hosts that are dedicated to the Storage and the Storage itself are connected to the same switch and nothing else is connected to this switch.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We have applied the most resent updates to the Hosts, WMs and iSCSI SAN about 3 weeks ago with no change in our problem and we continually update the setup.
    Normal operation
    Normally this setup works just fine and we see no real difference in startup, file copy and processing speed in LoB applications of this setup compared to a single host with 2 10,000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally
    we see speeds up to 400 Mbit/s of combined read/write, for instance during file repair.
    Our Problem
    Our problem is that for some reason all of the VMs stop responding or respond very slowly, and you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    Symptoms (i.e. this happens, or does not happen, at the same time)
    If we look at Resource Monitor on the host, we see that there is often an extensive read from a VHDX of one of the VMs (40-60 MByte/s) and a combined write to many files in \HarddiskVolume5\System Volume Information\{<someguid and no file extension>}.
    See image below.
    The combined network speed to the iSCSI SAN is about 500-600 Mbit/s.
    When this happens it is usually during and after a VSS ShadowCopy backup, but it has also happened during hours when no backup should be running (i.e. during daytime when the backup finished hours ago according to the log files). There are, however,
    not that extensive writes to the backup file that is created on an external hard drive, and this does not seem to happen during all backups (we have manually checked a few times, but it is hard to say since this error does not seem to leave any traces in Event
    Viewer).
    We cannot find any indication that the VMs themselves detect any problem, and we see no increase in errors (for example storage-related errors) in the event log inside the VMs.
    The QNAP uses about 50% processing power on all cores.
    We see no dropped packets on the switch.
    (I have split the image to save horizontal space).
    Unable to recreate the problem / find definitive trigger
    We have not succeeded in recreating the problem manually by, for instance, running chkdsk or defrag in the VMs and hosts, copying and removing large files to VMs, or running CPU- and disk-intensive operations inside a VM (for instance scanning and repairing a database file).
    Questions
    Why do all VMs stop responding, and why are there such intensive reads/writes to the iSCSI SAN?
    Could it be anything in our setup that cannot handle all the read/write requests? For instance the iSCSI SAN, the hosts, etc.?
    What can we do about this? Should we use MultiPath IO instead of NIC teaming to the SAN, limit bandwidth to the SAN, etc.?

    Hi,
    > All VMs are using dynamic disks (as recommended by Microsoft).
    If this is a testing environment, it's okay, but if this is a production environment, it's not recommended. Fixed VHDs are recommended for production instead of dynamically expanding or differencing VHDs.
    Hyper-V: Dynamic virtual hard disks are not recommended for virtual machines that run server workloads in a production environment
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx
    > This is the primary host and normally all VMs run on this host.
    According to your posting, we know that you have Cluster Shared Volumes in the Hyper-V cluster, but why not distribute your VMs across the two Hyper-V hosts?
    Use Cluster Shared Volumes in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/en-us/library/jj612868.aspx
    > 2 teamed NIC dedicated to iSCSI storage.
    Use Microsoft MultiPath IO (MPIO) to manage multiple paths to iSCSI storage. Microsoft does not support teaming on network adapters that are used to connect to iSCSI-based storage devices. (At least it was not supported up to Windows Server 2008 R2. Although
    Windows Server 2012 has a built-in network teaming feature, I have not seen an article which declares that Windows Server 2012 network teaming supports iSCSI connections.)
    Understanding Requirements for Failover Clusters
    http://technet.microsoft.com/en-us/library/cc771404.aspx
    > I have seen using MPIO suggests using different subnets, is this a requirement for using MPIO
    > or is this just a way to make sure that you do not run out of IP adressess?
    What I found is: if it is possible, isolate the iSCSI and data networks that reside on the same switch infrastructure through the use of VLANs and separate subnets. Redundant network paths from the server to the storage system via MPIO will maximize availability
    and performance. Of course you can set these two NICs in separate subnets, but I don’t think it is necessary.
    > Why should it be better to not have dedicated wireing for iSCSI and Management?
    It is recommended that the iSCSI SAN network be separated (logically or physically) from the data network workloads. This ‘best practice’ network configuration optimizes performance and reliability.
    Check that and modify cluster configuration, monitor it and give us feedback for further troubleshooting.
    For more information please refer to following MS articles:
    Volume Shadow Copy Service
    http://technet.microsoft.com/en-us/library/ee923636(WS.10).aspx
    Support for Multipath I/O (MPIO)
    http://technet.microsoft.com/en-us/library/cc770294.aspx
    Deployments and Tests in an iSCSI SAN
    http://technet.microsoft.com/en-US/library/bb649502(v=SQL.90).aspx
    Hope this helps!
    TechNet Subscriber Support
    If you are a TechNet Subscription user and have any feedback on our support quality, please send your feedback
    here.
    Lawrence
    TechNet Community Support

  • When I open the browser, an alert appears regarding the profile directory saying that read/write restrictions should be changed, and Firefox doesn't work after that. I uninstalled and re-installed Firefox 6.0.2. How can I overcome this problem?

    "Could not initialize the application's security component. The most likely cause is problems with files in your application's profile directory. Please check that this directory has no read/write restrictions and your hard disk is not full or close to full. It is recommended that you exit the application and fix the problem. If you continue to use this session, you might see incorrect application behaviour when accessing security features." This is the alert I get whenever I try to open Firefox.

    See:
    *https://support.mozilla.com/kb/Could+not+initialize+the+browser+security+component

  • How can I make sure files I transfer stay at read/write instead of read?

    My employer recently got six brand new Mac Pros for our A/V guys. Well, I wasn't able to get migration to work (it wouldn't detect the slave drives, only the master), so I had to put files onto an external drive (a LaCie 1TB) and put them on the new Mac Pro, OS 10.5.
    We're finding out that the permissions of the files we're moving are being changed to read-only instead of remaining read/write, or so it seems.
    We've also got these machines on a closed network so we can share files between different areas of the office. Each computer has an administrator account, allows sharing, and gives those it is sharing with read/write permissions. But if I transfer a file from machine A to machine B, the file comes out as read-only.
    I'm thinking we jacked up the file sharing somewhere somehow. Curious if anyone might be able to take a crack at this. We're all new to leopard.
    Screenshots:
    Thanks

    Usually, though, they are techs, and so geeky that in Windows they set themselves up as admins with unlimited privileges. So what they do is go to account settings and just delete your account; and when they delete it, they delete everything, including all your personal files.
    Most, though, find that too troublesome, so they just reformat the drive with a backup they built for that computer.
    Most of what you can do is delete temporary internet files, cookies, and any saved passwords on your account, usually all found under options/tools in most web browsers. For Windows, most files are on the desktop or in My Music, My Downloads, My Documents, etc.

  • How to read, write and save a file?

    Hello everybody,
    I have a question about reading, writing and saving something to a file (e.g. a text file).
    For example: I want to create a file about football in which there are many keywords such as "football", "touch down", "quarterback", "ball", "linebacker", "coach", "player"...
    Now I want to open it to read it, write one more word, "head" (a synonym for "coach"), and store it for use next time. How could I do that?
    Could anybody help me about that?
    Thank you very much in advance.
    still_learn

    Take a look in the API at FileInputStream and FileOutputStream.
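    Building on that pointer, here is a minimal sketch of the idea (the file name and words are just examples, and it uses the java.nio.file.Files convenience methods rather than raw FileInputStream/FileOutputStream): read the keyword file, add the synonym, and write everything back so it is there the next time the program runs.

    ```java
    // Minimal sketch (assumed file name "football.txt"): keep the keywords one per line,
    // read them into a list, add the new word, and save the file for the next run.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;

    public class KeywordFile {
        public static void main(String[] args) throws IOException {
            Path file = Paths.get("football.txt");

            // Read the existing keywords, or start fresh if the file does not exist yet.
            List<String> words = Files.exists(file)
                    ? new ArrayList<>(Files.readAllLines(file))
                    : new ArrayList<>();

            // Add the synonym only once, then store the whole list back to disk.
            if (!words.contains("head")) {
                words.add("head");
            }
            Files.write(file, words);

            System.out.println("stored keywords: " + words);
        }
    }
    ```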

  • SharePoint read/write list item to another DB/file

    Hi All,
    I am working on MOSS 2007. I have the following requirement.
    There are around 50 document libraries on a site collection. I need to store all the items from these libraries locally (it could be a DB, CSV, Excel or Access) and read/write them periodically for some calculations/comparisons.
    Note: 1) The VM that I am working on doesn't have MS Office installed, so I can't try Access or Excel.
             2) Is there a way to use Access without needing to install MS Office?
    Please let me know the best approach for the above task.
    Any help is greatly appreciated.
    Regards.

    Hi All,
    Can anyone help me with this?
    Thanks.

  • Sharing Mac iTunes library with Windows PC (I want to be able to read/write)

    Greetings from a new Mac user. I recently purchased a 27" iMac running OS X 10.7.3. This iMac is now connected to my home network/workgroup.
    - I want to build a new iTunes Library on my 1TB internal Mac HD.
    - I want my 3 non-Mac machines (1 XP Professional and 2 Windows 7 Home) to be able to view the contents of my new (not yet built) iTunes Library resident on my iMac and of course be able to
    - play that content.
    - have play counts from my Windows-based PCs saved back to my iTunes Library.
    - be able to rate a song from any computer (Mac or Windows) within this iTunes Library and have that rating saved back to this iTunes Library.
    I don't want to have to rely on iTunes "Shared Library" function. In other words I want all PCs to be able to read/write to the same iTunes Library file.
    Before I purchased my iMac, my iTunes Library existed on one of my Windows-file-system external drives. The prerequisites for allowing multiple Windows PCs to access/share/write to the same iTunes Library were to:
    - share the Windows folder on the external drive in which the iTunes Library file was stored (allowing network users to be able to change the files), and
    - make sure all Windows-based PCs ran the same iTunes software version.
    Assume: I don't want to change the version of Windows iTunes software from the currently installed 10.5.0 version.
    Question 1: Can my Windows PCs access and utilize the same iTunes library file that I plan to build on my iMac hard drive?
    Question 2: What older version of Mac iTunes software should I install on my iMac in order to allow the Windows PCs to access and use the iMacs iTunes library file?
    Any help would be appreciated. Thanks.

    Hmmm...  thanks for the reply.  This of course, yields another question.  When you say 'you could format to FAT32 which both can use'; do you mean format the iTunes Hard drive on which my iTunes music now resides?  If so, this would mean I would need to copy the music off the internal iMac hard drive, then format the drive to FAT32, then copy the music back.
    Is this what you meant?
