Execution/exit of DIO single read/write.vi

Hi all,
System - Windows NT 4.0, LabVIEW 7.0, PCI-DIO-96 card.
Info -
I am using the "DIO single read/write.vi" to update ports on a custom board. The LabVIEW code has an outer For Loop that executes 64 times (to update the ports 64 times). Inside the loop is a sequence structure. The first sequence frame uses PPI B Port A of the DIO-96 in output Mode 0 (no handshake) to update one target port. The next frame uses PPI A Port A in output Mode 1 (handshake) to update a different target port. The DIO PPIs are always used in the same mode and the same data direction.
Question 1 -
When the VI executes, does it look at the iteration input and leave the PPI config register alone if the iteration input is nonzero? Specifically, during my system init routine, the iteration input is wired to the iteration terminal of the For Loop. Later in the code (after exiting init) the VI is used to do updates in other structures. I want all subsequent executions of the VI to use the same configuration as initialized. If I tie a nonzero numeric constant to the iteration input of the VI during subsequent executions, will the VI leave the 8255 in the same configuration as when the init routine was exited?
Question 2 -
I need to wait for the handshake in the second sequence frame to complete before the sequence is exited. I assume that when the VI is used inside of a sequence, it is analogous to a subroutine call. Does the VI wait for the handshake to complete before returning? In other words, will the VI complete the handshake with my target port before the sequence frame is exited? If not, any suggestions on how to wait for completion?
TIA - Charlie

Charlie,
Yes, if the iteration input to DIO Single Read/Write.vi is greater than zero, configuration will not take place. Furthermore, the VI will only return once execution has completed.
Good luck with your application.
Spencer S.

Similar Messages

  • Oracle Coherence: first read/write operation takes more time

    I'm currently testing with Oracle Coherence (the Java and C++ versions), and in both versions, for writes to any local, distributed, or near cache, the first read/write operation takes more time compared to the next consecutive read/write operations. Is this because of boost operations happening inside the actual HashMap, or serialization, or the memory-mapped implementation? What techniques can we use to improve the performance of this first read/write operation?
    Currently I'm doing a single read/write operation after fetching the NamedCache instance. Please let me know whether there are any other techniques available for boosting Coherence cache performance.

    In which case, why bother using Coherence? You're not really gaining anything, are you?
    What I'm trying to explain is that you're probably not going to get that "micro-second" level performance on a fully configured Coherence cluster, running across multiple machines, going via proxies for c++ clients. Coherence is designed to be a scalable, fault-tolerant, distributed caching/processing system. It's not really designed for real-time, guaranteed, nano-second/micro-second level processing. There are much better product stacks out there for that type of processing if that is your ultimate goal, IMHO.
    As you say, just writing to a small, local Map (or array, List, Set, etc.) in a local JVM is always going to be very fast - literally as fast as the processor running in the machine. But that's not really the focus of a product like Coherence. It isn't trying to "out gun" what you can achieve on one machine doing simple processing; Coherence is designed for scalability rather than outright performance. Of course, the use of local caches (including Coherence's near caching or replicated caching), can get you back some of the performance you've "lost" in a distributed system, but it's all relative.
    If you wander over to a few of the CUG presentations and attend a few CUG meetings, one of the first things the support guys will tell you is "benchmark on a proper cluster" and not "on a localised development machine". Why? Because the difference in scalability and performance will be huge. I'm not really trying to deter you from Coherence, but I don't think it's going to meet your requirement, when fully configured in a cluster, of "1 Micro seconds for 100000 data collection" on a continuous basis.
    Just my two cents.
    Cheers,
    Steve
    NB. I don't work for Oracle, so maybe they have a different opinion. :)
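    On the narrow warm-up point from the question: much of the first-operation cost is one-off work (cluster join, connection setup, serializer and class initialization), so a common mitigation is simply to touch the cache once at startup, before any latency-sensitive path runs. A minimal sketch, assuming the classic CacheFactory API; the cache name and keys are placeholders:
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    public class CacheWarmup {
        public static void main(String[] args) {
            // The first access pays for cluster join, socket setup and
            // serializer initialization; later operations skip that cost.
            NamedCache cache = CacheFactory.getCache("warmup-cache"); // placeholder name
            cache.put("warmup-key", "warmup-value"); // exercises the write path once
            cache.get("warmup-key");                 // exercises the read path once
            CacheFactory.shutdown();
        }
    }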

  • Single-statement 'write consistency' on read committed?

    Please note that in the following I'm only concerned about single-statement read committed transactions. I do realize that for a multi-statement read committed transaction Oracle does not guarantee transaction set consistency without techniques like select for update or explicit hand-coded locking.
    According to the documentation Oracle guarantees 'statement-level transaction set consistency' for queries in read committed transactions. In many cases, Oracle also provides single-statement write consistency. However, when an update based on a consistent read tries to overwrite changes committed by other transactions after the statement started, it creates a write conflict. Oracle never reports write conflicts on read committed. Instead, it automatically handles them based on the new values for the target table columns referenced by the update.
    Let's consider a simple example. Again, I do realize that the following design might look strange or even sloppy, but the ability to produce a quality design when needed is not an issue here. I'm simply trying to understand Oracle's behavior on write conflicts in a single-statement read committed transaction.
    A valid business case behind the example is rather common: a financial institution with two-stage funds transfer processing. First, you submit a transfer (put the transfer amounts in the 'pending' column of the accounts) while the whole financial transaction is in doubt. Second, after you have got all the necessary confirmations, you clear all the pending transfers, making the corresponding account balance changes, resetting the pending amounts, and marking the accounts cleared by setting the cleared date. Neither stage should leave the data in an inconsistent state: sum(amount) over all rows should not change, and sum(pending) over all rows should always be 0 at either stage:
    Setup:
    create table accounts (
      acc int primary key,
      amount int,
      pending int,
      cleared date
    );
    Initially the table contains the following:
    ACC AMOUNT PENDING CLEARED
    1 10 -2
    2 0 2
    3 0 0 26-NOV-03
    So, there is a committed database state with a pending funds transfer of 2 dollars from acc 1 to acc 2. Let's submit another transfer of 1 dollar from acc 1 to acc 3 but do not commit it yet in SQL*Plus Session 1:
    update accounts
    set pending = pending - 1, cleared = null where acc = 1;
    update accounts
    set pending = pending + 1, cleared = null where acc = 3;
    ACC AMOUNT PENDING CLEARED
    1 10 -3
    2 0 2
    3 0 1
    And now let's clear all the pending transfers in SQL*Plus Session 2 in a single-statement read-committed transaction:
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null;
    Session 2 naturally blocks. Now commit the transaction in session 1. Session 2 readily unblocks:
    ACC AMOUNT PENDING CLEARED
    1 7 0 26-NOV-03
    2 2 0 26-NOV-03
    3 0 1
    Here we go - the results produced by the single-statement read committed transaction in session 2 are inconsistent: the second funds transfer has not completed in full. Session 2 should have produced the following instead:
    ACC AMOUNT PENDING CLEARED
    1 7 0 26-NOV-03
    2 2 0 26-NOV-03
    3 1 0 26-NOV-03
    Please note that we would have gotten the correct results if we had run the transactions in session 1 and session 2 serially. Please also note that no update has been lost. The type of isolation anomaly observed is usually referred to as 'read skew', which is a variation of 'fuzzy read', a.k.a. 'non-repeatable read'.
    But if in the session 2 instead of:
    -- scenario 1
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null;
    we issued:
    -- scenario 2
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null and pending <> 0;
    or even:
    -- scenario 3
    update accounts
    set amount = amount + pending, pending = 0, cleared = sysdate
    where cleared is null and (pending * 0) = 0;
    We'd have gotten what we really wanted.
    I'm very well aware of the 'select for update' or serializable isolation level solutions for the problem. Also, I could present a working example of precisely the above scenario for another major database product, producing the results that I would consider to be correct. That is, the interleaved execution of the transactions has the same effect as if they had completed serially. Naturally, no extra hand-coded locking techniques like select for update or explicit locking are involved.
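    (For concreteness, here is what the pessimistic "select for update" route looks like from a JDBC client, applied to the accounts example above. This is only an illustrative sketch with placeholder connection details, not a statement about any product's internals:)
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;
    public class ClearPendingTransfers {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//host:1521/service", "user", "password")) { // placeholders
                conn.setAutoCommit(false);
                try (Statement st = conn.createStatement()) {
                    // Lock the qualifying rows first; this blocks until competing
                    // writers commit, so the update below runs against settled rows.
                    st.executeQuery("select acc from accounts where cleared is null for update");
                    st.executeUpdate("update accounts set amount = amount + pending, "
                            + "pending = 0, cleared = sysdate where cleared is null");
                }
                conn.commit();
            }
        }
    }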
    And now let's try to understand what has just happened. Playing around with similar trivial scenarios, one can easily figure out that Oracle employs different strategies when handling update conflicts, based on the new values for the target table columns referenced by the update. I have observed the following cases:
    A. The column values have not changed: Oracle simply resumes using the current version of the row. It's perfectly fine because the database view presented to the statement (and hence the final state of the database after the update) is no different from what would have been presented if there had been no conflict at all.
    B. The row (including the columns being updated) has changed, but the predicate columns haven't (see scenario 1): Oracle resumes using the current version of the row. Formally, this is acceptable too, as ANSI read committed is by definition prone to certain anomalies anyway (including the instance of 'read skew' we've just observed), and leaving behind somewhat inconsistent data can be tolerated as long as the isolation level permits it. But please note: this is not 'single-statement write consistent' behavior.
    C. Predicate columns have changed (see scenario 2 or 3): Oracle rolls back and then restarts the statement, making it look as if it did indeed present a consistent view of the database to the update statement. However, what seems confusing is that sometimes Oracle restarts when it isn't necessary, e.g. when new values for predicate columns don't change the predicate itself (scenario 3). In fact, it's a bit more complicated: I have also observed restarts on some index column changes, and triggers and constraints change things a bit too, but for the sake of simplicity let's not go there yet.
    And here come the questions, assuming that (B) is not a bug, but the expected behavior:
    1. Does anybody know why it has never been documented in detail exactly when Oracle restarts automatically on write conflicts, given that there are cases where one might expect a restart but none happens? Many developers would hesitate to depend on the feature as long as it's not 'official'. Hence, the lack of information makes it virtually useless for critical database applications, and a careful application developer would be forced to use either the serializable isolation level or hand-coded locking for a single-statement update transaction.
    If, on the other hand, it's been documented, could anybody please point me to the bit in the documentation that:
    a) Clearly states that Oracle might restart an update statement in a read committed transaction because otherwise it would produce inconsistent results.
    b) Unambiguously explains the circumstances when Oracle does restart.
    c) Gives clear and unambiguous guidelines on when Oracle doesn't restart and therefore when to use techniques like select for update or the serializable isolation level in a single-statement read committed transaction.
    2. Does anybody have a clue what the motivation was for this peculiar design choice of restarting for only a certain subset of write conflicts? What was so special about them? Since (B) is acceptable for read committed, why does Oracle bother with automatic restarts in (C) at all?
    3. If, on the other hand, Oracle envisions statement-level write consistency as an important advantage over other mainstream DBMSs, as is clear from the handling of (C), does anybody have any idea why Oracle wouldn't fix (B) using well-known techniques and always produce consistent results?

    I'm intrigued that this posting has attracted so little interest. The behaviour described is not intuitive and seems to be undocumented in Oracle's manuals.
    Does the lack of response indicate:
    (1) Nobody thinks this is important
    (2) Everybody (except me) already knew this
    (3) Nobody understands the posting
    For the record, I think it is interesting. Having spent some time investigating this, I believe the behaviour described is correct, consistent and understandable. But I would be happier if Oracle documented it in the transaction sections of the manual.
    Cheers, APC

  • DIO - Single Line vs Port (U8) Write

    I have a weird situation:
    I am using an NI 6509 Digital I/O card, with 4 digital outputs connected to P2.3 through P2.6.
    1. I set up a task to write to these lines using a group of global virtual channels - all done in MAX.
    2. I have used this task/setup on previous occasions with good success.
    3. Yesterday it stopped working.
    4. Now the old task does not work.
    5. I have reset the device (and PC) on multiple occasions.
    6. I created a "NEW" task which writes to the whole port at once (i.e. U8), and I can measure the output voltage changing fine (using a DMM) when I run this VI.
    7. So I know that the digital outputs are working... but when I try the old task again, it still does not work.
    Can anyone please tell me a reason why this behaviour might occur? I know that some DIO products can do port reads/writes only.
    I am suspicious that the device (our product) the DIO card is connected to might have been accidentally wired such that two competing output signals/pins were connected to the same wire.
    Does this sound like a hardware fault?
    Any suggestions muchly appreciated.
    Thanks!

    Don't bother, I found the fault - the external hardware had been changed without me knowing.

  • Read/Write using single adapter

    How can I move files from one location to another using a single file adapter?

    An adapter can have only one operation: Read, Write, Synchronous Read, or Listing.
    To move the files, you can refer to the URL below and navigate to the topic
    http://docs.oracle.com/cd/E23943_01/integration.1111/e10231/adptr_file.htm#CIACJFHF
    *4.5.11 Copying, Moving, and Deleting Files*
    - It is considered good etiquette to reward answerers with points (as "helpful" - 5 pts - or "correct" - 10pts).
    Thanks,
    Vijay

  • Is it a bad idea to use a single read only Connection?

    I am developing a client/server application, where each client request executes in a separate thread. Currently I create a new Connection object for ANY database access. I am wondering if there is any advantage in changing this to use a single Connection object for read-only access, while still creating new Connection objects for read/write access.
    Does anyone have any opinions on this matter?
    I am aware that if I go with this approach and someone causes a SQL exception on the read-only connection, it will be closed. Right now this appears to be the main disadvantage of this approach.

    > I don't like the single, read-only connection idea. A single connection throughout the execution of the application has several security disadvantages. It is also less scalable. I've read about other problems associated with it, but none come to mind at the moment.

    What are the security implications that you speak of? I think for the type of application I am developing, database security is not a concern, as all users are able to access the same tables.
    Basically, in our application, the security is in the application domain, not the database.
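    A middle ground between one shared connection and a new connection per request is a small pool of read-only connections. A minimal hand-rolled sketch (the URL and credentials are placeholders); a production system would use a real pooling library, and would validate or replace a connection after a SQLException instead of returning it to the pool:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    public class ReadOnlyPool {
        private final BlockingQueue<Connection> pool;
        public ReadOnlyPool(int size, String url, String user, String pw)
                throws SQLException, InterruptedException {
            pool = new ArrayBlockingQueue<>(size);
            for (int i = 0; i < size; i++) {
                Connection c = DriverManager.getConnection(url, user, pw);
                c.setReadOnly(true); // hint to the driver that this connection only reads
                pool.put(c);
            }
        }
        public Connection borrow() throws InterruptedException { return pool.take(); }
        public void release(Connection c) throws InterruptedException { pool.put(c); }
    }
    Request threads call borrow(), run their query, and call release() in a finally block; read/write work still gets its own dedicated connection.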

  • Windows Server 2012 - Hyper-V - iSCSI SAN - All Hyper-V Guests stops responding and extensive disk read/write

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V, with a 2-node cluster connected to an iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM, 2 teamed NICs dedicated to virtual machines and management, 2 teamed NICs dedicated to iSCSI storage. This is the primary host, and normally all VMs run on this host.
    HP ProLiant G5, 20 GB RAM, 1 NIC dedicated to virtual machines and management, 2 teamed NICs dedicated to iSCSI storage. This is the secondary host and is intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts, and the scheduled ShadowCopy (previous versions of files) is switched off.
    iSCSI SAN:
    QNAP NAS TS-869 Pro, 8 Intel SSDSA2CW160G3 160 GB SSDs in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both the network cards of the Hosts that are dedicated to the Storage and the Storage itself are connected to the same switch and nothing else is connected to this switch.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We applied the most recent updates to the hosts, VMs and iSCSI SAN about 3 weeks ago with no change in our problem, and we continually update the setup.
    Normal operation
    Normally this setup works just fine, and we see no real difference in startup speed, file copy, and processing speed in LoB applications compared to a single host with two 10,000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally we see speeds up to 400 Mbit/s of combined read/write, for instance during file repair.
    Our Problem
    Our problem is that, for some reason, all of the VMs stop responding or respond very slowly; you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    Symptoms (i.e. this happens, or does not happen, at the same time)
    If we look at Resource Monitor on the host, we often see extensive reads from a VHDX of one of the VMs (40-60 MByte/s) and a combined write to many files in \HarddiskVolume5\System Volume Information\{<someguid and no file extension>}.
    See image below.
    The combined network speed to the iSCSI SAN is about 500-600 Mbit/s.
    When this happens, it is usually during or after a VSS ShadowCopy backup, but it has also happened during hours when no backup should be running (i.e. during daytime, when the backup finished hours ago according to the log files). There are, however, no such extensive writes to the backup file created on an external hard drive, and this does not seem to happen during all backups (we have checked manually a few times, but it is hard to say, since this error does not seem to leave any traces in Event Viewer).
    We cannot find any indication that the VMs themselves detect any problem, and we see no increase in errors (for example, storage-related errors) in the event log inside the VMs.
    The QNAP uses about 50% processing power on all cores.
    We see no dropped packets on the switch.
    (I have split the image to save horizontal space).
    Unable to recreate the problem / find definitive trigger
    We have not succeeded in recreating the problem manually by, for instance, running chkdsk or defrag in VMs and hosts, copying and removing large files to VMs, or running CPU- and disk-intensive operations inside a VM (for instance, scanning and repairing a database file).
    Questions
    Why do all VMs stop responding, and why are there such intensive reads/writes to the iSCSI SAN?
    Could it be that something in our setup cannot handle all the read/write requests, for instance the iSCSI SAN, the hosts, etc.?
    What can we do about this? Should we use Multipath I/O instead of NIC teaming to the SAN, limit bandwidth to the SAN, etc.?

    Hi,
    > All VMs are using dynamic disks (as recommended by Microsoft).
    If this is a testing environment, it's okay, but if this is a production environment, it's not recommended. Fixed VHDs are recommended for production instead of dynamically expanding or differencing VHDs.
    Hyper-V: Dynamic virtual hard disks are not recommended for virtual machines that run server workloads in a production environment
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx
    > This is the primary host and normaly all VMs run on this host.
    According to your posting, we know that you have Cluster Shared Volumes in the Hyper-V cluster, but why not distribute your VMs across the two Hyper-V hosts?
    Use Cluster Shared Volumes in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/en-us/library/jj612868.aspx
    > 2 teamed NIC dedicated to iSCSI storage.
    Use Microsoft Multipath I/O (MPIO) to manage multiple paths to iSCSI storage. Microsoft does not support teaming on network adapters that are used to connect to iSCSI-based storage devices. (At least it was not supported up to Windows Server 2008 R2; although Windows Server 2012 has a built-in network teaming feature, I haven't found an article which declares that Windows Server 2012 network teaming supports iSCSI connections.)
    Understanding Requirements for Failover Clusters
    http://technet.microsoft.com/en-us/library/cc771404.aspx
    > I have seen using MPIO suggests using different subnets, is this a requirement for using MPIO
    > or is this just a way to make sure that you do not run out of IP adressess?
    What I found is: if it is possible, isolate the iSCSI and data networks that reside on the same switch infrastructure through the use of VLANs and separate subnets. Redundant network paths from the server to the storage system via MPIO will maximize availability and performance. Of course you can set these two NICs in separate subnets, but I don't think it is necessary.
    > Why should it be better to not have dedicated wiring for iSCSI and Management?
    It is recommended that the iSCSI SAN network be separated (logically or physically) from the data network workloads. This ‘best practice’ network configuration optimizes performance and reliability.
    Check and modify the cluster configuration accordingly, monitor it, and give us feedback for further troubleshooting.
    For more information please refer to following MS articles:
    Volume Shadow Copy Service
    http://technet.microsoft.com/en-us/library/ee923636(WS.10).aspx
    Support for Multipath I/O (MPIO)
    http://technet.microsoft.com/en-us/library/cc770294.aspx
    Deployments and Tests in an iSCSI SAN
    http://technet.microsoft.com/en-US/library/bb649502(v=SQL.90).aspx
    Hope this helps!
    Lawrence
    TechNet Community Support

  • When I open the browser, I get an alert about the profile directory saying that read/write restrictions should be changed, and it doesn't work after that. I uninstalled and re-installed Firefox 6.0.2. How can I overcome this problem?

    "Could not initialize the application's security component. The most likely cause is problems with files in your application's profile directory. Please check that this directory has no read/write restrictions and your hard disk is not full or close to full. It is recommended that you exit the application and fix the problem. If you continue to use this session, you might see incorrect application behaviour when accessing security features." This is the alert I get whenever I try to open Firefox.

    See:
    https://support.mozilla.com/kb/Could+not+initialize+the+browser+security+component

  • How to: Create a Program with the RS232 Device - Magcard Reader Writer

    Hi guys!
    I'm new to using VB.NET 2010 Express, and this is my first time doing a project with a device that needs to be incorporated with it.
    I have a magnetic card reader/writer device, and I want to create a connection and UI that interacts with the device alone, without using the default application and the Process.Start command.
    The main problem I want to solve is to perform the connection and commands on a single form, allowing the user to read and write the data on that form.
    What I want to do is create a main form that executes the commands needed to activate the event, or allows the user to use the device without the vendor software, using a text box and a button, while a read executes automatically when a card is swiped, filling out the focused text box.
    I have a document that discusses commands for the device, and I think it is needed to successfully connect the whole process from the device to the system.
    Can you help me with this project? Thanks. :)

    Hi,
    Welcome to MSDN.
    I am afraid that this is not the proper forum for this issue, since each magnetic card reader/writer has its own API for developers.
    You could consider getting support by contacting the publisher of that magnetic card reader/writer, which should have samples of using its API.
    In addition, I did some research: you could refer to
    Build a .NET Class for Serial Device Communications with P/Invoke to learn how to communicate with that serial device.
    Thanks for your understanding.
    Regards.
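    For what it's worth, the same open-port / send-command / read-response pattern looks like this in Java, using the open-source jSerialComm library purely as an illustration. The port name, baud rate, and command bytes are placeholders; a real magnetic card reader/writer defines its own command set in the vendor document mentioned above:
    import com.fazecast.jSerialComm.SerialPort;
    public class MagCardDemo {
        public static void main(String[] args) throws Exception {
            SerialPort port = SerialPort.getCommPort("COM3");            // placeholder port
            port.setComPortParameters(9600, 8, 1, SerialPort.NO_PARITY); // placeholder settings
            port.setComPortTimeouts(SerialPort.TIMEOUT_READ_SEMI_BLOCKING, 2000, 0);
            if (!port.openPort()) throw new IllegalStateException("cannot open port");
            try {
                byte[] readCommand = {0x1B, 0x72};       // placeholder command bytes
                port.getOutputStream().write(readCommand);
                port.getOutputStream().flush();
                byte[] buf = new byte[512];
                int n = port.getInputStream().read(buf); // a card swipe fills the response
                System.out.println("got " + n + " bytes");
            } finally {
                port.closePort();
            }
        }
    }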

  • How can I read/write data files (text file) from PL/SQL Script

    I had an Oracle Forms PL/SQL program to read/write a data file (text file). When this code is run from the command line as a PL/SQL script using SQL*Plus, I get an error:
    -- sample.sql
    DECLARE
      vLocation                 VARCHAR2(50)  := 'r:\';
      vFilename                 VARCHAR2(100) := 'sample.dat';
      vTio                   TEXT_IO.FILE_TYPE;
      vLinebuf               VARCHAR2(2000);
      vRownum               NUMBER        := 0;
      -- use array to store data FROM each line of the text file     
      TYPE           array_type IS VARRAY(15) OF VARCHAR2(100);
      vColumn      array_type := array_type('');
      PROCEDURE prc_open_file(p_filename IN VARCHAR, p_access IN VARCHAR2) is
      BEGIN
        vTio := TEXT_IO.FOPEN(vLocation||p_filename,p_access);
      EXCEPTION
        WHEN OTHERS then
          --  raise_application_error(-20000,'Unable to open '||p_filename);
          message(sqlerrm);pause;
      END;
      PROCEDURE prc_close_file is
      BEGIN
        IF TEXT_IO.IS_OPEN(vTio) then
           TEXT_IO.FCLOSE(vTio);
        END IF;
      END;
    BEGIN
      --extend AND initialize the array to 4 columns
      vColumn.EXTEND(4,1);
      prc_open_file(vFilename,'r');
      LOOP
          TEXT_IO.GET_LINE(vTio,vLinebuf);
          vColumn(1)  := SUBSTR(vLineBuf, 1, 3);
          vColumn(2)  := SUBSTR(vLineBuf, 5, 8);
          vColumn(3)  := SUBSTR(vLineBuf,10,14);     
          Insert Into MySampleTable
          Values
            (vColumn(1), vColumn(2), vColumn(3));
          EXIT WHEN vLinebuf IS NULL;
       END LOOP;
       prc_close_file;
    END;
    SQL> @c:\myworkspace\sql\scripts\sample.sql;
    PLS-00201: identifier 'TEXT_IO.FILE_TYPE' must be declared
    It works in Oracle Forms but not in SQL*Plus. Is there an alternative method using a PL/SQL script? A simple sample would help. Thanks.

    Did you ever notice the search box at the right side of the forum?
    A quick search (limited to this year's entries) brought up this thread, for example:
    Re: UTL_FILE Examples

  • Lion server file sharing issue with windows API read/write ini file (GetPrivateProfileString)

    Hello,
    I'm trying to configure Lion Server as a file server for a Windows application we use at work. All the other computers are Windows 7 or XP; the Lion server is the only Mac. I chose Lion Server because of its size and quality, and a personal love of Apple products.
    10.7.2 Lion Server's SMB file sharing works almost perfectly with all my Windows machines: I can copy, delete, or modify any text files or Office files without any issue, but the most important Windows application for my business doesn't work with its file sharing. After some digging, I found it is because Windows programs can't read or write INI files stored on a Lion share. The Windows API GetPrivateProfileString always returns empty if the INI file is stored on a Lion share.
    You can download a small application for read/write windows INI file from codeproject.com to test this problem:
    http://www.codeproject.com/KB/files/ini.aspx
    I can open/edit the INI file using any text editor without any problem. The only problem is with those Windows APIs. ACLs are turned on for my Lion share, with "delete" rights assigned to Samba users.
    I installed Samba 3 on the same server; it works perfectly with the Windows API, and my Windows program also works. It looks like there is something wrong with Lion Server's built-in SMB service (SMBX).
    I'd prefer to use the built-in SMB service even though I have Samba 3 working. The built-in service is very immature right now, but considering how young it is, I will give Apple some time to make it mature.
    Does anyone have same issue or knows how to fix it?
    Thanks,
    Michael.

    All the memory is fine. The server rarely if ever goes down when there are only around 10-12 users connected. When there are 20+ users connected and working heavily it goes down often. When I say working heavily, I mean they are transferring huge files to the SAN (100GB+), sometimes 5 at a time per user, and there are a bunch of others who are reading large video files at a minimum of 220MB/sec from the SAN.
    Though this worked on Snow Leopard without any issues, Lion just doesn't seem to be able to handle it. The odd thing is, on Snow Leopard there was only a single 1 Gb Ethernet connection to a NAS system, whereas with Lion we have a much more powerful machine with a 6-port 10 Gb Ethernet card and a 4-lane 8 Gb Fibre Channel card to a true SAN. You would think that the newer scenario with Lion would handle far more users with ease.
    So far, very disappointing with regards to Lion's file serving performance.

  • To export query from Access to Excel in Read/Write mode in VBA

    Below is the code which exports the query named 'LatestSNR' from Access to Excel:
    Public Sub Expdata()
    Dim rst As DAO.Recordset
    Dim Apxl As Object
    Dim xlWBk, xlWSh As Object
    Dim PathEx As String
    Dim fld As DAO.Field
    PathEx = Forms("Export").Text14 'path comes from the directory given in form
    Set Apxl = CreateObject("Excel.Application")
    Set rst = CurrentDb.OpenRecordset("LatestSNR")
    Set xlWBk = Apxl.Workbooks.Open(PathEx)
    'xlWBk.ChangeFileAccess xlReadWrite
    Set xlWBk = Workbook("PathEx")
    Apxl.Visible = True
    Set xlWSh = xlWBk.Worksheets("Metadatasheet")
    xlWSh.Activate
    xlWSh.Range("A2").Select
    For Each fld In rst.Fields
    Apxl.ActiveCell = fld.Name
    Apxl.ActiveCell.Offset(0, 1).Select
    Next
    rst.MoveFirst
    xlWSh.Range("A2").CopyFromRecordset rst
    xlWSh.Range("1:1").Select
    ' selects all of the cells
    Apxl.ActiveSheet.Cells.Select
    ' selects the first cell to unselect all cells
    xlWSh.Range("A2").Select
    rst.Close
    Set rst = Nothing
    ' Quit excel
    Apxl.Quit
    End Sub
    After the code executes, the query has been transferred to the Excel sheet, which is opened in 'Read only' mode. If I try to save it, a copy of the Excel file is produced. Can the Excel file be opened in read/write mode, so as to save the workbook and also to transfer the query to the same workbook repeatedly?
    If the change of mode is not possible, is there any alternative method?

    Try this version:
    Public Sub Expdata()
    Dim rst As DAO.Recordset
    Dim Apxl As Object
    Dim xlWBk As Object, xlWSh As Object
    Dim PathEx As String
    Dim i As Long
    PathEx = Forms("Export").Text14 'path comes from the directory given in form
    Set Apxl = CreateObject("Excel.Application")
    Set xlWBk = Apxl.Workbooks.Open(PathEx)
    Set xlWSh = xlWBk.Worksheets("Metadatasheet")
    Set rst = CurrentDb.OpenRecordset("LatestSNR")
    For i = 1 To rst.Fields.Count
    xlWSh.Cells(1, i).Value = rst.Fields(i - 1).Name
    Next i
    rst.MoveFirst
    xlWSh.Range("A2").CopyFromRecordset rst
    xlWBk.Close SaveChanges:=True
    Apxl.Quit
    rst.Close
    Set rst = Nothing
    End Sub
    or else
    Public Sub Expdata()
    Dim PathEx As String
    PathEx = Forms("Export").Text14 'path comes from the directory given in form
    DoCmd.TransferSpreadsheet TransferType:=acExport, _
    SpreadsheetType:=acSpreadsheetTypeExcel12Xml, _
    TableName:="LatestSNR", _
    Filename:=PathEx, _
    HasFieldNames:=True, _
    Range:="Metadatasheet!"
    End Sub
    Regards, Hans Vogelaar (http://www.eileenslounge.com)

  • External Hard drive suddenly will only read! How can I make it read/write again?

    Hi folks,
    I'm working with a Seagate FreeAgent GoFlex 320 GB external hard drive on the iMacs here at my university. I have it formatted as ExFAT so that I could use it with my PC as well, but I've never had to so far, so it's only ever been used on the uni iMacs.
    It had been fine for working on media projects until, coming back to uni after the Christmas break, I went to continue working on a film project and realised the external hard drive now said read only.
    One thing I did was to put all of my work from last semester (which is everything that was on the hard drive) into a single folder; could that have caused the problem?
    And does anyone know how I can reset my permissions to read/write without causing damage or risk to the files on the hard drive, as I can't clear the disk (I have nowhere else to store the 185 GB of video footage)?
    I'd really appreciate it if anyone could tell me how I can fix it. I know how to do all of this on my PC, but I've no idea with the Mac, as I only use it for my media projects here at the university.
    Thanks!
    DrF ;-)

    Out of curiosity, will this method also work for an external drive formatted as NTFS? I have a lot of data that I have nowhere to keep while re-formatting the drive, so if this method would allow me to keep the data and still fix the problem, that would be great! At the moment the drive is formatted as NTFS and permissions say Read/Write, but I can only read. Very frustrating, as my Mac drive is running low on space and performance is being affected. Any help appreciated!

  • Why doesn't Photoshop support read/write of .mpo files?

    I am actually blown away that I cannot find a single Photoshop plugin that reads and/or saves .mpo files. Does somebody know why? And why isn't anyone talking about this format? I find it hard to believe that no one in the entire Photoshop Windows forum has ever wondered about Photoshop's complete lack of support for, or interest in, .mpo files.
    If nobody knows how to read/write .mpo files directly through Photoshop, perhaps someone could point me to some documentation for writing a plugin for it. It's a really, really simple file format; there's no reason that a plugin shouldn't be available already.
    Thanks in advance for your support.
    Jase

    Well, wave of the future or not, it's being pushed hard. That is in a way good for us consumers: choice is always good.
    3D TVs and monitors require at least 120 Hz in order to function. What does that mean for people who spend all their time in front of a monitor?
    It brings about the availability of 120 Hz and beyond flat panels.
    Many people are prone to getting migraines due to refresh rates of 60 Hz or less. Most people getting migraines from sitting at work in front of a 60 Hz monitor all day long do not even realize it.
    I happen to be one of those people who get migraines, especially if I am doing artwork for 8-10 hours.
    I just picked up a 120 Hz monitor two weeks ago because I am a techno junkie and love my toys; it is the Asus one that comes with the NVIDIA glasses.
    There is nothing wrong with my other monitor, a 24-inch HP: a 1920 x 1200, 5 ms response time, 60 Hz monitor.
    I didn't think I would really even notice the difference between a 120 Hz and a 60 Hz monitor.
    Boy oh boy was I ever mistaken about not noticing. Sitting side by side, to me the picture difference is amazing. Now I have not had an opportunity to do a real weekend artwork fest, but that will come with time.
    The 3D features with the glasses on are, I think, very cool; in games it's extremely noticeable, since you have more control over what you are looking at. I am not sure I would play a game end to end for hours and hours in 3D; I think that would likely make my head explode, but we'll see about that too, and if not, I may be running into walls when I am done, since you tend to lose perception. Not to mention 3D glasses make you see each picture at 60 Hz in one eye and 60 Hz in the other, putting you back to a 60 Hz refresh and migraines.
    Anyway, way off subject, but to sum it up:
    some good things can come from flavor-of-the-day fads. Granted, some bad sometimes comes too... I mean, look at bell bottoms...

  • Multithread read/write problem

    This is a producer-consumer problem in a multi-threaded environment.
    Assume that I have multiple consumers (multiple read threads) and a
    single producer (write thread).
    I have a common data structure (say an int variable) being read and written to.
    Writes to the data structure happen occasionally (say every 2 seconds), but reads happen continuously.
    Since the read operation is continuous and done by multiple threads, making the read method synchronized will add
    overhead (i.e. a read operation by one thread should not block the other read threads). But whenever the
    write thread writes, read operations should not be allowed.
    Any ideas how to achieve this?

    Clearly the consumer has to wait for a value to become available and then take its own copy of that value before allowing any other thread to access it. If it doesn't copy the value, then only one consumer can act at any time (since if another value could be added while the consumer thread was accessing the common value, the value would change, affecting the consumer at random).
    In general what you're doing is using a queue; even in the special case where the maximum number of items queued is restricted to one, the logic is the same.
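    If, on the other hand, readers do not consume values but all simply read the latest one (as in the original question), a read-write lock matches the "many continuous readers, one occasional writer" pattern directly. A minimal sketch:
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;
    public class SharedValue {
        private final ReadWriteLock lock = new ReentrantReadWriteLock();
        private int value;
        public int read() {
            lock.readLock().lock();      // many readers may hold this concurrently
            try { return value; }
            finally { lock.readLock().unlock(); }
        }
        public void write(int v) {
            lock.writeLock().lock();     // exclusive: waits for active readers to drain
            try { value = v; }
            finally { lock.writeLock().unlock(); }
        }
    }
    For a single int, a volatile field or an AtomicInteger would be simpler still; the lock-based form generalizes to structures that cannot be read or updated atomically.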
