Failure modes in TCP WRITE?

I need help diagnosing an issue where TCP communication breaks down between my host (Windows) and a PXI (LabVIEW RT 2010).
The bottom-line questions are these:
1...Are there circumstances in which TCP WRITE, given a string of say, 10 characters, will write more than zero and fewer than 10 characters to the connection? If so, what are those circumstances?
2...Is it risky to use a timeout value of 1 mSec?  Further thought seems to say that I won't get a 1000 uSec timeout if we're using a 1-mSec timebase, but I don't know if that's true in the PXI.
Background:
On the PXI, I'm running a 100-Hz PID loop, controlling an engine.  I measure the speed and torque, and control the speed and throttle.  Along the way, I'm measuring 200 channels of misc stuff (analog, CAN, TCP instruments) at 10 Hz and sending gobs of info to the host (200 chans * 8 = 1600 bytes every 0.1 sec)
The host sends commands, the PXI responds.
The message protocol is a fixed-header, variable payload type: a message is a fixed 3-byte header, consisting of a U8 OpCode, and a U16 PAYLOAD SIZE field. I flatten some structure to a string, measure its size, and prepend the header and send it as one TCP WRITE.  I receive in two TCP READs: one for the header, then I unflatten the header, read the PAYLOAD SIZE and then another read for that many more bytes.
  The payload can thus be zero bytes: a TCP READ with a byte count of zero is legal and will succeed without error.
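In a textual language, the framing scheme above looks roughly like the following (a Python sketch, not the actual LabVIEW code; function names are mine, and `struct` stands in for LabVIEW's flattening, which is big-endian by default):

```python
import struct

def pack_message(opcode: int, payload: bytes) -> bytes:
    # Fixed 3-byte header: U8 opcode + U16 payload size, big-endian,
    # prepended to the flattened payload and sent as one write.
    if len(payload) > 0xFFFF:
        raise ValueError("payload too large for a U16 size field")
    return struct.pack(">BH", opcode, len(payload)) + payload

def unpack_header(header: bytes) -> tuple:
    # First read: 3 header bytes -> (opcode, payload_size).
    # The second read then asks for exactly payload_size more bytes.
    return struct.unpack(">BH", header)
```

A zero-byte payload works naturally here: `pack_message(op, b"")` produces just the 3 header bytes, and the second read asks for 0 bytes.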
A test starts with establishing a connection, some configuration stuff, and then sampling starts. The 10-Hz data stream is shown on the host screen at 2-Hz as numeric indicators, or maybe selected channels in a chart.
At some point the user starts RECORDING, and the 10-Hz data goes into a queue for later writing to a file. This is while the engine is being driven through a prescribed cycle of speed/torque target points.
The recording lasts for 20 or in some cases 40 minutes (24000 samples) and then recording stops, but sampling doesn't.  Data is still coming in and charted. The user can then do some special operations, related to calibration checks and leak checks, and those results are remembered.  Finally, they hit the DONE button, and the whole mess gets written to a file.
All of this has worked fine for several years, but as the system is growing (more devices, more channels, more code), a problem has cropped up: the two ends are occasionally getting out of sync.
The test itself, and all the configuration stuff before, is working perfectly. The measurement immediately after the test is good.  At some point after that, it goes south.  The log shows the PXI sending results for operations that were not requested. The data in those results is garbage; 1.92648920e-299 and such numbers, resulting from interpreting random stuff as a DBL.
After I write the file, the connection is broken, the next test re-establishes it, and all is well again.
In chasing all this, I've triple-checked that all my SENDs are MEASURING the size of the payload before sending it.  Two possibilities have come up:
1... There is a message with a payload over 64k.  If my sender were presented with a string of length 65537, it would convert that to a U16 of value 1, and the receiver would expect 1 byte. The receiver would then expect another header, but this data comes instead, and we're off the rails.
  I don't believe that's happening. Most of the messages are fewer than 20 bytes payload, the data block is 1600 or so, I see no evidence for such a thing to happen.
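For what it's worth, the wraparound in hypothesis 1 is easy to demonstrate (a Python sketch; the guard function is my own suggestion, not existing code):

```python
def to_u16(n: int) -> int:
    # Coercing a length to a U16 size field silently drops the high bits,
    # exactly as flattening a too-long length to 2 bytes would.
    return n & 0xFFFF

def checked_size(payload: bytes) -> int:
    # A guard the sender could apply: fail loudly instead of wrapping.
    if len(payload) > 0xFFFF:
        raise ValueError("payload exceeds U16 size field")
    return len(payload)

# A 65537-byte payload would advertise a size of just 1 byte:
assert to_u16(65537) == 1
```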
2... The PXI is failing, under certain circumstances, to send the whole message given to TCP WRITE.  If it sent out a header promising 20 more bytes, but only delivered 10, then the receiver would see the header and expect 20 more. 10 would come immediately, but whatever the NEXT message was, its header would be construed as part of the payload of the first message, and we're off the rails.
Unfortunately, I am not checking the error return from TCP write, since it never failed in my testing here (I know, twenty lashes for me).
It also occurs to me that I am giving it a 1-mSec timeout value, since I'm in a 100-Hz loop. Perhaps I should have separated the TCP stuff into a separate thread.  In any case, maybe I don't get a full 1000 uSec, due to clock resolution issues.
That means that TCP WRITE cannot get the data written before the TIMEOUT expires, but it has written part of it.
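In plain-socket terms (a Python sketch, not LabVIEW; the function name is mine), hypothesis 2 corresponds to a send loop that runs out of time mid-message. A robust sender has to check how many bytes were actually written instead of assuming the whole string went out:

```python
import socket

def send_all(sock: socket.socket, data: bytes, timeout_s: float) -> int:
    # send() on a stream socket may write only part of the data, so we
    # loop until everything is written or a send times out. Returning
    # the byte count lets the caller detect a partial send (sent < len)
    # instead of silently desynchronizing the stream. Note this is a
    # per-call timeout, not a true overall deadline.
    sock.settimeout(timeout_s)
    sent = 0
    try:
        while sent < len(data):
            sent += sock.send(data[sent:])
    except socket.timeout:
        pass  # caller sees sent < len(data) and can resend the remainder
    return sent
```

Ignoring that return value (or the error output of TCP Write) is precisely how a half-sent message can go unnoticed.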
I suspect, but the logs don't prove, that the point of failure is when they hit the DONE button.  The general CPU usage on the PXI is 2-5% but at that point there are 12-15 DAQ domain managers to be shutting down, so the instantaneous CPU load is high.  If that happens to coincide with a message going out, well, maybe the problem crops up.  It doesn't happen every time.
So I repeat the two questions:
1...Are there circumstances in which TCP WRITE, given a string of say, 10 characters, will write more than zero and fewer than 10 characters to the connection? If so, what are those circumstances?
2...Is it risky to use a timeout value of 1 mSec?  Further thought seems to say that I won't get a 1000 uSec timeout if we're using a 1-mSec timebase, but I don't know if that's true in the PXI.
Thanks,
Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com
Blog for (mostly LabVIEW) programmers: Tips And Tricks

There are a couple of issues at play here, and both are working together to cause your issue(s).
1) LV RT will suspend the TCP thread when your CPU utilization goes up to 100%. When this happens, your connection to the outside world simply goes away and your communications can get pretty screwed up. (More here)
Unless you create some form of very robust resend and timeout strategy your only other solution would be to find a way to keep your CPU from maxing out. This may be through the use of some scheduler to limit how many processes are running at a particular time or other code optimization. Any way you look at it, 100% CPU = Loss of TCP comms.
2) The standard method of TCP communication shown in all examples I have seen to date uses a similar method to transfer data where a header is sent with the data payload size to follow.
<packet 1 payload size (2 bytes)><packet 1 payload..........><packet 2 payload size (2 bytes)><packet 2 payload.......................>
On the Rx side, the header is read, the payload size extracted, then a TCP Read is issued for the desired size. Under normal circumstances this works very well and is a particularly efficient method of transferring data. When the TCP thread is suspended during an Rx operation, this header can get corrupted, passing the TCP Read a bad payload size due to a timeout on the previous read. As an example, the header read expects 20 bytes but due to the TCP thread suspension only gets 10 before the timeout. The TCP Read returns only those 10 bytes, leaving the other 10 bytes in the Rx buffer for the next read operation. The subsequent TCP Read now gets the first 2 bytes from the remaining data payload (10 bytes) still in the buffer. This gives you a further bad payload read size and the process continues, OR if you happen to get a huge number back, when you try to allocate a gigantic TCP receive buffer, you get an out-of-memory error.
The issue now is that your communications are out of sync. The Rx end is not interpreting the correct bytes as the header, so this timeout or bad-payload behavior can continue for quite a long time. I have found that occasionally (although very rarely) the system will fall back into sync, but it really is a crap shoot at this point.
A more robust way of dealing with the communication issue is to change your TCP Read to terminate on a CRLF as opposed to the number of bytes or timeout (the TCP Read has an enum selector for switching the mode). In this instance, whenever a CRLF is seen, the TCP Read will immediately terminate and return data. If the payload is corrupted, then it will fail to be parsed correctly or will encounter a checksum failure, and be discarded or a resend request issued. In either case, the communications link will automatically fall back into sync between the Tx and Rx sides. The one other thing you must do is encode your data to ensure that no CRLF characters exist in the payload. Base64 encode/decode works well. You do give up some bandwidth due to the B64 strings being longer, but the fact that the comm link is now self-syncing is normally a worthwhile sacrifice.
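A minimal sketch of the CRLF-plus-Base64 scheme described above (Python rather than LabVIEW; the CRC32 checksum and the '|' separator are my own choices, not part of the original description):

```python
import base64
import zlib

def encode_frame(payload: bytes) -> bytes:
    # Base64 guarantees no CR/LF bytes inside the body, so CRLF can
    # safely terminate the frame. A CRC32 lets the receiver discard
    # corrupted frames instead of desynchronizing.
    body = base64.b64encode(payload)
    crc = b"%08x" % (zlib.crc32(payload) & 0xFFFFFFFF)
    return body + b"|" + crc + b"\r\n"

def decode_frame(line: bytes):
    # Parse one CRLF-terminated frame; return the payload, or None if
    # the frame is corrupted (bad Base64, bad structure, or bad CRC).
    try:
        body, crc = line.rstrip(b"\r\n").rsplit(b"|", 1)
        payload = base64.b64decode(body)
        if b"%08x" % (zlib.crc32(payload) & 0xFFFFFFFF) != crc:
            return None
        return payload
    except Exception:
        return None
```

Because every frame ends at the next CRLF, a corrupted frame costs you one message, not the whole session: the reader simply resynchronizes on the following CRLF.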
When running on any other platform other than RT, the <header><payload> method of transmitting data works fine as TCP guarantees transmission of the data, however on RT platforms due to the suspension of the TCP thread on high CPU excursions this method fails miserably.

Similar Messages

  • One TCP Write to send both Length & Data?

I noticed that the "Flatten To String" VI has a (default) option to prepend the array/string size to the binary output string.  This got me thinking that I could use this rather than the Typecast VI to convert my arbitrary data to a string, prepend the length, and call TCP Write one time.  On the other side of the connection I would then do your standard two TCP Read calls, the first one being 4 bytes for the length, the 2nd one being whatever length is returned.  However, no matter what I do, the 2nd TCP Read does not return anything.  Is there no way to do two TCP Reads on one TCP Write?  I thought 'buffered mode' might work, but it didn't.  If there is no way to do this, what is the point of the prepend-length option on Flatten To String? Apparently it has no network application if you have to send the length in its own TCP Write.

    I'm going to second what smercurio said. There is no problem with using two (or fifty) tcp reads to read data from one send. See attached example where I use two reads; you could modify it so that you read bytes one at a time using tcp read.
    More likely your problem is that tcp write is not sending what you think, and that's probably due to Flatten To String's "prepend length" option. Honestly, I recommend you NEVER use this option. If you want to send the length, just measure the string length and prepend it. At least then you'll always know what you're sending.
The reason to avoid the auto-prepender is that it does different things depending on the kind of input, and often that isn't what you want. It was introduced to LabVIEW at the same time as the ill-considered "prepend array or string size" option in the LV 8+ Write To Binary File function (which also only fires in certain contexts, and which also defaults to true). It just isn't an appropriate default in either case; not everyone out there programs in LabVIEW, and you shouldn't have to set "optional" arguments to false just to write binary files without corruption.
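Rob's advice, measure the length yourself and prepend it, looks like this in a textual sketch (Python; names are mine, and the explicit big-endian I32 length mimics what LabVIEW's auto-prepend would produce, but here you always know exactly what is being sent):

```python
import struct

def frame(data: bytes) -> bytes:
    # Explicitly measure the length and prepend it as a big-endian I32,
    # rather than relying on an automatic prepend option.
    return struct.pack(">i", len(data)) + data

def read_frames(stream: bytes) -> list:
    # Consume length-prefixed frames from a byte buffer: two logical
    # "reads" per frame (4 bytes of length, then that many data bytes).
    out, pos = [], 0
    while pos + 4 <= len(stream):
        (n,) = struct.unpack(">i", stream[pos:pos + 4])
        out.append(stream[pos + 4:pos + 4 + n])
        pos += 4 + n
    return out
```

This also shows why multiple reads against one write are fine: the reader walks the byte stream by lengths, independent of how the bytes were grouped into writes.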
    -Rob
    The attached example is in LV 8.6 
    Message Edited by Rob Calhoun on 11-17-2008 02:07 PM
    Message Edited by Rob Calhoun on 11-17-2008 02:09 PM

  • Which table shows the tablespace in read-only mode or read-write mode.

    HI All,
Can someone help me find which table shows whether a tablespace is in read-only or read-write mode?
    Thanks,
    naveen

    Try this:
    SELECT tbl.table_name, tb.tablespace_name, tb.status
    FROM dba_tablespaces tb, dba_tables tbl
    WHERE tb.tablespace_name = tbl.tablespace_name
      AND tb.tablespace_name = 'YOUR_TABLESPACE_NAME'
      AND tbl.table_name = 'TABLE_NAME';

  • TCP Write loosing packages

    Dear,
    When I use TCP Write in fast succession (immediately one after the other), TCP packets are lost.
    When I call TCP Write 3 times, one right after the other, the 3rd packet is lost.
    Doing it the same way in a timed loop of 1 ms, it works properly.
    So what is going on in LabVIEW?
    Any Ideas?
    Kind regards
    Martin
    Attachments:
    TCP.png ‏47 KB

    Are you running wireshark on the sender or in the receiver end?
    TCP is designed to run on unreliable networks, and packet loss always occurs at some level, in which case the missing packet is retransmitted.
    A TCP connection is initiated with a three-way handshake, and both sides keep track of every single packet. Packets can arrive out of order, some missing, even some in duplicate, and the receiver will make sense of it, reassemble, and request retransmissions.
    Your third message is part of the same TCP connection, so it is difficult to imagine it going missing unless one of the sides resets the connection prematurely.
    Kunze wrote:
    The receiver is always ready since it is a peer-to-peer connection at 100 Mbit/s
    What does network speed have to do with readiness? What program is receiving the packets and who wrote it?
    Did you update the network drivers?
    Can you attach filtered wireshark traces, one recorded on each side, showing all communication between these two nodes?
    LabVIEW Champion . Do more with less code and in less time .

  • TCP Write Problem Sending "FF"

    Hello people! I'm having a weird problem with TCP Write on sending "FF" (hex). I will explain some tests I have already done and I hope you guys can help me somehow.
    My whole system is something like this:
    [PC 1] --- [MUX/DMX nº1] --- [RF Transmission] --- [MUX/DMX nº2] --- [Serial/Ethernet Converter] --- [Switch] --- [PC 2]
    My objective is to send a message from [PC 2] to [PC 1]. In [PC 2] I use TCP Write and in [PC 1] I use Visa Read.
    When I send anything that doesn't contain "FF" from [PC 2], the [PC 1] receives everything correctly. For example, if I send "12345678" from [PC 2], the [PC 1] will receive "12345678".
    However, when I send a single "FF", [PC 1] stops receiving from [PC 2]. To be more accurate, it's [PC 2] that stops sending data. If I send "1234FF5678" from [PC 2], [PC 1] only receives "1234". Then [PC 2] stops sending any kind of data, and it will only send again if I restart the software. [PC 1] continues to receive data from [PC 2] after this restart. If I send "FF" again, [PC 2] once more stops sending data.
    I also tried the other way. [PC 1] used Visa Write and [PC 2] used TCP Read. It worked correctly even sending "FF". I started to think that the problem was in TCP Write.
    Then I tried the following scheme: [PC 1] --- [Switch] --- [PC 2]
    [PC 1] used TCP Read and [PC 2] used TCP Write. I used LabVIEW's TCP Active and Passive examples and it worked correctly to send "FF". I got more confused.
    Then I tried: [PC 1] --- [Serial/Ethernet Converter] --- [Switch] --- [PC 2]
    The same problem as before occurred. [PC 2] sent correctly to [PC 1] all data that didn't contain "FF". When I sent "1234FF5678", [PC 1] only received "1234" and [PC 2] stopped sending data.
    Well... now I'm out of ideas for tests. I think the problem is with TCP Write or with my serial/ethernet converter. Do you guys have any suggestion or solution?
    Thank you very much. Best regards,

    Yahoo! I finally found the problem. It was inside the serial/ethernet converter, which had an option "Use NVT (RFC 2217)". I searched the internet and found that "FF" has a special meaning in this kind of protocol, and it was probably affecting my message.
    I disabled it and then I was able to send "FF" (hex) without any trouble!
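For context: in the Telnet NVT encoding that RFC 2217 builds on (RFC 854), 0xFF is the IAC command byte, so a literal 0xFF data byte must be sent doubled. If NVT had to stay enabled, the sender could escape payloads like this (a hedged Python sketch; the function name is mine):

```python
IAC = 0xFF  # Telnet "Interpret As Command" byte

def escape_nvt(data: bytes) -> bytes:
    # Double every 0xFF so the converter treats it as a literal data
    # byte instead of the start of a Telnet command sequence.
    return data.replace(bytes([IAC]), bytes([IAC, IAC]))
```

Disabling NVT, as done above, is the simpler fix when no Telnet semantics are needed.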
    Thanks for your attention. Best regards,

  • Need TCP Write Stream. vi to run fp_ftp.vi

    Hi,
    In my application, I want a FieldPoint RT module to put files on an FTP server on our network. The fp_ftp.vi could be helpful, but while loading it stops, claiming that TCP Write Stream.vi is missing on my system. I did not find this VI, either on my PC or elsewhere on the internet.
    Please help me to locate this vi,
    Thanks,
    Frederik

    Dear sir,
    To use the TCP Write Stream VI you need to have the LabVIEW Internet Toolkit installed.
    This toolkit is an additional function library for LabVIEW which needs to be bought separately.
    Please refer to the following article for more info.
    Additional information and pricing for the LabVIEW Internet Toolkit is available here.
    If your LabVIEW license is part of a Developer Suite, then the Internet Toolkit is included with this software bundle.
    Best regards,
    Joeri
    National Instruments
    Applications Engineering
    http://www.ni.com/ask
    Make our forums great:
    If you like the answer, don't forget to "Kudos!".
    "Accept the Solution" if your question is answered!

  • WTR54GS Intermittent failure mode

    I have a WTR54GS and have used it successfully for many years (about 5 years) as a travel router.  For the most part it has worked like a charm.
    Recently it has exhibited a strange problem.  It will suddenly *stop* working as a wireless router, and the lights flash in a rhythmic pattern with about a 2-second cycle time.  I can't figure out what causes this, or what to do about it.  The big security button, which is normally amber in use, will flash white every two seconds, and the other lights flash at different, regular times.  It is clearly displaying some sort of distress code, but I don't know what it means.
    It will, however, sometimes fix itself.  I left it running all night without problem, but then about 8 in the morning, it suddenly went into this mode.  At that time it becomes undetectable as a wireless network -- it no longer appears in the Windows wireless LAN list.  I sat and watched, and about 10 minutes later it stopped doing that and became a wireless LAN again.  At other times I have unplugged it and plugged it back in.  Sometimes when you plug it back in, it works, and sometimes it continues in the failure mode.  It is not overheated ... I have unplugged it for long periods (e.g. 1 hour) and found it still failing when plugged back in.
    Right now, however, it is working fine, and I am submitting this message using it in wireless mode.
    I have firmware version 1.0.15 which is the latest version listed on the Cisco site.
    Does anyone have any idea what the flashing lights means?  OR what to do about it when it happens?

    Sounds like it's dying. You can try upgrading/reflashing the firmware and resetting the router.
    The box said windows xp or better... So I installed Linux!

  • FileAdapter write mode - Dynamic filename, write down simple text structure

    Hi,
    I tried to use the FileAdapter in a BPEL process to write out an existing structure as a String, coming from the database (CLOB), which has a defined structure.
    My first problem is that I assign an output filename through the output message header variable, but it does not work. Instead, the name from the filename convention is always taken (wizard-like, po_%SEQ%.txt).
    My second problem is that the FileAdapter destroys the format during the write procedure. E.g., the String structure is as follows:
    MKKOPF EF585773 07.05.2009 1 XXX Spielapparate u. Restaurant- betriebsgmbH 40031356 Brehmstraße 21 1110 Wien 900000585773 EUR ML
    MKMAIL [email protected]
    MKEINZ EF4428643 28.06.2009 Test Card Casino XXX Gesamtausgabe Textteil Allgemein Großanzeige 4C 1 ST 1 489,00 489,00 GES
    MKPOS. EF4428644 26.07.2009PRVB 948,66 / 10 / +5% WA
    and so on. As you can see, it's quite a simple format, which has to stay like that!
    First I tried to use the opaque mode for the file output format, but I was stopped because of the '/' characters in the source data. So I used the native builder to create an XML schema (I used delimited file, multiple records, delimited by white spaces, tabs and spaces). To identify the multiple records, I chose the end of line (EOL). Except EOF and EOL there is no other choice, like CRLF, \r, \n!
    However, testing my native format results in writing a file containing a single line with the original data. So the original line breaks are gone, and therefore the system that should process it further gets a parsing error!
    The simple question is: how can I simply write out string data (containing German special chars like Öäü --> ISO 8859-1, and also XML-reserved values like /) through the FileAdapter without destroying its structure?
    Second, how can I pass dynamically the filename, as using the output header values does not work?
    I hope some specialists are out there.
    Thanks & BR,
    Peter

    Hi James,
    here what I am doing.
    I read some records from the database through the database adapter. The returned records represent strings, which should be written, line by line, separated by 0xA (\n), to a single file.
    Also for the file write we use a Oracle SOA Adapter --> the FileAdapter.
    We discussed for quite some time about opaque schema versus native builder. To my understanding, only the native builder version works. Otherwise I get an error because of the slashes contained in the string record.
    What I am searching for is an easy way for aggregating this database row strings and append it with 0xA (\n).
    With XPath functions like create-delimited-string(node, '#xA') I had no success. According to XML, #xA should represent 0xA, but it does not work.
    Using the encodeLineTerminators entity &#10; is not working either.
    So I really had to write a long BPEL construct, combined with Java code embedding, to do such a simple thing.
    Here is the interesting part (hiding the while loop, index handling and final FileAdapter write call details):
    <while name="While_1"
    condition="($VariableSapBillingJobs > 0) and ($VariableIndex &lt;= $VariableSapBillingJobs)">
    <sequence name="Sequence_4">
    <sequence name="Sequence_4">
    <assign name="Assign">
    <copy>
    <from expression="concat(bpws:getVariableData('VariableBuffer'), bpws:getVariableData('InvokeSapDataSelection_eBillingSapDB4Select_OutputVariable','eBillingSapDB4SelectOutputCollection','/ns8:eBillingSapDB4SelectOutputCollection/ns8:eBillingSapDB4SelectOutput[$VariableIndex]/ns8:LINE'))"/>
    <to variable="VariableBuffer"/>
    </copy>
    </assign>
    *<bpelx:exec name="JavaAppendCRLF" language="java"*
    version="1.5">
    <![CDATA[String buffer = (String)getVariableData("VariableBuffer");    
    buffer += "\n";     
    setVariableData("VariableBuffer", buffer);]]>
    </bpelx:exec>
    <assign name="AggregateStructure">
    <copy>
    <from expression="$VariableIndex + 1"/>
    <to variable="VariableIndex"/>
    </copy>
    </assign>
    </sequence>
    </sequence>
    </while>
    There must be a simpler way than that!
    Do you or somebody else know a simpler way? Doing it with the FileAdapter native builder, or at least with an XPath function?
    Furthermore, my next homework is the inverse operation: I will read a file, this time terminated with 0xD 0xA, and must translate it to XML to use further in my BPEL process.
    Here too, I have no idea how the FileAdapter supports me.
    Thanks & BR,
    Peter

  • Select in read only transaction mode / insert in write mode

    hello,
    I have the following question: I have 2 DBs, one Rdb and one Oracle.
    I'm extracting data out of a table in Rdb and inserting it into Oracle via OWB;
    however, this is run in read-write mode and causes locks in Rdb; the only way to prevent locks is to run a select statement that accesses Rdb in 'read only' transaction mode;
    my question is: is it possible to split the select and insert statements into 2 different transaction modes, so that the select statement runs in 'read-only' mode and the insert in 'write' mode?
    I appreciate tips on how this could be achieved
    thx
    rgds

    Hello,
    is this something like:
    insert into oracle_table (select * from rdb_table@rdb_link);
    Then it is easy, you just need to use an sql init file for your OCI Service. Create a file, e.g. sql_init.ini that contains this line:
    declare transaction read only;
    Then alter your service so that it sees the sql init file, e.g.:
    SQLSRV> alter service <your OCI service> sql_init_file sql_init.ini
    The service owner must have the privileges to read the file. You need to restart the OCI service.
    Then a statement of the above kind results in a read only transaction on Rdb side, and the insert on Oracle side is done.
    I hope your SQL/Services version is a recent one (actual version is 7.3.1), because of this (from the SQL/Services Release Notes 7.2.0.1):
    5.4.25 Declare Transaction in SQL Init File Being Overridden
    In releases of OCI Services for Oracle Rdb prior to 7.2.0.1, if a DECLARE
    TRANSACTION statement was executed in the SQL initialization file of a service, it would
    be overridden by a DECLARE TRANSACTION statement executed later by OCI Services
    for Oracle Rdb. Toward the end of the connection setup, OCI Services for Oracle Rdb would
    execute a DECLARE TRANSACTION statement to set the default transaction
    characteristics to be close to Oracle default transaction characteristics. This would supersede
    any DECLARE TRANSACTION statement in the SQL initialization file. Starting with
    release 7.2.0.1, OCI Services for Oracle Rdb recognizes that a DECLARE TRANSACTION
    statement has been executed and will not execute another one.
    Regards
    Wolfgang
    P.S.: It is always better to place Rdb related questions in our communities at https://communities.oracle.com/portal/server.pt/community/rdb_product_family_on_openvms . Those are watched by Rdb Engineering and Support. It was by pure chance that I saw this forum thread.

  • How to call RFC in Async Mode using TCP/IP RFC Destination ?

    Hi experts,
         Can anybody tell me how to call an Async RFC using TCP/IP RFC Destination ?
    Regards,
    Umesh

    Check the link
    http://help.sap.com/saphelp_nw04/helpdata/en/80/09680289c751429ab3b07ad2a61c10/content.htm
    It says
    <b> For asynchronous calls, no connection to external systems is possible (TCP/IP connections in transaction SM59).</b>
    Regards,
    Abhishek

  • FTP Failure: Lots of TCP Retransmission & Duplicate ACK

    Hi Friends,
    I am having a serious problem with FTP transfers between 2 sites. Whenever I try to send files larger than 10-15 MB, the transfer fails or incomplete files are received.
    I put a sniffer on one of the access switches in between and captured some data. I can see a lot of retransmissions and DUP ACKs happening.
    Capture File :- https://www.cloudshark.org/captures/03975d64a70e
    Can some expert on packet capture help me understand this :-
    1. Everything was fine until Line 69, i.e. the ACK sent from the receiver (10.21.1.58) to the server (10.29.1.199).
    2. On Line 70 the server sends a packet with SEQ-42901.
    3. Next the receiver replies with ACK-44201.
    4. The server notices that the previous SEQ-42901 is not ACKed, so it marks it as a lost frame.
    5. On the very next transmit the server jumps directly to SEQ-78145. WHY???
    6. Next there are 12 DUP-ACKs on ACK-44201 from client to server, and then there is a fast retransmission from the server for that segment.
    7. And there are further similar problems throughout the capture, which is the cause of the failure.
    Can someone help me understanding the cause of this.

    Hello,
    Did you find a solution to your problem ?
    I am facing a similar situation and I cannot find the root cause.
    Thank you

  • What are the RMI Failure modes?

    I was surprised today when I pulled the network. My client was working with my server. Between calls I disconnected the connection. Made a call which failed. Then reconnected. The next call performed full speed. I expected a 'reconnection' delay.
    Why was the first call slow, but all subsequent calls quick, as well as calls after I reconnected the network were quick? It leads me to believe that exporting an object the first time takes some time. But subsequent calls to the object that was already exported went quicker. Or perhaps my client downloaded some classes?
    So the question is, when does the connection die in such a way that the client needs to re-download classes it has downloaded? Is this only after the client JVM exits? Also, where are the downloaded classes kept?
    Second question is, how long are my client held remote objects valid? If the connection dies, how long will they remain valid for use, and how can I detect when they are no longer valid? How can I know the difference between an invalid reference and a broken connection? Is there a difference?

    It's not a DNS problem. I am using localhost through ssh. It seems to be the first call that takes the time. I'm thinking it's the class downloading that's hurting me. Any way to validate that?
    If the network cable gets pulled, the client's references do not become invalid. This only happens when the DGC kicks in, AFAICT. So these references will remain good for a time. I could hold on to them and, on the next attempt, use the same ones again. As it stands I dump everything, but it's not always necessary. It's not that taxing to dump everything so it's no big deal, I just wanted to know.

  • T60 Disk Drive Failure After Trying to Write Recovery Disk

    Hello,
    After backing up my computer for a reformat, I decided to create system recovery disks. I inserted a CD, and got an "unspecified error" message with the code 0x80004005. My computer no longer recognizes any disks I put in it, including my Windows 7 disk, and makes a sound I've never heard before, that sounds like the drive starting to run, then failing before it gets to speed.
    Is it possible that Windows has somehow deactivated my disk drive, or could it have happened to fail minutes before a reformat due to the strain of having to burn a CD? Is there some way for me to salvage the optical drive without taking physical measures? If the optical drive is toast, is it possible for me to copy files from the Windows 7 CD to USB, and boot it from there?
    Thanks for any help.
    Edit: I found a guide for reformatting from USB, and will attempt it once I find a flash drive large enough for the Windows 7 ISO I made.

    Hello Axeman89 and welcome to Lenovo community,
    As I understand it, the optical drive does not read any disc now.
    Uninstall the optical drive driver from Device Manager, restart the system, and check.
    If the issue persists, update the optical drive firmware from the Lenovo website, and also try disabling or uninstalling the antivirus program.
    Best Regards,
    Tanuj
    Did someone help you today? Press the star on the left to thank them with a Kudo!
    If you find a post helpful and it answers your question, please mark it as an "Accepted Solution".! This will help the rest of the Community with similar issues identify the verified solution and benefit from it.
    Follow @LenovoForums on Twitter!

  • Help how to write ARP command on TCP

    Hi,
    Help or advice needed...
    In my application I'm using an access point/router that is connected to the computer with the LV application (compiled exe) running on it. At a precise moment I need to release a specific IP address and its associated MAC address from the TCP connection...
    The "arp -d nnn.nnn.nnn.nnn" (n = IP address) has the right action and releases the wanted IP address when sent from the DOS prompt, but when I build this command in my LabVIEW application via a TCP Write VI, nothing happens. Has anyone faced that kind of problem?
    Thanks for help...
    HaemoPhil

    It's on the Communication palette. Anytime you're trying to find a function, you could search for it. On the functions palette, there's a search button.
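The underlying issue is that TCP Write only pushes raw bytes down an open connection; "arp -d" is an OS command that has to be executed locally (in LabVIEW, something like the System Exec VI). A Python sketch of the idea (the function name is mine, and success depends on OS privileges and the `arp` binary being present):

```python
import subprocess

def release_arp_entry(ip: str) -> bool:
    # Delete the ARP cache entry for ip by invoking the OS 'arp -d'
    # command locally -- the equivalent of LabVIEW's System Exec VI,
    # not of writing the command text to a TCP connection.
    try:
        result = subprocess.run(["arp", "-d", ip],
                                capture_output=True, text=True)
        return result.returncode == 0
    except OSError:  # arp binary missing or not permitted
        return False
```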

  • Problem with TCP/IP communication: error ni-448 when I want to write on a TCP/IP connection

    Hi folks
    I have a really annoying problem: I want a computer (Win XP, Ethernet board, LabVIEW 7.1) to communicate with my robot controller (Win 2k, Epson SPEL+ language). The robot controller is the client, the LabVIEW PC the server. Reading messages sent by the robot controller is no problem. But when I want to send messages from the LabVIEW PC to the robot controller I get an error message: error ni-448, "make sure that the GPIB controller is the primary controller".
    For the communication I open a TCP listener and a TCP wait-on-listener in a while loop. So far this works. Then I use the TCP Write function to send a string to the robot controller. Here the error described above occurs, so the communication doesn't work.
    Can anybody help me?
    Please. Thank you in advance!

    hi there,
    Are there several network adapters on your host? If so, pass the IP of the network adapter to "TCP Listen.vi". I'm not sure, but I think I remember that in former versions this had no effect. Then you could use "TCP Listen.vi" on your slave and the "TCP Open Connection" function on your host.
    you can use the "Simple Data Server" and "Simple Data Client" examples for a quick test.
    Best regards
    chris
    CL(A)Dly bending G-Force with LabVIEW
    famous last words: "oh my god, it is full of stars!"
