TCP/IP unicast protocol between cluster members

Hi,
I want to understand what happens while the cluster members synchronize in the unicast case. Does anybody have an idea? I captured the packets between the members,
but they use a proprietary protocol I don't understand.
Regards

I am a co-worker of BOD and would like to describe another aspect of the same problem. The transactions are often cancelled with the following exception:
javax.transaction.SystemException: SubCoordinator 'managedServer_a_xx_01+a-xx-appl01:31370+a_xx+t3+' not available
    at weblogic.transaction.internal.TransactionImpl.abort(TransactionImpl.java:1142)
    at weblogic.transaction.internal.ServerSCInfo.startPrePrepareAndChain(ServerSCInfo.java:173)
    at weblogic.transaction.internal.ServerSCInfo.startPrePrepareAndChain(ServerSCInfo.java:163)
    at weblogic.transaction.internal.ServerTransactionImpl.localPrePrepareAndChain(ServerTransactionImpl.java:1310)
    at weblogic.transaction.internal.SubCoordinatorImpl.startPrePrepareAndChain(SubCoordinatorImpl.java:90)
    at weblogic.transaction.internal.CoordinatorImpl_WLSkel.invoke(Unknown Source)
    at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:589)
    at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:477)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:147)
    at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:473)
    at weblogic.rmi.internal.wls.WLSExecuteRequest.run(WLSExecuteRequest.java:118)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)      
We suspect that the cluster members are not synchronized as they should be. Interestingly, the same code ran well on WebLogic 9.2.3 with multicast; now, on WebLogic 10.3.1 with unicast enabled, we encounter these problems. We should also mention that replication occurs on the default channel.

Similar Messages

  • Communication protocol between Admin Server and Managed Server

    Hello - I am hoping someone can help me understand the communication protocols used in my setup.
    Here is my understanding of the protocols used between each component:
    End User <--->HTTPS<--->Load Balancer Device<--->HTTPS<--->Web Server<--->HTTPS<--->WebLogic Server (Managed Server)<--->LDAP/JDBC<-->Data tier components
    AdminServer<--->T3<--->Managed Server, i.e. the communication protocol between the Admin Server and a Managed Server is T3.
    What is the communication protocol between Managed Servers running in one cluster on two separate machines?
    Thank you.

    Hello, interesting question.
    The documentation on " [cluster multicast communication|http://download.oracle.com/docs/cd/E13222_01/wls/docs90/cluster/features.html#1007001] " does not specify the protocol used to pack the information, for example in a session replication.
    Although in a session replication all objects must be serializable, I don't think the RMI protocol is used.
    I hope some expert gives us some light on this issue :-)

  • Hardware CEF entry usage is at 95% capacity for IPv4 unicast protocol

    I have a problem with the TCAM on my router: hardware CEF entry usage is at 95% capacity for the IPv4 unicast protocol.
    I use both IPv4 and IPv6, so I cannot customize the route usage.
    Which memory should I upgrade if I want to increase the TCAM?
    The MSFC?
    Should I upgrade:
    SP Bootflash/bootdisk
    RP Bootflash
    SP DRAM
    RP DRAM
    Or should I upgrade my hardware?
    Currently I use a WS-SUP32-GE-3B.

    You need to upgrade the hardware.
    Check out this thread - it is relevant to your issue.
    https://supportforums.cisco.com/thread/2113775
    HTH
    Regards
    Inayath

  • Live unicast streaming between Macs?

    Afternoon all
    I am looking to set up a live unicast stream between two Macs for video relay of a live event between locations on a college campus, so that a live event in one location can be displayed on a big screen in another as an overflow space.
    We have Macs running a variety of Lion and Mountain Lion, and we have a licence for Lion Server but haven't installed it on anything yet; we could put it on a Mac mini if needed.
    On paper I should be able to do this with VLC at both ends, but I can't get it working. It seems QuickTime Broadcaster was the software of choice, but the server side is outdated and not in Lion Server. There are various other software options available, but a lot of the web info on this seems to be a few years old.
    We are running off a local LAN between buildings and have gigabit Ethernet between them. The distance is less than 400 m, so if we can't get this working we'll just run a cable.
    We are using camera(s) at the remote location, and input to the broadcasting Mac will be a FireWire capture device like a Canopus ADVC or a direct DV input from a DV camera.
    Thanks in advance
    T

    Just to clarify: when I say multiple cameras, we don't require vision mixing in the software; we'll be mixing externally, so we're looking at one video input plus audio.

  • Interfacing an Allen-Bradley PLC with LabVIEW using the TCP/IP Modbus protocol

    Hello.....
    I want to connect an Allen-Bradley PLC to LabVIEW using the TCP/IP Modbus protocol.
    The PLC I am using is from the 1766BWA series (Allen-Bradley); please can you help me with this or give me a suggestion.
    Also, I want to know the resistance values of this PLC.
    Thank you.

    Hello
    I was trying to connect an Allen-Bradley PLC to LabVIEW using the TCP/IP Modbus protocol.
    I am attaching a snapshot of the PLC interfacing and the communication VI using Modbus, but it is not working.
    For reading we use the settings shown in CHANNEL 1-modbus, but it gives an error regarding Modbus addressing,
    and for writing it shows an 'illegal address' error; please help me with this.
    Thank you.
    Attachments:
    PLC_SCREEN.zip ‏261 KB
    Modbus_(Read-Write).zip ‏14 KB
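The 'illegal address' error is often an off-by-one in register numbering: on the wire, Modbus addresses are zero-based, so holding register 40001 is address 0. As a hedged illustration of the request frame a Modbus TCP master builds under the hood (the transaction ID, unit ID, and register values below are invented examples, not values from the post), here is a sketch in Python:

```python
import struct

def read_holding_registers_request(trans_id, unit_id, start_addr, quantity):
    """Build a Modbus TCP 'Read Holding Registers' (function 0x03) request.

    MBAP header: transaction id (2 bytes), protocol id (always 0, 2 bytes),
    remaining length (2 bytes), unit id (1 byte). PDU: function code
    (1 byte), starting address (2 bytes), quantity (2 bytes).
    All fields are big-endian.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, quantity)
    # The length field counts the unit id plus the PDU that follows.
    mbap = struct.pack(">HHHB", trans_id, 0x0000, 1 + len(pdu), unit_id)
    return mbap + pdu

# Register 40001 in conventional Modbus numbering is address 0 on the wire.
frame = read_holding_registers_request(trans_id=1, unit_id=1,
                                       start_addr=0, quantity=10)
```

Comparing a frame like this against a Wireshark capture of the failing LabVIEW request is a quick way to spot an addressing mismatch.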

  • How to determine Streams TCP/IP network traffic between database nodes

    We need to determine network bandwidth requirements for optimizing Streams replication performance between database instances that are NOT located within the same data center.
    How can I determine the rough number of bits per second of TCP/IP traffic generated by the Streams replication propagation process?

    This is the information I got from our dev environment (not too much activity here). Is total time in seconds? So if I take total_bytes/total_time I get bytes per second, right?
    QNAME : VERITYUSR_CAP_Q
    DESTINATION : NEWS.J513.BLOOMBERG.COM
    TOTAL_TIME : 11487
    TOTAL_NUMBER : 400329
    TOTAL_BYTES : 5290
    MAX_NUMBER : 400329
    MAX_BYTES : 5290
    AVG_NUMBER : 400329
    AVG_SIZE : 0.01321413
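Roughly, yes: assuming TOTAL_TIME is reported in seconds (as it is for the queue schedule views), bytes per second is TOTAL_BYTES / TOTAL_TIME, and multiplying by 8 gives bits per second. A quick sketch with the numbers above:

```python
# Stats copied from the schedule output above.
total_bytes = 5290
total_time_s = 11487        # assuming TOTAL_TIME is reported in seconds

bytes_per_sec = total_bytes / total_time_s   # roughly 0.46 B/s
bits_per_sec = bytes_per_sec * 8             # roughly 3.7 bps
```

With numbers this small, the dev environment is essentially idle; the same arithmetic against production stats over a known interval gives a usable bandwidth estimate.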

  • Design for a semi protocol between a server and client

    Hi!
    I'm building quite a large client/server application that communicates using serializable objects.
    There are about 20 actions performed between the client and server, ranging from
    - the client requesting objects from the server
    - to the client sending a range of different values to the server.
    Does anyone have a design for a protocol that handles a large number of actions that I could get design ideas from?

    Hi!
    > I'm building quite a large client/server application that communicates using serializable objects. There are about 20 actions performed between the client and server, ranging from the client requesting objects from the server to the client sending a range of different values to the server.
    HOPP (Half-Object plus Protocol). This is the pattern EJBs follow.
    > Does anyone have a design for a protocol that handles a large number of actions that I could get design ideas from?
    Protocol? There is SOAP, RTSP, SMTP or HTTP.
    To implement a protocol handler, check out the Command and Interpreter patterns. Command is most useful if you simply know the commands you need to handle, and Interpreter is most useful if you have a grammar you wish to implement.
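As a sketch of the Command pattern mentioned above, applied to a message protocol (the opcodes and handlers here are invented for illustration, not from the original application): each action maps to a handler object, so adding a 21st action means registering one more command rather than growing a switch statement.

```python
class Command:
    """Base class: one subclass per protocol action."""
    def execute(self, payload):
        raise NotImplementedError

class GetObjects(Command):
    def execute(self, payload):
        return f"objects matching {payload!r}"

class StoreValues(Command):
    def execute(self, payload):
        return f"stored {len(payload)} values"

# The dispatcher: opcode -> command instance.
HANDLERS = {
    "GET_OBJECTS": GetObjects(),
    "STORE_VALUES": StoreValues(),
}

def dispatch(opcode, payload):
    try:
        return HANDLERS[opcode].execute(payload)
    except KeyError:
        raise ValueError(f"unknown opcode {opcode!r}")
```

In the serializable-objects setup from the question, the opcode would travel with the serialized payload and the server-side dispatcher would stay this small regardless of how many actions are added.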

  • Do you know a unicast protocol for the Ethernet control plane?

    Hi,
    Does anyone know a protocol for the Ethernet control plane which has a unicast destination address?
    MVRP, MMRP, MSTP, RSTP: all these protocols use a reserved multicast destination address.
    Perhaps we have to look at non-802.1Q control plane protocols.
    Best regards,
    Michel

    Hi Peter,
    > I wonder if any of the OAM protocols, especially the one providing the loopback/ping test is unicast-based.
    In G.8013 (07/2011) section 7.3:
    "The Ethernet loopback function (ETH-LB) is used to verify connectivity of a MEP with a MIP or
    peer MEP(s). There are two ETH-LB types:
    • Unicast ETH-LB.
    • Multicast ETH-LB".
    > In any case, think of LOOP frames sent by Catalyst switches to detect  self-looped ports. In these frames,
    > the source and destination MAC  address are set to the unicast MAC of the egress port.
    As I said above, it's a good case for my little study.
    The LOOP frame, from Cisco, was certainly interesting and important before 2004.
    Since 802.3ah-2004 we have had the OAM remote loopback (in link OAM, as opposed to network OAM like ETH-LB).
    Best regards,
    Michel

  • TCP Connection lost in between TCP read operation

    Hi all,
                    Is there any method to check whether a TCP connection is lost? In my program the server may need to read data for 44 minutes, and I want to know if, while reading, say after 20 minutes, the connection is lost for unknown reasons. In that situation I should verify whether the connection is established or lost, and if it is lost in between, I should inform the user.
    Any implementation of this type. 
    Kudos are always welcome if you got solution to some extent.
    I need my difficulties because they are necessary to enjoy my success.
    --Ranjeet

    Ranjeet_Singh wrote:
    > Is there any method to check whether a TCP connection is lost? [...] If the connection is lost in between, I should inform the user.
    > Any implementation of this type?
    There is no direct method other than the results of transmitting/receiving data. If you get an error 66, that would indicate that the connection has been closed by the other side. However, you will not get this until you try to send/receive data. If you are receiving data continuously you can look for a timeout error (error 56). You may also get an error 52. The end result is that you need to check the error codes and then take whatever action you deem necessary.
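For comparison outside LabVIEW, the same idea in Python (a sketch, with an arbitrary probe timeout): a read returning zero bytes signals that the peer closed the connection (LabVIEW's error 66), while a timeout only says the link is idle (error 56).

```python
import socket

def connection_alive(sock, timeout_s=1.0):
    """Probe a TCP socket by attempting a short, timed read.

    Returns False if the peer has closed the connection (recv returns
    b''), True if data arrived or the read merely timed out (the link
    may still be fine, just idle). Note: this consumes a byte if real
    data is pending, so in practice fold the check into the normal
    read loop rather than probing separately.
    """
    sock.settimeout(timeout_s)
    try:
        data = sock.recv(1)
        return data != b""      # b'' means an orderly close by the peer
    except socket.timeout:
        return True             # no data yet; connection not proven dead
```

As with the LabVIEW error codes, a silently dead link (cable pulled, peer crashed without a FIN) is only detected by a timeout or by an application-level heartbeat, not by the probe itself.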
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot

  • What are the supported secure protocols between BizTalk 2013 R2 and Iguana HL7 interface

    We are working on a project where we want to transmit HL7 messages securely from BizTalk 2013 R2 to the Iguana HL7 interface and receive ACKs/NACKs. We came to know that the MLLP adapter doesn't support SSL, and the other option we are considering
    is HTTPS. Are there any better options available, other than these two, which are supported by both BizTalk and Iguana?
    Also I read the following article
    http://help.interfaceware.com/kb/164

    Have you gone through the above post? It clearly states:
    ACKs are published when the Messaging Engine successfully transmits a message over the 'wire', the system context property "AckRequired" is written on the message that was sent, and it is set to true.
    Both ACKs and NACKs have the following system context properties promoted, which can therefore be used in filter expressions for routing:
    AckType:
    set to ACK or NACK
    AckID:
    set to the message ID of the message that this ACK/NACK is for
    AckOwnerID:
    set to the instance ID that this ACK/NACK is for
    CorrelationToken:
    flowed from the message to the ACK/NACK
    I would again suggest going through the link below to design your solution.
    http://blogs.msdn.com/b/kevinsmi/archive/2004/07/03/172574.aspx
    Thanks
    Abhishek

  • What is the difference between TCP and UDP

    What is the difference between TCP and UDP

    The difference between TCP and UDP is that UDP throws datagram packets over the wire without concern for whether they arrive at their destination or not. TCP attempts to provide the notion of a connection by requiring an acknowledgement from the recipient and continually resending packets until it receives the acknowledgement or gets a timeout. In a way TCP can be thought of as a layer over UDP, but it's more than that. A good analogy is the beginning of the school day vs. the end of the school day. In the morning, children enter school in a line and in an orderly fashion. Each child is accounted for. Each child has a defined classroom destination. Likewise, TCP packets are sequenced, accounted for, and have a definite destination. At the end of the day all the kids pour out of the building randomly, with no regard for whether their buddies are in detention or not. Any particular kid could be heading straight home or to the mall, and no accounting is taken. In the same way, UDP packets are dispersed from the sender without regard for order. Any packet may make a pit stop at a particular router on the network, and no accounting is done to ensure it makes it all the way to the recipient.
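The contrast is visible directly in the socket API. A minimal Python loopback sketch (the port is OS-assigned and the message is invented): the UDP sender needs no handshake and gets no delivery confirmation, whereas TCP requires connect()/accept() before any data moves, and acknowledges and resends behind the scenes.

```python
import socket

# UDP: connectionless. The sender needs no handshake and gets no
# delivery confirmation; the protocol itself never resends.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0: let the OS pick one
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"fire and forget", addr)  # no connect(), no ACK at this layer

datagram, _ = receiver.recvfrom(1024)

# TCP, by contrast, requires connect()/accept() (the handshake) before
# send() is even legal, and the stack ACKs and resends behind the scenes.
```

On the loopback interface this datagram reliably arrives; across a real network, nothing in UDP itself would tell the sender if it didn't.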

  • Wifi video capture using the TCP/IP protocol in LabVIEW

    I have designed a robot with a camera and I am trying to control it using LabVIEW. I am using the TCP/IP communication protocol (my robot has a wifi card for communication, IP: 169.254.0.20). I am able to control the motor and make my robot move in any specified direction, but I am not able to acquire video from the camera. I have installed the NI Vision drivers but still cannot acquire video. Is there an IMAQ wifi driver available? If not, is there any other way to acquire continuous video from a camera using TCP/IP?
    I am using LabVIEW 8.6
    wifi: private IP 169.254.0.20
    Attachments:
    robo.vi ‏9 KB

    Without knowing the specifics of the camera you are working with it is impossible to give you advice on what your code needs to do. For instance, to obtain an image do you connect to the camera or does the camera expect to connect to a server? If you connect to the camera is there a specific protocol you need to use to request the image? Generally in networking there is some application layer protocol required for two devices to communicate with each other.
    I don't have much experience with IMAQ itself, so I can't answer your question as to whether there are VIs that already do this or not.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot

  • Failure modes in TCP WRITE?

    I need help diagnosing an issue where TCP communications breaks down between my host (Windows) and a PXI (LabVIEW RT 2010).
    The bottom-line questions are these:
    1...Are there circumstances in which TCP WRITE, given a string of say, 10 characters, will write more than zero and fewer than 10 characters to the connection? If so, what are those circumstances?
    2...Is it risky to use a timeout value of 1 mSec?  Further thought seems to say that I won't get a 1000 uSec timeout if we're using a 1-mSec timebase, but I don't know if that's true in the PXI.
    Background:
    On the PXI, I'm running a 100-Hz PID loop, controlling an engine.  I measure the speed and torque, and control the speed and throttle.  Along the way, I'm measuring 200 channels of misc stuff (analog, CAN, TCP instruments) at 10 Hz and sending gobs of info to the host (200 chans * 8 = 1600 bytes every 0.1 sec)
    The host sends commands, the PXI responds.
    The message protocol is a fixed-header, variable payload type: a message is a fixed 3-byte header, consisting of a U8 OpCode, and a U16 PAYLOAD SIZE field. I flatten some structure to a string, measure its size, and prepend the header and send it as one TCP WRITE.  I receive in two TCP READs: one for the header, then I unflatten the header, read the PAYLOAD SIZE and then another read for that many more bytes.
      The payload can thus be zero bytes: a TCP READ with a byte count of zero is legal and will succeed without error.
    A test starts with establishing a connection, some configuration stuff, and then sampling starts. The 10-Hz data stream is shown on the host screen at 2-Hz as numeric indicators, or maybe selected channels in a chart.
    At some point the user starts RECORDING, and the 10-Hz data goes into a queue for later writing to a file. This is while the engine is being driven thru a prescribed cycle of speed/torque target points.
    The recording lasts for 20 or in some cases 40 minutes (24000 samples) and then recording stops, but sampling doesn't.  Data is still coming in and charted. The user can then do some special operations, related to calibration checks and leak checks, and those results are remembered.  Finally, they hit the DONE button, and the whole mess gets written to a file.
    All of this has worked fine for several years, but as the system is growing (more devices, more channels, more code), a problem has cropped up: the two ends are occasionally getting out of synch. 
    The test itself, and all the configuration stuff before, is working perfectly. The measurement immediately after the test is good.  At some point after that, it goes south.  The log shows the PXI sending results for operations that were not requested. The data in those results is garbage; 1.92648920e-299 and such numbers, resulting from interpreting random stuff as a DBL.
    After I write the file, the connection is broken, the next test re-establishes it, and all is well again.
    In chasing all this, I've triple-checked that all my SENDs are MEASURING the size of the payload before sending it.  Two possibilities have come up:
    1... There is a message with a payload over 64k.  If my sender were presented with a string of length 65537, it would convert that to a U16 of value 1, and the receiver would expect 1 byte. The receiver would then expect another header, but this data comes instead, and we're off the rails.
      I don't believe that's happening. Most of the messages are fewer than 20 bytes payload, the data block is 1600 or so, I see no evidence for such a thing to happen.
    2... The PXI is failing, under certain circumstances, to send the whole message given to TCP WRITE.  If it sent out a header promising 20 more bytes but only delivered 10, then the receiver would see the header and expect 20 more. 10 would come immediately, but whatever the NEXT message was, its header would be construed as part of the payload of the first message, and we're off the rails.
    Unfortunately, I am not checking the error return from TCP write, since it never failed in my testing here (I know, twenty lashes for me).
    It also occurs to me that I am giving it a 1-mSec timeout value, since I'm in a 100-Hz loop. Perhaps I should have separated the TCP stuff into a separate thread.  In any case, maybe I don't get a full 1000 uSec, due to clock resolution issues.
    That means that TCP WRITE cannot get the data written before the TIMEOUT expires, but it has written part of it.
    I suspect, but the logs don't prove, that the point of failure is when they hit the DONE button.  The general CPU usage on the PXI is 2-5% but at that point there are 12-15 DAQ domain managers to be shutting down, so the instantaneous CPU load is high.  If that happens to coincide with a message going out, well, maybe the problem crops up.  It doesn't happen every time.
    So I repeat the two questions:
    1...Are there circumstances in which TCP WRITE, given a string of say, 10 characters, will write more than zero and fewer than 10 characters to the connection? If so, what are those circumstances?
    2...Is it risky to use a timeout value of 1 mSec?  Further thought seems to say that I won't get a 1000 uSec timeout if we're using a 1-mSec timebase, but I don't know if that's true in the PXI.
    Thanks,
    Steve Bird
    Culverson Software - Elegant software that is a pleasure to use.
    Culverson.com
    Blog for (mostly LabVIEW) programmers: Tips And Tricks

    There are a couple of issues at play here, and both are working together to cause your issue(s).
    1) LV RT will suspend the TCP thread when your CPU utilization goes up to 100%. When this happens, your connection to the outside world simply goes away and your communications can get pretty screwed up. (More here)
    Unless you create some form of very robust resend and timeout strategy your only other solution would be to find a way to keep your CPU from maxing out. This may be through the use of some scheduler to limit how many processes are running at a particular time or other code optimization. Any way you look at it, 100% CPU = Loss of TCP comms.
    2) The standard method of TCP communication shown in all examples I have seen to date uses a similar method to transfer data where a header is sent with the data payload size to follow.
    <packet 1 payload size (2 bytes)><packet 1 payload..........><packet 2 payload size (2 bytes)><packet 2 payload.......................>
    On the Rx side, the header is read, the payload size extracted, and then a TCP Read is issued with the desired size. Under normal circumstances this works very well and is a particularly efficient method of transferring data. When the TCP thread is suspended during an Rx operation, however, this header can get corrupted and pass the TCP Read a bad payload size due to a timeout on the previous read. As an example, the header read expects 20 bytes but, due to the TCP thread suspension, only gets 10 before the timeout. The TCP Read returns only those 10 bytes, leaving the other 10 bytes in the Rx buffer for the next read operation. The subsequent TCP Read now gets the first 2 bytes from the remaining data payload (10 bytes) still in the buffer. This gives you a further bad payload read size and the process continues, OR, if you happen to get back a huge number, you get an out-of-memory error when you try to allocate a gigantic TCP receive buffer.
    The issue now is that your communications are out of sync. The Rx end is not interpreting the correct bytes as the header, so this timeout or bad-payload behavior can continue for quite a long time. I have found that occasionally (although very rarely) the system will fall back into sync, but it really is a crap shoot at this point.
    A more robust way of dealing with the communication issue is to change your TCP Read to terminate on a CRLF as opposed to a number of bytes or a timeout (the TCP Read has an enum selector for switching the mode). In this instance, whenever a CRLF is seen, the TCP Read will immediately terminate and return data. If the payload is corrupted, it will fail to parse correctly or will encounter a checksum failure and be discarded, or a resend request will be issued. In either case, the communications link will automatically fall back into sync between the Tx and Rx sides. The one other thing you must do is encode your data to ensure that no CRLF characters exist in the payload. Base64 encode/decode works well. You give up some bandwidth due to the B64 strings being longer; however, the fact that the comm link is now self-syncing is normally a worthwhile sacrifice.
    When running on any platform other than RT, the <header><payload> method of transmitting data works fine, as TCP guarantees transmission of the data; on RT platforms, however, due to the suspension of the TCP thread during high-CPU excursions, this method fails miserably.
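Whichever framing you choose, the length-prefix scheme described above is only safe when both sides loop until the full byte count has been transferred. Here is a Python sketch of the <header><payload> protocol from the original post (U8 opcode plus big-endian U16 payload size); recv_exactly is the looping read that a single timed TCP Read does not give you, and the explicit size check guards against the 65537-byte wraparound scenario from point 1.

```python
import struct

def send_message(sock, opcode, payload):
    """Prefix the payload with the 3-byte header and send all of it.

    sendall() loops internally until every byte is accepted by the
    stack, unlike a single low-level write with a short timeout.
    """
    if len(payload) > 0xFFFF:
        raise ValueError("payload too large for a U16 length field")
    sock.sendall(struct.pack(">BH", opcode, len(payload)) + payload)

def recv_exactly(sock, n):
    """Loop until exactly n bytes arrive (or fail loudly on EOF)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_message(sock):
    opcode, size = struct.unpack(">BH", recv_exactly(sock, 3))
    return opcode, recv_exactly(sock, size)
```

Note that recv_exactly with a count of zero simply returns an empty payload, matching the original post's observation that a zero-byte read is legal and succeeds.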

  • Reading a file via TCP/IP

    I have used the Open Application Reference and Open VI Reference functions with the Invoke Node to remotely activate a VI, and this works fine. The VI on the remote computer creates an output file which I then need to get onto the local computer, and I am unsure how to do this. Can anyone help? I'm sure it can't be too hard, but I don't really understand the VI client and VI server TCP/IP examples shipped with LabVIEW; it's just not clicking for me. Can anyone tell me how to do this?

    Hi Ridge,
    I would like to take a stab at answering your questions while avoiding writing a book in the process.
    The questions you are asking go down below where you generally work with TCP/IP in the LabVIEW environment. When you are using a PC on a local area network (that may or may not be connected to the internet), there are a number of delivery mechanisms available for moving messages between machines and the applications running on them.
    I will try to give you a layman's tour through some of what is going on to move this information around by starting at the bottom and touching on just enough to answer some of your questions.
    About 20 years ago the OSI 7-layer model came out, describing how people should organize their communications schemes so that many different applications from different vendors would play nicely together. This 7-layer model broke the whole process down, starting at the TOP with what I believe they called the application layer, down to the BOTTOM, the physical layer, which covers things like wires and transceivers.
    Looking back, I would have to say that the 7-layer model has worked rather well, as far as it has been adhered to. Getting back,
    The physical layer also talks about how data will be put on the wire and taken off. Examples of physical layers are things like
    Ethernet
    Token-Ring
    etc.
    In the case of Ethernet you have any number of computers, virtually all sharing the same wire to do the talking. No two can talk at the same time, otherwise they would step on each other. This is called a collision. The whole thing works sort of like a party line, with a bunch of people taking turns having different conversations. The same thing happens on Ethernet: separate devices take turns saying what they have to say and then letting others talk.
    To make it fair for all, there is a limit on how long any one message can be. This keeps one device (person) from tying up the line. For standard Ethernet, a single frame carries at most about 1500 bytes of payload.
    So how does that allow larger transfers? That is where the 7-layer model kicks in. At the next level up, above Ethernet and Token-Ring and the like, are the transfer protocols. The transfer protocols are responsible for moving data using the physical layer. Examples of transfer protocols are TCP, UDP, DECnet, LAT, etc.
    As in the case of TCP (Transmission Control Protocol), they have the term Protocol buried in their names. They all use the physical layer to provide whatever service they provide, following a set of rules to interact with a counterpart elsewhere on the physical layer. (Boy, was that a bad sentence.)
    Returning to the earlier analogy of the phone conversation: you could have many conversations taking place in different languages, as long as the listener and the talker were speaking the same language. It's the same way over Ethernet. One talker could speak TCP (English) as long as there was a listener listening for TCP (English), and intermittently another talker could be talking UDP (Spanish) to a different listener listening in UDP (Spanish). TCP is just the protocol, or set of rules, used to interact.
    To take the next step up, we have to introduce another complication to our party-line analogy. Let's say Bill and Bob want to talk English while Sue and Sally also want to talk English.
    New problem, new rule.
    Whenever talking, all conversations must use the following format:
    Talker, Listener, message.
    Eavesdropping would sound like:
    Bob Bill How are you?
    Sue Sally The tangent of the circle would tell us that.
    etc.
    This new rule is analogous to the IP part of TCP/IP. When you are asking for an IP address, you are looking for a name you can use when you are talking. You could use the IP you were born with (a static IP), or you could pick up an alias and use it to do some random talking if you want.
    Who decides? The network administrator. On my network at home, I set up all of the IP addresses. At work, I have to request one if I need one.
    One of the things a network administrator can do is set things up so that there is one machine on the network that gives out aliases to anyone that asks. This is a dynamic IP address. In this case, any two individuals could talk using aliases by contacting the name server and finding out the current alias of any other machine.
    I have carried on here for quite a while, so I will call it here. I just wanted to explain what is going on below LabVIEW.
    Ben
    (Legal disclaimer:
    I was just talking off the top of my head here. I know TCP/IP is technically two layers tied together, and whatever happened to layers 4-6 anyway?)
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Can TCP/IP communication be used for 2 LabVIEW processes on the same target?

    I am a newbie to all this LV stuff so bear with me.  We are trying to create a simulation tool that will run on the same machine as the client GUI [also LV].  Can we use TCP/IP to communicate between these 2 LV processes on the same machine?  If so, how?  Thanks for the help.

    Hi nathand,
    I have another issue while communicating with the TCP/IP protocol.
    I am sending data from an RT target to a host computer (the main UI) using TCP. I have also built an EXE of the main UI, which runs on the host computer, and I am using this EXE on three different PCs, so I want to send control signals from all four PCs (three with the EXE installed plus the host PC). I created four communication loops on the RT target, each with a different port, to communicate with the four PCs.
    When I run all the VIs, they try to connect to the listener's port and keep waiting for a connection. Another problem is that whatever value we send from the four PCs, the target VI receives random values along with the actual value. Also, it gets reset every time.
    So please help me here.
    Thanks!
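On the original question: yes, two processes on the same machine can talk TCP/IP over the loopback interface (127.0.0.1); the traffic never touches a physical network. A Python sketch with threads standing in for the two LabVIEW processes (the messages are invented, and the OS picks the port):

```python
import socket
import threading

def server(listener, results):
    """Stand-in for the simulation-tool process: accept one client,
    record its message, and acknowledge it."""
    conn, _ = listener.accept()
    with conn:
        results.append(conn.recv(1024))
        conn.sendall(b"ack")

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))     # port 0: the OS assigns a free port
listener.listen(1)
port = listener.getsockname()[1]

received = []
t = threading.Thread(target=server, args=(listener, received))
t.start()

# Stand-in for the GUI process: an ordinary TCP client, just to localhost.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"start simulation")
reply = client.recv(1024)
client.close()
t.join()
```

In LabVIEW terms, one process runs TCP Listen on a chosen port and the other runs TCP Open Connection to "localhost" on that same port; everything else works exactly as it would between two machines.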
