TCP Stream Reconstruction

Hi,
I'm working on a project based on the pcap library on the Windows platform (i.e., using WinPcap). I need a method (an algorithm, rather) to reconstruct TCP streams from captured packets.
A spoon-fed library to do the job for me would be most appreciated! Does any such library exist? If not, how do I go about reconstructing TCP streams, primarily to decode HTTP/1.1 headers?
Thanks,
Sayan.

TCP does not lose data. Show us the code that reads from the socket.
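For what it's worth, the question is about reassembling sniffed packets, not reading from a socket. The closest thing to a spoon-fed library I know of is libnids, which does TCP reassembly on top of libpcap (Win32 builds over WinPcap have existed, though I can't vouch for how current they are). If you roll your own, the core idea is: key each connection by its (source IP, source port, destination IP, destination port) tuple, take the initial sequence number from the SYN, then emit payload bytes in sequence order while buffering out-of-order segments. A minimal sketch of that ordering step (capture and TCP header parsing via WinPcap are omitted, the class is an illustration rather than a real library, and it ignores 32-bit sequence wraparound, FIN/RST handling, and anything beyond simple overlap trimming):

    import java.nio.charset.StandardCharsets;
    import java.util.SortedMap;
    import java.util.TreeMap;

    // Reassembles one direction of one TCP connection from (seq, payload) pairs.
    class TcpStreamReassembler {
        private final SortedMap<Long, byte[]> pending = new TreeMap<Long, byte[]>();
        private final StringBuilder stream = new StringBuilder();
        private long nextSeq; // sequence number of the next byte we expect

        TcpStreamReassembler(long isn) {
            this.nextSeq = isn + 1; // first data byte follows the SYN
        }

        void addSegment(long seq, byte[] payload) {
            if (payload.length == 0) return;             // pure ACK, nothing to add
            if (seq + payload.length <= nextSeq) return; // full retransmission, drop
            pending.put(seq, payload);
            // Drain every buffered segment that is now contiguous with the stream.
            while (!pending.isEmpty() && pending.firstKey() <= nextSeq) {
                long s = pending.firstKey();
                byte[] data = pending.remove(s);
                int skip = (int) (nextSeq - s);          // overlap we already emitted
                if (skip < data.length) {
                    stream.append(new String(data, skip, data.length - skip,
                            StandardCharsets.ISO_8859_1));
                    nextSeq = s + data.length;
                }
            }
        }

        String contents() { return stream.toString(); }
    }

Once the bytes are in order, the HTTP/1.1 headers are simply the text up to the first blank line (CRLF CRLF); everything after that is governed by Content-Length or chunked encoding.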

Similar Messages

  • How can I know the end of TCP stream and/or SOAP over HTTP flow

    Hi all!
    I want to read SOAP messages over HTTP, from sniffed TCP fragments.
How can I decide that the TCP and/or HTTP fragments have finished?
For example, say a SOAP message is sliced into 5 TCP packets, and the first packet contains the HTTP header and some SOAP content too. The last packet contains the SOAP XML's closing tag, </soap:Envelope>.
I don't want to check every packet's end with logic like "if the end string is </soap:Envelope> then it is the last packet"; I just want to know which is the last packet of that message.
    The TCP connection won't be closed after the message arrives, and let's say the HTTP header doesn't contain a Content-Length field.

A TCP connection is just a stream of bytes. It doesn't care what those bytes are. HTTP is built on top of TCP and allows a request to be made without closing the connection (HTTP/1.1 keep-alive). So you need to understand the HTTP protocol, understand whether it's a keep-alive connection or not, and then do the same thing a browser would do to work out when a reply has been completed and the connection is available for the next request. Otherwise, you'll just appear to be getting loads of unrelated data as you sniff the connection. Oh, and you'll probably need to understand the HTTP chunking protocol too.
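To make that concrete: once you have the header block of a response, RFC 2616 section 4.4 tells you how the body ends. A minimal sketch of that decision, assuming the headers have already been parsed into a map of lower-cased field names (the parsing and the chunk reader itself are left out):

    import java.util.Map;

    class HttpBodyFraming {
        // Returns how the message body is delimited for a given header block.
        static String framing(Map<String, String> headers) {
            String te = headers.get("transfer-encoding");
            if (te != null && te.toLowerCase().contains("chunked")) {
                // Body is a series of <hex-length CRLF data CRLF> chunks;
                // a zero-length chunk marks the end, i.e. your "last packet".
                return "chunked";
            }
            String cl = headers.get("content-length");
            if (cl != null) {
                // Body is exactly this many bytes after the blank line.
                return "content-length " + cl.trim();
            }
            // Otherwise the body runs until the connection closes. Since the
            // poster says the connection stays open and Content-Length is
            // absent, a compliant server must be using chunked encoding.
            return "until-close";
        }
    }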

  • TCP STREAMING ISSUE? Please help

    HI 
I am posting here as my last resort, to see if one of the many experts can resolve the anomaly I have on my BT Infinity Option 2 totally unlimited package.
The issue is streaming from my Slingbox, a device which in the past was sold in the BT shop. The Slingbox is located abroad, and I have had BT Infinity a few years now.
When I first changed to BT I had this problem connecting to the Slingbox: it used to buffer slightly, and the on-screen application gives stats on the streaming speed. While streaming, the stats would show the speed going at 2.5 meg and then suddenly dropping to 1 meg, which interrupted viewing. This did not happen all the time but was intermittent. At this point it is very relevant to point out that all other applications have worked without any noticeable issues.
I am bringing this issue forward now because I have recently changed my work/shift patterns, so I have weekends off and find myself at home on more evenings, and therefore use this device more from my home broadband connection. What I have noticed is that at the times I am home, the device not only has sudden drops but also struggles with the streaming speed (900 kbps), as if there were restrictions on this type of data.
When I do the BT Wholesale speed test my stats are brilliant: 73 meg down, 16 meg up. On further diagnostics, the line IP profile is 75 meg and it all looks good.
However, my BT Infinity struggles with this application and struggles with streaming TCP on my Slingbox, which only needs 2.5 meg to work flawlessly.
I have regularly connected to my Slingbox through different ISPs, be it where I worked (BE Broadband), my friend's house (Sky fibre optic), or my brother's house, who is on ADSL2 with O2 Broadband (he still is, and has not changed to the Sky infrastructure quite yet because he requires a static IP). At these premises the Slingbox streaming works faultlessly.
That proves at least one thing: the problem does not reside at the source end.
So over the last month or so, that has led me to investigate my connection and what the possible causes could be of this problem existing on my connection only.
I have changed over from my BT router to a top-of-the-range Linksys 6700, changed wifi channels, switched off wifi, connected 3 different laptops/PCs via ethernet (disconnecting all other devices and connecting one PC at a time), had a new connection to my DP pole 14 months ago, and had new internal wiring done by Openreach. And the problem still persists, more so in the evenings and at weekends.
The last thing I tried was Wireshark, which gives information on whatever data is going through the system. I ran the application on my brother's connection while using the Slingbox, and under the heading PROTOCOL it shows TCP.
When I use Wireshark on my connection, under the same PROTOCOL heading it says HTTP, and under the INFO heading it says "continuation or non-HTTP traffic".
I understand there may be many underlying factors in why the application does not perform to the desired quality (distance from the cabinet, the line, etc.), but the reality is that the device performs as it should everywhere else I have tried it, and my brother, who lives a few streets away, laughs at me that he is on an ADSL connection and pays half the cost for his broadband but has all his services working without issues.
I have rung BT technical a few times, and although the BT tech team have tried to help, I have gone round in circles, basically just restarting the router or modem.
I do feel that maybe some sort of settings change on BT's side will resolve this.
My contract with BT is due to end at the end of March 2014, and with regret, if this issue is not corrected then unfortunately I will have to change ISP.
    Thanks

Hi, I followed the link to GLASHOST and when you try to test, it comes back saying there has been an error?
Is there another website I could try?
Also, how do I contact the MODS? Regardless of the website not allowing me to test, at certain times (evenings) there is definitely throttling occurring, or some setting on my line is causing this issue. I'm sorry, but I tried to phone the tech team, and apart from powering down the modem or router, there has been no effort made to try and resolve the problem.
I need someone at BT with expert experience and know-how to look into this thoroughly.
I phoned the customer options team yesterday and requested my MAC, as I have less than 1 month of contract remaining. However, I do not want to leave BT; it's a reputable company that has been trading for many years, and it's time that they showed that to me. Do they care if I leave?

  • Signature 1315 - ACK w/o TCP Stream - why alerting?

We upgraded one of our sensors to 6.0(1)E1 and now we are seeing extremely high numbers of alerts on this particular signature. The signature is NOT set to alert. Any ideas on what we can do to stop the alerts, other than filtering something that should not need filtering?
    Thanks,

Do you have an event action override installed on the system to generate an alert for a risk rating (RR) greater than some value? If so, then even signatures that are set to "no action" will get the override applied if their resultant RR satisfies the override criteria.
If this is the case, then you have several options: you can adjust the override to raise the minimum RR value that triggers it, or you can tune the signature to lower its effective RR. The latter can be accomplished by lowering either its severity level (informational, low, medium, high) or its fidelity value.
    The signature helps address some covert channels used by some exploit software.

  • Can JMF stream via TCP

Good day all. I am currently working on a project that allows users to record video over the internet. I initially used the Flash RTMP protocol and the Red5 streaming server. Unfortunately, most of my target clients use very slow connections. It was only after I had practically completed the project and rented a server online that I realised that MOST of the frames were getting lost and, in many cases, the video became corrupt. In this special case I can afford the overhead of TCP streaming, as long as the video remains intact. I've read many articles and tutorials on JMF, and I have been coding Java for some time, but I'm using JMF for the first time. I would just like to know if JMF can use TCP for streaming (and how), instead of the more widely used RTP, so that I don't hit another dead end. The project is already a week late.
    Thanks for all your help
    -Wale Ajiboye

Just figured something out. If I set the video bitrate down to 1K it streams fine. If I up it to 2.5K, like the guide on here suggests, I have the problems reported above.
Any idea why I have to use the lower setting? Considering it's a wired connection, I should be able to stream enough data to use the 2.5K bitrate, or am I missing something?

  • Getting the data from a TCP/IP packet

I am dealing with an industrial network that sends and receives data over TCP/IP between a sort of supervisory system running on Unix and some machines, via a bridge that converts messages onto other non-TCP/IP networks. This is all old legacy equipment, and the bridge now needs upgrading. However, the original source code is not available and no-one is very sure of the messages being sent. I thought it was going to be easy knocking something together in Java to intercept these messages and test various things, but I have come up against big problems.
The main problem is that all the data is binary, meaning I can't use any of the reader or writer classes I am used to. I am trying to use either DataInputStream or BufferedInputStream to read data in, but am struggling. Ideally I need to be able to read (once) the complete data content of each packet that is sent, and I need to tell each time a new packet of data arrives so that I can process it as a complete packet. As far as I know there are no EOF or EOL markers or any other details that tell me how many bytes of data there are, and the packets do vary in length, but each packet is a separate message or message reply.
I was hoping that there might be some way of getting this information from the TCP/IP layers, but I can't see how to do it, as that all seems to work invisibly. Nor can I see any methods to call on the stream classes that indicate the length of the latest packet or when a new packet has arrived. I am not sure how some of the methods like mark() and reset() are supposed to be used, so I am not sure if I could use these, but I am desperate for any help or pointers in the right direction.

"The TCP/IP packets can represent complete messages"
There is no guarantee to this effect anywhere in TCP/IP. Consider the case where a single message requires multiple writes. Consider the case where a write contains the end of one message and the beginning of another. Consider the case where there are multiple messages in a single packet. Consider ... There are just too many of these cases.
"TCP/IP takes care of numbering the packets so that they can be reassembled in the correct order. Each TCP/IP packet contains information about the size of data the packet contains ..."
Thank you, I do know how TCP works.
"so in theory if we could get at the TCP/IP layers we should be able to get this information."
No. You can get all the packet information out of the packets. What you can't get is message information, because it isn't in there. It's in the application protocol, which to TCP/IP is just a stream of bytes. You can get the stream of bytes that the application sent. What it means is up to you.
"I really need to be able to read each packet of data separately to be able to do anything with it"
Why? Given the lack of correlation between writes and packets and reads due to TCP streaming, what is the point? And if you want packets, you already have them via your sniffer.
From your first post:
"each packet is a separate message or message reply."
You can't rely on that. There is no guarantee of this anywhere in TCP/IP.
    I also direct your attention to the Nagle algorithm, which coalesces outgoing packets under common conditions.
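If the legacy messages do encode their own length somewhere in their first bytes, the stream classes handle the rest cleanly, because readFully() keeps reading across however many TCP segments the data was split into. A sketch under an assumed framing (the two-byte big-endian length prefix here is hypothetical; you'd substitute whatever the real protocol uses):

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    class FramedReader {
        // Reads exactly one application message, independent of packetization.
        static byte[] readMessage(InputStream raw) throws IOException {
            DataInputStream in = new DataInputStream(raw);
            int length = in.readUnsignedShort(); // the assumed length prefix
            byte[] message = new byte[length];
            in.readFully(message);               // blocks until all bytes arrive
            return message;
        }
    }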

  • My MBP has started to send out TCP packets larger than the MTU on the NIC - is there any place that this can be overriden?

Got a very weird issue here and wondering if anyone has any other ideas. Basically, over the wired NIC only, my Mac has started to send out large HTTP/HTTPS packets from the browser (> 1500 bytes). Captures show packet sizes from 2000 all the way up to 4000 sometimes. This happens in Firefox and Chrome, so it doesn't appear to be application related.
This causes fragmentation issues and traffic drops, which basically causes most of my websites and tools to crash and burn (and I get all sorts of SSL errors from applications, etc.).
It appears to be limited to just TCP packets, as pings with the DF bit set will not send anything larger than 1500 bytes.
However, if I switch to wireless, everything works fine, and captures show the correct maximum packet size of 1500 for all packets leaving my client.
The MTU on the en0 interface is 1500 as per ifconfig, and I made sure that it was set to 1500 in the Network config panel (because there is an option for jumbo frames there which bumps up the MTU).
A packet capture also shows that during the three-way handshake the TCP MSS is successfully negotiated as 1480, but the stack then appears to ignore that when sending packets later in the TCP stream.
    I've rebooted, upgraded to 10.7.4, checked the "sysctl" outputs and matched against a Mac not having the issue.
    This is the newest MBP 15 inch model.
    Any other ideas on things to check?

    Have you used any sort of "tuner" software? You are obviously an advanced user. Sometimes we hack things up and forget about it later. If you are sure you didn't do that, maybe poke around with IPv6 settings. Supposedly people are trying to enable that and it is going to be a disaster.

  • TCP/IP feature or bug?

    I have been using the basic TCP VIs (Data Communication palette) to implement multiple independent TCP streams with a real-time system.  In doing some tests, I found a strange behavior that I think may be a "bug".
To form a connection, you need to run both TCP Listen and TCP Open Connection with the same service name. I assumed that when you closed these two TCP IDs (TCP Close Connection), a subsequent Listen alone, or Open Connection alone, would fail (because both need to be present). However, I found that while I could not open the Listener a second time without it timing out, I could run "Open Connection" (without re-opening the Listener) and no error would result. This should not work (because you could, for example, "Send" via the "Open Connection" stream without having any process listening and capable of receiving the data).
    I've attached a VI that runs the following 6 tests:  1, open/close ("run") server; 2, run client; 3, run server + client; 4, run server+client, then run server+client again; 5, run server+client, then run just server again; 6, run server+client, then run just client again.  According to the principle that both a server and client need to be running at the same time, only tests 3 and 4 should succeed without error (the server is configured with a 2 second timeout, which generates an error if there's no client), but test 6 also succeeds!
    Note that the "Is a RefNum" VI on the Boolean palette correctly indicates "Not a RefNum" after the Close Connection VI runs.
    P.S. -- this VI was run and tested on LabVIEW 8.5.1.  I just ran it on LabVIEW 8.6, and it behaves the same way (tests 3, 4, and 6 run without error).
    Bob Schor
    Attachments:
    TEST TCP Open and Close.vi ‏46 KB

    I don't know all the internal TCP details (e.g. SYN and ACK) and who's supposed to send what and when. I'm assuming that the LabVIEW primitives follow it correctly.
    I should also point out that my claim earlier about the listen VI creating multiple listeners was wrong. I simply hadn't looked inside the IA VI, although I have done it in the past. Looking at it shows clearly that it has its own buffer for the listener references and only holds one for each port (makes sense, since you can't have two connections on the same port). Assuming that the primitives work correctly (which they probably do, or there would probably be a single primitive for listening instead of two), then I guess that NI does need to output the listener ref as you suggested, so that the user can disable the listener if they so choose.
    By the way, this was simply a pet annoyance of mine, since I was asked to debug a piece of code which was affected by this issue and it was annoying. Essentially, the system had a single client with multiple servers and each server would only allow one connection. The server code looked something like this:
    The client code had a similar timer for handling the errors, but it had a much shorter timeout, so what was happening was that if the user closed the client and reopened it while the server was still in the right loop, they would get stuck in a situation where the client opened the connection successfully but didn't get a response so it kept opening more connections. The server, on the other hand, got a connection each time (since the listener was always active) and a single message which was very stale. Then, it had to count all the errors again.
    Of course, once I realized this was the issue, the fix was simple. I created and destroyed the listener myself and that solved that. I could also probably have used an infinite timeout instead of the loop, but fixing it was enough. I didn't need to make it any better.
    Try to take over the world!
    Attachments:
    TCP Listen Example.png ‏5 KB

  • TCP socket I/O from the kernel

    We've been working on extending our filesystem support from UDP to TCP,
    and have run into an apparent roadblock. We'd like to know whether
    Sun is aware of the problem that we're seeing, and if so, if there
    is something else we should be doing to avoid it.
    We've been using the "sockfs" module to create sockets [where
    sockfs_xxx is found using modlookup("sockfs", "xxx")]:
    avp = sockfs_solookup(AF_INET, SOCK_STREAM, 0, "/dev/tcp", &rc);
    sop = sockfs_socreate(avp, AF_INET, SOCK_STREAM, 0, SOV_DEFAULT, NULL, &rc);
binding, and setting some socket options:
    sockfs_sobind(sop, (struct sockaddr *)&client_addr, sizeof(client_addr), 0, 0);
    sockfs_sosetsockopt(sop, SOL_SOCKET, ... );
and connecting:
    sockfs_soconnect(sop, (struct sockaddr *)&server_addr, sizeof(server_addr), 0, 0);
    We're able to send data successfully with:
    sockfs_sosendmsg(sop, &nmsg, &uio);
    where uio.uio_segflg = UIO_SYSSPACE;
    But when we try to receive with a similar
    sockfs_sorcvmsg(sop, &nmsg, &uio);
    we regularly receive EFAULT errors.
    [This same sequence works fine for both send and receive if we use
    /dev/udp instead.]
    As we trace through the call, we see that TCP uses a special UIO
    function (uioipcopyout) that simply returns EFAULT for any
    UIO_SYSSPACE move. The use of this function appears to be driven by
    queue settings (STRUIOT_IP) in the TCP stream queue initializers.
    So, our questions are:
Is there a way to override this behavior (safely)?
Should we be using another interface to do socket I/O from the kernel? (At first glance, that wouldn't help us here, as this is deeply rooted in the TCP implementation.)
Are there any code examples (published or otherwise) that would help us here?
    Thanks for any help you can offer us!

You can do this in Java 5.0 using jline (must be around version 0.9.91); it must be running on Linux/Unix/Mac, though. I think in the future jline may support WindowsTerminal also. However, with Java 6.0 you may wonder what the point is.
public static String readPassword(String prompt) {
    try {
        Terminal t = Terminal.getTerminal();
        if (t instanceof jline.UnixTerminal) {
            UnixTerminal ut = (UnixTerminal) t;
            ConsoleReader reader = new ConsoleReader();
            Character mask = new Character((char) 0); // echo nothing as the user types
            String line;
            do {
                line = reader.readLine(prompt, mask);
                if (line != null) {
                    reader.flushConsole();
                    ut.restoreTerminal(); // put the terminal back in its normal mode
                    return line;
                }
            } while (line != null && line.length() > 0);
        }
    } catch (Exception e) {
        // fall through and return null
    }
    return null;
}

  • TCP out-of-order at IPS

    Dear All,
We have set up an IPS 4510 in inline mode with strict inspection turned on. We detected some latency issues accessing the internal website, so we did some captures at the IPS interface. We found that the IPS was detecting a lot of out-of-order packets and DUP ACKs, which caused the normalizer engine buffer to fill up so it could not handle any more requests. As a workaround we put the IPS in asymmetric mode, which turns off the IPS normalizer engine.
I need some opinions on possible reasons why the out-of-order packets and DUP ACKs happen.
We are seeing quite a lot of out-of-order packets, DUP ACKs and TCP zero windows in the TCP streams that we captured.
The topology is quite straightforward:
Internet ---- WAN ROUTER ---- IPS4510 ---- ASA ---- Web server
There's no redundancy or load balancing for the ASA or WAN router.
I'm hoping for some opinions and ideas on how to tackle this issue.
    Thank you very much

    Hi
Bumping an old thread since the issue is still ongoing.
I have already discussed the issue with TAC, and these are the two options the engineer gave:
+ asymmetric mode (which we rejected as a permanent solution)
+ event action filter
I'm currently looking at the latter and plan to implement it in the IPS.
I need to consider a few things and would also welcome suggestions:
+ The signature engine involved is the Normalizer engine (specifically sig 1330).
+ Is it possible to customize this signature, or should we just go for the event action filter? I need opinions on the pros and cons of this.
    Thanks a bunch

  • Sockets .. IO controlling TCP frame length and number of packets

Here's the deal: I'm trying to develop a proxy for an application which is registered and can only be used from a specific host. I want to use it from another location, so I am writing a proxy.
This should be trivial. But it has proven not to be!
1. I am using java.net.Socket and ServerSocket
2. I am using DataInput/OutputStream writeByte/readByte
3. flushing after each byte OR not; same result
It is not working. I GET the data just fine, and transmit it just fine, but the server doesn't respond.
With a network sniffer, the only difference I can see is that with the original application (C++) only ONE TCP packet is transmitted over the network, with length = 52.
With my proxy, TWO TCP packets are transmitted, the first with LEN=1 and the second with LEN=51, for a grand total of the exact same data.
So, all other things being equal, I'm thinking that this must be my issue!
Is there any way I can deal with this?
Any ideas welcome!
P.S. I tried setting the socket buffer size, setTcpNoDelay(), etc. and got the same results... perhaps NIO gives more options?
Thanks.
    Thanks.

Your network "sniffer" operates at a lower level than your application. The differences in Ethernet frames that you are detecting with the network sniffer should not change the behavior of your TCP/IP application because, at the sockets programming level, the TCP stream has no boundaries.
I have noticed the behavior in Java that you mention. I think I have seen it in the implementation of java.io.DataOutputStream.writeBytes(String), where they write the bytes one byte at a time and you get this side effect. The source for that method looks like:
    public final void writeBytes(String s) throws IOException {
        int len = s.length();
        for (int i = 0; i < len; i++) {
            out.write((byte) s.charAt(i));
        }
        incCount(len);
    }
What happens is that on the first call to out.write(), something decides to send the one byte right away, and only on the second and third bytes does something (not sure if it is Java or TCP/IP) notice that data is piling up and attempt to bunch the bytes into a single frame.
If you, the Java programmer, wanted to avoid this, you would have to avoid all use of writeBytes(String s), convert your Strings to byte arrays explicitly, and then call only write(byte[] b, int off, int len). But all this would change is what you see in your network sniffer, not how the application behaves.
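A sketch of that suggestion, for completeness: build the byte array yourself and hand it to the stream in a single call. (Whether it then travels as one segment is still up to the TCP stack and the Nagle algorithm, but it removes the one-byte first write.)

    import java.io.DataOutputStream;
    import java.io.IOException;

    class OneWriteSender {
        // Sends the same bytes writeBytes(s) would (the low byte of each char),
        // but in one write() call instead of one call per character.
        static void send(DataOutputStream out, String s) throws IOException {
            byte[] buf = new byte[s.length()];
            for (int i = 0; i < buf.length; i++) {
                buf[i] = (byte) s.charAt(i); // same truncation writeBytes performs
            }
            out.write(buf, 0, buf.length);
            out.flush();
        }
    }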

  • Oracle VM 2.2.2 - TCP/IP data transfer  is very slow

Hi, I've encountered a disturbing problem with OVM 2.2.2.
My dom0 network setup (4 identical servers):
eth0/eth1 (ixgbe 10Gbit) -> bond0 (mode=1) -> xenbr0 -> domU vifs
Besides the bonding setup, it's a default OVM 2.2.2 installation.
Problem description:
TCP/IP data transfer speed:
- between two dom0 hosts: 40-50MB/s
- between two domU hosts within one dom0 host: 40-50MB/s
- between dom0 and a locally hosted domU: 40-50MB/s
- between any single domU and anything outside its dom0 host: 55KB/s;
something is definitely wrong here.
domU network config:
vif = ['bridge=xenbr0,mac=00:16:3E:46:9D:F1,type=netfront']
vif_other_config = []
I have a similar installation on Debian/Xen and everything runs fine, i.e. I don't have any data-transfer-speed-related issues.
    regards
    Robert

There is also an issue with the ixgbe driver in the stock OVM 2.2.2 kernel (bug 1297057 on MOS). We were getting abysmal results for receive traffic (measured in hundreds of kilobytes(!) per second at times) compared to transmit. It's not exactly the same as your problem, so don't blindly follow what I say below!
    ### "myserver01" is a PV domU on Oracle VM 2.2.2 server running stock kernel ###
    [root@myserver02 netperf]# ./netperf -l 60 -H myserver01 -t TCP_STREAM
    MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to myserver01.mycompany.co.nz (<IP>) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec
87380  16384   16384    60.23    1.46
    ### Repeat the test in the opposite direction, to show TX is fine from "myserver01" ###
    [root@myserver01 netperf]# ./netperf -l 60 -H myserver02 -t TCP_STREAM
    MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to myserver02.mycompany.co.nz (<IP>) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec
87380  16384   16384    60.01    2141.59
    In my case, a workaround as advised by Oracle Support is to run:
    ethtool -C eth0 rx-usecs 0
    ethtool -C eth1 rx-usecs 0
against the slaves within your bond group. This will give you better performance (in my case, it got up to ~1.2Gbit/s), although there are some fixes coming in the next kernel which achieve even better speeds (in my tests, ~2.2Gbit/s).

  • TCP client communicating with UDP server

    Hello,
I want to make a TCP client communicate with a UDP server. Does anyone know a way of doing this? I am not interested in reliable data transfer, so I don't care if the datagram is lost. I want to make the UDP server accept some datagrams from the TCP client. Also, the datagram that I want to transmit is less than 65,536 octets, so it is not divided into several datagrams; only one exchange procedure occurs.
I made a UDP server using the DatagramSocket and DatagramPacket classes and a TCP client using the Socket class, but the TCP client informs me that the connection was refused.
    Any ideas?

Let's google for "IP header"; the first hit is http://www.networksorcery.com/enp/protocol/ip.htm (Whoa! Classic page! I must have seen that back when googling was called AltaVista.)
There is a header field, a single byte, called protocol. For TCP/IP that field contains 6; for UDP/IP it contains 17.
If you send a packet with protocol=17, the receiving host's kernel will check whether it has a process listening on UDP (17) at the port specified in the packet header. No such process? Then it simply discards the packet. So you can't send a UDP packet to a TCP socket, because the protocol field is wrong.
If you want to fake a TCP stream you could look into jpcap, which allows you to capture and send raw packets. Google for it, and pick the right jpcap; there are two, and only one of them (AFAIK) can send packets. Attempting to write your own TCP implementation is highly advanced, though, and not really practical.
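The practical route, if you just want your datagrams delivered, is to make the client speak UDP as well. A minimal sketch (the host and port here are placeholders):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class UdpClient {
        public static void main(String[] args) throws Exception {
            byte[] data = "hello".getBytes("US-ASCII");
            DatagramSocket socket = new DatagramSocket();
            DatagramPacket packet = new DatagramPacket(
                    data, data.length, InetAddress.getByName("localhost"), 9876);
            socket.send(packet); // fire and forget: no connection, no delivery guarantee
            socket.close();
        }
    }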

  • How to stream a png image from a Servlet to an Applet?

    Hello,
I'm trying to write a PNG image created with the JFreeChart library to an output stream (in my servlet) and read it back from an input stream (in my applet).
So, I'm using:
    response.setContentType("image/png");
    response.setBufferSize(1000000);
    ObjectOutputStream out = new ObjectOutputStream(response.getOutputStream());
    System.out.println("starting writing image object");
    this.genPNG(chart);
    ChartUtilities.writeChartAsPNG(out, chart, 729, 447);
On the applet side I'm trying to read the input stream with ImageIO.read(InputStream):
    InputStream in = servletCon.getInputStream();
    // Image case
    ObjectInputStream objInputStream = new ObjectInputStream(in);
    Object o = objInputStream.readObject();
    Image img = ImageIO.read(objInputStream);
    im = new ImageIcon(img);
But nothing happens; it returns me a null object.
On the servlet side, if I write the image to disk, it's OK. (This means the servlet is doing its job of writing the image correctly.)
Do you have any ideas, and possibly 2-3 lines of code, on how to write a PNG image to a stream and how to read the stream and reconstruct the image?
I searched a lot but haven't found anything (there's something out there for JSP only).
Thank you in advance
Chris

You're right. I had forgotten those Object streams: originally I was passing another object which wasn't serializable, then I changed to passing the complete image but forgot to remove the streams.
It worked gracefully with simple input/output streams and ImageIO.
    Thank you very much
    Chris
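For anyone landing here later, a sketch of the working pattern described above (the names around the chart and connection are assumptions, not the actual project code):

    import java.awt.Image;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.URLConnection;
    import javax.imageio.ImageIO;
    import javax.servlet.http.HttpServletResponse;
    import javax.swing.ImageIcon;
    import org.jfree.chart.ChartUtilities;
    import org.jfree.chart.JFreeChart;

    class ChartStreaming {
        // Servlet side: write the PNG bytes straight to the response stream.
        static void writeChart(HttpServletResponse response, JFreeChart chart)
                throws IOException {
            response.setContentType("image/png");
            OutputStream out = response.getOutputStream();
            ChartUtilities.writeChartAsPNG(out, chart, 729, 447);
            out.flush();
        }

        // Applet side: decode the PNG straight off the connection's stream.
        static ImageIcon readChart(URLConnection servletCon) throws IOException {
            InputStream in = servletCon.getInputStream();
            Image img = ImageIO.read(in); // null only if the bytes aren't a decodable image
            return new ImageIcon(img);
        }
    }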

  • Why is there no audio in my RTSP interleaved stream?

I am trying to use an RTSP TCP stream for video playback, but the interleaved stream contains no audio. I saw no obvious way to specify a media type (audio vs. video) in the SETUP request for interleaved mode, so I assumed that the data would be interleaved together. I sense that this is wrong. Can anyone confirm my suspicion, and how I might specify the streams in separate SETUP requests?
Using DESCRIBE, the server replies with a session descriptor that indicates the presence of both audio and video. Video maps to RTP type 96, and audio to type 97:
    m=video 0 RTP/AVP 96
    a=rtpmap:96 MP4V-ES/90000
    m=audio 0 RTP/AVP 97
    a=rtpmap:96 MP4V-ES/90000
I don't see a way to set the streams up separately, so to open the RTP/AVP/TCP connection with the RTSP streaming server I am using:
    "SETUP rtsp://evdo/Sports/050825fantasygirls1v.3gp RTSP/1.0\r\n" +
    "CSeq: " + sequence + "\r\n" +
    "Transport: RTP/AVP/TCP;interleaved=0-1;mode=play\r\n" +
    "\r\n";
Throughout the RTP delivery, I only see type-96 RTP packets.
    The capture is below:
    RTSP     OPTIONS rtsp://69.46.111.250/evdo/Sports/050825fantasygirls1v.3gp RTSP/1.0
    RTSP     Reply: RTSP/1.0 200 OK
    RTSP     DESCRIBE rtsp://69.46.111.250/evdo/Sports/050825fantasygirls1v.3gp RTSP/1.0
    RTSP/SDP Reply: RTSP/1.0 200 OK, with session description
    RTSP     SETUP rtsp://69.46.111.250/evdo/Sports/050825fantasygirls1v.3gp RTSP/1.0
    RTSP     Reply: RTSP/1.0 200 OK
    RTSP     PLAY rtsp://69.46.111.250/evdo/Sports/050825fantasygirls1v.3gp RTSP/1.0
    RTSP     Reply: RTSP/1.0 200 OK
    RTP      Payload type=Unknown (96), SSRC=2687881226, Seq=0, Time=0
    RTCP     Sender Report
    RTP      Payload type=Unknown (96), SSRC=2687881226, Seq=1, Time=0
    RTP      Payload type=Unknown (96), SSRC=2687881226, Seq=2, Time=0, Mark
    RTP      Payload type=Unknown (96), SSRC=2687881226, Seq=3, Time=6000, Mark
    RTP      Payload type=Unknown (96), SSRC=2687881226, Seq=5, Time=18000, Mark
    RTP      Payload type=Unknown (96), SSRC=2687881226, Seq=9, Time=42000, Mark
    RTP      Payload type=Unknown (96), SSRC=2687881226, Seq=70, Time=348000
    RTP      Payload type=Unknown (96), SSRC=2687881226, Seq=71, Time=348000, Mark
    RTP      Payload type=Unknown (96), SSRC=2687881226, Seq=72, Time=354000, Mark
    RTCP     Sender Report (BYE)
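For the record, the usual RTSP (RFC 2326) pattern is one SETUP per media stream, each naming the per-stream control URL from the SDP and its own interleaved channel pair. A sketch in the same style as the request above; the trackID URLs are hypothetical and should be taken from the a=control attributes in the DESCRIBE reply (which the capture doesn't show):

    // First SETUP: the video stream on interleaved channels 0-1.
    String setupVideo =
            "SETUP rtsp://evdo/Sports/050825fantasygirls1v.3gp/trackID=1 RTSP/1.0\r\n" +
            "CSeq: " + sequence++ + "\r\n" +
            "Transport: RTP/AVP/TCP;interleaved=0-1;mode=play\r\n" +
            "\r\n";
    // Second SETUP: the audio stream on channels 2-3, carrying the Session
    // header returned by the first SETUP reply.
    String setupAudio =
            "SETUP rtsp://evdo/Sports/050825fantasygirls1v.3gp/trackID=2 RTSP/1.0\r\n" +
            "CSeq: " + sequence++ + "\r\n" +
            "Session: " + sessionId + "\r\n" +
            "Transport: RTP/AVP/TCP;interleaved=2-3;mode=play\r\n" +
            "\r\n";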
