Binary data with JMF

I would like to send plain old binary data using RTP. How can I use the JMF to send binary data without binding it to a codec or media type?

TouringBMW wrote:
JMF transports data every 10 seconds, why not use a Webservice?
JMF transports data every 10 seconds? That's actually news to me; can you point me to the source of that information?
I thought JMF sent out UDP packets as fast as it could / needed to...
Is this plausible?
Not really, no.
I am looking for a way to stream large content from a server to a web client to present it. What is the best way? I am using JavaFX as a client.
Depends on what kind of content you're talking about. If it's video, that's a different answer than if it's audio, which is different again from PowerPoint presentations. And your subject line asks for an "image server", which doesn't even make sense in terms of "streaming". What does it mean to stream an image?
Tell us what exactly you're wanting to do, and I'm sure we can come up with some recommendations on how best to do it... :-)
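
For what it's worth, if the goal really is shipping arbitrary bytes over RTP without JMF's codec and media-type plumbing, one option is to skip JMF entirely and build the RTP packets by hand over a DatagramSocket. The sketch below does exactly that; the destination address, port and payload type 96 are made-up placeholders, and it sends a single packet with no RTCP, sequencing logic or loss handling:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.ByteBuffer;

    public class RawRtpSender {
        public static void main(String[] args) throws Exception {
            DatagramSocket socket = new DatagramSocket();
            InetAddress dest = InetAddress.getByName("192.168.0.10"); // placeholder receiver
            int destPort = 5004;                                      // placeholder port
            int ssrc = 0x12345678;                                    // arbitrary stream identifier
            int payloadType = 96;                                     // dynamic payload type, not tied to any codec
            byte[] payload = "plain old binary data".getBytes();      // whatever bytes you want to ship

            int seq = 0;
            int timestamp = 0;
            // 12-byte RTP header (big-endian, which is ByteBuffer's default) followed by the raw payload.
            ByteBuffer packet = ByteBuffer.allocate(12 + payload.length);
            packet.put((byte) 0x80);                  // V=2, no padding, no extension, no CSRC
            packet.put((byte) (payloadType & 0x7F));  // marker bit 0 + payload type
            packet.putShort((short) seq);             // sequence number
            packet.putInt(timestamp);                 // timestamp; its units are up to your own profile
            packet.putInt(ssrc);                      // synchronization source
            packet.put(payload);

            socket.send(new DatagramPacket(packet.array(), packet.position(), dest, destPort));
            socket.close();
        }
    }

The receiver just strips the 12-byte header and gets the payload back untouched; nothing in the packet ties the bytes to a codec or media type.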

Similar Messages

  • Data plug-in for binary data with byte streams of variable length

    Hi there,
    I would like to write a data plug-in to read binary data from a file, and I'm using DIAdem 10.1.
    Each data set in my file consists of binary data with a fixed structure (readable by using direct access channels) and of a byte stream of variable length. The variable length of each byte stream is coded in the fixed data part.
    Can anyone tell me what my data plug-in must look like to read such data files?
    Many thanks in advance!
    Kind regards,
    Stefan

    Hi Brad,
    thank you for the very quick response!
    I forgot to mention that the data in the byte stream can actually be ignored; it is not data to be evaluated in DIAdem (it is picture data, and the picture size varies from data set to data set).
    So basically, for each data set I would like to read the fixed-structure data (the first part of the data set) and discard the variable byte stream (the last part of the data set).
    Here is a logical (example) layout of my binary data file:
    | fixedSize-Value1 | fixedSize-Value2 | fixedSize-Value3 (=length of byte stream) | XXXXXXXXXXXXX (byte stream)
    | fixedSize-Value1 | fixedSize-Value2 | fixedSize-Value3 (=length of byte stream) | XXXXXX (byte stream)
    | fixedSize-Value1 | fixedSize-Value2 | fixedSize-Value3 (=length of byte stream) | XXXXXXXXXXXXXXXXXXXX (byte stream)
    What I would like to show in DIAdem is only fixedSize-Value1 and fixedSize-Value2.
    If I understood correctly, would it be possible to set the block length of each data set by assigning Block.BlockLength = fixedSize-Value3 and to use Direct Access Channels for reading fixedSize-Value1 and fixedSize-Value2?
    Thank you!
    Kind regards,
    Stefan
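
    A side note on the record layout described above: however the import ends up being done (DataPlugin script or otherwise), the walk through the file is just "read the fixed fields, then skip fixedSize-Value3 bytes". Since the actual DIAdem DataPlugin would be VBScript, the Java sketch below is only an illustration of that record logic; it assumes the three fixed fields are little-endian 32-bit integers, which is an assumption to check against the real file.

        import java.io.DataInputStream;
        import java.io.EOFException;
        import java.io.FileInputStream;

        public class RecordWalker {
            public static void main(String[] args) throws Exception {
                try (DataInputStream in = new DataInputStream(new FileInputStream("data.bin"))) { // hypothetical file
                    while (true) {
                        int value1;
                        try {
                            value1 = Integer.reverseBytes(in.readInt()); // assuming little-endian 32-bit fields
                        } catch (EOFException eof) {
                            break;                                       // clean end of file
                        }
                        int value2 = Integer.reverseBytes(in.readInt());
                        int byteStreamLength = Integer.reverseBytes(in.readInt()); // length of the trailing picture data
                        System.out.println("value1=" + value1 + " value2=" + value2);

                        // Discard the variable-length byte stream; skipBytes may skip less than requested.
                        int skipped = 0;
                        while (skipped < byteStreamLength) {
                            int n = in.skipBytes(byteStreamLength - skipped);
                            if (n <= 0) throw new EOFException("truncated byte stream");
                            skipped += n;
                        }
                    }
                }
            }
        }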

  • How do I import binary data files (with header info) into DIAdem?

    We have lots of data acquired by nCode datagate. Files are binary data with header info. How do I import this into DIAdem? Typical (short) file attached. Also of interest, can I stream data from Sony PC208/SIR1000 recorders?
    Attachments:
    Propshft.dac (4 KB)

    I have successfully imported your file into DIAdem using the nCode file filter included with DIAdem. You might have to make that file filter known to DIAdem by following these steps (this assumes you have version 8 of DIAdem):
    1. Go to the 'Window' menu in DIAdem and select 'Close all'
    2. Go to the 'Settings' menu and select 'GPI-DLL Registration'
    3. Click the 'Add' button in the window and select the 'GfSnCode' DLL in the 'DIAdem/AddInfo' directory
    4. The program will prompt you to restart.
    5. After you have restarted DIAdem, go to the DATA icon, select 'File - Open' from the DATA menu and in the dialog that appears, choose 'nSoft Data File Format' in the 'Files of type:' field.
    6. Select your data file and load
    That's it.
    With regards to your Sony recorder data, I would need to get my hands on an actual data file to try it out. In general, DIAdem can import ANY binary file format by creating a header in the 'File - Import via header' section in DATA. If you have lots of data files from the SONY recorder, a DLL can be created for that specific file format with the free DLL toolkit included with DIAdem.
    Leave me a message here if you have any additional questions.
    Otmar
    Otmar D. Foehner
    Business Development Manager
    DIAdem and Test Data Management
    National Instruments
    Austin, TX - USA
    "For an optimist the glass is half full, for a pessimist it's half empty, and for an engineer is twice bigger than necessary."

  • Map binary data (PDF) to XML not possible due to non-printable chars

    Hi XI Gurus,
    we have the following issue.
    We send a PDF as binary data (as a hex string '25255044462D3') along with some other information from ERP within one RFC to XI, doing a message split 1:n for those two kinds of messages (1. the PDF, 2. the XML) and sending the split messages to a file/FTP receiver adapter.
    The message split and sending of the messages is working well, but we encounter some problems with non-printable chars (hex codes below 0x20) in the PDF binary. The PDF data and the dynamic file name are mapped into an XML, but the non-printable characters are converted into '#' when mapped into the XML target field.
    (Due to the 1:n multi-mapping, we cannot put the filename into dynamic configuration, so we have to map the PDF and the filename into an XML and extract the content with variable substitution in the receiver file adapter....)
    My question is: how can binary data with non-printable chars be sent through XI and be mapped into an XML without being replaced by '#'?
    Any help will be greatly appreciated.
    Thanks and regards
    Holger

    Maybe I didn't explain it clearly enough.
    We do not have the issue that the RFC puts the '#' into the string. We get from the RFC a hex string containing the PDF as visible hex values.
    As you can see, we have the PDF as a hex string. During message mapping, XI replaces some non-printable chars like '0x04' or '0x19' with '#'.
    My question is: how can we avoid this character replacement?
    BTW: I grabbed a PDF with the sender file adapter, routed it through XI without any mapping, and sent it with FTP in binary mode. But the PDF contains more characters than the original file, and the PDF content is not visible when opened with a PDF reader like Acrobat. I guess the file adapter has problems with carriage return and linefeed chars; some CR and LF characters get replaced or inserted somehow.
    Best regards,
    Holger
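
    One workaround worth mentioning for the archives: if the mapping can be changed to carry the PDF as Base64 text rather than as raw bytes, the XML payload only ever contains printable characters, and nothing below 0x20 is left for XI to turn into '#'. Whether that fits the 1:n multi-mapping and the variable-substitution step described above is a separate question; the sketch below only shows the encode/decode round trip in Java (java.util.Base64, available since Java 8), with a hypothetical pdfBytes array standing in for the real document:

        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        public class PdfBase64RoundTrip {
            public static void main(String[] args) {
                byte[] pdfBytes = {0x25, 0x50, 0x44, 0x46, 0x2D, 0x04, 0x19}; // "%PDF-" plus two non-printable bytes

                // Encode before the bytes touch any XML text node: only A-Z, a-z, 0-9, '+', '/', '=' remain.
                String xmlSafe = Base64.getEncoder().encodeToString(pdfBytes);
                System.out.println(xmlSafe);

                // On the receiving side, decode back to the original bytes before writing the file.
                byte[] restored = Base64.getDecoder().decode(xmlSafe);
                System.out.println(new String(restored, 0, 5, StandardCharsets.US_ASCII)); // prints %PDF-
            }
        }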

  • Importing very special binary data into Diadem

    In order to use DIAdem I need to import binary data from transient recorders. The data is stored in block mode (channel after channel), as X-Y data pairs. All channels have different lengths, but the structure and channel names are written into a special header block preceding the data. The X-Y data pairs are written into 32-bit words with a variable X/Y split, i.e. Y may be, say, 12 bits wide, with X using the remaining 20 bits of the word. The X/Y split position is coded in the header too.
    Can I define such a complex import rule directly in DIAdem, or can I call a LabVIEW file read and decode driver? Or is it simply impossible unless I convert all of my 120000 data sets and have them take four times more space?
    Many thanks in advance to the experts!
    Marco Mailand
    ABB Switzerland Ltd.
    High Voltage Technology

    Mr Mailand,
    Generally, you can import binary data with the function "Import via header", which you find in the file menu (submenu DAT files) in the Navigator / Data window. Within the dialog you can specify how your data is ordered in the file. But you are limited to certain standard storage modes, so you might not succeed in importing the data that way.
    Of course you can also call LabVIEW VIs and write the imported Data directly to DIAdem channels. The LabVIEW-DIAdem Connectivity VIs provide the functions you need for the data exchange.
    Another way would be to import the data with help of a VBScript.
    But there is also another method to expand the DIAdem features: with the GPI toolkit you can generate your own plug-in DLLs for DIAdem. You can implement all the code you need to import the data in a C program and import that function into DIAdem. That way you will be able to load your data just like any other data type using the standard file open menu.
    GPI Toolkit for add-on DIAdem DLL creation
    http://digital.ni.com/softlib.nsf/websearch/D605AA96CF81760C86256C7600742EC5?opendocument&node=132070_US
    LabVIEW DIAdem Connectivity VIs Version 2.1
    http://digital.ni.com/softlib.nsf/websearch/D73B15862235486D86256D2D00798738?opendocument&node=132070_US
    Calling LabVIEW VIs interactively from DIAdem
    http://sine.ni.com/apps/we/niepd_web_display.display_epd4?p_guid=D18837DE23EE32F6E034080020E74861&p_node=DZ52246&p_source=External
    Ingo Schumacher
    Application Engineering
    National Instruments
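
    As a footnote on the packed X-Y words: whichever of the routes above is chosen (header import, LabVIEW VIs, VBScript or a GPI DLL), splitting each 32-bit word back into its X and Y parts is plain mask-and-shift arithmetic once the Y width is read from the header. A small Java illustration, assuming Y sits in the low bits of the word and X in the remaining high bits, both unsigned; swap the two extractions if the layout is actually the other way around:

        public class PackedXY {
            public static void main(String[] args) {
                int word = 0xABCDE123;   // one packed X-Y pair read from the file
                int yBits = 12;          // bit width of Y, taken from the header block

                int yMask = (1 << yBits) - 1;
                int y = word & yMask;    // low yBits bits -> Y sample (assumption)
                int x = word >>> yBits;  // remaining high bits -> X sample (assumption)
                System.out.println("x=" + x + " y=" + y);
            }
        }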

  • Help needed with binary data in xml (dtd,xml inside)

    I am using the java xml sql utility. I am trying to load some info into a table.
    my.dtd:
    <!ELEMENT ROWSET (ROW*)>
    <!ELEMENT ROW (ID,JPEGS?)>
    <!ELEMENT ID (#PCDATA)>
    <!ELEMENT DESCRIPTION EMPTY>
    <!ATTLIST DESCRIPTION file ENTITY #REQUIRED>
    <!NOTATION INFOFILE SYSTEM "Files with binary data inside">
    <!ENTITY file1 SYSTEM "abc.jpg" NDATA INFOFILE>
    xml file:
    <?xml version="1.0" standalone="no"?>
    <!DOCTYPE ROWSET SYSTEM "MY.DTD">
    <ROWSET>
    <ROW>
    <ID>1272</ID>
    <DESCRIPTION file="file1"/>
    </ROW>
    </ROWSET>
    I am using the insertXML method to do this. However, the only value that gets loaded is the ID. abc.jpg is in the same directory where I ran the java program.
    Thanks in advance.

    Sorry! wrong dtd. It should read this instead:
    my.dtd:
    <!ELEMENT ROWSET (ROW*)>
    <!ELEMENT ROW (ID,DESCRIPTION?)>
    <!ELEMENT ID (#PCDATA)>
    <!ELEMENT DESCRIPTION EMPTY>
    <!ATTLIST DESCRIPTION file ENTITY #REQUIRED>
    <!NOTATION INFOFILE SYSTEM "Files with binary data inside">
    <!ENTITY file1 SYSTEM "abc.jpg" NDATA INFOFILE>

  • I have written a binary file with a specific header format in LabVIEW 8.6 and tried to read the same data file using LabVIEW 7.1, but ran into difficulty. Is there any way to read the LabVIEW 8.6 data file using LabVIEW 7.1?

    I have written a binary file with a specific header format in LabVIEW 8.6 and tried to read the same data file using LabVIEW 7.1. Here I ran into some difficulty. Is there any way to read the data file (written with LabVIEW 8.6) using LabVIEW 7.1?

    I can think of two possible stumbling blocks:
    What are your 8.6 options for "byte order" and "prepend array or string size"?
    Overall, many file I/O functions changed with LabVIEW 8.0, so there might not be an exact 1:1 code conversion; you might need to make some modifications. For example, in 7.1 you should use "write file"; the "binary file VIs" are special-purpose (I16 or SGL). What is your data type?
    LabVIEW Champion. Do more with less code and in less time.

  • Binary data problem with web services on JRockit but not Sun JDK

    I have a problem with binary data in SOAP and JRockit (jrrt-3.0.0-1.6.0-linux-x64.bin). I have a set of web services based on EJB 3.0 which return images as byte arrays inside a SOAP envelope to be consumed by .NET 2 services. The host app server is Oracle Application Server 10.3.1 on RHEL Linux update 4, on 64-bit Xeon 5500 series HP blade hardware.
    While most images are fine most of the time, one particular image gives this message when being consumed in the .NET client:
    The '■' character, hexadecimal value 0x1F, cannot be included in a name. Line 2, position 380038.
    The MSDN suggests that this is usually caused by non-escaping of reserved XML characters like <, but this isn't one of those.
    The SOAP looks ok, and for the life of me I can't see why this ought to be a problem, especially since the problem doesn't arise running with the Sun JDK (1.6_06, 64-bit).
    When making the same call from the OAS Enterprise Manager, it completes with no problem (though the data is just rendered as character data in a browser), which maybe suggests some incompatibility in how JRockit is serializing the data?
    Any ideas I would be very happy to hear - JRockit gives a 15% or so speed boost to the website that these services power, so obviously we want to use it if possible.
    Edited by: RichLiv on Nov 14, 2008 4:54 AM

    Seems to be the case that using MTOM stops this problem with JRockit. Strange but apparently true (so far).
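
    For anyone who lands here later: if the endpoints are JAX-WS web services (the post only says EJB 3.0 web services on OAS, so that is an assumption), MTOM can usually be switched on declaratively, so the byte[] travels as an XOP attachment instead of inline character data. A minimal sketch of that, not specific to the poster's code:

        import javax.jws.WebMethod;
        import javax.jws.WebService;
        import javax.xml.ws.soap.MTOM;

        // Assumes a JAX-WS endpoint; other stacks enable MTOM in the deployment descriptor instead.
        @WebService
        @MTOM
        public class ImageService {

            @WebMethod
            public byte[] getImage(String imageId) {
                // Hypothetical lookup. With MTOM on, the byte[] is sent as a binary attachment,
                // so control characters in the image bytes never appear in the XML text itself.
                return loadImageBytes(imageId);
            }

            private byte[] loadImageBytes(String imageId) {
                return new byte[0]; // placeholder
            }
        }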

  • How to read a file with both text and binary data?

    For text data I use a BufferedReader,
    for binary data I use a DataInputStream.
    Since readLine is deprecated in DataInputStream, how can I properly read a file that contains some lines of text followed by some binary data?
    Is there a way to do this without writing a new 'readLine' for DataInputStream (one that has to take into account the different newlines on Unix and other OSs)?

    sorry about that ^
    NEW STRING str
    WHILE there is stuff in the file DO
        getByte()
        IF reading a string THEN
            WHILE byte is not a return character DO
                convert the byte to character/string
                append char to str
                getByte()
            WEND
        ELSE IF reading raw data THEN
            parse raw data
            do stuff with it
        END IF
    WEND

  • Why is performance so slow reading binary data from a SQL Azure DB with EF6.x

    I'm running a WPF client that hits a SQL Azure DB using EF 6.x. For the most part, everything seems to be working fine. The one exception is when I try to read a large binary column.
    I am storing files in the DB as a binary column. When I test using the local DB, everything sings. When I switch to the Azure DB, I get timeouts when I try to read the file contents. I have no problem saving the binary data to the DB, just reading it.
    I don't know how to troubleshoot this.  I looked at the Query Performance page in the Azure portal, but it doesn't time stamp anything in there and you can't clear it, so I can't correlate what's running with the queries that show up there.
    I tried to start SQL profiler against the DB, but was denied because I'm not a member of the sysadmin fixed server role.
    If I query for the data directly, it comes back quickly.  So this seems to be an Azure via EF issue.
    Any help is appreciated.
    http://digitalcamel.blogspot.com/

    Hi Digital Camel,
    Since I don't know what your scenario is, I won't argue too much about not storing binaries in your SQL DB, but still: don't store binaries in your SQL DB :). The main reason is simple: first and foremost, in both the current and future pricing tiers your levels are defined by the size of the DB. Basically, you pay way more by storing your binaries in your SQL layer rather than storing them elsewhere, such as Azure Storage. Second, the protocol your binaries would be downloaded over the wire with is prone to network connectivity issues: you could use HTTP(S) or FTP instead if you used Azure Storage. Last but not least, when you download the binary from your DB, you keep a connection open which, in the end, is a connection other users might have used to query data instead.
    However, in regard to your question, how did you "query for the data directly"? Did you try to query the data using SSMS with the Client Statistics option on? This could tell you if the problem is network, server or client related.
    Hope this helps!
    Alex

  • Writing binary data in same format as with FORTRAN code

    Hello,
    I would like to write out data in a binary format that is used by FORTRAN unformatted direct-access files. I have some code in FORTRAN that produces the correct format; however, I have been unable to reproduce the output with a Java method.
    The FORTRAN code to write an array Z is:
    REAL Z(72,46,16)
    OPEN(8,FILE='grads.dat',FORM='UNFORMATTED',
    & ACCESS='DIRECT',RECL=72*46)
    IREC=1
    DO 10 I=1,16
    WRITE (8,REC=IREC) ((Z(J,K,I),J=1,72),K=1,46)
    IREC=IREC+1
    10 CONTINUE
    The Java code I was able to come up with to write out an array arrTS produces binary data differently:
        FileOutputStream fosOut = new FileOutputStream(filename, false);
        for (int x = 0; x < this.xmax; x++) {
            for (int y = 0; y < this.ymax; y++) {
                ByteArrayOutputStream bosOut = new ByteArrayOutputStream(2);
                DataOutputStream dos = new DataOutputStream(bosOut);
                dos.writeFloat((float) this.arrTS[x][y]);
                byte[] arrByte = bosOut.toByteArray();
                fosOut.write(arrByte);
            }
        }
        fosOut.close();
    Any help is greatly appreciated.
    Best regards,
    Stefan

    I am using software to read in the data that I am calculating in Java. I cannot easily change the format that software reads; therefore, ASCII is not an option.
    The software that reads the data says something about "little-endian". Do you know which Java classes to use for that?
    Best regards,
    Stefan
    First, you need to find out the format of your FORTRAN output. Is it ASCII? EBCDIC? BCD? IEEE? Big-endian? Little-endian?
    Until you know that, you won't know which Java classes to use. Also, I wouldn't be too sure different FORTRAN compilers use the same data format.
    In my view the best system interchange format still is the ASCII text file. Why don't you let the Fortran program use it to write its data to file? It should be no problem reading it from Java.
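
    On the little-endian point: DataOutputStream (used in the Java snippet earlier in this thread) always writes big-endian, which on a typical little-endian x86 machine would by itself explain output the Fortran-reading software cannot digest. java.nio.ByteBuffer lets you choose the byte order explicitly. The sketch below writes one 72 x 46 record of floats little-endian; it assumes the direct-access unformatted file is just raw IEEE floats with no record markers, which is compiler-dependent and worth verifying against a known-good file:

        import java.io.FileOutputStream;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        public class LittleEndianRecordWriter {
            public static void main(String[] args) throws Exception {
                float[][] record = new float[46][72];          // one K-by-J record; dummy data here
                ByteBuffer buf = ByteBuffer.allocate(72 * 46 * 4);
                buf.order(ByteOrder.LITTLE_ENDIAN);            // match the little-endian reader

                // Same ordering as the Fortran implied loop: J varies fastest, then K.
                for (int k = 0; k < 46; k++) {
                    for (int j = 0; j < 72; j++) {
                        buf.putFloat(record[k][j]);
                    }
                }

                try (FileOutputStream out = new FileOutputStream("grads.dat")) {
                    out.write(buf.array());                    // repeat per record for all 16 levels
                }
            }
        }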

  • Working with Binary data from the Gpib Read Function

    I'm using the GPIB read command to receive data from a GPIB instrument. I already know the form of the binary data being sent back; I have programmed it in MATLAB before. I am having trouble parsing out the binary data I receive. The basic form of the data is
    #I immediately followed by 401 64-bit IEEE floating point numbers. When I programmed this in MATLAB, the code I used was of the form:
    [a,count,msg]=fread(g2,2,'int8');
    [a,count,msg]=fread(g2,401,'float64');
    For those that don't know MATLAB code: the command reads data from the instrument pointed to by the g2 handle. The number after g2 above specifies the number of values to read, and the last part, the string in quotes, specifies the way the binary data is to be interpreted. The output arguments a, count and msg are
    a - your data
    count - number of items successfully read in
    msg - error message
    In the above two lines I use the variable a twice to capture the data. The first time, a is set equal to '#I', everything inside the quotes. The second time, a is set equal to the actual data, in this case 401 64-bit floating point numbers. I'm convinced that the flatten-to-string and unflatten-from-string functions are the set meant to accomplish this task, but I haven't been able to find a good example of how to use these functions, especially with an array.
    Can some one please help me?
    Thanks
    Scourched

    I'm pretty sure what you're describing is a simple typecast in LabVIEW. You will want to strip off the #I first, using string manipulation functions. Then, you can wire the string into a "typecast" VI, and wire a double precision float constant to the top connector of the typecast (this tells the VI that you expect a 64 bit float output) and then you can read your resulting array of double precision floats on the output of the typecast. The typecast VI is found in All Functions >> Advanced >> Data Manipulation.
    I've attached an example in LabVIEW 7.0 and also 7.1 format.
    Scott B.
    Applications Engineer
    National Instruments
    Attachments:
    typecast.vi (11 KB)
    typecast.vi (13 KB)
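
    For comparison with the MATLAB fread calls in the question, the same parse is only a few lines in Java: drop the two-byte '#I' prefix and reinterpret the remaining bytes as 64-bit floats through a ByteBuffer. The byte order is an assumption here (the sketch keeps ByteBuffer's big-endian default); switch to LITTLE_ENDIAN if the values come out wrong:

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        public class GpibBlockParser {
            // rawBlock is the byte[] read from the instrument: "#I" followed by 401 float64 values.
            static double[] parse(byte[] rawBlock) {
                ByteBuffer buf = ByteBuffer.wrap(rawBlock, 2, rawBlock.length - 2); // skip the "#I" prefix
                buf.order(ByteOrder.BIG_ENDIAN);   // assumption; use ByteOrder.LITTLE_ENDIAN if needed
                double[] values = new double[401];
                for (int i = 0; i < values.length; i++) {
                    values[i] = buf.getDouble();
                }
                return values;
            }
        }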

  • How do you read in and work with I24 binary data into LabView?

    Hi there,
    I have a program that is reading binary data in as 16 bits per sample. I have some binary input files that were saved as 24 bits per sample. How do I convert my old program to handle this data? I realize that LabVIEW doesn't have built-in I24 conversion - how do I do this myself? I found a post that referred to using a boolean array to create your own 24-bit number, but I don't understand what that means... Any help would be appreciated!
    Thanks so much!

    Do you know the format of the saved 24-bit data?  Can you share some data with us, along with what you think the values are?  If it is I24, then every three bytes represents a 24-bit (signed) integer.  The first question to answer is "What is the order of the bytes"?  How do you want to represent these data?  The obvious choices are as an I32, but since the rest of your data is I16, you might choose I16, instead.
    To convert I24 into I16, you can simply return the high two bytes.  To convert I24 to I32, you need to set a high byte that replicates the sign (saved in the most significant bit of the third byte).  One way to do this is to consider the third byte as an I8 quantity -- if it is positive, set the high byte to 0, and if it is negative, set the high byte to -1.
    Bob Schor
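
    In code form, the sign extension Bob describes comes down to assembling the three bytes and shifting. A small Java illustration, assuming the samples are stored low byte first (little-endian); reverse the byte arguments if the files are big-endian:

        public class I24Decoder {
            // Converts three bytes of a little-endian signed 24-bit sample to a sign-extended int (I32).
            static int i24ToI32(byte b0, byte b1, byte b2) {
                int value = (b0 & 0xFF) | ((b1 & 0xFF) << 8) | ((b2 & 0xFF) << 16);
                // Shift bit 23 up to bit 31 and back down so the sign is replicated into the high byte.
                return (value << 8) >> 8;
            }

            // Keeping only the top two bytes gives the I16 approximation mentioned above.
            static short i24ToI16(byte b0, byte b1, byte b2) {
                return (short) (i24ToI32(b0, b1, b2) >> 8);
            }

            public static void main(String[] args) {
                System.out.println(i24ToI32((byte) 0xFF, (byte) 0xFF, (byte) 0xFF)); // -1
                System.out.println(i24ToI32((byte) 0x00, (byte) 0x00, (byte) 0x40)); // 4194304
            }
        }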

  • Problems with Concatenating Binary Data

    Hi
    I am experiencing a problem with the unreliable concatenation of binary data.
    I have a stored procedure that I wrote to perform cell encryption on long text fields (> 4000 characters in length). This takes the string, splits it into chunks of a set length, encrypts and concatenates the encrypted chunks into a varbinary(max) variable.
    It then stores the length of the encrypted chunk at the beginning of the data (the length of the output when encrypting strings of a set length is always the same), e.g.
    DECLARE @output VARBINARY(MAX), @length INT
    SELECT @length = LEN(@encryptedchunk1)
    SELECT @output = CONVERT(binary(8), @length) + @encryptedchunk1 + @encryptedchunk2 + @encryptedchunk3 + ...
    So far so good, and when I decrypt the data, I can read the first 8 bytes of data at beginning of the binary data to get the length of the chunk:
    -- get the length of the encrypted chunk
    SELECT @length = CONVERT(INT, CONVERT(binary(8), LEFT(@encrypteddata, 8)))
    -- then remove the first 8 bytes of data
    SELECT @encrypteddata = CONVERT(VARBINARY(MAX), RIGHT(@encrypteddata, LEN(@encrypteddata) - 8))
    <snip> code to split into chunks of length @length and decrypt
    </snip>
    This is where I am experiencing an issue. 99.4% of the time, the above code is reliable. However, 0.6% of the time this fails for one of two reasons:
    1. The length is stored incorrectly. The length of the encrypted chunks is usually 4052, and substituting 4052 for the returned value (4051) allows the encrypted chunks to be decrypted.
    2. The encrypted data sometimes starts at offset 8 instead of 9. The @length variable is correctly read at length 8, but only 7 bytes should be removed from the start, e.g.
    SELECT @length = CONVERT(INT, CONVERT(binary(8), LEFT(@encrypteddata, 8)))
    SELECT @encrypteddata = CONVERT(VARBINARY(MAX), RIGHT(@encrypteddata, LEN(@encrypteddata) - 7))
    Has anyone any ideas on why this is happening and how the process can be made more reliable?
    Julian

    Use datalength, not len. Len() is designed for character strings: it ignores trailing blanks and counts characters. Datalength() does exactly what you want: it counts bytes. Look at this:
    DECLARE @b varbinary(200)
    SELECT @b = 0x4142434420202020
    SELECT @b, len(@b), datalength(@b)
    Erland Sommarskog, SQL Server MVP, [email protected]

  • BufferedReader with both text and binary data

    I'm reading ASCII data from a socket using the read(char[] cbuf, int off, int len) method of a BufferedReader. This data contains a mixture of text to be parsed and data for an image. The message is of variable length and the text fields are delimited by markers such as "FOO" or "BAR". I have initialized the BufferedReader in two separate ways. The first is: new BufferedReader(new InputStreamReader(iplCommSocket.getInputStream()));
    The second is : new BufferedReader(new InputStreamReader(iplCommSocket.getInputStream(),"8859_1"));
    If I use the first, I don't think I am getting the data correctly; if I use the second, the string I get back from the read operation (created via new String(char[])) has the text after the binary data clobbered. I've tried converting the char[] to a byte[] and then creating a new String with the byte array using the "8859_1" character encoding, to no avail. Any help is greatly appreciated.

    Is there a reason that you are using BufferedReader? 'cause, if I were you I wouldn't. It's geared toward text data, not binary data. I would look into some flavor of InputStream or possibly a FileChannel with some clever way to use the ByteBuffer.
    Good Luck
    Lee
