Sending large array of bytes

I'm unable to send a byte array of more than 4 MB with JDeveloper 10g / OC4J. It seems to croak OC4J. Is there a JVM setting or something else that I can increase that will fix this?

Similar Messages

  • Read array of bytes

    What do I use to read an array of bytes that is on a stream?
    I send an array of bytes to the invoking stream using void write(byte buffer[]).
    How do I read it from the other side?
    uzair

    Attempts to read up to buffer.length bytes into buffer, returning the number of bytes that were successfully read. -1 is returned when the end of the file is encountered.

    I wasn't asking you to post what the API says it does. We know what it does. Now, do you understand how it reads the bytes into the buffer that you pass to it?

  • Managing bit flags in an array of bytes

    In my program, I have a large array of bytes. Some arbitrarily long groups of bytes in this array act as groups of bit flags. I need to be able to retrieve and manipulate these bit flags. I read that the best way to do this is with bitwise operations; something I have never learned before. I wrote methods that seem to work, but because I have never done anything like this before, can someone check my work?
    Here is an example program, where the only contents of the byte array is a single two-byte grouping of 16 bit flags:
    public class test {
        static byte[] bytes = new byte[2];
        static byte[] pow2 = {1, 2, 4, 8, 16, 32, 64, -128};
        static byte[] pow2i = {-2, -3, -5, -9, -17, -33, -65, 127};
        public static void main(String[] args) throws Exception {
            writeBitFlag(0, 6, true);
            for (int i = 0; i < 16; i++)
                System.out.println("Flag " + i + ": " + getBitFlag(0, i));
            System.out.println();
            writeBitFlag(0, 12, true);
            invertBitFlag(0, 0);
            invertBitFlag(0, 0);
            invertBitFlag(0, 1);
            for (int i = 0; i < 16; i++)
                System.out.println("Flag " + i + ": " + getBitFlag(0, i));
        }//end main
        public static boolean getBitFlag(int startAddress, int flag) {
            // note: flag %= 8 reassigns flag before its second use below
            return (bytes[startAddress + flag / 8] & pow2[flag %= 8]) == pow2[flag];
        }//end getBitFlag
        public static void invertBitFlag(int startAddress, int flag) {
            bytes[startAddress + flag / 8] ^= pow2[flag % 8];
        }//end invertBitFlag
        public static void writeBitFlag(int startAddress, int flag, boolean flagVal) {
            if (flagVal)
                bytes[startAddress + flag / 8] |= pow2[flag % 8];
            else
                bytes[startAddress + flag / 8] &= pow2i[flag % 8];
        }//end writeBitFlag
    }//end class test

    Does this look like the right way to do what I am trying to do?

    You could try BitSet, which provides these functions for you.
    public class test {
        public static void main(String[] args) throws Exception {
            byte[] bytes = new byte[2];
            writeBitFlag(bytes, 6, true);
            for (int i = 0; i < bytes.length * 8; i++)
                System.out.println("Flag " + i + ": " + getBitFlag(bytes, i));
            System.out.println();
            writeBitFlag(bytes, 12, true);
            invertBitFlag(bytes, 0);
            invertBitFlag(bytes, 0);
            invertBitFlag(bytes, 1);
            for (int i = 0; i < bytes.length * 8; i++)
                System.out.println("Flag " + i + ": " + getBitFlag(bytes, i));
        }//end main
        public static boolean getBitFlag(byte[] bytes, int flag) {
            // mask the shift count to the bit position within the byte
            return ((bytes[flag >> 3] >> (flag & 7)) & 1) != 0;
        }//end getBitFlag
        public static void invertBitFlag(byte[] bytes, int flag) {
            bytes[flag >> 3] ^= (1 << (flag & 7));
        }//end invertBitFlag
        public static void writeBitFlag(byte[] bytes, int flag, boolean flagVal) {
            if (flagVal)
                bytes[flag >> 3] |= (1 << (flag & 7));
            else
                bytes[flag >> 3] &= ~(1 << (flag & 7));
        }//end writeBitFlag
    }//end class test
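    And here is a sketch of the BitSet suggestion (java.util.BitSet; note that toByteArray() only exists since Java 7):

    import java.util.BitSet;

    public class BitSetDemo {
        public static void main(String[] args) {
            BitSet flags = new BitSet(16);
            flags.set(6);                     // writeBitFlag(..., 6, true)
            flags.flip(0);                    // invertBitFlag(..., 0)
            System.out.println(flags.get(6)); // getBitFlag(..., 6) -> true
            byte[] raw = flags.toByteArray(); // back to a byte[] if needed (Java 7+)
        }
    }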

  • InsufficientMemoryException while sending large byte array in WCF Service callback

    Hi
    I have a datacontract with a large byte array that is passed by the WCF Service back to the client using a callback contract.
    As long as the byte array size remains below 25-30 MB, it is passed OK, but as soon as it exceeds that, an InsufficientMemoryException is thrown on the service side. I have set the max message sizes to 100 MB and I am using the MTOM encoder.
    I am using WSDualHttpBinding with sessions and message security - hence streaming is not an option. I know chunking channels are an option, but I want to try to tune the service and client to maximize the byte array size which can be sent over a normal channel.
    Kindly advise how to tune the settings to get to around a 100 MB byte array size.
    Binding section of web.config is given below. Same settings for sizes are used on client.
    Thanks
    Abhishek
    <wsDualHttpBinding>
      <binding name="WSHttpBinding_IService" closeTimeout="00:01:00" openTimeout="00:01:00"
          receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false"
          transactionFlow="false" hostNameComparisonMode="StrongWildcard"
          maxBufferPoolSize="100000000" maxReceivedMessageSize="100000000"
          messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true">
        <readerQuotas maxArrayLength="100000000" />
        <security mode="Message">
          <message clientCredentialType="Certificate" algorithmSuite="Default" />
        </security>
      </binding>
    </wsDualHttpBinding>

    Hi abhisinghal21,
    >>InsufficientMemoryException while sending large byte array in WCF Service callback
    First, please try to increase the timeout values on both the client and service side to see if it helps:
    closeTimeout="00:10:00" openTimeout="00:10:00"
    receiveTimeout="00:10:00" sendTimeout="00:10:00"
    Then, since you do not want to use the chunking option, maybe try changing the wsDualHttpBinding to a binding which supports streaming mode and supports callbacks.
    In streaming transfer mode, the receiver can begin to process the message before it is completely delivered. Streaming is useful when the information being passed is lengthy and can be processed serially, or when the message is too large to be entirely buffered. So it would be better for you to use the streamed mode.
    For more information, please try to check:
    #How to: Enable Streaming:
    http://msdn.microsoft.com/en-us/library/ms789010(v=vs.110).aspx .
    Best Regards,
    Amy Peng

  • How to send array of bytes to a servlet and store it in a file

    Hi,
    How do I send an array of bytes to a servlet and store it in a file?
    I am new to servlets. If possible, can anyone provide the code?
    Thanks,
    cmbl

    Through HTTP this is only possible with a POST request whose encoding type is set to multipart/form-data. You can use Apache Commons FileUpload to parse such a multipart form data request into usable elements. On the other side (the client) you can use an HTML <input type="file"> element, or a Java class with the Apache Commons PostMethod API.
    You may find this article useful: http://balusc.blogspot.com/2007/11/multipartfilter.html
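    For a raw (non-multipart) upload, a minimal servlet sketch could look like the following. This is an illustration only: it assumes the client POSTs the bytes directly as the request body (e.g. as application/octet-stream), and the target path /tmp/upload.bin is a placeholder. A real multipart/form-data request needs a parser such as Commons FileUpload instead.

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class UploadServlet extends HttpServlet {
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Copy the raw request body to a file, 8 KB at a time.
            InputStream in = req.getInputStream();
            OutputStream out = new FileOutputStream("/tmp/upload.bin"); // placeholder path
            try {
                byte[] buffer = new byte[8192];
                int n;
                while ((n = in.read(buffer)) != -1) {
                    out.write(buffer, 0, n);
                }
            } finally {
                out.close();
            }
        }
    }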

  • How can I convert an array of bytes into an Object?

    Hi folks...
    I'm developing an application that communicates between a PDA and a computer via Wi-Fi. I'm using DataStreams (Input and Output) to receive / send information from / to the computer. Most of the data received from it is of type byte[]...
    How can I convert an array of bytes ( byte[] ) into an Object using MIDP 2.0 / CLDC 1.1?
    I found 2 functions on the web that do this... but they use the ObjectOutputStream and ObjectInputStream classes, which are not provided by the J2ME platform...
    How can I do this?
    Waiting for answers,
    Rodrigo Kerkhoff

    There are no ObjectOutputStream and ObjectInputStream classes in CLDC. You must know what you are writing to and reading from the DataStream. You should write primitives like int and String to the DataOutputStream at one end, and read them in exactly the same sequence at the other end using the readInt() and readUTF() methods.
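    A minimal round trip of that idea (a sketch using in-memory streams so it runs anywhere; on the device the streams would come from the actual connection):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    public class DataStreamDemo {
        public static void main(String[] args) throws IOException {
            // Writer side: send primitives in a fixed, agreed order.
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeInt(42);
            out.writeUTF("hello");
            out.flush();

            // Reader side: read them back in exactly the same order.
            DataInputStream in = new DataInputStream(
                    new ByteArrayInputStream(buf.toByteArray()));
            System.out.println(in.readInt()); // 42
            System.out.println(in.readUTF()); // hello
        }
    }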

  • How do I send an array over endpoint 2 and receive an array over endpoint 1?

    Background:
    I'm a device developer.  I have an HID device that does interrupt transfers over endpoint 1 and endpoint 2.  The device enumerates successfully as an HID on Mac, Windows, and Linux.  I've used a couple of different .dll files to communicate with the device using Visual Studio and Python with some success, and now I'd like to get it to work in LabVIEW.
    Status to date:
    1.  Successfully installed the device hijacker driver so NI MAX can see the device as USB::0x####::0x####::SerialNumber::RAW (# inserted to protect the innocent; SerialNumber is an actual number)
    2.  I can see the device in MAX.  Tried to send a line of numbers and something is sent back, but it isn't useful yet.
    3.  Tried interruptusb.vi and it doesn't do anything but timeout.
    4.  Tried Read USB Descriptor Snippet1.vi and an 18 byte array shows up, and the VID and PID are parsed out correctly if the bRequest is Get Descriptor and the Descriptor Type is Device.  None of the endpoint descriptor types return anything.  A bRequest won't trigger a device function.  It needs an array over the OUT endpoint.
    The problem:
    Intuitively I'm at a loss as to what to do next.  The device needs to receive a command as a 16 byte array passed through the OUT endpoint (2).  Then it will respond with a 16 byte array through the IN endpoint (1).  It seems as though the interruptusb.vi should work, but the interrupt transfer is a receive-only demonstration.  How should I format a 16 byte array to go through the OUT endpoint?  Does it need to be flattened?

    Thanks for the tip.
    The nuggets were great for getting started and helped with installing the labview hijack driver for the HID device.  Closer examination may lead to the conclusion that the code I'm using is very very similar to the nugget with minor changes to the output.  Definitely the nuggets are useful, but for my device, there is more to it. 
    It is not USBTMC compliant.  It requires that an array of bytes be sent and received.  The problem may have to do with timing and ensuring that the byte transfer is correct.  When communicating from Visual Studio, a declared array of characters works fine.  I tried that with this setup and it doesn't work consistently.  Of particular concern is why, with this setup, the device shows up but doesn't work properly until stopped, then works fine; yet when the LabVIEW VI is stopped, the device disappears and is no longer available in the VISA combobox.  The Device Manager still shows the device, but LabVIEW must have an open handle to it.
    I'd really like to be able to call the dll used in Visual Studio, so the user can choose to use the included software or a LabVIEW VI without having to reinstall a driver.  After all, HID is great because the driver is under the hood.  Having to load one for LabVIEW defeats the purpose of developing an HID device.  If I wanted to load a driver, I'd program the device to be a USB-Serial device and use the LabVIEW VISA serial VIs.
    For now I'll be happy to get a stable version in LabVIEW that will communicate consistently, even if it uses the hijacked driver.

  • Increase UDP sending size over 64k bytes and get error -113, sending buffer not enough

    Dear all,
    I have a case where I must send data of over 64k bytes over a socket with UDP. I got error -113, which says "A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram was smaller than the datagram itself." I searched for this issue and the closest answer I found is below:
    http://digital.ni.com/public.nsf/allkb/D5AC7E8AE545322D8625730100604F2D?OpenDocument
    It said I have to change the buffer size with Wsock.dll. I used the same method to increase the send buffer to 131072 bytes, by setting the option name to SO_SNDBUF (0x1001) and giving it the value 131072, and it worked fine without error. However, I still got error -113 while sending data with "UDP Write.vi". It seems UDP Write.vi resets the buffer size? Are there any other things that cause the error?
    I attached example code. In UDP Sender.vi you can see I change the send buffer size to 131072 and send data that includes 65536 bytes. There is also a UDP Receiver.vi, and there are some missing VIs which you can get from the link above, but they're not necessary.
    Attachments:
    UDP Sender.vi ‏14 KB
    UDP Receiver.vi ‏16 KB
    UDP_set_send_buffer.vi ‏16 KB

    The header for a UDP packet includes a 16-bit field that defines the size of the UDP message (header and data).
    16 bits limits you to a total size of 65535 bytes, minus the header sizes; a minimum of 20 bytes is required to define an IP packet and 8 bytes for UDP, leaving an effective data payload of 65507 bytes.
    LabVIEW is not the issue...
    http://en.wikipedia.org/wiki/User_Datagram_Protocol#Packet_structure
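    The practical consequence: anything larger than 65507 bytes has to be split across multiple datagrams, whatever the language. A hypothetical sketch of the sending side in Java (the receiving side would reassemble the chunks in order):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class UdpChunkSender {
        static final int MAX_PAYLOAD = 65507; // 65535 - 20 (IP header) - 8 (UDP header)

        // Send the data as a series of datagrams, none exceeding MAX_PAYLOAD.
        static void send(DatagramSocket socket, InetAddress host, int port,
                         byte[] data) throws Exception {
            for (int off = 0; off < data.length; off += MAX_PAYLOAD) {
                int len = Math.min(MAX_PAYLOAD, data.length - off);
                socket.send(new DatagramPacket(data, off, len, host, port));
            }
        }
    }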

  • Send an array to php

    I have to send a large number of variables to PHP and want to avoid having to repeatedly call the PHP script, so I thought the way to do this would be to send an array. I generally use code similar to the following to receive individual variables in PHP:
    $count = mysql_real_escape_string($_GET["count"]);
    Unfortunately, I don't know how to accept an array from Flex into PHP. I realize that this is more of a PHP question than a Flex question, but I am hoping that someone has done this before. Here's how I want to send the array from Flex:

    These links might help:
    link1
    link2
    link3

  • Sending an array of integers to a client

    Hi all,
    I'm new to socket programming and I have read many tutorials on sending files from server to client using DataOutputStream. Is it possible to send an array of integers using DataOutputStream? If so, is there any example or tutorial that I can read? Thanks.

    Alternatively, can I convert the array of integers to bytes, transfer them to the client, and convert them back to an array of integers again?
    Can someone help me with this?
    Example:
    // I have an array of integers
    int[] testArray = new int[10];
    for (int i = 0; i < 10; i++) {
        testArray[i] = i;
    }
    // I need some code to send this array of integers to a client.
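    A common pattern for this (a sketch, not from the thread): write the array length first, then each element, and mirror the order on the receiving side:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    public class IntArrayTransfer {
        // Sender: write the length first, then each element.
        static void sendIntArray(DataOutputStream out, int[] data) throws IOException {
            out.writeInt(data.length);
            for (int value : data) {
                out.writeInt(value);
            }
            out.flush();
        }

        // Receiver: read the length, then that many ints, in the same order.
        static int[] receiveIntArray(DataInputStream in) throws IOException {
            int[] data = new int[in.readInt()];
            for (int i = 0; i < data.length; i++) {
                data[i] = in.readInt();
            }
            return data;
        }
    }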

  • How do I send an array of clusters with variable size over TCP/IP?

    Hi,
    I'm trying to send an array of clusters with variable size over TCP/IP, but I'm facing the following problems:
    1) I need to accept the size of the array data from the user and increase the size dynamically.
    I'm doing this using a property node, but how do I convey the new size to my TCP Read?
    2) I need to wire an input to the 'bytes to read' input of TCP Read,
    but the number of bytes to read changes dynamically.
    How do I ensure the correct number of bytes are read and reflected on the client side?
    3) Is there any way I can use global variables over a network such that their values are updated just as if they were on one computer?
    It will be a great help if someone posts a solution!
    Thank you...

    twilightfan wrote:
    Altenbach,
     ... xml string. ...number of columns that I'm varying using property nodes... I solved these problems by using a local variable as the type input ...o TCP read is creating a problem.... the second TCP read gets truncated data because it's no longer just the first four bytes that specify the length of the data; it could be more, as my array of clusters can be pretty huge.
    Instead of writing long and complicated sentences that make little sense, why don't you simply show us your code? 
    What does any of this have to do with xml strings? I don't see how using a local variable as the type input changes anything. The user cannot interact with "property nodes", just with controls. Please clarify. Once the array of clusters is flattened to a string, you have only one size that describes the size of the data, no matter how huge it is (as long as it is within the limits of I32). Similarly, you read the string of that same defined length and form the array of clusters from it. How big are the strings? What is your definition of "huge"?
    Here is my earlier code, but now dealing with an array of clusters. Not much of a change. Since you have columns, you want 2D. Add as many dimensions as you want, but make sure that the control, diagram constant, and indicator all match.
    The snippet shows a 1D array, while the attached VI shows the same for a 2D array. Same difference.
    Message Edited by altenbach on 01-31-2010 01:13 AM
    Attachments:
    FlattenArrayOfClusters.vi ‏12 KB
    ioclusters3MOD.png ‏25 KB

  • Converting an array of bytes to an image

    I think this is the right forum for my question.
    This code comes from the J2ME side of my application. It reads an int from my server, and if it's over a certain length, I can tell it's an image, so I can decode it from an array of bytes back into an image, but it throws an IllegalArgumentException on image = Image.createImage(b, 0, length);
    I've decoded the byte array before it gets sent and converted it back into an image and that works, and I've run the debugger and the array is full of numbers on the J2ME side of the app, but the call to createImage just keeps throwing this exception and I can't find an answer. If you need any more code, I'll post it.
    int l = in.readInt();
    imageArray = new ImageItem[l];
    for (int i = 0; i < imageArray.length; i++) {
        byte[] b = null;
        int length = in.readInt();
        System.out.println("length = " + length);
        if (length < 10) {
            imageArray[i] = null;
            System.out.println("null image");
        } else {
            b = new byte[length];
            in.readFully(b, 0, length);
            System.out.println("image not null");
            Image image = Image.createImage(b, 0, length);
            System.out.println("hit");
            ImageItem imageItem = new ImageItem("null", image, 0, "null");
            imageArray[i] = imageItem;
        }
    }
    If anyone can tell me how to indent the code, I would appreciate it; it looks indented when I post it on my side.
    Message was edited by:
    trisonetwo

    If it works before sending, then check your code for sending the image.
    Also, you can compare the byte array before sending with the array which was received at the other end.
    For indenting and syntax highlighting use [ c o d e ] and [ / c o d e ] tags (without spaces)
    Ex:-
    int l = in.readInt();
    imageArray = new ImageItem[l];
    for (int i = 0; i < imageArray.length; i++) {
       byte[] b = null;
       int length = in.readInt();
       System.out.println("length = " + length);
       if (length < 10) {
          imageArray[i] = null;
          System.out.println("null image");
       } else {
          b = new byte[length];
          in.readFully(b, 0, length);
          System.out.println("image not null");
          Image image = Image.createImage(b, 0, length);
          System.out.println("hit");
          ImageItem imageItem = new ImageItem("null", image, 0, "null");
          imageArray[i] = imageItem;
       }
    }
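    One cheap way to make that comparison is to print a checksum on both ends. A sketch for the J2SE (server) side is below; note that MIDP itself has no java.security.MessageDigest, so on the device you would need a third-party digest or could just compare the length and a few sample bytes:

    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class ChecksumDemo {
        // Print an MD5 checksum of the byte array; compare the value
        // printed before sending with the one computed after receiving.
        static String checksum(byte[] data) throws NoSuchAlgorithmException {
            byte[] digest = MessageDigest.getInstance("MD5").digest(data);
            StringBuffer sb = new StringBuffer();
            for (int i = 0; i < digest.length; i++) {
                int v = digest[i] & 0xff;
                if (v < 16) sb.append('0'); // keep two hex digits per byte
                sb.append(Integer.toHexString(v));
            }
            return sb.toString();
        }
    }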

  • Trying to send large video file...

    I am trying to send a QuickTime video file via my Mac Mail. When I try to send, I get an error that says my server only allows files 29.6 MB and smaller. How do I send larger files (the one I want to send is 58.6 MB)?

    I did some digging, and if you aren't scared of Terminal.app, and the recipient on the other end is a Mac guy (or gal) and is not afraid of Terminal.app, there is a built-in unix utility that will do what you want.
    To learn all about it, type man split 
    As an example of how to use it, suppose you have a big file called "Movie.mov" that is in your Movies folder. cd ~/Movies to change directories to your Movies folder. Then type split -b 4m Movie.mov movseg 
    This will take the file "Movie.mov" that is in your home directory's Movies folder and split it into multiple files movsegaa, movsegab, movsegac, ...., with no file segment exceeding 4 MB (the "-b" option is the segment size in bytes; it can be a plain number, or you can append a k for kilobytes or an m for megabytes, the latter as in this example).
    Now you can email each segment as an attachment in its own unique email message.
    The recipient should download all the attachments to the same place (to make it easy) and cd to that directory. Then type
    cat movsegaa movsegab movsegac {list of remaining movseg-- filenames separated by spaces} > Movie.mov and it rebuilds the original "Movie.mov" file.
    I didn't try this with a movie file, but I did try it with a 140MB ".tgz" file and with a small text file. The MD5 checksum of the original and the reconstructed files were identical and I was able to uncompress and unarchive the .tgz file with no problem, and could open and read the reconstructed text file with no problem. You'd probably want to "trial" the reassembly yourself before emailing all of the file segment attachments, though.
    The only problem -- if your recipient isn't a Mac user, I don't think the Windoze command line supports these two commands.
    Also, if your file is really, really big and the automatically generated file segment name suffixes aa-zz (26 x 26 possible file segments) aren't enough to handle the total number of file segments that you'll need to fit through the email servers, include a " -a 3 " option either just before or just after the "-b {segSize}" option to make the filename suffixes aaa-zzz (26 x 26 x 26 possible file segments -- your recipient will love you!). One could put together a unix script pretty quickly to rebuild a gazillion file segments into a reconstructed copy of the original, though.
    PS - your .mac account may allow you to send a file attachment as large as 29.6MB, but many email providers only allow their customers to receive files up to 10MB in size, and some are even more draconian than that, imposing a 5MB limit (two local universities immediately come to mind). They will reject the attempted inbound delivery and you, the sender, may or may not get notification of that event from the recipient's server. So be sure to follow up with another email asking if the recipient received the "N" email attachments and did (s)he get the file reassembled okay.

  • Performance problem when initializing a large array

    I am writing an application in C on a SUN T1 machine running Solaris 10. I compiled my application using cc in Sun Studio 11.
    In my code I allocate a large array on the heap. I then initialize every element in this array. When the array contains up to about 2 million elements, the performance is as I would expect -- increasing run time due to more elements to process, cache misses, and instructions to execute. However, once the array size is on the order of 4 million or more elements, the run time increases dramatically -- a large jump not in line with the other increases due to more elements, instructions, cache misses, etc.
    An interesting aspect is that I experience this problem regardless of element size. The break point in performance happens between 2 and 4 million elements, even if the elements are one byte or 64 bytes.
    Could there be a paging issue or other subtle memory allocation issue happening here?
    Thanks in advance for any help you can give.
    -John

    To save me writing some code to reproduce this odd behaviour, do you have a small testcase that shows this?
    tim

  • Need to improve speed when graphing large arrays

    My app generates several arrays of 500,000 elements which I want to display in a graph. Sending an array like this directly to a graph makes everything go very slowly, I guess because of large memory allocations (LV is obviously not very good at handling large data unless you have very good insight into how LV handles memory, etc). It has been suggested in other posts and in app notes to decimate the data before displaying, since the graph can't display more points than its pixel width anyway. The question is how to implement a decimation algorithm that produces the same graph as if no decimation was made, and that preserves resolution when zooming into the graph, which requires doing a new decimation with a new subset of the data (see the sketch at the end of this thread). I think this graph decimation algorithm should have been implemented as a VI in LV, to put between the data and the graph. Since this limit is inherent in LabVIEW when trying to graph more data points than the pixel width, it shouldn't be hard to implement such a feature in LV. I would think this is quite a common problem and would like to get some comments, especially from NI people, about this issue. Any work-arounds are also appreciated.

    You are probably going to need the following sub vi's as well.
    Thank You,
    Earl
    Attachments:
    Contrast Bright Controls.vi ‏24 KB
    AI Read config.vi ‏28 KB
    Read Line.vi ‏21 KB
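    For reference, the min/max decimation idea described in the question looks roughly like this (a language-agnostic sketch written here in Java; a LabVIEW VI would express the same loop with array primitives). Each pixel column keeps its minimum and maximum sample, so the decimated trace draws the same envelope as the full data set; zooming re-runs the routine on just the visible subset:

    public class MinMaxDecimate {
        // Reduce `samples` to 2 points per pixel column (min and max),
        // so the plot looks identical to drawing every sample.
        static double[] decimate(double[] samples, int pixelWidth) {
            double[] out = new double[pixelWidth * 2];
            int chunk = samples.length / pixelWidth; // assumes length >= pixelWidth
            for (int p = 0; p < pixelWidth; p++) {
                double min = Double.POSITIVE_INFINITY;
                double max = Double.NEGATIVE_INFINITY;
                for (int i = p * chunk; i < (p + 1) * chunk; i++) {
                    min = Math.min(min, samples[i]);
                    max = Math.max(max, samples[i]);
                }
                out[2 * p] = min;
                out[2 * p + 1] = max;
            }
            return out;
        }
    }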
