Massive accumulation of byte[]s

Here's hoping someone has some insight on the following problem:
We're running a series of environments that are smallish -- they'll be a few gigabytes in size when fully populated -- but we're having problems getting past the (say) 200Mb barrier. We can currently run our system for about half an hour, adding documents at a rate of about ten to fifteen a second (no queries) per thread, using ten threads. The keys for this database come in two forms: one is just a long that is almost always monotonically increasing (not guaranteed); the other is that same long plus a String. There are 10 of the long/String keys for every long key. The value for the long key is a serialized object with about 10 instance fields; for the long/String key it's a series of String values.
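Roughly, the keys are built like this (a sketch only, not our actual code; names are illustrative and imports from com.sleepycat.je and com.sleepycat.bind.tuple are assumed):
    // Key form 1: a single long, almost always monotonically increasing.
    static DatabaseEntry longKey(long id) {
        DatabaseEntry entry = new DatabaseEntry();
        LongBinding.longToEntry(id, entry);
        return entry;
    }
    // Key form 2: the same long plus a String (about 10 of these per long key).
    static DatabaseEntry longStringKey(long id, String suffix) {
        TupleOutput out = new TupleOutput();
        out.writeLong(id);
        out.writeString(suffix);
        return new DatabaseEntry(out.toByteArray());
    }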
Here's what we see when the machine starts dying:
* It's associated with garbage collection, specifically with full GC. -verbose:gc gives us a time of around 1.0 - 1.5s for the collection, but it's obvious, looking at a profiler, that the actual time taken is closer to 30s. I've tried tuning the young/tenured and eden/survivor ratios, but nothing seems to work (details available if you want them). This behavior doesn't -always- happen with a full GC, but a full GC is going on every time we see the problem.
* The runnable process count goes through the roof. It's usually 1-2; it spikes to around 165-200. It stays in this condition for anywhere between 20 seconds and a minute and, when it's done, drops back to 1-2 within three seconds.
* The number of context switches and interrupts drops and stays low. CS drops by an order of magnitude; interrupts by 10-20%.
* System time hits ~95% and stays there.
* The machine is unresponsive, even to ls, less, etc.
* There's an ever-increasing number of byte[]s all through the run. We do eventually get out of memory exceptions, but the GC problems and the spike in runnables also occur well before we run out of memory.
* If we remove the code that adds the long/String keys (i.e., if we reduce the amount of data written to 1/11th its normal amount) the problem still happens, but it takes much longer. Also, the number of byte[]s written is dramatically lower -- but still ever-increasing.
I've already tried waiting until all cursors and transactions are closed/committed, then calling sync() and evictMemory() (sketched after this list), as per these threads:
OutOfMemoryError: Java heap space [Lucene + BDB JE transactions]
OutOfMemoryException when i do  data insertion continuously
We do not have any large transactions.
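For completeness, that cleanup amounts to something like this (a sketch; env stands for our open Environment):
    // After all cursors are closed and all transactions are committed or aborted:
    env.sync();         // flush the JE log to disk
    env.evictMemory();  // ask JE to release cache memory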
Environment:
* java -version: Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_09-b01, mixed mode)
* uname -a: Linux bubba 2.6.15-1-amd64-k8-smp #2 SMP Tue Mar 7 21:00:29 UTC 2006 x86_64 GNU/Linux
* JE 3.2.13
* Machines are dual CPU, dual-core Opterons @2.0GHz, 8Gb RAM, 10k disks on SATA-II, no RAID
* Environment is transactional, allowCreate, txnWriteNoSync = true, 10Mb cache size (too low, but increasing it had no effect)
* Databases are transactional, allowCreate
* All writes are performed with transactions and all transactions are explicitly committed or aborted
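A minimal sketch of how an environment and database are opened and written with those settings (the flags mirror the list above; envHome, key and data are placeholders, not our real names):
    EnvironmentConfig envConfig = new EnvironmentConfig();
    envConfig.setTransactional(true);
    envConfig.setAllowCreate(true);
    envConfig.setTxnWriteNoSync(true);
    envConfig.setCacheSize(10 * 1024 * 1024);   // 10Mb -- too low, but raising it had no effect
    DatabaseConfig dbConfig = new DatabaseConfig();
    dbConfig.setTransactional(true);
    dbConfig.setAllowCreate(true);
    Environment env = new Environment(new File(envHome), envConfig);
    Database db = env.openDatabase(null, "documents", dbConfig);
    // Every write goes through an explicitly committed (or aborted) transaction:
    Transaction txn = env.beginTransaction(null, null);
    try {
        db.put(txn, key, data);
        txn.commit();
    } catch (DatabaseException e) {
        txn.abort();
        throw e;
    }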
Help is much appreciated. This is a blocking issue for us (because the machine is unresponsive, it crashes everything in our app and affects other machines as a result.) I've been beating my head against it -- to no avail.
Thanks in advance for whatever help you can offer. I'm stumped (again).
-j

Mark, Linda and all,
I spoke too soon when I said I found and solved the problem with the number of byte arrays and the massive number of runnable threads. It turns out that there were two problems -- I had a memory leak in my own code that didn't release the bytes, and there's another problem that seems to involve JE and is related to the number of threads in the runnable queue. ALL of the symptoms I described above -- the number of threads in the runnable queue, the number of context switches and interrupts, the system time stats, etc. -- with the exception of the number of byte arrays, are accurate descriptions of the second problem. In other words, they're not related to the problem with my memory leak, and I'm hoping you can lend a hand.
I have a test I can run outside our application that reproduces the problem with the latest 3.2 build, though it occurs at random and is worse on some machines than others. The test allocates enough environments, each with a 10Mb cache, to fill a certain percentage of the heap; that means hundreds of environments on our largest machine. Each environment has a single database, and the test tries to add 50 million 2-5k records in parallel. Obviously you wouldn't do this in a "real" application, but we see the same problem this test produces in our "real" code, which uses only ~25 environments and ten threads adding data concurrently.
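Very roughly, the test does something like the following (a sketch, not the actual test; sizes, counts and names are illustrative, and imports from com.sleepycat.je and com.sleepycat.bind.tuple are assumed):
    // One writer thread per environment; each environment gets its own 10Mb cache
    // and a single transactional database.
    static void startWriters(File baseDir, int environmentCount, final long recordsPerThread)
            throws DatabaseException {
        for (int i = 0; i < environmentCount; i++) {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setTransactional(true);
            envConfig.setAllowCreate(true);
            envConfig.setCacheSize(10 * 1024 * 1024);
            final Environment env = new Environment(new File(baseDir, "env" + i), envConfig);
            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setTransactional(true);
            dbConfig.setAllowCreate(true);
            final Database db = env.openDatabase(null, "data", dbConfig);
            new Thread(new Runnable() {
                public void run() {
                    try {
                        byte[] payload = new byte[4096];          // 2-5k records in the real test
                        DatabaseEntry key = new DatabaseEntry();
                        DatabaseEntry data = new DatabaseEntry(payload);
                        for (long n = 0; n < recordsPerThread; n++) {
                            LongBinding.longToEntry(n, key);
                            Transaction txn = env.beginTransaction(null, null);
                            db.put(txn, key, data);
                            txn.commit();
                        }
                    } catch (DatabaseException e) {
                        throw new RuntimeException(e);
                    }
                }
            }).start();
        }
    }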
To answer some of your comments above:
* All my tests are using the latest 3.2 build. Each test is started cleanly, so no data from previous runs exists beforehand (avoiding the problem with the log file formats.)
* I can reproduce the problem on Xeon as well as Opteron machines. I don't have access to a single-core machine, so all the tests have been run on machines that are dual-core, dual-CPU or both. One of our Xeon machines really, really likes producing this problem. It has no problems running anything else, however. I'm in the process of validating the hardware now.
* I tried using the je.evictor.lruOnly=false setting and had the same results
* The problem becomes worse when more of the heap is used. If I allocate enough Environments to use 85% of the heap as cache, I see lots and lots of blocked threads, but none of the high runnable-count behavior. If I bump that up to 95%, I start seeing problems.
* I can't get a stack dump of any sort -- the machine is so locked up that the kill -3 executes AFTER the machine has relaxed. Using nice, writing to ramdisks, writing to other machines via netcat -- none of them helped us get a stack dump. Running vmstat before the machine locks works, but it seems like there's no way to fire up a new process.
Obviously we're not using JE in the way suggested by the docs -- in particular, we're running with 24 environments and that doesn't use resources efficiently. We're willing to live with the inefficiency. The real problem here is that the machine occasionally locks so hard that I can't even use 'ls' or 'kill'. I'd be hard-pressed to purposely write something in Java that does that.
I have full logs with -verbose:gc and -XX:+CITime set on the vm; the logs include stats dumps from the databases and environments, and the output of vmstat at five second intervals. I can also provide the test that reproduces the problem.
As I've said, any help is much appreciated. If we're doing something wrong, I would like to know about it. Thanks for your help!
-j

Similar Messages

  • Using a byte[] as a secondary index's key within the Collection's API

    I am using JE 4.1.7 and its Collections API. Overall I am very satisfied with the ease of using JE within our applications. (I need to know more about maintenance, however!) My problem is that I wanted a secondary index with a byte[] key. The key contains the 16 bytes of an MD5 hash. However, while the code compiles without error, when it runs JE tells me
    Exception in thread "main" java.lang.IllegalArgumentException: ONE_TO_ONE and MANY_TO_ONE keys must not have an array or Collection type: example.MyRecord.hash
    See test code below. I read the docs again and found that the only "complex" formats that are acceptable are String and BigInteger. For now I am using String instead of byte[] but I would much rather use the smaller byte[]. Is it possible to trick JE into using the byte[]? (Which we know it is using internally.)
    -- Andrew
    package example;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.persist.EntityStore;
    import com.sleepycat.persist.PrimaryIndex;
    import com.sleepycat.persist.SecondaryIndex;
    import com.sleepycat.persist.StoreConfig;
    import com.sleepycat.persist.model.Entity;
    import com.sleepycat.persist.model.PrimaryKey;
    import com.sleepycat.persist.model.Relationship;
    import com.sleepycat.persist.model.SecondaryKey;
    import java.io.File;
    @Entity
    public class MyRecord {
        @PrimaryKey
        private long id;
        @SecondaryKey(relate = Relationship.ONE_TO_ONE, name = "byHash")
        private byte[] hash;
        public static MyRecord create(long id, byte[] hash) {
            MyRecord r = new MyRecord();
            r.id = id;
            r.hash = hash;
            return r;
        }
        public long getId() {
            return id;
        }
        public byte[] getHash() {
            return hash;
        }
        public static void main( String[] args ) throws Exception {
            File directory = new File( args[0] );
            EnvironmentConfig environmentConfig = new EnvironmentConfig();
            environmentConfig.setTransactional(false);
            environmentConfig.setAllowCreate(true);
            environmentConfig.setReadOnly(false);
            StoreConfig storeConfig = new StoreConfig();
            storeConfig.setTransactional(false);
            storeConfig.setAllowCreate(true);
            storeConfig.setReadOnly(false);
            Environment environment = new Environment(directory, environmentConfig);
            EntityStore myRecordEntityStore = new EntityStore(environment, "my-record", storeConfig);
            PrimaryIndex<Long, MyRecord> idToMyRecordIndex = myRecordEntityStore.getPrimaryIndex(Long.class, MyRecord.class);
            SecondaryIndex<byte[], Long, MyRecord> hashToMyRecordIndex = myRecordEntityStore.getSecondaryIndex(idToMyRecordIndex, byte[].class, "byHash");
            // END
        }
    }

    We have highly variable-length data that we wish to use as keys. To avoid massive index sizes and slow key lookups we are using MD5 hashes (or something more collision resistant should we need it). (Note that I am making assumptions about key size and its relation to index size that may well be inaccurate.) Thanks for explaining, that makes sense.
    It would be the whole field. (I did consider using my own key data design, using the @Persistent and @KeyField annotations to place the MD5 hash into two longs. I abandoned that effort because I assumed (again) that lookup with a custom key design would be slower than the built-in String key implementation.)
    A composite key class with several long or int fields will not be slower than a single String field, and will probably result in a smaller key since the UTF-8 encoding is avoided. Since the byte array is fixed size (I didn't realize that earlier), this is the best approach.
    --mark
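    To illustrate the composite-key approach mark describes, a rough sketch (class and field names are made up, not from the thread; imports from com.sleepycat.persist.model and java.nio are assumed):
    // The MD5 hash packed into two longs, usable as a ONE_TO_ONE secondary key.
    @Persistent
    public class Md5Key {
        @KeyField(1)
        private long hi;
        @KeyField(2)
        private long lo;
        private Md5Key() {}                 // default constructor required by the DPL
        public Md5Key(byte[] hash) {        // expects exactly 16 bytes
            ByteBuffer buf = ByteBuffer.wrap(hash);
            hi = buf.getLong();
            lo = buf.getLong();
        }
    }
    // In MyRecord the secondary key field would then be declared as:
    // @SecondaryKey(relate = Relationship.ONE_TO_ONE, name = "byHash")
    // private Md5Key hash;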

  • IDCS5 Mac - Massive memory leaks - Followup to previous discussion

    Hello Everyone:
       After some painful, tedious marching through the code on the Mac, I found something really, really strange.  The code seemed to be working fine, except when a couple of documents were added in the scan jobs.  I isolated one of the documents and removed each object on the page until I came across the object that, when scanned, caused InDesign to report the massive memory leaks.
       The object was a quarter-inch by quarter-inch overset text frame (that's .63 cm by .63 cm for those of you who are used to the more sensible metric system).  In the text frame was a single character, not normally visible when on the page.  According to the ASSERT dialog box that was thrown up on the screen when InDesign was shutting down, this little bitty object was the source of 104450 leaks amounting to between 38,000,000 and 51,000,000 bytes in leaks.  Here's the code where the leaks must be coming from (and Dirk, I owe you a crow pie...as in I'm gonna have to eat crow...I use PMString for one small command in this snippet, but it seemed so much faster and simpler than what the examples in widestring.h were suggesting):
    //  TI is TextIndex, assigned from a call to IMultiColumnTextFrame::TextStart()
    //  TS is int32 assigned from a call to IMultiColumnTextFrame::TextSpan()
    WideString Text;
    TextIterator beginTextChunk(FrameTextModel,          TI );
    TextIterator endTextChunk(   FrameTextModel, (TS + TI));
    std::copy(beginTextChunk, endTextChunk, std::back_inserter(Text));
    PMString PMText(Text);
    std::string text = PMText.GetPlatformString();
    IdentifyNamesInText(text);
    This final function call could not have been the source of the problem because one of the steps I took in tracking down the problem was to put "return" as the first line in its implementation but the bug still occurred.

    Hi, I don't know if this is relevant or not, particularly since we're on Windows and the problem you're seeing is on Mac.
    One of our guys put in some code to work around a problem which he encountered using std::copy with TextIterators under VS2005 (so it would have been CS4)
    /// Alternative to using std::copy to workaround issues using std::copy and TextIterator's in VS2005
    inline void TextIteratorCopy( TextIterator& posBegin, TextIterator& posEnd, WideString& strDest )
    {
        std::back_insert_iterator< WideString > posDest = std::back_inserter( strDest );
        for ( ; posBegin != posEnd; ++posDest, ++posBegin )
            *posDest = *posBegin;
    }
    That said, it might not affect you on the Mac, and it might no longer be an issue with CS5 either.

  • Accumulative Distinct Counts

    I’ve been at this for two days now. Could someone lead me down the right path? Given the following data set:
    create table data_owner.test_data
    (
    item_number varchar2(10 byte),
    store_number varchar2(10 byte),
    calendar_year varchar2(10 byte),
    calendar_week varchar2(10 byte),
    units_sold integer
    );
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('1111', '31', '2010', '51', 4)
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('1111', '16', '2010', '51', 2)
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('1111', '31', '2010', '52', 3)
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('1111', '27', '2010', '52', 1)
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('1111', '16', '2011', '1', 3)
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('1111', '27', '2011', '2', 5)
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('1111', '20', '2011', '2', 4)
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('2222', '27', '2010', '51', 3)
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('2222', '16', '2010', '52', 2)
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('2222', '20', '2010', '52', 1)
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('2222', '16', '2011', '1', 3)
    insert into test_data(item_number, store_number, calendar_year, calendar_week, units_sold)
    values ('2222', '31', '2011', '2', 3)
    select * from test_data
    item_number     store_number     calendar_year     calendar_week     units_sold
    1111     31     2010     51     4
    1111     16     2010     51     2
    1111     31     2010     52     3
    1111     27     2010     52     1
    1111     16     2011     1     3
    1111     27     2011     2     5
    1111     20     2011     2     4
    2222     27     2010     51     3
    2222     16     2010     52     2
    2222     20     2010     52     1
    2222     16     2011     1     3
    2222     31     2011     2     3
    My desired result is a sum of units sold and an accumulative distinct count of store numbers grouped by item, year, and week. i.e.:
    item_number     calendar_year     calendar_week     store_count     sum(units_sold)
    1111     2010     51     2     6
    1111     2010     52     3     4
    1111     2011     1     3     3
    1111     2011     2     4     9
    2222     2010     51     1     3
    2222     2010     52     3     3
    2222     2011     1     3     3
    2222     2011     2     4     3
    I can’t seem to get the store count right. I’ve been trying various methods of the count(distinct store_number) over (…) analytic function, but nothing works. Thanks.

    Hi,
    Interesting problem!
    When using analytic functions, you can't use both DISTINCT and ORDER BY. Too bad; that sure would be convenient.
    The most general solution is to use aggregate functions instead of analytic functions, and do a self-join to pair every row ("table" l, for "later", in the query below) with every earlier row ("table" e below) for the same item:
    SELECT       l.item_number
    ,       l.calendar_year
    ,       l.calendar_week
    ,       COUNT (DISTINCT e.store_number)     AS store_count
    ,       SUM (l.units_sold)
         / COUNT (DISTINCT e.ROWID)          AS total_units_sold
    FROM       test_data   e
    JOIN       test_data   l      ON     e.item_number     = l.item_number
    AND                               e.calendar_year || LPAD (e.calendar_week, 2)
                                    <= l.calendar_year || LPAD (l.calendar_week, 2)
    GROUP BY  l.item_number
    ,       l.calendar_year
    ,       l.calendar_week
    ORDER BY  l.item_number
    ,       l.calendar_year
    ,       l.calendar_week
    ;
    You might think about storing a DATE (say, the date when the week begins) instead of year and week. It would simplify this query, and probably lots of other ones, too. I realize that might complicate some other queries, but I think you'll find a net gain.
    Thanks for posting the CREATE TABLE and INSERT statements; that helps a lot!
    Edited by: Frank Kulash on Nov 18, 2011 12:48 PM
    Here's an analytic solution. As you can see, it requires more code, and more complicated code, but it might perform better:
    WITH     got_r_num   AS
    (
         SELECT     item_number
         ,     calendar_year
         ,     calendar_week
         ,     units_sold
         ,     ROW_NUMBER () OVER ( PARTITION BY  item_number
                                   ,                    store_number
                             ORDER BY        calendar_year
                             ,                calendar_week
                           )      AS r_num
         FROM    test_data
    )
    SELECT DISTINCT
         item_number
    ,     calendar_year
    ,     calendar_week
    ,     COUNT ( CASE
                        WHEN  r_num = 1
                  THEN  1
                    END
               )             OVER ( PARTITION BY  item_number
                                      ORDER BY      calendar_year
                          ,          calendar_week
                                 )                    AS store_count
    ,       SUM (units_sold) OVER ( PARTITION BY  item_number
                                    ,             calendar_year
                          ,             calendar_week
                             )                         AS  total_units_sold
    FROM       got_r_num
    ORDER BY  item_number
    ,            calendar_year
    ,       calendar_week
    ;
    This approach will not work in all windowing situations. It's okay for this job, but not if you wanted, for example, a count of distinct stores from the last 6 weeks and the report covers more than 6 weeks.

  • HowTO: convert Image- byte[] when don't know image format?

    I have byte[] field in my DB. Images have been stored there.
    Images can be jpg/gif/png.
    My task is to scale them and save back (to another field)
    I have such code:
    I know that getScaledInstance is bad, but please don't pay attention to that. This operation (massive image resizing) will be performed once a year, at night.
    public class ImageResizer {
        private static byte[] resizeImage(byte[] sourceImg, int newWidth){
            byte[] result = null;
            Image resizedImage = null;     //output image
            ImageIcon imageIcon = new ImageIcon(sourceImg);     //source image
            Image imageSource = imageIcon.getImage();
            int imageSourceWidth = imageSource.getWidth(null);
            int imageSourceHeight = imageSource.getHeight(null);
            if(imageSourceWidth > newWidth){
                float scaleFactor =  newWidth/imageSourceWidth;
                int newHeight = Math.round(imageSourceHeight*scaleFactor);
                resizedImage = imageSource.getScaledInstance(newWidth, newHeight, Image.SCALE_SMOOTH);
                Image temp = new ImageIcon(resizedImage).getImage(); // This code ensures that all the pixels in the image are loaded.
                // Create the buffered image.
                BufferedImage bufferedImage = new BufferedImage(temp.getWidth(null), temp.getHeight(null), BufferedImage.TYPE_INT_RGB);
                /**And what next?*/
            }else{
                result = sourceImg;
            }
            return result;
        }
        public static byte[] scaleToSmall(byte[] sourceImg){
            return resizeImage(sourceImg, 42);
        }
        public static byte[] scaleToBig(byte[] sourceImg){
            return resizeImage(sourceImg, 150);
        }
        public static byte[] serializeObjectToBytearray(Object o) {
            byte[] array;
            try {
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                ObjectOutputStream oos = new ObjectOutputStream(baos);
                oos.writeObject(o);
                array = baos.toByteArray();
            } catch (IOException ioe) {
                ioe.printStackTrace();
                return null;
            }
            return array;
        }
    }
    On this forum I've found many solutions, but almost all of them suppose that I know the file format (PixelGrabber, ImageIO, etc.). But I don't know it. I only know that it can be jpeg/gif/png.
    What can I do?
    I've found that simple serialization can help me (public static byte[] serializeObjectToBytearray(Object o)), but it seems like it doesn't work.
    Edited by: Holod on 01.11.2008 10:18

    Here I came up with one possible solution using some more functionality of ImageIO.
    public class ImageResizer {
        private static byte[] resizeImage(byte[] sourceBytes, int newWidth) throws Exception {
            byte[] scaledBytes;
            // ImageIO works with Files or Streams, so convert byte[] to stream
            ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(sourceBytes);
            // Why not just use ImageIO.read(inputStream)? - Because there would be
            // no way to know the original image format (I am assuming here that
            // you need to write back the image in the same format as the original)
            ImageInputStream imageInputStream = ImageIO.createImageInputStream(byteArrayInputStream);
            // assuming there is at least one ImageReader able to read the image
            ImageReader imageReader = ImageIO.getImageReaders(imageInputStream).next();
            // save image format name so we can write it back in the same format
            String formatName = imageReader.getFormatName();
            imageReader.setInput(imageInputStream);
            BufferedImage sourceImage = imageReader.read(0);
            int imageSourceWidth = sourceImage.getWidth();
            int imageSourceHeight = sourceImage.getHeight();
            if (imageSourceWidth > newWidth) {
                // be careful with integer divisions ( 500 / 1000 = 0!)
                double scaleFactor = (double) newWidth / (double) imageSourceWidth;
                int newHeight = (int) Math.round(imageSourceHeight * scaleFactor);
                System.out.println("newWidth=" + newWidth + ", newHeight=" + newHeight + ", formatName=" + formatName);
                // getScaledInstance provides the best downscaling quality but is
                // orders of magnitude slower than the alternatives
                // since you're saying performance is not issue, I leave it as is
                Image scaledImage = sourceImage.getScaledInstance(newWidth, newHeight, Image.SCALE_SMOOTH);
                // Unfortunately we need a RenderedImage to use ImageIO.write.
                // So the next lines convert whatever type of Image was returned
                // by getScaledImage into a BufferedImage.
                // Using TYPE_INT_ARGB so potential alpha channels are preserved
                BufferedImage scaledBufferedImage = new BufferedImage(newWidth, newHeight, BufferedImage.TYPE_INT_ARGB);
                Graphics2D g2 = scaledBufferedImage.createGraphics();
                g2.drawImage(scaledImage, 0, 0, null);
                g2.dispose();
                // Now use ImageIO.write to encode the image back into a byte[],
                // using the same image format
                ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
                ImageIO.write(scaledBufferedImage, formatName, byteArrayOutputStream);
                scaledBytes = byteArrayOutputStream.toByteArray();
            } else {
                // if no scaling happened, just return the original byte[]
                scaledBytes = sourceBytes;
            }
            return scaledBytes;
        }
        public static void main(String[] args) throws Exception {
            // this is just for my own local testing
            // simulate byte[] input from database
            BufferedImage image = ImageIO.read(new File("input.jpg"));
            ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
            ImageIO.write(image, "PNG", byteArrayOutputStream);
            byte[] sourceBytes = byteArrayOutputStream.toByteArray();
            byte[] scaledBytes = resizeImage(sourceBytes, 640);
            // write out again to check result
            ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(scaledBytes);
            image = ImageIO.read(byteArrayInputStream);
            ImageIO.write(image, "PNG", new File("output.jpg"));
        }
    }
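    As an aside on the performance comment in the code above: if getScaledInstance ever does become a bottleneck, a common alternative (a sketch, not part of the original reply; imports from java.awt and java.awt.image assumed) is to let Graphics2D do the scaling with an interpolation hint:
    private static BufferedImage scaleWithGraphics(BufferedImage source, int newWidth, int newHeight) {
        // Usually much faster than getScaledInstance, at some quality cost for large downscales.
        BufferedImage scaled = new BufferedImage(newWidth, newHeight, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g2 = scaled.createGraphics();
        g2.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                            RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g2.drawImage(source, 0, 0, newWidth, newHeight, null);
        g2.dispose();
        return scaled;
    }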

  • How to copy bytes to an array byte variable....??

    Hi everyone... can anyone help me out? I want to store bytes into a byte-array variable (byte by byte), which are returned from the readByte() function of DataInputStream.
    Thank you

    int filesize = request.getContentLength();
    byte dataBytes[] = new byte[filesize];
    int read = 0;
    while (read < filesize) {
        dataBytes[read++] = in.readByte();  // *in* is object of DataInputStream
    }
    Change all that to this:
    int count;
    byte[] buffer = new byte[8192];
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    while ((count = in.read(buffer)) > 0) {
        baos.write(buffer, 0, count);
    }
    byte[] data = baos.toByteArray();
    Of course, like your code, this assumes that the data will fit into memory. It is preferable to process the data read each time as you go rather than accumulating it all first and then processing. This saves both time and space.
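    To illustrate the "process as you go" suggestion (a sketch; handleChunk is a made-up name for whatever per-chunk processing applies, assuming the data really can be handled incrementally):
    int count;
    byte[] buffer = new byte[8192];
    while ((count = in.read(buffer)) > 0) {
        handleChunk(buffer, 0, count);   // process each chunk immediately, nothing is accumulated
    }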

  • Difference between int and byte

    what is the main difference between int and byte?

    A byte is the format in which data was stored in memory in the past: 8 bits.
    An int is the format in which you get a value from the accumulator. For x64 that is Int64.
    For compatibility, the "Integer" is currently kept at Int32, the register format from the x86 computers.
    Older computers like the 8088 had an 8-bit int, and therefore that was the same as the byte.
    The 80286 had a 16-bit integer.
    Success
    Cor

  • How Do I Load An Animated GIF Into PictureBox Control From Byte Array?

    I'm having a problem with loading an animated GIF into a picturebox control in a C# program after it has been converted to a base64 string, then converted to a byte array, then written to binary, then converted to a base64 string again, then converted to a byte array, then loaded into a memory stream and finally into a picturebox control.
    Here's the step-by-step code I've written:
    1. First I open an animated GIF from a file and load it directly into a picturebox control. It animates just fine.
    2. Next I convert the image in the picturebox control (pbTitlePageImage) to a base64 string as shown in the code below:
    if (pbTitlePageImage.Image != null)
    {
        string Image2BConverted;
        using (Bitmap bm = new Bitmap(pbTitlePageImage.Image))
        {
            using (MemoryStream ms = new MemoryStream())
            {
                bm.Save(ms, ImageFormat.Jpeg);
                Image2BConverted = Convert.ToBase64String(ms.ToArray());
                GameInfo.TitlePageImage = Image2BConverted;
                ms.Close();
                GameInfo.TitlePageImagePresent = true;
                ProjectNeedsSaving = true;
            }
        }
    }
    3. Then I write the base64 string to a binary file using FileStream and BinaryWriter.
    4. Next I get the image from the binary file using FileStream and BinaryReader and assign it to a string variable. It is now a base64 string again.
    5. Next I load the base64 string into a byte array, then I load it into StreamReader and finally into the picturebox control (pbGameImages) as shown in the code below:
    byte[] TitlePageImageBuffer = Convert.FromBase64String(GameInfo.TitlePageImage);
    MemoryStream memTitlePageImageStream = new MemoryStream(TitlePageImageBuffer, 0, TitlePageImageBuffer.Length);
    memTitlePageImageStream.Write(TitlePageImageBuffer, 0, TitlePageImageBuffer.Length);
    memTitlePageImageStream.Position = 0;
    pbGameImages.Image = Image.FromStream(memTitlePageImageStream, true);
    memTitlePageImageStream.Close();
    memTitlePageImageStream = null;
    TitlePageImageBuffer = null;
    This step-by-step will work with all image file types except animated GIFs (standard GIFs work fine). It looks like it's just taking one frame from the animation and loading it into the picturebox. I need to be able to load the entire animation. Does any of
    the code above cause the animation to be lost? Any ideas?

    There is an ImageAnimator, so you may not need to use a byte array at all.
    ImageAnimator.Animate Method
    http://msdn.microsoft.com/en-us/library/system.drawing.imageanimator.animate(v=vs.110).aspx
    chanmm

  • NetBeans MobilityPack 5 BUG?? how can I send a byte array?

    I've created a WebApp and a simple servlet with one public method.
    public byte[] getStr(byte[] b) {
        String s = "A string";
        return s.getBytes();
    }
    Then I've used the "Mobile To Web App" wizard to generate stubs for that getStr method. And when I've tried to call that getStr method:
    byte[] aByte = "st".getBytes();
    byte[] b = client.getStr(aByte);
    I've got an error at Server in Utility.java in generated ReadArray method.
    /**
     * Reads an array from the given data source and returns it.
     * @param source The source from which the data is extracted.
     * @return The array from the data source
     * @exception IOException If an error occurred while reading the data.
     */
    public static Object readArray(DataInput in) throws IOException {
        short type = in.readShort();
        int length = in.readInt();
        if (length == -1) {
            return null;
        } else {
            switch (type) {
                case BYTE_TYPE: {
                    byte[] data = new byte[length];
                    in.readFully(data);
                    return data;
                }
                default: {
                    throw new IllegalArgumentException("Unsupported return type (" + type + ")");
                }
            }
        }
    }
    At this line "in.readFully(data);" I�ve got an EOFException.
    The the aByte = "st".getBytes(); was 2bytes long and at the server "int length = in.readInt();" it was 363273. Why?
    All the code was generated by NetBeans Mobility Pack 5; it's not mine, so is it a bug?
    How can I fix this?

    I found that bug. I've described it here
    http://www.netbeans.org/issues/show_bug.cgi?id=74109
    There's a solution there for how to fix the generated code.

  • View images in a datatable from byte[]

    I am trying to view images in a datatable where the image is
    a byte[]. This is what I tried... doesn't work.
    <h:dataTable rows="5" value="#{AirportList.airportList}" var="airport" binding="#{AirportList.airportData}">
              <h:column>
                <f:facet name="header">
                  <h:outputText value="airportCode"/>
                </f:facet>
                <h:outputText value="#{airport.airportCode}"/>
              </h:column>
              <h:column>
                <f:facet name="header">
                  <h:outputText value="airportMap"/>
                </f:facet>
                  <h:inputHidden id="ole" value="#{airport.airportMap}"/>
                  <h:outputText value="#{AirportList.myMap}"/>
                 <div>
                 <!--
                      try
                        byte[] pic = (byte[])request.getAttribute("AirportList.myMap");
                        response.setContentType("image/jpeg");
                        OutputStream os = null;
                        os = response.getOutputStream() ;
                       // os.write(pic);
                        os.close();
                        os = null;
                      catch (Exception ex)
                        out.println("Exception: " + ex.getMessage());
                        ex.printStackTrace();
                   -->
                  </div>
              </h:column>
              <h:column>
                <f:facet name="header">
                  <h:outputText value="airportSugBook"/>
                </f:facet>
                <h:outputText value="#{airport.airportSugBook}"/>
              </h:column>
            </h:dataTable>

    my backing code
    public class Airport
    {
      Collection AirportList;
      private UIData AirportData;
      byte[] MyMap;
      public Airport()
      {
      }
      public Collection getAirportList()
      {
        try{
          InitialContext context = new InitialContext();
          AirportLOBLocalHome home =  (AirportLOBLocalHome)context.lookup("java:comp/env/ejb/local/AirportLOBLocal");
          AirportLOBLocal local = home.create();
          AirportList = local.showAirports();
          Iterator it = AirportList.iterator();
          while(it.hasNext())
            ((OtnAirportLobDetailsLocalDTO)it.next()).getAirportMap();
          return AirportList;
        }
        catch(Throwable e)
        {
          e.printStackTrace();
        }
        return null;
      }
      public void setAirportList(Collection AirportList)
      {
        this.AirportList = AirportList;
      }
      public UIData getAirportData()
      {
        return AirportData;
      }
      public void setAirportData(UIData AirportData)
      {
        this.AirportData = AirportData;
      }
      public byte[] getMyMap()
      {
        OtnAirportLobDetailsLocalDTO ap = (OtnAirportLobDetailsLocalDTO)AirportData.getRowData();
        return ap.getAirportMap();
        // return null;
      }
      public void setMyMap(byte[] MyMap)
      {
        this.MyMap = MyMap;
      }
    }
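    For what it's worth, the commented-out scriptlet above hints at the usual approach: stream the bytes to the browser in a separate request rather than rendering them inline. A rough sketch of that idea (purely illustrative, not from the thread; AirportImageServlet and lookupAirportMap are made-up names):
    import java.io.IOException;
    import java.io.OutputStream;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    public class AirportImageServlet extends HttpServlet {
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            byte[] pic = lookupAirportMap(request.getParameter("code"));
            response.setContentType("image/jpeg");
            response.setContentLength(pic.length);
            OutputStream os = response.getOutputStream();
            os.write(pic);
            os.flush();
        }
        // Hypothetical lookup; in a real app this would go through the same
        // EJB (AirportLOBLocal) that the backing bean uses.
        private byte[] lookupAirportMap(String airportCode) {
            return new byte[0];
        }
    }
    The image column would then point at that URL with something like <h:graphicImage url="/airportImage?code=#{airport.airportCode}"/> instead of the hidden field and inline scriptlet.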

  • The VI is identifying the number of bytes to be read but the VISA Read VI is not able to read the data from the port.

    We are trying to communicate with the AT106 balance of Mettler Toledo. The VI is attached.
    We are sending "SI", which is a standard command that is recognised by the balance. The balance reads it. The indicator after the property node indicates that there are 8 bytes available on the serial port. However, the VISA Read VI fails to read the bytes at the serial port and gives the following error:
    Error -1073807253 occurred at VISA Read in visa test.vi
    Possible reason(s):
    VISA: (Hex 0xBFFF006B) A framing error occurred during transfer.
    The VI is attached.
    Thanks
    Vivek
    Attachments:
    visa_test.vi (50 KB)

    Hello,
    You should also definitely check the baud rates specified; a framing error often occurs when different baud rates are specified, as the UARTs will be attempting to transmit and receive at different rates, causing the receiving end to either miss bits in a given frame, or sample bits more than once (depending on whether the receiving rate is lower or higher respectively). You should be able to check the baud rate used by your balance in the user manual, and match it in your VI with the baud rate parameter of the VISA Configure Serial Port VI.
    Best Regards,
    JLS
    Sixclear

  • Importing the (active) Fixed Asset with Accumulated Ordinary Depreciation to SAP Business One 9

    Hi all,
    I'm trying to upload an active Fixed Asset to SBO (not a new one).
    for example:
    item:                          FixAsset
    Useful life :                48 (month)
    Remaining Life:        12 (month)
    APC(Historical cost): 10000
    Accumulated Ordinary Depr.: 7500
    So  Value Balance: 1500
    and Life Balance :     12 month
    I have tried to import the active item by Excel, following this link: Importing Fixed Asset Master Data from Microsoft Excel - SAP Business One 9.0 - SAP Library
    Every time I receive the message:
    Cannot import asset "fixasset"; a new asset's useful life and remaining life must be the same in depreciation area "AFA"

    Hi,
    Please check SAP note:
    2001876 - The system does not consider the Salvage Value nor the
    Remaining Book Value when you import assets
    Thanks & Regards,
    Nagarajan

  • How can I create files in unicode format without "byte order mark"?

    Hello everyone,
    I have to export files in UTF-8 format and have to send them to another partner system which runs Linux as its operating system.
    I have tried the different possibilities to create files with ABAP, but nothing works 100% the way I want.
    Some examples:
    1.)
    OPEN DATASET [filename] FOR OUTPUT IN TEXT MODE ENCODING UTF-8.
    If I create a file in this way and download it from the application server to the local system, the file format shown in a Unicode text editor like Notepad is "ANSI as UTF-8". This means I have no BYTE ORDER MARK inside.
    But it is also possible that the file format is only ANSI if the file contains no "special characters", isn't it?
    In my test cases I create 3 files. 2 of them have the format "ANSI as UTF-8", and one only "ANSI".
    After transfer to Linux I get UTF-8 twice and ASCII once as the file format.
    2.)
    OPEN DATASET [filename] FOR OUTPUT IN TEXT MODE ENCODING UTF-8 WITH BYTE ORDER MARK.
    With this syntax the result in the local editor looks OK. I really get "UTF-8" as the format.
    But I get problems with the system which receives the files.
    All files have the file format UTF-8 on Linux, but the interface/script cannot read a file with a BYTE ORDER MARK.
    This is a very big problem for me.
    Does anybody know if it is possible to force creation in UTF-8 without a BYTE ORDER MARK?
    This means more or less the first example, but all files should have UTF-8 format!
    Thanks in advance
    Christian

    So this means it is not possible to create a pure Unicode file without the byte order mark?
    You wouldn't happen to know how a file with a byte order mark should be read on a Linux system?
    Or whether this is possible or not?
    Regards
    Christian
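    For reference (not from the thread): the UTF-8 byte order mark is the three bytes 0xEF 0xBB 0xBF at the very start of the file. If the producing side cannot be changed, the receiving side can strip it before processing; a minimal Java sketch of that idea (file handling details are illustrative):
    import java.io.*;
    public class BomStripper {
        // Copies in to out, dropping a leading UTF-8 BOM (EF BB BF) if present.
        public static void stripBom(InputStream in, OutputStream out) throws IOException {
            PushbackInputStream pin = new PushbackInputStream(in, 3);
            byte[] head = new byte[3];
            int n = pin.read(head);
            boolean isBom = (n == 3) && (head[0] & 0xFF) == 0xEF
                                     && (head[1] & 0xFF) == 0xBB
                                     && (head[2] & 0xFF) == 0xBF;
            if (!isBom && n > 0) {
                pin.unread(head, 0, n);   // not a BOM: push the bytes back
            }
            byte[] buffer = new byte[8192];
            int count;
            while ((count = pin.read(buffer)) > 0) {
                out.write(buffer, 0, count);
            }
        }
    }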

  • JMS paging store - what is the difference between bytes and messages thresholds?

              You can activate and configure both "bytes paging" and "message paging". What
              is the difference? Why use one or the other or are both required?
              Thanks,
              John
              

    Hi Mr. Black,
              Cool name.
              Message paging is based on number of messages. Bytes paging is based
              on the size of message payload (properties, body, corr-id, and type),
              where the payload does not include the header size. One can set
              either or both -- if both are set, paging kicks in as soon
               as the first threshold is reached.
              As for which one to use, you may wish to set both. The former
              accounts for large numbers of small messages, and the latter
              handles large messages. (eg 1000 small 10 byte messages
              will charge 10,000 bytes toward quota but actually use up
              128,000 bytes of JVM memory once the header is thrown in...)
              Tom

  • Changing header and footer in massive projects

    Hi all,
    I am working with RH8, but I have to work on a project that was created in RH5.0.2. It contains a total of 249 projects plus one project into which all the projects are merged. The worst part is that the master page is only used in the parent project.
    My problem is:
    I have to update headers and footers. Do I need to update all 249 projects separately and then merge them into the final (parent) project, or is there another way?
    If I make changes in any of the projects, do I need to remerge it again? What should I watch out for while doing this?
    Do we need to regenerate all the CHM files again, or only the file in which we made changes?
    Can anyone suggest a checklist for managing a massive project in RH8?
    I tried to import the master page from the parent project into a child project, but the content gets duplicated.
    Thanks and Regards,
    Rahul

    Hi there
    If your CHM is failing a compile you will want to examine the output report to see what message is being posted. It may provide a clue. Assuming that fails for some reason, follow the steps below.
    Perhaps the Single Source Layout is corrupt. To  rule that out, create a new Single Source Layout and try again. If that fails...
    Odds are it's a single topic somewhere. The trick then becomes how to easily find the topic. So copy your project and delete half the topics. Compile. Does it work now? If so, the suspect topic is among the remaining half. Rinse and repeat as needed until you determine where the issue is.
    Cheers... Rick
