Loading large volumes of arbitrary binary data to a clip

It is easy to download external data in XML format to a movie
clip. However, what I need is to load really large volumes of
read-only binary data. In my case, a textual representation is not
an option. Is it possible to download an arbitrary array of bytes
into memory and then seek within this array to read individual bytes?
I don't think that ActionScript arrays like this one
var data:Array = [1,2,3,...];
could be a solution to my problem either. The reason is that the
virtual machine associates a lot of extra information with every
array member.
The only solution I came so far is to pack binary data as
strings,
var data:String = "\u0000\u1234\uabcd";
two bytes per character. There should be no storage
overhead, and seeking to an individual data member is trivial.
But I wonder: is there any better solution?
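For concreteness, the packing and byte-seek would look something like this (sketched here in Java rather than ActionScript; the helper name and the high-byte-first convention are just illustrative assumptions):

public class PackedBytes {
    // Reads byte i from a string that packs two bytes per 16-bit char,
    // high byte first (an assumed convention for this sketch)
    static int byteAt(String data, int i) {
        char c = data.charAt(i / 2);
        return (i % 2 == 0) ? (c >> 8) & 0xFF : c & 0xFF;
    }

    public static void main(String[] args) {
        // Built from a char array to sidestep escape issues with \u0000
        String data = new String(new char[] {0x0000, 0x1234, 0xabcd});
        System.out.println(byteAt(data, 2)); // prints 18 (0x12)
    }
}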

For AS2, I don't believe there's any option other than
to load it in as an encoded string and then decode it internally.
So if you have \u0000, as in the above example, you will find it
doesn't work.
var data:String = "\u0000\u1234\uabcd";
trace(data.length) //traces 0 (zero) because the first
character is a string terminator
I think you need an encoding method like base64 in the source
string and an equivalent decoder class for decoding to binary
inside Flash. I'm no expert on this stuff... others may know more,
or it could be a starting point for your research.
In the past I've used the meychi.com classes for this type of
thing. I couldn't find them online just now, but there's something
else here that may be useful:
http://www.svendens.be/blog/archives/8
With AS3, as I understand it, there's no problem, because
you can load binary data.
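For illustration, here is the encode/decode round trip sketched in Java (all names invented; in AS2 a base64 decoder class would play the role that java.util.Base64 plays here):

import java.util.Base64;

public class BinaryPayloadDemo {
    public static void main(String[] args) {
        // Raw binary payload, including a zero byte that would break
        // naive string packing
        byte[] original = {0x00, 0x12, 0x34, (byte) 0xAB, (byte) 0xCD};

        // Encode to base64 text, which survives transport as a plain string
        String encoded = Base64.getEncoder().encodeToString(original);

        // Decode back to raw bytes; each byte is then addressable by index
        // with no per-element storage overhead
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(decoded[0] & 0xFF); // prints 0; no terminator issue
    }
}

The trade-off is the usual base64 overhead of roughly 33% in the transported string, in exchange for safe handling of all 256 byte values.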

Similar Messages

  • Dealing with large volumes of data

    Background:
    I recently "inherited" support for our company's "data mining" group, which amounts to a number of semi-technical people who have received introductory level training in writing SQL queries and been turned loose with SQL Server Management
    Studio to develop and run queries to "mine" several databases that have been created for their use.  The database design (if you can call it that) is absolutely horrible.  All of the data, which we receive at defined intervals from our
    clients, is typically dumped into a single table consisting of 200+ varchar(x) fields.  There are no indexes or primary keys on the tables in these databases, and the tables in each database contain several hundred million rows (for example one table
    contains 650 million rows of data and takes up a little over 1 TB of disk space, and we receive weekly feeds from our client which adds another 300,000 rows of data).
    Needless to say, query performance is terrible, since every query ends up being a table scan of 650 million rows of data.  I have been asked to "fix" the problems.
    My experience is primarily in applications development.  I know enough about SQL Server to perform some basic performance tuning and write reasonably efficient queries; however, I'm not accustomed to having to completely overhaul such a poor design
    with such a large volume of data.  We have already tried to add an identity column and set it up as a primary key, but the server ran out of disk space while trying to implement the change.
    I'm looking for any recommendations on how best to implement changes to the table(s) housing such a large volume of data.  In the short term, I'm going to need to be able to perform a certain amount of data analysis so I can determine the proper data
    types for fields (and whether any existing data would cause a problem when trying to convert the data to the new data type), so I'll need to know what can be done to make it possible to perform such analysis without the process consuming entire days to analyze
    the data in one or two fields.
    I'm looking for reference materials / information on how to deal with these issues, particularly when a large volume of data is involved.  I'm also looking for information on how to load large volumes of data into the database (current processing of a typical
    data file takes 10-12 hours to load 300,000 records).  Any guidance that can be provided is appreciated.  If more specific information is needed, I'll be happy to try to answer any questions you might have about my situation.

    I don't think you will find a single magic bullet to solve all the issues.  The main point is that there will be no shortcut for major schema and index changes.  You will need at least 120% free space to create a clustered index and facilitate
    major schema changes.
    I suggest an incremental approach to address your biggest pain points.  You mention it takes 10-12 hours to load 300,000 rows, which suggests there may be queries involved in the process that require full scans of the 650 million row table.  Perhaps
    some indexes targeted at improving that process are a good first step.
    What SQL Server version and edition are you using?  You'll have more options with Enterprise (partitioning, row/page compression). 
    Regarding the data types, I would take a best guess at the proper types and run a query with TRY_CONVERT (assuming SQL 2012) to determine counts of rows that conform or not for each column.  Then create a new table (using SELECT INTO) that has strongly
    typed columns for those columns that are not problematic, plus the others that cannot easily be converted, and then drop the old table and rename the new one.  You can follow up later to address column data corrections and/or transformations. 
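    As a hedged sketch of that conformance check (the table, column, and connection details below are invented; the embedded T-SQL is the part that matters), run from Java via JDBC:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class TypeConformanceCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection string and table/column names
            String url = "jdbc:sqlserver://localhost;databaseName=Mining;integratedSecurity=true";
            // Counts rows whose varchar value does NOT convert cleanly to the
            // proposed type; TRY_CONVERT returns NULL on failure (SQL 2012+)
            String sql = "SELECT COUNT(*) AS bad_rows FROM dbo.ClientFeed"
                       + " WHERE Col001 IS NOT NULL"
                       + " AND TRY_CONVERT(int, Col001) IS NULL";
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                if (rs.next()) {
                    System.out.println("Non-convertible rows: " + rs.getLong("bad_rows"));
                }
            }
        }
    }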
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case).
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of table-space etc, and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat simply as VARCHAR2 for the most part.
    These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc. to the corresponding variable definition (for validation etc.) at runtime.
    CASE_ID VARCHAR2(13)
    COL001 VARCHAR2(10)
    ...
    COL250 VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc. (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway (a toy sketch of this pivot appears below, after the table layouts).
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
    Chris
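    For illustration only, here is a toy sketch of that column-by-column pivot in Java (the column roles come from the tables above; the types and the validation rule are invented):

    import java.util.ArrayList;
    import java.util.List;

    public class PivotSketch {
        // One long-thin row, mirroring the intermediate table above
        record ThinRow(long caseNumId, long variableId, String value, String status) {}

        // Pivots one short-fat record (COL001..COL250 passed as an array)
        // into long-thin rows, validating each value on the fly
        static List<ThinRow> pivot(long caseNumId, long[] variableIds, String[] cols) {
            List<ThinRow> rows = new ArrayList<>();
            for (int i = 0; i < cols.length; i++) {
                if (cols[i] == null) continue; // column not used by this file
                String status = validate(variableIds[i], cols[i]);
                rows.add(new ThinRow(caseNumId, variableIds[i], cols[i], status));
            }
            return rows;
        }

        // Placeholder for the per-variable rules described in the post
        static String validate(long variableId, String value) {
            return value.length() <= 10 ? "VALID" : "INVALID";
        }
    }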

  • Errors and exceptions in writing large binary data on sockets!!! urgent

    hi
    I am trying to write large binary data in the form of byte arrays on sockets.
    The data is as large as 512KB (524,288 bytes). I store the data (actually read from a file through FileInputStream) and then write it to the socket with lines like this:
    DataOutputStream dos =
    new DataOutputStream(new BufferedOutputStream(sock.getOutputStream()));
    dos.write(b);
    /* suppose b is the array reference in which the data is stored; sometimes I write with the offset + len variant */
    dos.flush();
    dos.close();
    sock.close();
    but the program is not stable: sometimes the whole 512KB is read on the other side and sometimes less, usually 64KB.
    The program is unthreaded.
    There is another problem: one side (reading or writing) sometimes gives this error:
    java.net.SocketException: Software caused connection abort: socket write error
    Please reply soon and give your suggestions.
    Thanks

    Umm, how are you reading the data on the other side?
    Some of your code snippet might help; your writing code seems OK. I've written a file transfer program in a similar fashion and have successfully tested it on different platforms (AIX, AS400, Solaris, Windows, etc.) without any problems, without needing to set the buffer sizes, and with files as large as 600MB. And you said you're testing this on the loopback?
    The point here is that you should never need to change any of the default TCP options to get program correctness. The options are for optimization and fine tuning. If you do need to change the options to get your program to work, then your program won't be able to scale under different loads.
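    The usual culprit in cases like this (a guess, since the reading code isn't shown) is assuming that a single read() call returns the whole 512KB. TCP is a stream, so a read may return as little as one buffer's worth, which would explain the suspicious 64KB. A minimal sketch of a safe read:

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.net.Socket;

    public class Receiver {
        // Reads exactly len bytes. A single InputStream.read() may return
        // fewer bytes than requested; readFully() loops until len bytes
        // have arrived (or throws EOFException if the stream ends early).
        static byte[] readAll(Socket sock, int len) throws IOException {
            DataInputStream in = new DataInputStream(sock.getInputStream());
            byte[] buf = new byte[len];
            in.readFully(buf);
            return buf;
        }
    }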

  • Loading Labview Binary Data into Matlab

    This post explains the Labview binary data format. I couldn't find all of this information in any one place so this ought to help anyone in the future.  I didn't want to add any overhead in Labview so I did all of my conversion in Matlab.
    The Labview VI "Write to Binary File" writes data to a file in a linear format using Big Endian numbers of the type wired into the "Write to Binary File" VI. The array dimensions are listed before the actual array data. 
    fid = fopen('BinaryData.bin','r','ieee-be'); % Open the binary file as big-endian
    Dim1 = fread(fid,1,'uint32'); % Read the first dimension's length (4 bytes)
    Dim2 = fread(fid,1,'uint32'); % Read the second dimension's length
    Dim3 = ...
    Each dimension's length is specified by 4 bytes: the first, second, third, and fourth bytes carry weights of 2^24, 2^16, 2^8, and 1 respectively, so the bytes 0 0 2 38 equate to 2*256 + 38 = 550 values for that particular dimension.
    As long as you know the number of dimensions and precision of your binary data you can load it.
    Data = fread(fid,prod([Dim1 Dim2 Dim3]),'double',0,'ieee-be'); % Load double precision data
    If you have appended multiple arrays to the same file in Labview you would repeat this procedure. Load each dimension then load the data, repeat.
    Data = fread(fid,prod([Dim1 Dim2 Dim3]),'int8',0,'ieee-be'); % Load int8 precision data or boolean data
    I had to create a function for my own purposes so I thought I'd share it with everyone else too.  I uploaded it to the Matlab File Exchange.  The file is named labviewload.m.
    This was tested on Matlab R2007a and Labview 8.2.

    Thanks. I had the same questions when I tried to load LabVIEW binary data into Matlab. 
    -John

  • Retrieve SQL from a Webi report and process it for large volumes of data

    We have a scenario where we need to extract large volumes of data into flat files and distribute them from the 'Teradata' warehouse; we usually call these 'Extracts'. But the requirement is that business users want to build their own 'Ad hoc Extracts'. The only way I can think of to achieve this is to build a universe, create the query, save the report without running it, then write a RAS SDK program to retrieve the SQL code from the report, save it into a .txt file, and process it directly in Teradata.
    Is there any predefined solution available with SAP BO, or any other tool, for this kind of scenario?

    Hi Shawn,
    Is there a VB macro to retrieve the SQL queries of the data providers of all the WebI reports in the CMS?
    Any information, or even a direction where I can find information, would be helpful.
    Thanks in advance.
    Ashesh

  • Power BI performance issue when loading large amounts of data from a database

    I need to load a data set from my database, which holds a large amount of data, and it takes a long time to initialize the data before I can build a report. Is there a good way to process large amounts of data for Power BI? Since many people analyze data with Power BI, are there any suggestions for loading large amounts of data from a database?
    Thanks a lot for the help

    Hi Ruixue,
    We have made significant performance improvements to Data Load in the February update for the Power BI Designer:
    http://blogs.msdn.com/b/powerbi/archive/2015/02/19/6-new-updates-for-the-power-bi-preview-february-2015.aspx
    Would you be able to try again and let us know if it's still slow? With the latest improvements, it should take between one half and one third of the time it used to.
    Thanks,
    M.

  • Safari keeps crashing when loading large data on web pages

    I have owned my iPad 3 for more than a year and never encountered this problem, but ever since I updated to the latest iOS version (7.1.1), Safari keeps crashing on my community site, where threads can have over 100 comments.
    Now whenever I open a thread page in Safari, it loads for a few seconds and then crashes; to make things worse, once in a while when it crashes it reboots my iPad. I have seen several similar questions from other people about Safari crashing when loading large amounts of data, with answers saying that something is wrong with the CSS, but I never got any idea of how to resolve this bug.
    Can anyone tell me how to prevent Safari from crashing every time I open a thread? Any solutions? Because I am getting really ******.
    (PS: I have already tried other browsing apps such as Google Search and Chrome; they still crash. I even tried letting a page load half-way and then stopping it, but that isn't going well either.)

    I have restored my iOS devices a number of times to resolve issues, and it has gone very smoothly every time. I am not going to lie, it takes a fair amount of time to back up, restore the iOS, and then restore from the backup, but it could very well resolve the issue.
    On the other hand, it may not help at all. Restoring the software is a standard troubleshooting measure, and that is why it is recommended when other suggestions aren't working. But before you restore, there are a couple of other things that you could try.
    Reset all settings: Settings > General > Reset > Reset All Settings. You will not lose any data when you do this, but it does take some time to enter all of the device settings again, so be aware of that.
    Another thing to try is to erase the device and start over. This is different from restoring to factory settings. Read this for more information:
    iOS: How to back up your data and set up your device as a new device

  • Large Volume Data Merge Issues with Indesign CS5

    I recently started using InDesign CS5 to design a marketing mail piece which requires merging data from Microsoft Excel.  With small tasks up to 100 pieces I do not have any issues.  However, I need to merge 2,000-5,000 pieces of data through InDesign on a daily basis, and my current laptop was not able to handle merging more than 250 pieces at a time; merging 250 pieces takes up to 30-45 minutes if I get lucky and the software does not crash.
    To solve this issue, I purchased a desktop with a second-generation Core i7 processor and 8GB of memory, thinking that this would solve my problem.  I tried to merge 1,000 pieces of data with this new computer, and I was forced to restart Adobe InDesign after 45 minutes of no results.  I then merged 500 pieces and the task was completed, but the process took a little less than 30 minutes.
    I need some help with this issue because I cannot seem to find other software that can design my mail piece the way InDesign can, yet the time it takes to merge large volumes of data is very frustrating, as the software does crash from time to time after waiting a good 30-45 minutes for completion.
    Any feedback is greatly appreciated.
    Thank you!

    Operating System is Windows 7
    I do not know what you mean by Patched to 7.0.4
    I do not have a crash report, the software just freezes and I have to do a force close on the program.
    Thank you for your time...

  • Ways to handle large volume data (file size = 60MB) in PI 7.0 file to file

    Hi,
    In a file-to-file scenario (flat file to XML file), the flat file is picked up by FCC and then sent to XI. In XI it performs message mapping and then an XSL transformation in sequence.
    The scenario works fine for small files (up to 5MB), but when the input flat file is larger than 60 MB, XI shows lots of problems, like (1) JCo call errors, or (2) sometimes XI even stops and we have to start it manually again for it to function properly.
    Please suggest a way to handle large volumes (file sizes up to 60MB) in a PI 7.0 file-to-file scenario.
    Best Regards,
    Madan Agrawal.

    Hi Madan,
    If every record of your source file is processed in a target system, maybe you could split your source file into several messages by setting this up in the Recordsets per Message parameter.
    However, you just want to convert your .txt file into an .xml file, so first try setting up the
    EO_MSG_SIZE_LIMIT parameter in SXMB_ADM.
    This may solve the problem in the Integration Engine, but the problem will persist in the Adapter Engine (I mean the JCo call error...).
    Take into account that the file is first processed in the Adapter Engine (File Content Conversion and so on)
    and then sent to the pipeline in the Integration Engine.
    Carlos

  • Horizontal scaling, with large amounts of binary data

    A question about horizontal scaling: I tried asking late last night, but no one was active. Basically, I have an app that needs to scale by adding new machines, all talking to a database in the backend. This is all fine, but I have some binary file storage requirements for the app (files over 80 megs in size). This introduces a concurrency issue, as I can't store this binary data on any individual server (because then it would be on one server, and not all, leaving the app in an inconsistent state). So where do I store the data to enforce a consistent state? Have the individual apps FTP the file to a central location?
    I am trying to avoid storing binary data in the database; does anyone have any suggestions on how to address this problem?

    I understand why you are trying to avoid storing binary data in a database but if you need to ensure that this data cannot be modified without the appropriate restrictions then using a database might make sense. You could even have a separate database just for the binary data because you will need to ensure you get the block sizing correct. Also, some databases might be better than others in this case. For example Oracle is likely to be significantly better than MySQL.
    If you do want to use files then you need to put the file in a central location and enforce locking the file to prevent concurrent modification. You can probably tie into a protocol that automatically handles this for you.
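    As a minimal sketch of the locking idea (assuming the central location is a shared filesystem path visible to every app server; note that java.nio file locks are advisory, so every writer must use them):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class CentralStore {
        // Writes data to a file on the shared volume while holding an
        // exclusive lock, so two app servers can't modify it concurrently
        static void write(Path file, byte[] data) throws IOException {
            try (FileChannel ch = FileChannel.open(file,
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE);
                 FileLock lock = ch.lock()) { // blocks until the lock is granted
                ch.write(ByteBuffer.wrap(data));
                ch.truncate(data.length); // drop any stale tail from older versions
            }
        }
    }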

  • Airport downloading binary data instead of loading web page?

    I've been having the usual problems with dropped connections for the past month, since I set up my new DSL service. It didn't bother me too much, since all I had to do was unplug my AirPort and plug it back in. This morning it happened again, and I decided to go ahead and update the firmware to 5.7.
    Whenever I try to open a web page, I get a little popup box stating that I am trying to download binary data, with the option of opening it in TextEdit, etc. I try a different web page and the same thing happens. I've tried this on multiple computers and multiple browsers with the same result. I then downgraded to 5.5.1: same result. Tried soft and hard factory resets. No help. It's not the modem, since I am connected to it via Ethernet right now with no problems. I can't believe I am the only one with this problem, but I've searched all over and cannot find a fix for this. Any suggestions (in English please, I'm not completely computer-literate)?

    Mac OS X (10.6.6)
    Well, there is one problem....

  • How to upload large binary data to dB so it can be read by any app?

    Hi everyone,
    Short version: how do you upload binary data into a MySQL blob field in such a way that it can be read by any application?
    Long version:
    I've been struggling with the problem of putting files into database BLOB fields. (MySQL and Database Connectivity Toolkit).
    I was initially building a query string and executing the query, but found that certain binary characters were causing failures (end-of-string terminators, etc.). A working solution was to encode the binary string, and that worked fine, although it bloated the dB a fair bit. I could decode in LabVIEW and then save the file as needed.
    Now the customer wants to be able to save the files using other apps, including the MySQL Query Browser, so an encoded file is no good.
    I found that using a parameterized query allows me to put the unencoded string into the dB, but it appends a 4-byte length to the front of the BLOB before inserting it into the dB. Some apps ignore these 4 bytes (such as .pdf viewers), but most do not.
    A related thread on NI discussion forums: http://forums.ni.com/ni/board/message?boar...ssage.id=354361 has no solution, and my support ticket at NI has been ongoing without answer for a while.
    Thanks,
    Ben

    The problem is the DCT. Using ADO it is fairly easy to insert binary data into a BLOB field. I have not tried it in MySQL, but it works fine in SQL Server, Oracle, Firebird and other free/open source databases I have tried. To get you started, see this thread.
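    For comparison, here is a minimal sketch of the same idea in Java via JDBC (the connection details, table, and file name are all invented); the driver streams the raw bytes into the BLOB with no length prefix, so any application can read the file back:

    import java.io.FileInputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BlobInsert {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://localhost/filedb", "user", "pass");
                 FileInputStream in = new FileInputStream("report.pdf");
                 PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO files (name, content) VALUES (?, ?)")) {
                ps.setString(1, "report.pdf");
                ps.setBinaryStream(2, in); // raw bytes, no encoding, no prefix
                ps.executeUpdate();
            }
        }
    }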
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps

  • Store large volume of Image files, what is better? File System or Oracle

    I am working on an IM (Image Management) application that needs to store and manage over 8,000,000 images.
    I am not sure whether I should use the file system to store the images, or the database (BLOB or CLOB).
    Until now I have only used the file system.
    Could someone who already has experience with storing large volumes of images tell me the advantages and disadvantages of using the file system versus the Oracle database?
    My initial database will have 8,000,000 images and it will grow by 3,000,000 per year.
    Each image will be between 200 KB and 8 MB in size, but the mean is 300 KB.
    I am using Oracle 10g. I read in other forums about PostgreSQL and Firebird that it isn't good to store images in the database because the database always crashes.
    I need to know whether it is the same with Oracle, and why. Can I trust Oracle for such a large service? Are there any tips for storing files in the database?
    Thanks for the help.
    Best Regards,
    Eduardo
    Brazil.

    1) Assuming I'm doing my math correctly, you're talking about an initial load of 2.4 TB of images with roughly 0.9 TB added per year, right? That sort of data volume certainly isn't going to cause Oracle to crash, but it does put you into the realm of a rather large database, so you have to be rather careful with the architecture.
    2) CLOBs store Character Large OBjects, so you would not use a CLOB to store binary data. You can use a BLOB. And that may be fine if you just want the database to be a bit-bucket for images. Given the volume of images you are going to have, though, I'm going to wager that you'll want the database to be a bit more sophisticated about how the images are handled, so you probably want to use [Oracle interMedia|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14302/ch_intr.htm#IMURG1000] and store the data in OrdImage columns which provides a number of interfaces to better manage the data.
    3) Storing the data in the database would generally strike me as preferable, if only because of the recoverability implications. If you store data on a file system, you will inevitably have cases where an application writes a file and the transaction to insert the row into the database fails, or the transaction to delete a row from the database succeeds before the file is deleted, which can make things inconsistent (images with nothing in the database, and database rows with no corresponding images). If something fails, you also can't restore the file system and the database to the same point in time.
    4) Given the volume of data you're dealing with, you may want to look closely at moving to 11g. There are substantial benefits to storing large objects in 11g with Advanced Compression (allowing you to compress the data in LOBs automatically and to automatically de-dupe data if you have similar images). SecureFile LOBs can also be used to substantially reduce the amount of REDO that gets generated when inserting data into a LOB column.
    Justin

  • Variable length binary data.

    I can't seem to find much on working with binary data in quantities smaller or larger than bytes. I am doing research on Huffman trees, and created a demonstration, but am having trouble imagining how I would read from or write to a file one bit at a time. As it stands, I am using BigInteger and its setBit and clearBit methods, terminating the codes with a 1, but I can only hope that there is a better way. Are there any suggestions for working with bits?
    This is a portion of the code I currently have that generates a BigInteger containing an arbitrarily long binary string:
    /**
     * This returns a BigInteger containing the binary code for this node.  This
     * is a prefix if called on a branch, and the symbol's code if called on
     * a leaf.
     * This is stored in a BigInteger not exceeding 2^31 - 1 bits in length,
     * terminated by a 1.
     * The code 000 would be returned as 0001, and 1010 would be 10101.
     * @return the code bits for this node, terminated by a 1
     */
    public BigInteger getCode() {
        BigInteger prefix;
        if (parent != null) {
            // Extend the parent's code by one bit: the bit stays 1 for a
            // left child and is cleared to 0 for a right child
            prefix = parent.getCode();
            int length = prefix.bitLength();
            prefix = prefix.setBit(length); // new terminating 1
            if (parent.right == this)
                prefix = prefix.clearBit(length - 1);
        } else {
            prefix = BigInteger.ONE; // the root contributes only the terminator
        }
        return prefix;
    }

    I had to do exactly this in the past, for exactly the same reason, by the way: working with Huffman encoding :)
    As the previous poster mentioned, you can use BitSet to represent your number in memory. If you want to write the results to a file, you will have to write your own implementations of InputStream and OutputStream.
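    A minimal sketch of such a bit-level writer (the class name and the zero-padding policy are invented; a matching reader would mirror it with a read-side bit buffer):

    import java.io.IOException;
    import java.io.OutputStream;

    // Buffers bits and flushes whole bytes to the underlying stream
    public class BitOutputStream implements AutoCloseable {
        private final OutputStream out;
        private int buffer = 0; // bits accumulated so far
        private int count = 0;  // number of bits in the buffer

        public BitOutputStream(OutputStream out) { this.out = out; }

        public void writeBit(boolean bit) throws IOException {
            buffer = (buffer << 1) | (bit ? 1 : 0);
            if (++count == 8) { // a full byte is ready
                out.write(buffer);
                buffer = 0;
                count = 0;
            }
        }

        @Override
        public void close() throws IOException {
            while (count != 0) writeBit(false); // zero-pad the final byte
            out.close();
        }
    }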
