LZW algorithm for data compression with partial clearing

hello,
i want to implement a zip utility, and for that I have to develop the LZW algorithm with partial clearing, i.e. the "unshrinking" method, in Java.
I am not finding any information about unshrinking on the net.
Can anyone help me?

LZW had patent issues, so it was replaced. I assume you don't want patent issues.
The replacement algorithm is already programmed for you via the Inflater and Deflater classes in java.util.zip. Unfortunately the doc is nearly incomprehensible. Fortunately for you, several of us have worked together and got it to work. Go to Advanced Language Topics and search for topics on Inflater. I've posted code there that is dead simple to use and would be a good foundation for any zip-like utility.
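
For a quick start, here is a minimal, self-contained round-trip sketch using those two classes (the class name, compression level, and buffer sizes are my own illustrative choices):

    import java.util.Arrays;
    import java.util.zip.DataFormatException;
    import java.util.zip.Deflater;
    import java.util.zip.Inflater;

    public class DeflateRoundTrip {
        public static void main(String[] args) throws DataFormatException {
            byte[] original = "some data to compress, some data to compress".getBytes();

            // Compress with the default zlib wrapper (nowrap = false).
            Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
            deflater.setInput(original);
            deflater.finish();
            byte[] compressed = new byte[original.length + 64]; // headroom for small inputs
            int compressedLength = deflater.deflate(compressed);
            deflater.end();

            // Decompress back into a buffer of the original size.
            Inflater inflater = new Inflater();
            inflater.setInput(compressed, 0, compressedLength);
            byte[] restored = new byte[original.length];
            int restoredLength = inflater.inflate(restored);
            inflater.end();

            System.out.println("round trip ok: "
                    + Arrays.equals(original, Arrays.copyOf(restored, restoredLength)));
        }
    }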

Similar Messages

  • Tip for Data Merge with Multiple Records (labels, etc.)

    I have seen many InDesign Data Merge questions about how to create sheets of mailing labels, especially the problem of getting only one label per page when you need 30 or more.
    Adobe's instructions are poor and incomplete, in that InDesign doesn't step out the records from a data source - it steps out the FRAMES that contain the data field placeholders.
    That is why you only need to place a text or image frame once on a single page - during the data merge, InDesign will create the additional FRAMES for each record, and it will create the pages required to hold them.
    You do have to set the desired spacing in the merge options so the frames are laid out correctly.
    If you create the frame on a Master page, ID allows you to update the data source (when it changes) in the Data Merge tool panel.
    These are very nice and robust features, but the documentation for them is confusing to many people.
    You will find more great in-depth help for Data Merge, with screen captures and attachments, here in the forum.

    For a multiple record merge you need one set of placeholders, then set the margins and spacing in the merge options so the positioning is correct.
    Warning: using Preview in a multiple record merge will corrupt the file. If you press the Preview button, use Undo after looking at the preview, then merge without using Preview.

  • Custom InfoObject for date, problem with offset

    Hi all,
    I have a BEx report presenting 0CALWEEK and another IO I created for dates.
    This IO has the same properties as 0CALWEEK (but it was not created as a reference to 0CALWEEK).
    In my report I'd like to use offsets for both IOs.
    It works perfectly for 0CALWEEK, but it doesn't work at all for the other IO. Actually, only the first offset is shown; for all other weeks, I get "#".
    Additionally, I made some calculations based on the week difference (using formula variables). Of course, every time a "#" appears, the calculation cannot be done (otherwise it works as it should).
    Could anybody help me out with this?
    Thanks in advance

    Hi Jagadeesh,
    did I understand you correctly that:
    1) offsets only work with standard characteristics? (this seems really weird to me...)
    2) if I want to use offsets with a Z_* characteristic, I will need ABAP?
    The problem is that I don't know anything about ABAP.
    Do you know any "typical" code to use to resolve this type of problem?
    Where do I have to use this code: in the query, in the IO, in a start routine somewhere?
    Thanks

  • Formula variable for date calculations with date-characteristics (2004s)

    Hi SDN,
    I'd like to calculate the number of days between two date characteristics. In 3.5, I used to create 2 formula variables of the type 'replacement path', with 'date' as the dimension indicator. In my formula, I used the 'process value as date' function for each variable, and I could perform calculations with them.
    I'm trying to do the same in 2004s. However, I can't create replacement paths with 'date' as the dimension indicator. So I use 'number' instead, but it doesn't work: my query shows 'x'.
    I can use the variables that I created using the 3.5 query designer as a workaround. But I hope there is a better solution.
    If other people experience the same problem, please respond. Then I know it's probably a bug.
    Kind regards,
    Daniel

    Daniel,
    Try to look at the formula variables defined before the upgrade and see what is different from the newly defined ones. I am guessing just the terminology used is different.
    If not, the date value might be blank for one of the formula variables used. Try to display the formula variable values as a KF in the query results and check what they show.
    I hope this helps.
    -Bala

  • Steps for Data Guard with one primary and 2 standby

    Hi,
    Database: 10.2.0.4, 11.2.0.1
    OS: Windows, Unix
    A ----------------> Primary database
    B ----------------> Standby Database 1
    C ----------------> Standby Database 2
    I want to configure *2 standby* databases for a single primary database.
    Let's take A, B and C as my machines. My Data Guard configuration will be like this: *archive logs will be moving* from A to B and from A to C.
    If I do a switchover between A and B, then B is primary and the remaining A and C are standby databases. At this stage, archive logs should move from B to A and from B to C. The same should happen from C to A and from C to B if I do a switchover between B and C. If everything is fine, I will then switch back to the main primary database (A).
    How do I have to set the PFILE on all machines, with parameters like
    LOG_ARCHIVE_DEST_1=LOCATION=<PATH> -- LOCAL ARCHIVE PATH
    LOG_ARCHIVE_DEST_2=SERVICE=
    LOG_ARCHIVE_DEST_3=SERVICE=
    FAL_SERVER=
    FAL_CLIENT=
    STANDBY_FILE_MANAGEMENT=
    In my tnsnames.ora, primary, standby1 and standby2 are my service entries, and these are the same on all of my machines.
    Please suggest how I can configure my pfiles on all machines.
    Thanks,
    Sunand

    Not yet, but now you have me interested.
    Please consider Flashback.
    I still have to test but here's my take:
    PRIMARY SETTINGS
    *.FAL_SERVER=STANDBY
    *.FAL_CLIENT=PRIMARY
    *.STANDBY_FILE_MANAGEMENT=AUTO
    *.DB_UNIQUE_NAME=PRIMARY
    *.LOG_FILE_NAME_CONVERT='STANDBY','PRIMARY'
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARY'
    *.log_archive_dest_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
    *.log_archive_dest_3='SERVICE=STANDBY2 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY2'
    *.LOG_ARCHIVE_DEST_STATE_1=ENABLE
    *.LOG_ARCHIVE_DEST_STATE_2=ENABLE
    *.LOG_ARCHIVE_DEST_STATE_3=ENABLE
    *.LOG_ARCHIVE_MAX_PROCESSES=30
    STANDBY 1 SETTINGS
    *.FAL_SERVER=PRIMARY
    *.FAL_CLIENT=STANDBY
    *.STANDBY_FILE_MANAGEMENT=AUTO
    *.DB_UNIQUE_NAME=STANDBY
    *.LOG_FILE_NAME_CONVERT='PRIMARY','STANDBY'
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY'
    *.log_archive_dest_2='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
    *.log_archive_dest_3='SERVICE=STANDBY2 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY2'
    *.LOG_ARCHIVE_DEST_STATE_1=ENABLE
    *.LOG_ARCHIVE_DEST_STATE_2=DEFER
    *.LOG_ARCHIVE_DEST_STATE_3=DEFER
    *.LOG_ARCHIVE_MAX_PROCESSES=30
    STANDBY2 SETTINGS
    *.FAL_SERVER=PRIMARY
    *.FAL_CLIENT=STANDBY2
    *.STANDBY_FILE_MANAGEMENT=AUTO
    *.DB_UNIQUE_NAME=STANDBY2
    *.LOG_FILE_NAME_CONVERT='PRIMARY','STANDBY2'
    *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STANDBY2'
    *.log_archive_dest_2='SERVICE=STANDBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY'
    *.log_archive_dest_3='SERVICE=PRIMARY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIMARY'
    *.LOG_ARCHIVE_DEST_STATE_1=ENABLE
    *.LOG_ARCHIVE_DEST_STATE_2=DEFER
    *.LOG_ARCHIVE_DEST_STATE_3=DEFER
    *.LOG_ARCHIVE_MAX_PROCESSES=30
    The first test slapped me. Looking at 409013.1 Cascaded Standby Databases

  • What is QDX for data exchange with SAP

    Hi,
    Anyone know what QDX is? I was told that it is a data exchange interface (in a higher layer than BAPI). I could not find any relevant information on the Web so far.
    Please share with me if you have any information.
    Thanks
    Andy Tan

  • Which type of index is useful for date columns with time stamp

    Hi all,
    I am using a date column in the WHERE clause of an SQL query. The values stored in the date column include a timestamp. The query is very slow, and there is no index on the date column.
    Can anybody suggest which index is better for date columns?
    Thanks

    "I am using date column in the where clause of an SQL Query."
    Dates are hard queries to tune. This:
    WHERE start_date BETWEEN to_date('01-SEP-05') AND to_date('02-SEP-05')
    ...probably requires a very different execution plan to this:
    WHERE start_date BETWEEN to_date('01-JAN-01') AND to_date('02-SEP-05')
    Just bunging an index on the date column may speed up your specific query but break something else. So be careful.
    Cheers, APC

  • Inconsistent compression with Deflater/Inflater

    I am trying to (un)compress some data using the Deflater and Inflater classes. I want GZIP compression, so I set nowrap to true. The javadoc for the Inflater class says something mysterious: "Note: When using the 'nowrap' option it is also necessary to provide an extra "dummy" byte as input. This is required by the ZLIB native library in order to support certain optimizations." What does that mean?
    I have written a test program which checks whether the original data and the uncompressed data are the same. When I run this program I get about 2 errors per run (Inflater not finished).
    Any ideas?
    import java.util.zip.DataFormatException;
    import java.util.zip.Deflater;
    import java.util.zip.Inflater;

    public class ZipTest {

        public static int DATA_SIZE = 16080;
        public static int MAX_DATA_SIZE_COMP = (int)(DATA_SIZE * 1.5d);

        private static byte[] createTestBytes(int size) {
            byte[] result = new byte[size];
            for (int i = 0; i < result.length; i++) {
                // result[i] = (byte)(i % 256);
                // result[i] = 0;
                result[i] = (byte)(Math.random() * 256);
            }
            return result;
        }

        public static void testGZIP() {
            byte[] data = createTestBytes(DATA_SIZE);

            Deflater def = new Deflater(5, true);
            def.setInput(data);
            def.finish();
            byte[] comp = new byte[MAX_DATA_SIZE_COMP];
            int compBytes = def.deflate(comp);
            def.end(); // release the native zlib resources

            byte[] decompData = new byte[DATA_SIZE];
            Inflater inf = new Inflater(true);
            inf.setInput(comp, 0, compBytes);
            int decompBytes = 0;
            try {
                decompBytes = inf.inflate(decompData);
            } catch (DataFormatException exp) {
                System.err.println(exp);
            }
            if (!inf.finished()) {
                System.err.println("Inflater not finished " + DATA_SIZE + " " + inf.getRemaining());
            }
            inf.end(); // release the native zlib resources

            int nDiffs = 0;
            for (int i = 0; i < DATA_SIZE; i++) {
                if (data[i] != decompData[i]) {
                    nDiffs++;
                }
            }
            if (nDiffs > 0) {
                System.err.println(nDiffs + " diffs");
            }
        }

        public static void main(String[] args) {
            for (int i = 0; i < 5000; i++) {
                DATA_SIZE = (int)(Math.random() * (1024 * 64));
                MAX_DATA_SIZE_COMP = (int)(DATA_SIZE * 1.5d) + 1024;
                testGZIP();
            }
            System.out.println("Finished.");
        }
    }

    Does nobody have an idea why data compressed with Deflater(5, true) cannot always be decompressed by Inflater(true)? What am I doing wrong?
    Thanx
    Ulrich
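
    For what it's worth, the javadoc note quoted at the top of this thread suggests one fix: with nowrap set to true, give the Inflater one extra "dummy" byte of input after the compressed data. A minimal sketch of that workaround (the class and method names are mine, and the buffer handling is simplified):

    import java.util.zip.DataFormatException;
    import java.util.zip.Inflater;

    public class NowrapFix {
        // Inflate raw-deflate data produced with new Deflater(level, true).
        // Per the Inflater javadoc, nowrap mode needs one extra dummy byte
        // of input so the native ZLIB code can finish the stream.
        public static byte[] inflateNowrap(byte[] comp, int compBytes, int originalSize)
                throws DataFormatException {
            byte[] input = new byte[compBytes + 1];          // +1 dummy byte, left as 0
            System.arraycopy(comp, 0, input, 0, compBytes);

            Inflater inf = new Inflater(true);               // nowrap = true
            inf.setInput(input);
            byte[] out = new byte[originalSize];
            int n = inf.inflate(out);
            boolean done = inf.finished();
            inf.end();
            if (!done || n != originalSize) {
                throw new DataFormatException("Inflater did not finish");
            }
            return out;
        }
    }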

  • Data compression can have negative impact on application ?

    Hi,
    They are going to analyse the table/index structure and go ahead with data compression in order to speed up performance.
    We have been asked to verify whether data compression will affect the application or not.
    For example: through the application we run one process which rebuilds the index of a big table, so that process may be affected.
    Could you please help me with which areas I should focus on and investigate before asking them to proceed with data compression?
    -Vaibhav Chaudhari

    This article will give you most of the answers:
    http://technet.microsoft.com/en-us/library/dd894051(v=sql.100).aspx

  • About data compression

    The latest version of Berkeley DB supports data compression with its set_bt_compress method.
    I created a database using the default data compression method provided by Berkeley DB, like this:
    DB *dbp;
    db_create(&dbp, inenv, 0);
    dbp->set_flags( dbp, DB_DUPSORT );
    dbp->set_bt_compress(dbp, NULL, NULL);
    Then I insert key/data pairs.
    The keys are random char arrays;
    the data are char arrays with the same content.
    Now the problem is: the compressed database file is the same size as the one where I didn't use the compress method.
    Can someone tell me why? Thanks

    Hi,
    This is likely because the default compression function does not have much to work with.
    Specifying NULL for both compression and decompression functions in the DB->set_bt_compress method call implies using the default compression/decompression functions in BDB. Berkeley DB's default compression function performs prefix compression on all keys and prefix compression on data values for duplicate keys.
    You haven't specified a prefix or key comparison function (DB->set_bt_prefix, DB->set_bt_compare), hence a default lexical comparison function is used as the prefix function. Given that your keys are random char arrays, the default lexical comparison function may not perform very well in identifying efficient (large-sized) prefixes for the keys.
    Also, as the keys are truly random, it's unlikely that you'll have duplicates, so there's likely nothing to compress on data values for duplicate keys.
    Even if the compression function does compress some keys' prefixes or the prefixes of duplicates' data items, if the compressed items (and uncompressed ones) still need to be stored on the same number of database pages as in the case without compression, you'll not see any difference in database file size.
    Regards,
    Andrei
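
    To see why random keys defeat prefix compression, here is a toy sketch (plain Java, not the Berkeley DB API; names and sizes are illustrative) that measures the common prefix between adjacent sorted keys - the amount a prefix compressor could strip. With random keys the total is tiny:

    import java.util.Arrays;
    import java.util.Random;

    public class PrefixDemo {

        // Length of the common prefix of two keys - what a prefix compressor
        // can strip from the second of two adjacent keys on a sorted page.
        static int sharedPrefix(byte[] a, byte[] b) {
            int n = Math.min(a.length, b.length);
            int i = 0;
            while (i < n && a[i] == b[i]) {
                i++;
            }
            return i;
        }

        public static void main(String[] args) {
            Random rnd = new Random();
            byte[][] keys = new byte[1000][16];
            for (byte[] k : keys) {
                rnd.nextBytes(k);
            }

            // Sort lexicographically, as keys are ordered in a btree.
            Arrays.sort(keys, (x, y) -> {
                int n = Math.min(x.length, y.length);
                for (int i = 0; i < n; i++) {
                    int c = Byte.compare(x[i], y[i]);
                    if (c != 0) {
                        return c;
                    }
                }
                return Integer.compare(x.length, y.length);
            });

            long saved = 0;
            for (int i = 1; i < keys.length; i++) {
                saved += sharedPrefix(keys[i - 1], keys[i]);
            }
            // For 1000 random 16-byte keys this is typically less than one
            // byte per key - almost nothing for prefix compression to remove.
            System.out.println("bytes removable by prefix compression: " + saved);
        }
    }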

  • Best  Course  for Data Warehousing

    Hi,
    I am planning to join a data warehousing course. I heard there are a lot of courses in data warehousing:
    Data warehousing with ETL tools or
    Data warehousing with Crystal Reports or
    Data warehousing with Business object or
    Data warehousing with Informatica or
    Data warehousing with Bo-Webel or
    Data warehousing with Cognos or
    Data warehousing with Data Stage or
    Data warehousing with MSTR or
    Data warehousing with Erwin or
    Data warehousing with oracle.
    Please suggest which is the best to choose and which has more scope, because I don't know the ABC of data warehousing, but I have some experience in Oracle.
    Is it a must that I have work experience in data warehousing before I can get a job? Please tell me the best book for data warehousing, one which starts from scratch. Please give your suggestions on my queries.
    Thanks & Regards,
    Raji

    Hi,
    Basically, a data warehouse is a concept. To develop a DW, we mainly need two tools: an ETL tool and a reporting tool.
    A few famous ETL tools are:
    Informatica
    Data Stage
    A few famous reporting tools are:
    Crystal Reports
    Cognos
    Business object
    As a DW developer you should know at least one ETL tool and at least one reporting tool. The combination is your choice; it is better to find out the best combination in terms of the job market, and then learn those.
    Erwin is a data modeling tool. It can also be used in a DW implementation. You already have experience with Oracle, so my advice is to go for data warehousing with Oracle or data warehousing with Informatica, and learn one reporting tool. I do not know whether there is any reporting tool available from Oracle.
    My suggestions on books:
    Data Warehousing Fundamentals by Paulraj Ponniah and
    The Data Warehouse Toolkit.
    http://www.inmoncif.com/about.html is one of the best site for Datawarehouse.
    With rgds,
    Anil Kumar Sharma .P
    Assigning points is the way to say thanks on the SDN site.

  • What bean for date selection should I choose?

    I need a bean for date selection with the following features:
    1) Period selection
    2) Internationalization
    I found some components:
    1) http://www.java-calendar.com/
    (+) supports all needed features
    (-) $50 :(
    2) http://www.toedter.com/
    (+) free, internationalization
    (-) no period selection
    3) http://sourceforge.net/projects/jdatechooser
    (+) free, supports all needed features
    (-) project is too young; I'm searching for a more mature project
    I have no time for a long search, so please advise me on a component.

    From your own findings it's between 1 and 3, but 3 is "too young", so it looks like 1. Spend the $50.

  • How to replace blank values in DATE field with 00000000

    Hi
    I have a DSO with a date field in which there are blank values. I want to replace the blank values in the date field of the DSO with 00000000; because of these blank values, the report gives an ORA error.
    How do we replace the blank values in the DSO for historical data and also for new loads?
    Please advise.
    Thank you.
    Regards,
    Pavan.

    Hi Suman,
    I'm trying to run a query built on a DSO. The DSO has a field 'Start Date'. This Start Date InfoObject has 0DATE as its reference characteristic.
    This Start Date field has blank values, due to which I'm getting the ORA-01722 error. I came across many threads on the same topic. As mentioned in one of the threads, I have written a program to update 'Start Date' with 00000000. The code written is: UPDATE /BIC/AZ_MONINV00 SET /BIC/ZSTR_DTE = '00000000' WHERE /BIC/ZSTR_DTE = ' '.
    Now when I see the data in the active data table of the DSO, the blank value is replaced with '00000000'.
    But if I right-click on the DSO and choose display data, the 'Start Date' field is blank. The query now also executes without any error, but in the query output the 'Start Date' field has "#" values.
    Can anyone suggest on how to remove these "#" values in the report?

  • Coming home for two weeks with unlocked 3GS.  advice?

    I bought my 3GS in Taiwan unlocked and I've been using it with my carrier over here. I'll be coming home for two weeks and I was wondering if anyone has advice on how to get a pay-as-you-go plan for when I'm at home. I'll be in Portland. Any advice would be appreciated.
    thanks a lot!

    Not sure if there is a local/regional GSM carrier in that area, but if not, the only other GSM carrier in the U.S. that comes anywhere close to AT&T in terms of network coverage is T-Mobile, and I don't believe T-Mobile's 3G network in the U.S. is compatible with the 3G and 3GS, so it would be EDGE or slower for data access with T-Mobile.
    You can get data access with a pay-as-you-go plan, but data access with AT&T's network via pay-as-you-go will be expensive. AT&T is the only official carrier for the iPhone in the U.S. at this time.

  • Item Interest Calculation for partially cleared items

    Hi
    We need to do interest calculation on customer line items. The T-code we are using is FINT. We have set an interest indicator for Item Interest Calculation, with interest calculation based on items cleared with payments. The requirement is that interest should be calculated even on partially cleared items. Suppose a customer invoice is generated on 1.1.2009 for INR 100000 and becomes due for payment on 30.1.2009. Now on 10.2.2009, a partial payment is received against this invoice for INR 30000. The system should calculate interest on INR 30000 for 11 days. Then on 20.2.2009, the remaining payment of INR 70000 is received. In that case, interest should be calculated on INR 70000 for 21 days @ 1.25% PM. In the current configuration, when we define that the system should calculate interest on open items cleared with payments, the system calculates interest on INR 100000 @ 1.25% for 21 days. Please suggest.
    Regards
    Sanil Bhandari

    Hi, you can check all the steps below with the specific fields; I think it works perfectly, but check it:
    1. Define Interest Calculation Types
    Here you enter the interest rate type as "S" (balance interest calculation).
    2. Prepare Account Balance Interest Calculation
    Here you enter the interest calculation frequency (monthly, quarterly, etc.), calendar type G, and select the "balance plus interest" checkbox.
    3. Define Reference Interest Rates
    Here you enter the date and currency.
    4. Define Time-Dependent Terms
    Here you enter the currency, effective-from date, sequential number, and term (Debit interest: balance interest calc. or Credit interest: balance interest calc.), plus the reference interest rate you defined in the previous step.
    5. Enter Interest Values
    Here you enter the interest rate for that reference interest type.
    6. Prepare G/L Account Balance Interest Calculation
    Here you enter your G/L accounts:
    0001            Interest received (int received a/c)
    0002            Interest paid      (int paid a/c)
    0011            Pt vl.min.int.earned(int received a/c)
    0012            Pst vl.min.int.paid(int paid a/c)
    0013            Pst vl.dt.int.earned(int received a/c)
    0014            Past val.dt.int.paid(int paid a/c)
    0015            Calc.per.int.earned(int received a/c)
    0016            Calc.period int.paid(int paid a/c)
    1000            G/L account (earned)(Loan giving a/c)
    2000            G/L account (paid) (Loan taking a/c)
    After that you can post a transaction and execute your transaction code. I hope this is helpful for you.
    Regards,
    Nauma.
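
    To make the requirement in the original question concrete, here is a toy sketch (plain Java, not SAP configuration or ABAP; the 30-day month basis is my assumption) of the per-payment calculation the poster expects:

    public class PartialClearingInterest {

        // Interest on one partial payment: amount * monthly rate * days / 30.
        // The 30-day month is an assumption; SAP would use the configured
        // interest calendar.
        static double interest(double amount, int daysLate, double ratePerMonth) {
            return amount * (ratePerMonth / 100.0) * daysLate / 30.0;
        }

        public static void main(String[] args) {
            // Invoice due 30.1.2009: INR 30000 paid 10.2.2009 (11 days late),
            // INR 70000 paid 20.2.2009 (21 days late), at 1.25% per month.
            double total = interest(30000, 11, 1.25) + interest(70000, 21, 1.25);
            System.out.printf("expected interest: INR %.2f%n", total);
        }
    }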
