Compress / Uncompress InfoProvider

Hi,
I have compressed the InfoProvider and now I want to uncompress it. Is that possible? I am on version BW 3.1.
There is a custom ABAP program which uses the datapacket table to populate custom tables. If I can find the DIMID field value for each compressed request, I should be able to recreate the custom table.
I did the compression yesterday.
Best regards,
Jai

Hi,
Pizzaman is correct. This is Mission Impossible!
Once a request is compressed, its records are merged with the existing records in the E table. Even if you have not deleted the compressed requests, you cannot uncompress them.
You have two options.
1. If you still have all the history records in your source system or PSA, you can do a full load and start your delta again.
2. If you don't have all the history data, what I can suggest is that you reverse the requests you want to uncompress. To reverse a request, change your transformation rules (or transfer rules) to negate all key figures: key figure = 0 - key figure. Activate your rules, load only the requests you are interested in again, and compress them. Since the new transfer rule is exactly the same as the old one except that the key figures have the opposite sign, this load-and-compress reverses your previous load. After the reversal, restore your transfer rules to the original and load those requests again; this time do NOT compress them. Unless you really have to, I don't believe it is worth taking so much effort.
If your destination object is newly created and there is no data except the requests you want to uncompress, then you can simply delete all the data and reload the requests without compressing.

Similar Messages

  • Compression / UnCompression of Files

I am using Java Swing as the front end and Visual FoxPro as the database.
In an application, I need to back up the database files. I need a compressed backup, i.e. the files should be zipped (as WinZip does) by the Java application itself so they can be stored on a floppy. Later, during restore, the application should automatically unzip the archive and copy the files into the respective folder on the hard disk.
I have tried java.util.zip's ZipInputStream and ZipOutputStream, and GZIPInputStream and GZIPOutputStream.
Compressing works, but how do I extract the files again?
Please give me some elaborate clues. A solution asap would be appreciated.

In the Coldtags Suite
http://coldjava.hypermart.net/jsp.htm
there is a tag which is helpful for making a GZIP archive and then uncompressing it.
Have a look; I am sure it will help you.
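For what it's worth, extraction is just the mirror image of compression: read the entries back with java.util.zip.ZipInputStream. Below is a minimal sketch of the unzip side (class name, buffer size and destination directory are illustrative, not from the original post):

    import java.io.*;
    import java.util.zip.*;

    public class ZipExtractor {
        // Extracts every entry of a zip archive into destDir,
        // recreating directories as they are encountered.
        public static void unzip(File zipFile, File destDir) throws IOException {
            try (ZipInputStream zin = new ZipInputStream(new FileInputStream(zipFile))) {
                byte[] buf = new byte[4096];
                ZipEntry entry;
                while ((entry = zin.getNextEntry()) != null) {
                    File out = new File(destDir, entry.getName());
                    if (entry.isDirectory()) {
                        out.mkdirs();
                    } else {
                        out.getParentFile().mkdirs();
                        try (FileOutputStream fos = new FileOutputStream(out)) {
                            int n;
                            while ((n = zin.read(buf)) > 0) {
                                fos.write(buf, 0, n);
                            }
                        }
                    }
                    zin.closeEntry();
                }
            }
        }
    }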

• Compression/uncompression issue over sockets

Only one thread is running at the time of the test.
Java: Java(TM) SE Runtime Environment (build 1.7.0_02-b13)
I have a problem with the following scenario.
A client, inside a loop, reads data from a file, does some processing of the incoming data, compresses it and sends it to the server over a socket. The server, at the first read, throws *"java.util.zip.ZipException: invalid distance too far back"*.
The client-side compression code snippet is:
    ==================================
try {
    Deflater defltr = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
    DeflaterOutputStream deflOpStrm = new DeflaterOutputStream(new BufferedOutputStream(
        thrdHlpr.getClntSkt().getOutputStream()), defltr, BUF_SIZE); // BUF_SIZE = 10240 ... 1024/512
    DataOutputStream opStrm = new DataOutputStream(deflOpStrm);
    while (!isInterrupted()) {
        while (true) {
            while (0 != chBufLen && -1 != chBufLen) {
                char[] chBuff = new char[BUF_SIZE]; // BUF_SIZE = 10240 ... 1024/512
                chBufLen = inFile.read(chBuff, 0, BUF_SIZE);
                if (0 < chBufLen) {
                    offset += chBufLen;
                    rawBuf.append(chBuff, 0, chBufLen);
                    // Do my things with the data and transfer data to outBuf
                    opStrm.writeUTF(outBuf.toString());
                    deflOpStrm.flush();
                    opStrm.flush();
                    // I have tried with and without calling finish & reset
                    //deflOpStrm.finish();
                    //defltr.reset();
                    chBuff = null;
                }
            } // end while (0 != chBufLen && ...
        } // end while (true)
    } // end while (!isInterrupted ...
} // catch block ...
The server-side uncompress code is:
    ======================================
try {
    Inflater infltr = new Inflater(true);
    InflaterInputStream infInputStrm = new InflaterInputStream(clntSokt.getInputStream(), infltr, BUF_SIZE / 2);
    DataInputStream dInptStrm = new DataInputStream(infInputStrm);
    while (true) {
        String result = dInptStrm.readUTF(); // Exception is thrown here
        output.seek(output.length());
        output.writeUTF(result);
        // Have tried with and without reset.
        //infltr.reset();
    } // end while (true ...
} // catch block ...
The stack trace is:
    =============================
    java.util.zip.ZipException: invalid distance too far back
         at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
         at java.io.DataInputStream.readFully(DataInputStream.java:195)
         at java.io.DataInputStream.readUTF(DataInputStream.java:609)
         at java.io.DataInputStream.readUTF(DataInputStream.java:564)
Since I have tried this many different ways, I would most appreciate the time and effort of anyone with a definite answer taking a shot at it.
    Thanks very much for your time and answer.

Please don't double post.
    java.util.zip.ZipException: invalid stored block lengths
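One avenue worth checking (an assumption, not a confirmed diagnosis of the post above): on Java 7, DeflaterOutputStream only emits a deflate SYNC_FLUSH block on flush() when it is built with the four-argument constructor; without it, a flushed message can arrive truncated mid-block and the inflater may fail. A minimal sketch of wiring that up (the wrap helper and names are hypothetical):

    import java.io.*;
    import java.net.Socket;
    import java.util.zip.*;

    public class SyncFlushWriter {
        // Wraps a socket's output stream so that each flush() emits a
        // deflate SYNC_FLUSH block (the trailing 'true', Java 7+), letting
        // the server-side inflater decode every message as it arrives.
        public static DataOutputStream wrap(Socket socket, int bufSize) throws IOException {
            Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
            DeflaterOutputStream deflated = new DeflaterOutputStream(
                    socket.getOutputStream(), deflater, bufSize, true); // true = syncFlush
            return new DataOutputStream(deflated);
        }
    }

The server side would stay as posted, with new Inflater(true) matching the raw (nowrap) deflater.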

  • Table and Index compression in data warehouse - thoughts?

    Hi,
We have a data warehouse with large fact tables and materialized views of this data.
Approx 3 million inserts per day; weekends about 12 million.
The fact tables are expected to reach about 200 million rows, and a couple will reach 1-3 billion.
Tables are partitioned and have bitmap indexes.
Just wondered what the thoughts were about compressing large fact tables and mviews, both from the point of view of ETL into them and reporting from them afterwards.
I take it we can compress/uncompress accordingly without any problem?
    Many Thanks

After compression, most SELECT statements would not get slower. Actually, many can get faster due to reduced IO and buffer needs.
The situation with DML is more complex. It depends on the exact compression options (basic or advanced) and the DML (INSERT, UPDATE, direct load, ...), but generally DML is negatively affected by compression.
In a data warehouse (DW), it is usually quite beneficial to compress partitions or tables that contain data that is not supposed to be modified (read only or read mostly). Please note that in many cases you do not have to compress while you are loading the data – you can do that later.
You can also consider compressing some of your B-tree indexes (if you use them in your DW system).
    Iordan Iotzov
    http://iiotzov.wordpress.com/
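As a concrete illustration of the "compress later" point (a sketch only; connection details, table, partition and index names are invented, and the available options depend on your Oracle version and license):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CompressOldPartition {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/DWH", "etl_user", "secret");
                 Statement stmt = con.createStatement()) {
                // Compress a partition once its load window has closed;
                // existing rows are rewritten in compressed form.
                stmt.execute("ALTER TABLE sales_fact MOVE PARTITION p_2011_q4 COMPRESS");
                // Moving the partition marks its local index partitions
                // UNUSABLE, so rebuild them afterwards.
                stmt.execute("ALTER INDEX sales_fact_bix REBUILD PARTITION p_2011_q4");
            }
        }
    }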

  • Viewing a BLOB Data (Including Compressed Data) in Classic Report APEX 3.2

    Hi All,
I have a table with a BLOB field.
That BLOB field contains images in .tiff format.
I need to display those images in an APEX 3.2 Classic Report.
I am able to achieve this by following the instructions in the link below:
http://st-curriculum.oracle.com/obe/db/apex/r31/apex31nf/apex31blob.htm#t4
Now my problem is that some of the images in my table are in compressed format.
Those images are not displayed in the report; instead it shows the message 'No Preview Available'.
How can I extract those images and display them in the Classic Report?
Please help me to achieve this.
    Thanks & Regards,
    Sakthi.

What kind of compression are we talking about here?
- If it's some kind of native image compression method, then you could try the ORDImage utility; here's a thread which discusses that: {thread:id=1048248}
- If it's compressed using zip or some other file compression utility, you may need to load Java code (into the DB) that can uncompress it for you, then call it from a procedure which uncompresses it (using the Java class) and sends it back as an image (the programmatic way of showing images in APEX).
These posts should be of assistance:
- Extract XML from docx. Forget the post title; it has some code which uncompresses a zip file in the database (ignore the rest of the post, if irrelevant).
- Another posting about the same requirement, but with code for unzipping which you can adapt.
- If you are not in a hurry, the Oracle-Sun Java documentation explains the whole process of compression/uncompression with code that you can run within the database (from PL/SQL using a Java wrapper).
- The simplest is the PL/SQL package UTL_COMPRESS, which wouldn't need any extra coding for compressing/uncompressing binary data.
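If the images turn out to have been gzip-compressed before storage, one client-side alternative is to inflate the BLOB in Java before rendering it. A minimal sketch (table and column names are invented for illustration):

    import java.io.*;
    import java.sql.*;
    import java.util.zip.GZIPInputStream;

    public class BlobInflate {
        // Reads a gzip-compressed BLOB over JDBC and returns the
        // uncompressed image bytes.
        public static byte[] readImage(Connection con, int id) throws Exception {
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT image_blob FROM image_table WHERE id = ?")) {
                ps.setInt(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    try (InputStream in = new GZIPInputStream(rs.getBlob(1).getBinaryStream());
                         ByteArrayOutputStream out = new ByteArrayOutputStream()) {
                        byte[] buf = new byte[8192];
                        int n;
                        while ((n = in.read(buf)) > 0) {
                            out.write(buf, 0, n);
                        }
                        return out.toByteArray();
                    }
                }
            }
        }
    }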

  • Uncompress folder

Hi there!
I am playing with the uncompress function of AIR and JavaScript.
I have done the "ByteArray example: Reading a .zip file" training (link) and everything works fine.
Now I am trying to uncompress a zip file that includes folders.
The "Reading a .zip file" example finds the correct files, but after finding a folder it runs into a problem; I think it is reading the wrong bytes.
Anyone got an idea of how to uncompress folders? Google didn't help at all.
Here is a sample of the code where I think the error occurs:
while (zStream.position < zfile.size) {
    // read fixed metadata portion of local file header
    zStream.readBytes(bytes, 0, 30);
    bytes.position = 0;
    signature = bytes.readInt();
    // if no longer reading data files, quit
    if (signature != 0x04034b50) {
        break;
    }
    bytes.position = 8;
    compMethod = bytes.readByte(); // store compression method (8 == Deflate)
    offset = 0; // stores length of variable portion of metadata
    bytes.position = 26; // offset to file name length
    flNameLength = bytes.readShort(); // store file name length
    offset += flNameLength; // add length of file name
    bytes.position = 28; // offset to extra field length
    xfldLength = bytes.readShort();
    offset += xfldLength; // add length of extra field
    // if a folder is found, offset seems to have the size of all files inside the folder
    // read variable length bytes between fixed-length header and compressed file data
    zStream.readBytes(bytes, 30, offset);
    bytes.position = 30;
    fileName = bytes.readUTFBytes(flNameLength); // read file name
    output += fileName + "<br />"; // write file name to text area
    bytes.position = 18;
    compSize = bytes.readUnsignedInt(); // store size of compressed portion
    output += "\tCompressed size is: " + compSize + '<br />';
    bytes.position = 22; // offset to uncompressed size
    uncompSize = bytes.readUnsignedInt(); // store uncompressed size
    output += "\tUncompressed size is: " + uncompSize + '<br />';
    // read compressed file to offset 0 of bytes; for uncompressed files
    // the compressed and uncompressed size is the same
    // if a folder is found the compSize is 0
    zStream.readBytes(bytes, 0, compSize);
    if (compMethod == 8) { // if file is compressed, uncompress
        bytes.uncompress(air.CompressionAlgorithm.DEFLATE);
    }
}
    thx!

    No ideas?!
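One observation that may help (an assumption based on the ZIP format, not on the training example): folder entries are stored with a trailing '/' in their name and zero compressed size, so they should be created as directories and skipped rather than inflated. The check, sketched in Java for illustration (helper names are made up):

    import java.io.File;

    public class ZipFolderCheck {
        // By convention a local-file-header entry is a folder when its
        // stored name ends with '/' and it carries no data.
        static boolean isFolderEntry(String entryName, long compSize) {
            return entryName.endsWith("/") && compSize == 0;
        }

        static void handleEntry(File destDir, String entryName, long compSize) {
            if (isFolderEntry(entryName, compSize)) {
                new File(destDir, entryName).mkdirs(); // nothing to uncompress
                return;
            }
            // otherwise read compSize bytes and inflate when compMethod == 8
        }
    }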

  • Could someone uncompress a file for me?

I need the firmware update for version 1.1 for a WRT54G wireless router. Could someone uncompress it for me and email it to me if the file isn't too large, say under 10 MB? I will post my email if someone can do that. I have StuffIt and apparently that won't work. Thanks for your help.

Why don't you download WinZip, WinRAR, or any other compress/uncompress software and extract the firmware yourself? It would save much time.

  • Change the application compression utility

Maybe a silly question...
I want to substitute the application for compress/uncompress (Archive Utility) with another one (The Unarchiver). But the question is: how?
1. Substitute BOMArchiver with The Unarchiver, renamed to 'Archive Utility'?
2. Change the link (in the contextual menu of the Finder) to point to The Unarchiver, and how do I do that?
Very grateful for your help.

    Hi,
It's not that I don't like it, but when working with Windows it's necessary to compress files in a compatible format.
This is why I am asking, and I prefer to do it from the contextual menu.

  • LDOM Disk IO file transfers, compression and discompression issues

We are currently moving a lot of our Oracle database servers to an LDOM environment. An issue we have come across is that compressing/uncompressing or transferring files from within our LDOMs is slow; if I run the same operations from a zone, everything runs faster.
Our LDOMs are all hosted on 3 T5220s with a 2540 disk array back end. The zones are running on an M4000 with the same disk array back end. All our LDOMs started with 4 vCPUs and 4 GB. We have 5-8 LDOMs on each T5220.
When running compress, pmap -x for the PID shows 2.5 MB used of 4 MB, mpstat shows idle time on 8 vCPUs >60%, and zpool iostat -v shows max reads 270, writes 70, bandwidth up to 12M read and 5M write. I tried increasing memory and vCPUs, but I didn't expect it to make a difference and it didn't.
This is affecting our backup times. Compressing a 10 GB Oracle backup file takes 32 minutes and uncompressing 20 minutes on the LDOMs; on the zones the time is a third of that, still slow but manageable. Solaris 10u7, LDOM 1.2. Any ideas on what to check next?

After further investigation, it seems to be a feature of the T5220s; I also tried this on a basic OS install on a T5120 and got similar performance. The T5xxx series is good for multi-threaded apps but not very powerful for single-threaded apps. Apparently compress/uncompress and gzip all use a single thread, which is why they are slow. I need to find a multi-threaded compression tool approved by Sun; I have found pbzip2 and need to check it out. If anyone is the wiser, please advise. Any other suggestions would be great, thanks.
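For the record, the pbzip2 idea can be approximated with stock gzip, too: gzip members may be concatenated and still form a valid stream, so chunks can be compressed on separate threads and the outputs joined. A sketch in Java (chunk size and thread count are illustrative; note that it buffers the compressed parts in memory until the end):

    import java.io.*;
    import java.util.*;
    import java.util.concurrent.*;
    import java.util.zip.GZIPOutputStream;

    public class ParallelGzip {
        public static void main(String[] args) throws Exception {
            final int CHUNK = 8 * 1024 * 1024; // 8 MB per compression task
            ExecutorService pool = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors());
            List<Future<byte[]>> parts = new ArrayList<>();
            try (InputStream in = new BufferedInputStream(new FileInputStream(args[0]))) {
                byte[] buf = new byte[CHUNK];
                int n;
                while ((n = in.read(buf)) > 0) {
                    final byte[] chunk = Arrays.copyOf(buf, n);
                    // each chunk becomes an independent gzip member
                    parts.add(pool.submit(() -> {
                        ByteArrayOutputStream bos = new ByteArrayOutputStream();
                        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                            gz.write(chunk);
                        }
                        return bos.toByteArray();
                    }));
                }
            }
            // concatenated members decompress as one file with gunzip
            try (OutputStream out = new BufferedOutputStream(
                    new FileOutputStream(args[0] + ".gz"))) {
                for (Future<byte[]> part : parts) {
                    out.write(part.get());
                }
            }
            pool.shutdown();
        }
    }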

  • Large query result set

    Hi all,
    At the moment we have some java classes (not ejb - cmp/bmp) for search in
    our ejb application.
Now we have a problem, i.e. records have grown too numerous (millions), and sometimes a query results in the retrieval of millions of records. This results in too much memory consumption in our EJB application. What is the best way to address this issue?
    Any help will be highly appreciated.
    Thanks & regards,
    Parvez

You can think of the following options:
1) Paging: read only a few thousand records at a time and maintain an index to page through the complete dataset.
2) Caching:
a) You can create a serialized data file on the server to cache the result set and use that to browse through. You may do on-the-fly compression/uncompression while sending data to the client.
b) An applet-based solution where caching could be on the client side. Look at
http://www.sitraka.com/software/jclass/cs_ims.html
    thanks,
    Srinivas
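A minimal sketch of option 1 with plain JDBC (table and column names are invented; exact streaming behavior depends on the driver):

    import java.sql.*;

    public class PagedFetch {
        // Streams rows in driver-sized batches instead of materializing
        // the whole result set in memory.
        public static void process(Connection con) throws SQLException {
            con.setAutoCommit(false); // some drivers only stream inside a transaction
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT id, payload FROM big_table ORDER BY id")) {
                ps.setFetchSize(1000); // hint: fetch 1000 rows per round trip
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        handleRow(rs.getLong(1), rs.getString(2));
                    }
                }
            }
        }

        private static void handleRow(long id, String payload) {
            // process one row at a time; memory use stays bounded
        }
    }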
    "chauhan" <[email protected]> wrote in message
    news:[email protected]...
Thanks Slava Imeshev,
We already have search criteria and a limit. When the number of records exceeds that limit, we prompt the user that it may take some time: do you want to proceed? If he clicks yes, then we retrieve those records. This results in a lot of memory consumption.
I was thinking there might be some way to retrieve a block of records at a time from the database rather than all the records of a query. I wonder how internet search sites work, where thousands of sites/pages match the criteria and the client can move back and forth over any page.
    Regards,
    Parvez
    "Slava Imeshev" <[email protected]> wrote in message
    news:[email protected]...
    Hi chauhan,
You may want to narrow the search criteria along with processing a limited number of resulting records. I.e., if the size of the result is bigger than a limit, you stop fetching results and notify the client that the search criteria should be narrowed.
    HTH.
    Regards,
    Slava Imeshev
    "chauhan" <[email protected]> wrote in message
    news:[email protected]...
    Hi all,
    At the moment we have some java classes (not ejb - cmp/bmp) for
    search
    in
    our ejb application.
    Now we have a problem i.e. records have grown too high( millions ) and
    sometimes query results in retrieval of millions of records. It
    results
    in
    too much memory consumtion in our ejb application. What is the best
    way
    to
    address this issue.
    Any help will be highly appreciated.
    Thanks & regards,
    Parvez

  • Media Encoder does not use updated After Effects files

    Hi,
I am working on a project using the new After Effects CC and Media Encoder. In previous versions (CS6), if I updated a file in After Effects, saved it, and brought it back into Media Encoder, the newly rendered file would be fine.
Now it seems that I have to quit Media Encoder in order to get the refreshed composition from After Effects into the render queue; otherwise it just re-renders whatever the last version was.
    Any thoughts?

Nothing to fix because nothing is broken. The programs will take what they need and not beyond that; or, more to the point, what their internal infrastructure allows them to use. Many encoding tasks require strict linear processing, as do many effects in AE, which severely limits the ability to use multiple cores/processors no matter what. Furthermore, things like linear file access in clip-based formats would further compound the issue. And finally, it naturally matters a lot what's going on in your projects. If your footage loads slowly, move it to a faster disk. If it's compressed, uncompress/convert it. If you use slow effects, substitute them with faster commercial third-party plug-ins. If you render to clips, render to image sequences. The list of potential optimizations could go on and on, but suffice it to say that none of this involves just pushing a magic button or checking a specific option. You need to learn these things and tweak them based on experience.
    Mylenium

  • Problem in reading BMP file .....

I'm facing some inconsistency in reading the BMP headers. There are two of them, and I've outlined my questions below...
Here's a description of the BMP headers, just for refreshing...
    1.
         typedef struct {
         BITMAPFILEHEADER bmfHeader;
         BITMAPINFOHEADER bmiHeader;
         GLubyte *image_data;
         } BITMAP_IMAGE;
         typedef struct tagBITMAPFILEHEADER {
              WORD bfType;               // 2B
              DWORD bfSize;               // 4B
              WORD bfReserved1;     // 2B
              WORD bfReserved2;     // 2B
              DWORD bfOffBits;          // 4B
         } BITMAPFILEHEADER, *PBITMAPFILEHEADER;
         typedef struct tagBITMAPINFOHEADER{
         DWORD biSize;
         LONG biWidth;
         LONG biHeight;
         WORD biPlanes;
         WORD biBitCount;
         DWORD biCompression;
         DWORD biSizeImage;
         LONG biXPelsPerMeter;
         LONG biYPelsPerMeter;
         DWORD biClrUsed;
         DWORD biClrImportant;
         } BITMAPINFOHEADER, *PBITMAPINFOHEADER;
    2. The file I'm reading test2.bmp has the following parameters
         File Type                     Windows 3.x Bitmap (BMP)
         Width                         4
         Height                         4
         Horizontal Resolution     96
         Vertical Resolution          96
         Bit Depth                    24
         Color Representation     True Color RGB
         Compression                    (uncompressed)
         Size                         102 bytes
         Size on disk               4.00 KB (4,096 bytes)
    3. Output of the program is....
              File Headers are...
              bfType :424d
              bfSize :102
              bfReserved1 :0
              bfReserved2 :0
              bfOffBits :54
              File Info Headers are...
              biSize :40
              biWidth :4
              biHeight :4
              biPlanes :0
              biBitCount :1
              biCompression :1572864
              biSizeImage :48
              biXPelsPerMeter :196
              biYPelsPerMeter :234881220
              biClrUsed :234881024
              biClrImportant :0
4. The DataInput methods used:
     readInt() (4 bytes) - reads four input bytes and returns an int value.
     readUnsignedShort() (2 bytes) - reads two input bytes and returns an int value in the range 0 through 65535.
     readByte() - reads one byte.
5. The bfType field's value should be 'BM', whose hex is 0x4D42 according to this link:
http://edais.earlsoft.co.uk/Tutorials/Programming/Bitmaps/BMPch1.html
But the hex I get is 424d. How come?
6. When reading bfSize (see ##), we should read 4 bytes as per the above structure. But I don't get the answer of 102 when I do readInt() (which reads 4B). On the contrary, when I do readByte(), that's when I get the right file size of 102.
Why?
The rest of the output looks OK.
import java.io.*;

class BMPReader {
     private byte data[];
     private DataInputStream dis = null;

     public BMPReader(String fileName) {
          readFile(fileName);
     }

     private void readFile(String fileName) {
          try {
               dis = new DataInputStream(new FileInputStream(new File(fileName)));
          } catch (Exception e) {
               System.out.println(e);
          }
     }

     public void printFileHeader() {
          System.out.println("File Headers are...");
          try {
               // converting the 2 bytes read to Hex
               System.out.println("bfType :" + Integer.toString(dis.readUnsignedShort(), 16));
               System.out.println("bfSize :" + dis.readInt());
               //System.out.println("bfSize :" + dis.readByte()); // ## identifier, should read 4B here instead of 1B
               System.out.println("bfReserved1 :" + dis.readUnsignedShort());
               System.out.println("bfReserved2 :" + dis.readUnsignedShort());
               System.out.println("bfOffBits :" + dis.readInt());
          } catch (Exception e) {
               System.out.println(e.toString());
          }
     }

     public void printFileInfoHeader() {
          System.out.println();
          System.out.println("File Info Headers are...");
          try {
               System.out.println("biSize :" + dis.readInt());                // DWORD
               System.out.println("biWidth :" + dis.readInt());               // LONG
               System.out.println("biHeight :" + dis.readInt());              // LONG
               System.out.println("biPlanes :" + dis.readUnsignedShort());    // WORD
               System.out.println("biBitCount :" + dis.readUnsignedShort());  // WORD
               System.out.println("biCompression :" + dis.readInt());         // DWORD
               System.out.println("biSizeImage :" + dis.readInt());           // DWORD
               System.out.println("biXPelsPerMeter :" + dis.readInt());       // LONG
               System.out.println("biYPelsPerMeter :" + dis.readInt());       // LONG
               System.out.println("biClrUsed :" + dis.readInt());             // DWORD
               System.out.println("biClrImportant :" + dis.readInt());        // DWORD
          } catch (Exception e) {
               System.out.println(e.toString());
          }
     }
}

public class BMPReadForum {
     public static void main(String args[]) {
          String fName = "test2.bmp";
          BMPReader bmpReader = new BMPReader(fName);
          bmpReader.printFileHeader();
          bmpReader.printFileInfoHeader();
     }
}

    I stumbled across this thread while messing with numbers coming out of my palm (via pilot-xfer) that are unsigned 32 bit values read into a 4 element byte array. I would very much like to turn these things into longs. I looked over the above posted code (for which I am most grateful) and made some alterations that seemed prudent, would someone be so kind as to verify that this is doing what I would like it to do?
private long makeUInt(byte[] b) {
    // Convert a four byte array into an unsigned
    // int value. Created to handle converting
    // binary data in files to DWORDs, or
    // unsigned ints.
    long result = 0;
    int bit;
    int n = 0;    // I assumed this was the power to take the base to - 0 for lsb
    // I further assumed that we need to move backwards through this array,
    // as LSB is in the largest index, is this right?
    for (int a = 3; a >= 0; a--) {
        // Similarly, I need to step through this backwards, for the same
        // reason
        for (int i = 7; i >= 0; i--) {
            bit = b[a] & 1;
            result += bit * Math.pow(2, n++);
            b[a] >>>= 1;
        }
    }
    return result;
}

So, as you see, I assumed the "n" term was a count for what power of 2 we were at in the bit string, and I further assumed that, since for me the MSB is in the byte array element with the largest index, I needed to work backwards through this thing.
    Does this seem reasonable?
    Thanks a lot
    Lee
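On the two numbered questions above: BMP stores multi-byte fields little-endian, while DataInputStream.readInt()/readUnsignedShort() always read big-endian. That is why bfType comes out as 424d rather than 4d42, and why readInt() on bfSize returns a huge number while the first single byte happens to be 102 (the low byte of the little-endian DWORD). A sketch of the little-endian route using java.nio.ByteBuffer (only the test2.bmp file name is taken from the post):

    import java.io.*;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class BmpHeaderReader {
        public static void main(String[] args) throws IOException {
            byte[] header = new byte[54]; // BITMAPFILEHEADER (14) + BITMAPINFOHEADER (40)
            try (DataInputStream in = new DataInputStream(new FileInputStream("test2.bmp"))) {
                in.readFully(header);
            }
            ByteBuffer buf = ByteBuffer.wrap(header).order(ByteOrder.LITTLE_ENDIAN);
            int bfType = buf.getShort() & 0xFFFF;       // 0x4D42 == "BM"
            long bfSize = buf.getInt() & 0xFFFFFFFFL;   // unsigned DWORD read as long
            buf.getShort();                             // bfReserved1
            buf.getShort();                             // bfReserved2
            long bfOffBits = buf.getInt() & 0xFFFFFFFFL;
            System.out.printf("bfType=%x bfSize=%d bfOffBits=%d%n", bfType, bfSize, bfOffBits);
        }
    }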

  • Capture Image Of A Very Large JPanel

    Below is some code used to save an image of a JPanel to a file...
        int w = panel.getSize().width;
        int h = panel.getSize().height;
        BufferedImage image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics graphics = image.getGraphics();
        // Make the component believe its visible and do its layout.
        panel.addNotify();
        panel.setVisible(true);
        panel.validate();
        // Draw the graphics.
        panel.print(graphics);
        // Write the image to a file.
        ImageFile imageFile = new ImageFile("test.png");
        imageFile.save(image);
        // Dispose of the graphics.
    graphics.dispose();
This works fine, but my problem is that I am trying to save what may be a very large JPanel, perhaps as large as 10000x10000 pixels. It doesn't take long for the Java heap to be used up and an exception to be thrown.
    I know I can increase the heap size of the JVM but since I can't ever be sure how large the panel may be that's a far from ideal solution.
    So the question is how do I save an image of a very large JPanel to a file?

1) Does the OoM happen while instantiating the buffered image (which probably tries to allocate a big contiguous native array of pixels)? Or the Graphics object (same reason, though the Graphics is probably just an empty shell over the big pixel array)?
2) In which format do you need to save the image? Do you only need to be able to read it again in your own program?
If yes to both questions, then a pulled-by-the-hair solution could be to instantiate your own Graphics subclass (no BufferedImage), whose operations would save their arguments directly to the image file, instead of into a big in-memory model of the panel image.
If the output format is a standard one though (GIF, JPG, ...), then maybe your custom Graphics's operations could contain the logic to encode/compress as much as possible of the arguments into an in-memory byte array of the target format?
I'm not very confident though; I don't know the GIF or JPEG encoding, but I suspect (especially for JPEG) that you need to know the "whole" image to encode it properly.
But if the target format supports encoders that work on the fly out of streams of bytes (e.g. BMP), then you can use whatever compress/uncompress technique you see fit (e.g. RLE): you know the nature of the panels, so you may be aware of some optimizations you can perform wrt pixel storage prior to encoding (e.g. big empty areas, a predictable chessboard pattern, a black-and-white palette, ...).
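If you cannot buffer the whole panel, one workaround is to render it in horizontal strips so that only one strip-sized BufferedImage exists at a time, writing each strip out as its own file. A sketch (file names and strip height are illustrative; it assumes the panel has been realized and laid out as in the original snippet, and stitching the strips back together is left to a separate step):

    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;
    import javax.swing.JPanel;

    public class TiledPanelCapture {
        public static void capture(JPanel panel, int stripHeight) throws Exception {
            int w = panel.getWidth();
            int h = panel.getHeight();
            for (int y = 0, i = 0; y < h; y += stripHeight, i++) {
                int sh = Math.min(stripHeight, h - y);
                BufferedImage strip = new BufferedImage(w, sh, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = strip.createGraphics();
                g.translate(0, -y);   // shift so this strip lands at the image origin
                panel.print(g);       // painting outside the strip is clipped away
                g.dispose();
                ImageIO.write(strip, "png", new File("strip_" + i + ".png"));
            }
        }
    }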

  • VHS & DVD VideoGlide capture has horizontal fuzzy lines

I have imported some footage from my VCR and DVD player. With uncompressed settings and default compression, I always get fuzzy-looking footage. The lines are more pronounced when there is a lot of movement. I see the lines when VideoGlide Capture is open but not recording. The lines are not present when watching on a TV. The lines are also there when I connect my DVD player instead of the VCR. I have reset the settings by deleting the preferences for VideoGlide. It doesn't appear to be the video player; the lines are there for both VCR and DVD. I am using a 0x2861 Empia USB capture device.
    Any help appreciated. I have 14 days to return the product and would like to resolve this quickly. Do I need to get a refund or would a different hardware/software setup yield better results?
    Many thanks
    Wayne

I want to convert VHS tapes (PAL/UK) on my iMac with the choice of compressed/uncompressed as well as any length of recording (some tapes are 3-4 hrs long). Also, when I looked at the Roxio and Elgato video kits, they seemed to have 640 x 480 resolution. Do they support the correct resolution for TV/VHS? Perhaps someone can explain why this resolution appears to be the default setting for capture.
On a side note, is it better to import VHS at its native 360 x 288 resolution, or at TV resolution, 720 x 576?

  • Help with step one

Can someone please help me to "unzip" the files he talks about in step one of building a web page? I don't find that option anywhere...

    Unzipping is not part of Dreamweaver. It's done by your operating system or by a third-party utility such as WinZip or Stuffit.
    On Mac OS X, ZIP files are automatically unzipped as soon as they're downloaded. You should see a folder with the same name in the Downloads folder.
    Instructions for Windows 8 are here: http://windows.microsoft.com/en-gb/windows-8/zip-unzip-files.
For Windows 7: http://windows.microsoft.com/en-gb/windows/compress-uncompress-files-zip-files#1TC=windows-7.
