Data compression

Dear All,
I have one more query: how do I compress a table (syntax), and how do I check the difference in size between table "A" compressed and table "A" uncompressed?
Context :
We are all Forms & Reports 6i developers; there is no DBA in our company.
We have the last 10 years of HR, AC, etc. related data. What my boss (he is completely non-technical) says is: remove the data of the first 5 years and keep it aside. When a user requests a report that involves the data we removed and kept aside, it should be quickly available on request.
That's why I want to know how to do compression and purging (a syntax sketch follows below).
Thanking you in anticipation
Regards,
Devendra
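
A minimal sketch of the syntax being asked about, assuming Oracle 9i Release 2 or later basic table compression; the table name A and the name of the copy are illustrative only:

-- Option 1: build a compressed copy of table A.
CREATE TABLE a_compressed COMPRESS AS SELECT * FROM a;

-- Option 2: compress the existing table in place (the table is locked while it moves).
ALTER TABLE a MOVE COMPRESS;

-- Compare the allocated sizes afterwards.
SELECT segment_name, blocks, bytes/1024/1024 AS size_mb
  FROM user_segments
 WHERE segment_name IN ('A', 'A_COMPRESSED');

Note that after ALTER TABLE ... MOVE the indexes on the table become UNUSABLE and have to be rebuilt with ALTER INDEX ... REBUILD.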

Hello,
I saw one of the demos on the site with the following SQL statement:
select table_name, compression, compress_for from user_tables;
but when I desc user_tables it gives the following:
SQL> desc user_tables;
Name Null? Type
TABLE_NAME NOT NULL VARCHAR2(30)
TABLESPACE_NAME VARCHAR2(30)
CLUSTER_NAME VARCHAR2(30)
IOT_NAME VARCHAR2(30)
PCT_FREE NUMBER
PCT_USED NUMBER
INI_TRANS NUMBER
MAX_TRANS NUMBER
INITIAL_EXTENT NUMBER
NEXT_EXTENT NUMBER
MIN_EXTENTS NUMBER
MAX_EXTENTS NUMBER
PCT_INCREASE NUMBER
FREELISTS NUMBER
FREELIST_GROUPS NUMBER
LOGGING VARCHAR2(3)
BACKED_UP VARCHAR2(1)
NUM_ROWS NUMBER
BLOCKS NUMBER
EMPTY_BLOCKS NUMBER
AVG_SPACE NUMBER
CHAIN_CNT NUMBER
AVG_ROW_LEN NUMBER
AVG_SPACE_FREELIST_BLOCKS NUMBER
NUM_FREELIST_BLOCKS NUMBER
DEGREE VARCHAR2(10)
INSTANCES VARCHAR2(10)
CACHE VARCHAR2(5)
TABLE_LOCK VARCHAR2(8)
SAMPLE_SIZE NUMBER
LAST_ANALYZED DATE
PARTITIONED VARCHAR2(3)
IOT_TYPE VARCHAR2(12)
TEMPORARY VARCHAR2(1)
SECONDARY VARCHAR2(1)
NESTED VARCHAR2(3)
BUFFER_POOL VARCHAR2(7)
ROW_MOVEMENT VARCHAR2(8)
GLOBAL_STATS VARCHAR2(3)
USER_STATS VARCHAR2(3)
DURATION VARCHAR2(15)
SKIP_CORRUPT VARCHAR2(8)
MONITORING VARCHAR2(3)
CLUSTER_OWNER VARCHAR2(30)
DEPENDENCIES VARCHAR2(8)
Maybe it is due to a version difference; I have a 9i DB.
So what is the SQL to get the same result?
Best Regards,
Devendra
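
If the COMPRESSION column is missing from USER_TABLES as well, the database is probably 9i Release 1, where table compression does not exist; on 9i Release 2 the column is there but COMPRESS_FOR is not, so a sketch of the closest equivalents would be:

-- 9iR2: COMPRESS_FOR does not exist yet, so select only COMPRESSION.
SELECT table_name, compression
  FROM user_tables;

-- Works on any release: compare allocated space from USER_SEGMENTS instead.
SELECT segment_name, bytes/1024/1024 AS size_mb
  FROM user_segments
 WHERE segment_type = 'TABLE'
 ORDER BY bytes DESC;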

Similar Messages

  • File Adapter Data Compression

I'd like to extend the file adapter behavior to add data compression features, such as unzipping after reading a file and zipping before writing a file. I read Oracle's file adapter documentation but I didn't find any extension point.

If it's a Java mapping, just create a DT with any structure you wish.
e.g.
DT_Dummy
|__ Dummy_field
A Java mapping does not validate the XML against the DT you created.

  • Using Data Compression on Microsoft SQL 2008 R2

    We have a very large database which keeps growing and growing. This has made our upgrade process extremely troublesome because the upgrade wizard seems to require close to 3 times the database size of free space to even start.
    As such, we are considering activating DATA COMPRESSION on the PAGE level in Microsoft SQL Server 2008 R2. This technology is native to the SQL Server and compresses the rows and the pages so that they do not take up more space than necessary.
Traditionally each row takes up the space of the maximum of all the fields, even if only part of a field is filled with data.
    [Blog about Data Compression|http://blogs.msdn.com/b/sqlserverstorageengine/archive/2007/11/12/types-of-data-compression-in-sql-server-2008.aspx]
    Our idea is to use this initially on the Axxx tables (historic data) to minimize the space they take by using for example:
    ALTER TABLE [dbo].[ADO1] REBUILD PARTITION = ALL
    WITH (DATA_COMPRESSION = PAGE)
On a test database we have seen tables go from 6 GB of space to around 1.5 GB, which is a significant saving.
MY QUESTION: Is this allowed from an SAP point of view? The technology is completely transparent, but it does involve a rebuild of the table, as demonstrated above.
    Thanks.
    Best regards,
    Mike

    We are using Simple recovery model, so our log files are pretty small.
    Our database itself is about 140GB now and it keeps growing.
    We've also reduced the history size to about 10 versions.
    Still, some of our tables are 6-10GB.
One of the advantages of data compression is that it also improves disk I/O at the cost of slightly higher CPU usage, which we are pretty sure our server can handle.
    Mike
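
    As a sketch (using the ADO1 table mentioned above; run it on a test system first), SQL Server 2008 R2 can estimate the saving before anything is actually rebuilt:

    -- Estimate how much PAGE compression would save on the historic table.
    EXEC sp_estimate_data_compression_savings
         @schema_name      = 'dbo',
         @object_name      = 'ADO1',
         @index_id         = NULL,   -- all indexes
         @partition_number = NULL,   -- all partitions
         @data_compression = 'PAGE';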

  • How to find data compression and speed

1. What's the command/way to view how much space the data takes in HANA tables as opposed to the same data on disk? I mean, how do people measure that there has been a 10:1 data compression?
2. The time taken for execution, as seen when executing the same SQL on HANA, varies (I see that when I F8 the same query repeatedly), so it is not given in terms of pure CPU cycles, which would have been more absolute.
I always thought that there must be a better way of checking the speed of execution, like checking a log which gives all data regarding executions, rather than just watching the query executions in the output window.

    Rajarshi Muhuri wrote:
1. What's the command/way to view how much space the data takes in HANA tables as opposed to the same data on disk? I mean, how do people measure that there has been a 10:1 data compression?
    The data is stored the same way in memory as it is on disk. In fact, scans, joins etc. are performed on compressed data.
To calculate the compression factor, we check the required storage after compression and compare it to what would be required to save the same amount of data uncompressed (you know, length of data x number of occurrences for each distinct value of a column).
    One thing to note here is: compression factors must always be seen for one column at a time. There is no such measure like "table compression factor".
> 2. The time taken for execution, as seen when executing the same SQL on HANA, varies (I see that when I F8 the same query repeatedly), so it is not given in terms of pure CPU cycles, which would have been more absolute.
>
> I always thought that there must be a better way of checking the speed of execution, like checking a log which gives all data regarding executions, rather than just watching the query executions in the output window.
Well, CPU cycles wouldn't be an absolute measure either.
Think about the time that is not spent on the CPU.
Wait time for locks, for example.
Or time lost because other processes used the CPU.
In reality you're usually not interested so much in the perfect execution of one query that has all the resources of the system bound to it; instead you strive to get the best performance when the system has its typical workload.
    In the end, the actual response time is what means money to business processes.
    So that's what we're looking at.
    And there are some tools available for that. The performance trace for example.
    And yes, query runtimes will always differ and never be totally stable all the time.
    That is why performance benchmarks take averages for multiple runs.
    regards,
    Lars
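
    As a rough sketch of looking at this per column (which matches the point that compression factors are always per column), the in-memory size can be compared with the estimated uncompressed size in the monitoring view M_CS_ALL_COLUMNS; the schema and table names below are placeholders, and column availability can differ between revisions:

    -- Compressed in-memory size vs. estimated uncompressed size, per column.
    SELECT schema_name, table_name, column_name,
           memory_size_in_total AS compressed_bytes,
           uncompressed_size    AS uncompressed_bytes,
           ROUND(uncompressed_size * 1.0 / memory_size_in_total, 1) AS compression_factor
      FROM m_cs_all_columns
     WHERE schema_name = 'MY_SCHEMA'      -- placeholder
       AND table_name  = 'MY_TABLE'       -- placeholder
       AND memory_size_in_total > 0
     ORDER BY uncompressed_size DESC;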

  • Data compression in xi

    Hi ,
    How do you do data compression in xi?
    thanks in advance,
    Ramya Shenoy

    Hi Ramya,
    Are you talking about archiving the messages in the XI server, or about compressing an individual XI message, as Parteek explained in his reply using the PayloadZipBean?
    Thanks
    Ajay

  • DB2 : data compression active ?

    Hi forum,
    how can I find out whether data compression is active in our DB2 database with SAP BW ?
    Kind regards
    S0007336574

    Hello Olaf,
    to check whether you are entitled for compression license-wise, use the following:
    db2licm -l
    Check for the storage optimization feature.
To check whether there are any tables that have the compression flag enabled:
    db2 "select count(*) from syscat.tables where compression = 'R' or compression = 'B'"
    To get the names of these tables
    db2 "select tabname, compression from syscat.tables where compression = 'R' or compression = 'B'"
    "R" means row compression, "B" means both Row and Value compression.
    To check the actual compression savings for tables, you can use this:
    --> Be aware that it may run for a longer time
    --> Replace the SAPHJZ by your own SAPSID
    SELECT substr(TABNAME,1,25) as TABLE_NAME,
         DICT_BUILDER,
         COMPRESS_DICT_SIZE,
         EXPAND_DICT_SIZE,
         PAGES_SAVED_PERCENT,
         BYTES_SAVED_PERCENT
            FROM SYSIBMADM.ADMINTABCOMPRESSINFO
         WHERE tabschema='SAPHJZ'
         AND COMPRESS_DICT_SIZE > 0
         order by 1 asc
    Hope this helps.
    Hans-Jürgen

  • Data Compression express VI chops off last waveform element

While using the Data Compression Express VI, I noticed the VI truncates the array of waveforms by removing the last element.
When I log this to an Excel file, it puts a blank row into it. I ran this VI twice, which appended two runs in the same log file to show it.
What is an easy way to delete this last element from the signal array?
See the graph's last sample on the x-axis of the VI I attached. The element at x=1.0 has been deleted on the graph, thus resulting in a blank row in my Excel file.

    Hi Id,
    I think this happens because it is using the data that comes after the point to average that point, though I am uncertain. I will have to look into it further.

  • Error in FI data compression

    Hi Gurus,
I have set up summarization in OBCY as per note 36353, uploaded the program (ZTTYPV) as per the correction instructions, and made the VBRK entry with table -``/field - ``.
I created a billing document but I am getting this error when I try to release it through transaction code VFX3.
The detail of the error at VFX3:
    9056500667 000000 Error in FI data compression
         LongText
             Detail
              Diagnosis
                  The data in the FI document should be summarized via field
                  ''. However, the system could not find this field in the
                  internal structure (see 'LFACIGEN').
              System Response
                  This error stems from inconsistencies between the data base
                  tables of the FI document and the internal summarization
                  structure.
              Procedure
                  Start program 'SAPFACCG', in order to reset the internal
                  structure, and then check whether field '' is in
                   'LFACIGEN'.
I have tried to execute the program "SAPFACCG",
but it does not run; it says it needs to be restarted. How do I restart it?
    REPORT SAPFACCG MESSAGE-ID F5.
    Generating compression structure P_ACC in LFACIGEN.
    This program has to be restarted in case of problem with compression
    (Message F5 843)
      CALL FUNCTION 'FI_DOCUMENT_INIT'.
Is there any way to resolve this issue?
    Thanks in advance
    Umed

    Hi
I have uploaded the program (ZTTYPV) as per the correction instructions, which created an entry in TTYPV with object type VBRK, table -``/field - ``; the same thing is now reflected in OBCY as VBRK table -``/field - ``.
    umed

  • Is data compression all automatic? Or does manual steps occur in HANA Studio

    Hi all,
I'm new to HANA and have been leveraging SCN in a BIG way to answer my questions (thanks everyone for your contributions). This is my first post, as I was unable to find an answer to my current question.
    I've been reading up on data compression in HANA and I learned that there are different techniques, such as Dictionary, Cluster, and Run-length encoding.
    My Question is: Is the compression automatic? Or does it need to be modeled within HANA Studio? Let's use Dictionary Encoding as an example. Are there algorithms in place within HANA that will automatically aggregate values so only distinct values remain? Will the attribute vector  and inverted indexed tables automatically be created?
    Just as some background, this is what I am basing my question on:
    http://www.agilityworks.co.uk/our-blog/demystifying-the-column-store-%E2%80%93-viewing-column-store-statistics-in-sap-ha…
    Thank you!
    Kyle

    Hi Justin,
    you are right, the compression is related to the delta merge - and therefore, as long as delta merges happen automatically, compression will also happen automatically.
    SAP HANA has two compression stages on the column store: the first - and typically dominating one - is the implicit compression obtained from using a data dictionary for all column-store columns.
    The second stage is often called "sparse compression" - it offers additional compression on top of the dictionary compression, with several available algorithms. During the "optimize compression" run, the most appropriate compression algorithm is chosen for each column (and some columns may not be sparse-compressed at all, because it would not bring a benefit).
The optimize compression run does not happen with every delta merge. Instead, it will be performed with the first delta merge on a given table, and then only if the data content of the table has changed significantly (typically, if I remember correctly, the system waits until the number of records has doubled). Note that the optimize compression run may _change_ the compression type chosen for a given column. The sparse compression itself will be done with every delta merge, using the algorithms selected in the past optimize compression run.
    If you have switched off automatic delta merging globally, or if you have disabled automatic delta merging for a given table, there will also be no automatic compression (in the entire system or on the given table). BW on HANA uses the smart merge feature, so that in a standard BW on HANA, automatic compression will happen (with the timing of the delta merge being decided in a cooperation between BW and HANA).
    Best,
    Richard
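
    If the delta merge or the compression optimization ever has to be triggered by hand (for example after switching automatic merges off for a mass load), a sketch of the statements involved, with placeholder object names:

    -- Move the delta store of a column table into its compressed main store.
    MERGE DELTA OF "MY_SCHEMA"."MY_TABLE";

    -- Re-run the optimize-compression step that picks an algorithm per column.
    UPDATE "MY_SCHEMA"."MY_TABLE" WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'YES');

    -- Check which compression type each column ended up with.
    SELECT column_name, compression_type
      FROM m_cs_columns
     WHERE schema_name = 'MY_SCHEMA' AND table_name = 'MY_TABLE';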

  • Data compression can have negative impact on application ?

    Hi,
They are going to analyse the table/index structure and go ahead with data compression in order to speed up performance.
We have been asked to verify whether data compression will affect the application or not.
For example, one process we run through the application rebuilds the index of a big table, so compression may have an effect there.
Could you please help me with which areas I should focus on and investigate before asking them to proceed with data compression?
    -Vaibhav Chaudhari

    This article will give you most of the answers:
    http://technet.microsoft.com/en-us/library/dd894051(v=sql.100).aspx
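
    One concrete thing to check up front is which tables and indexes are already compressed; a sketch against the standard catalog views (SQL Server 2008 and later):

    -- Current compression setting per table, index and partition.
    SELECT OBJECT_NAME(p.object_id) AS table_name,
           i.name                   AS index_name,
           p.partition_number,
           p.data_compression_desc  -- NONE, ROW or PAGE
      FROM sys.partitions AS p
      JOIN sys.indexes    AS i
        ON i.object_id = p.object_id AND i.index_id = p.index_id
     ORDER BY table_name, index_name, p.partition_number;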

  • How do I do data compression when message is sent over RMI-IIOP

    Hi,
Is there a way one can do data compression/de-compression when a message is sent over RMI-IIOP?
    Regards,
    Sourav

    To elaborate on Tammy's idea, you could use for instance C:\Users\Public at the place where you initially put your Excel file to make sure this is found on the target computer. I would consider this a workaround though.
    Or for Lumira documents that you already designed, change the location of the Excel file and use Data/Edit from your computer, then save the Lumira documents before sending them to the target audience. 
In my humble opinion, the product should allow using Data/Edit and changing the source file even if the initial file path is no longer found. This should be possible for your target audience.
    Antoine

  • Re: Data Compression

    -----Original Message-----
    From: Jose Suriol <[email protected]>
    To: 'Forte mail list' <[email protected]>
    Date: Friday, February 27, 1998 1:00 PM
    Subject: Data Compression
    >

    >
    Thanks to all who replied to my post about Forte compressing
    data before sending them to the network. It appears Forte tries to
    minimize the size of certain data types but does not do compression
    across the board. As I understand Forte Version 4 will probably
    support the Secure Sockets Layer (SSL) which has a data compression
option, but unfortunately SSL is, first and foremost, a secure protocol,
and while compression is optional, encryption (a CPU-intensive process)
is not.
Encryption, integrity and compression are all optional in SSL.
It's possible to request a connection that only has compression, assuming
that the other side agrees.
    Derek

  • Message Data Compression

    I'm not exactly new to java, I've just been away from it for a few years.
    I'm trying to create an XML-based messaging system for communication between a suite of applications. XML being what it is (verbose text) I want to apply data compression. (In the short term, during development, the messages will be between components on a single machine. Ultimately, the applications will likely run on many machines and use the internet to communicate with each other.)
    I was looking at the java.util.zip tools, but I'm not actually creating files. I thought I could use just the ZipEntry part, but it's not coming together well. I was also thinking some flavor of SOAP might serve my needs, but SOAP has come onto the scene during my absence from development activities. I've got to familiarize myself with it a bit more before I can assess whether or not it fits my needs.
I'm open to suggestions as to how I should approach this. All ideas anyone cares to share are greatly appreciated.
    - Patrick

    The system will probably use a combination of RMI and JMS, but that's not anything I want to bring into the question at hand.
    The only problem I'm concerned about right now is, "How do I compress a packet of data?" What I do with that packet of compressed data is a problem for a different level of abstraction. I've got a fairly large buffer of XML that I want to compress before passing off to another entity to act upon. What's the best way to do that?
    - Patrick
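
    A minimal sketch of compressing a buffer of XML entirely in memory with java.util.zip, so no files or ZipEntry objects are involved; the class and method names are made up for illustration:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    // Illustrative helper: gzip an XML message in memory and inflate it again.
    public class MessageCompressor {

        public static byte[] compress(String xml) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            GZIPOutputStream gzip = new GZIPOutputStream(bytes);
            gzip.write(xml.getBytes("UTF-8"));
            gzip.close();                  // finishes the gzip stream
            return bytes.toByteArray();    // this is the packet handed to RMI/JMS/etc.
        }

        public static String decompress(byte[] packet) throws IOException {
            GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(packet));
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            byte[] buffer = new byte[4096];
            int read;
            while ((read = gzip.read(buffer)) != -1) {
                bytes.write(buffer, 0, read);
            }
            return bytes.toString("UTF-8");
        }
    }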

  • About data compression

The latest version of Berkeley DB supports data compression with its set_bt_compress method.
I created a database using the default data compression method provided by Berkeley DB, like this:
    DB *dbp;
    db_create(&dbp, inenv, 0);
    dbp->set_flags( dbp, DB_DUPSORT );
    dbp->set_bt_compress(dbp, NULL, NULL);
Then I insert key/data pairs.
The keys are random char arrays;
the data are char arrays with the same content.
Now the problem is: the compressed database file is the same size as the one where I didn't use the compression method.
Can someone tell me why? Thanks.

    Hi,
    This is likely because the default compression function does not have much to work with.
    Specifying NULL for both compression and decompression functions in the DB->set_bt_compress method call implies using the default compression/decompression functions in BDB. Berkeley DB's default compression function performs prefix compression on all keys and prefix compression on data values for duplicate keys.
    You haven't specified a prefix or key comparison function (DB->set_bt_prefix, DB->set_bt_compare), hence a default lexical comparison function is used as the prefix function. Given that your keys are random char arrays, the default lexical comparison function may not perform very well in identifying efficient (large-sized) prefixes for the keys.
    Also, as the keys are truly random, it's unlikely that you'll have duplicates, so there's likely nothing to compress on data values for duplicate keys.
Even if the compression function does compress some keys' prefixes or prefixes for duplicates' data items, if the compressed items (and the uncompressed ones) still need to be stored on the same number of database pages as in the case without compression, you'll not see any difference in database file size.
    Regards,
    Andrei

  • Http Data Compression

Is there any HTTP data compression support in WLS 6.1 or 7.0?
There are tools for the IIS and Apache servers. This helps network
performance and download time.
www.ehyperspace.com
http://www.innermedia.com/Products/SqueezePlay_IIS_Real-Time_Web_/squeezeplay_iis_real-time_web_.htm
    thanks
    /selvan

    There are no generic solutions for Weblogic 5.1.
    We support filter-like functionality for Weblogic 5.1 with our EnGarde
    software, but we only provide it through OEM contracts (no direct sales).
    Sorry.
    You can use a "front component" to route all requests to other servlets/JSPs
    yourself, but if you do substitution with a "front component", you'll have
    to extend the WL classes themselves (request, response), which gets tricky.
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com/coherence.jsp
    Tangosol Coherence: Clustered Replicated Cache for Weblogic
    "Selvan Ramasamy" <[email protected]> wrote in message
    news:[email protected]..
Yes, I totally forgot about the filters... Thank you.
What would be your suggestion for the WebLogic 5.1 server? Most of my
customers are using WebLogic 5.1.
    thanks
    "Cameron Purdy" <[email protected]> wrote in message
    news:[email protected]..
Cameron, how can I do this so that I don't have to change all of my JSPs and
servlets? Should I plug in a custom ServletResponse to do this?
In 6.1 (maybe) or 7.0 you can use a filter, which is like a Servlet that
substitutes its own Request and/or Response object.
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com/coherence.jsp
    Tangosol Coherence: Clustered Replicated Cache for Weblogic
    "Selvan Ramasamy" <[email protected]> wrote in message
    news:[email protected]..
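    A rough sketch of the filter approach Cameron describes, in Servlet 2.3 style; the class name is made up, and a real filter would also check the client's Accept-Encoding header and the content type before compressing:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.zip.GZIPOutputStream;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpServletResponseWrapper;

    // Illustrative GZIP response filter: the servlet/JSP output is captured in
    // memory, compressed, and only then written to the real response.
    public class GzipFilter implements Filter {

        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            final HttpServletResponse httpRes = (HttpServletResponse) res;
            final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

            // Substitute our own Response object so the servlet/JSP writes into memory.
            HttpServletResponseWrapper wrapper = new HttpServletResponseWrapper(httpRes) {
                private final PrintWriter writer = new PrintWriter(buffer);
                public PrintWriter getWriter() { return writer; }
                public ServletOutputStream getOutputStream() {
                    return new ServletOutputStream() {
                        public void write(int b) { buffer.write(b); }
                    };
                }
            };

            chain.doFilter(req, wrapper);   // let the real servlet/JSP render
            wrapper.getWriter().flush();

            // A production version would first confirm the client accepts gzip.
            httpRes.setHeader("Content-Encoding", "gzip");
            GZIPOutputStream gzip = new GZIPOutputStream(httpRes.getOutputStream());
            gzip.write(buffer.toByteArray());
            gzip.close();
        }
    }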

  • Data Compression in ColdFusion 9 for SQL Server 2008

    Hi,
    I need to store fairly large blocks of HTML text in SQL Server (I won't go into why, unless necessary).
Unfortunately, even after stripping out the white space, the text string gets so long that the INSERT/UPDATE query times out.
I'm looking for a way to compress that text on the fly and send over just the compressed string, then uncompress it when it's retrieved from the DB.
The only way I can think of doing it is using CFZIP to create a temporary zip file, then read it in and send that binary string to SQL Server. But I don't know if that'll speed things up any, because of all the file operations. Ideally I would just do it in memory.
    I also looked into GZIP for ColdFusion, but I don't think there is much difference.
    Any suggestions?

    I don't really want to go into too many details as to why, but in this particular scenario just caching the page doesn't work.
It's used by TV talent to view scores, and has to be almost instant. The scores are coming in from 100 different sources constantly, so if I cache, the first user who gets to that page has to wait too long for it to load, and I would have to keep clearing the cache like every 5 sec. Plus it's on a clustered server, so I'm doing all the heavy lifting in a separate process that might even run on a different server, and then simply getting the generated HTML from SQL.
When a score comes in, it triggers an EventGateway that runs a process in the background to generate all the different variations of screens for TV talent.
Then when TV talent looks at any screen it loads instantly because there are no calculations to be done, and the data is only delayed by how long the process takes. The actual process is about 10 sec, but it's getting slowed down a lot because the query that sends the HTML over to SQL Server to store it is HUGE.
    If I can compress that HTML and store the compressed text, then when I do the SELECT from SQL uncompress it and send it back to the users, that would work just fine.
    But I can't figure out how to compress text in ColdFusion.  I'm trying to do GZIP but it's not working.
    Please let me know if you know how to compress a long string in ColdFusion using either GZIP or any other method.
    Thanks.
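
    A sketch of doing the compression in memory from CFML by calling the JVM classes ColdFusion already runs on; the function and variable names are made up, and the round trip should be tested before relying on it:

    <cfscript>
    // Gzip a long string in memory; returns a byte array that can be bound to a
    // varbinary(max) column with <cfqueryparam cfsqltype="cf_sql_blob">.
    function gzipString(plainText) {
        var byteStream = createObject("java", "java.io.ByteArrayOutputStream").init();
        var gzipStream = createObject("java", "java.util.zip.GZIPOutputStream").init(byteStream);
        gzipStream.write(plainText.getBytes("UTF-8"));
        gzipStream.close();
        return byteStream.toByteArray();
    }

    // Inflate it again after the SELECT.
    function gunzipToString(packet) {
        var byteIn  = createObject("java", "java.io.ByteArrayInputStream").init(packet);
        var gzipIn  = createObject("java", "java.util.zip.GZIPInputStream").init(byteIn);
        var byteOut = createObject("java", "java.io.ByteArrayOutputStream").init();
        // Copy byte by byte to avoid juggling Java byte[] buffers in CFML.
        var b = gzipIn.read();
        while (b NEQ -1) {
            byteOut.write(b);
            b = gzipIn.read();
        }
        gzipIn.close();
        return createObject("java", "java.lang.String").init(byteOut.toByteArray(), "UTF-8");
    }
    </cfscript>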
