Oracle 10g: Table Compression

Guys,
I was reading an article about table compression for a data warehousing environment.
http://www.oracle.com/technology/products/bi/db/10g/pdf/twp_data_compression_10gr2_0505.pdf
I didn't understand a couple of things:
"Oracle's compression algorithm is based upon eliminating duplicate values in each block" - what does eliminating duplicate values in each block mean?
ALTER TABLE ... MOVE COMPRESS works in 10g - what is its equivalent in Oracle 9i?
Also, is there a concept of table compression in Oracle 9i?
Any inputs/suggestions would help
Thanks

>
what does eliminating duplicate values in each block mean
>
That describes the compression method: each distinct value is stored only once per block. It does not remove duplicate rows from the table. The important phrase is this one:
"Duplicate values in all the rows and columns in a block are stored once at the beginning of the block, in what is called a symbol table for that block. All occurrences of such values are replaced with a short reference to the symbol table."
>
Also is there a concept of table compression in Oracle 9i
>
Yes, there is such a thing:
http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_73a.htm#2128735
Nicolas.
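For reference, a minimal sketch of the syntax involved (assuming Enterprise Edition and a hypothetical table BIG_FACT; in 9iR2 and 10g only bulk/direct-path loads are stored compressed):
-- Create a compressed copy (same syntax in 9iR2 and 10g)
CREATE TABLE big_fact_compressed
COMPRESS
AS SELECT * FROM big_fact;
-- Mark an existing table compressed; only direct-path loads after this are compressed
ALTER TABLE big_fact COMPRESS;
-- Rebuild the existing rows in compressed form (works in 9iR2 as well as 10g)
ALTER TABLE big_fact MOVE COMPRESS;
-- Check the compression attribute
SELECT table_name, compression FROM user_tables WHERE table_name = 'BIG_FACT';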

Similar Messages

  • Read Oracle 10g Tables to SQL Server 2012

    Hi all,
I have Oracle 10g on an XP machine, and use the 'Oracle in OraDB10g_home1' driver to read the data. I have another Windows Server 2008 R2 machine on the same network, with SQL Server 2012 on it. What is the best way to read Oracle tables in SQL Server? Can I set up an ODBC link from my Windows Server machine to the Oracle database (which would require me to download an Oracle ODBC driver)? Or is the best way to export the required tables from Oracle (e.g. into CSV format) and import them into SQL Server?
    Thanking you in advance,
    Imelda.

987575 wrote:
What is the best way to read Oracle tables in SQL Server?
You should use Heterogeneous Services.
Following is a demonstration on AskTom of connecting from Oracle to Excel; you can use the same approach to connect to SQL Server (a rough sketch of the database-link side follows below).
    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:4406709207206#18830681837358
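For illustration only, a rough sketch of the Oracle-side objects once generic connectivity (hsodbc in 10g) is configured; the link name, credentials, TNS alias, and SQL Server object names below are all hypothetical:
-- Database link from Oracle to SQL Server through the configured HS service
CREATE DATABASE LINK sqlsrv_link
  CONNECT TO "sqluser" IDENTIFIED BY "sqlpass"
  USING 'HSODBC_SQLSRV';
-- Quoted identifiers are usually needed for SQL Server object names
SELECT * FROM "dbo"."MyTable"@sqlsrv_link;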

  • Oracle 11g table compression feature

Is anybody using Oracle 11g's compression feature in production? I read a whitepaper on this and also read some forums/threads on this topic, and so far I haven't read anything negative, but that doesn't mean there isn't anything that could have an adverse effect. I wanted to check with you guys out there to see if anyone is really using this feature in production, and whether there are any effects on performance or any disadvantages of using this compression feature. I have tested this on one of my major tablespaces and I did see a big reduction in the size of the tablespace, but I am still hesitant to put this into production. I would like to hear what you guys think.

>
I have tested this on one of my major tablespaces and I did see a big reduction in the size of the tablespace, but I am still hesitant to put this into production.
>
Nothing is better than testing the solution before you put it into production; a rough test sketch follows below.
    http://www.oracle.com/technetwork/articles/oem/11g-compression-198295.html
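For what it's worth, a minimal sketch of the kind of test you can run against a copy of a representative table before committing (table names are hypothetical, and COMPRESS FOR ALL OPERATIONS assumes the Advanced Compression option is licensed):
-- Build a compressed copy of a representative table
CREATE TABLE orders_oltp_test
COMPRESS FOR ALL OPERATIONS
AS SELECT * FROM orders;
-- Compare the space used by the original and the compressed copy
SELECT segment_name, ROUND(bytes/1024/1024) mb
FROM   user_segments
WHERE  segment_name IN ('ORDERS', 'ORDERS_OLTP_TEST');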

  • Query on Oracle Concepts: Table Compression

    The attributes for table compression can be declared for a tablespace, table, or table partition. If declared at the tablespace level, then tables created in the tablespace are compressed by default. You can alter the compression attribute for a table, in which case the change only applies to new data going into that table. Consequently, a single table or partition may contain compressed and uncompressed blocks, which guarantees that data size will not increase because of compression. If compression could increase the size of a block, then the database does not apply it to the block.
Can anybody please explain the text marked in bold? How can the data size/block size increase because of compression?
    Regards,
    Ankit Rathi
    http://oraclenbeyond.blogspot.in

    >
Can anybody please explain the text marked in bold? How can the data size/block size increase because of compression?
    >
    First let's be clear on what is being said. The doc says this:
    >
    If compression could increase the size of a block, then the database does not apply it to the block.
    >
    That is misleading because, of course, the size of the block can't change. You should really read that as
    >
    If compression could increase the size of the data being stored in a block, then the database does not apply it to the block.
    >
    There is overhead associated with the compression because the metadata that is needed to translate any compressed data back into its original state is stored in the block along with the compressed data.
    The simplest analogy (though not a perfect one) is the effect you can get if you try to zip an already highly compressed file.
    For example, if you try to use Winzip to compress an image file (jpg, gif, etc) or a video file you can easily wind up with a zip file that is larger than the uncompressed file was to begin with. That is because the file itself hardly compresses at all but the overhead of the zip file adds to the ultimate file size.
    I suggest you edit your thread subject since this question is NOT about partitioning.
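As a rough illustration of that overhead (hypothetical table names, basic block compression assumed): build one copy from highly repetitive data and one from near-unique data and compare the segment sizes; the near-unique copy shows little or no saving because there is almost nothing for the block's symbol table to factor out.
-- Highly repetitive data: compresses well
CREATE TABLE t_repetitive COMPRESS AS
SELECT MOD(level, 10) AS val, RPAD('x', 200, 'x') AS filler
FROM   dual CONNECT BY level <= 100000;
-- Near-unique data: little for the symbol table to share
CREATE TABLE t_unique COMPRESS AS
SELECT level AS val, DBMS_RANDOM.STRING('x', 200) AS filler
FROM   dual CONNECT BY level <= 100000;
SELECT segment_name, ROUND(bytes/1024/1024) mb
FROM   user_segments
WHERE  segment_name IN ('T_REPETITIVE', 'T_UNIQUE');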

  • How to Insert data from an XML file into an Oracle 10g table

    Hello,
    Please can you help me as I have hit a brick wall with this problem.
    We are running version 10g Oracle and we will start receiving XML files with employee data that needs loading into a table, this is the XML file:
    <?xml version="1.0"?>
    <RECRUITS>
    <RECRUIT>
    <FIRST_NAME>Gordon</FIRST_NAME>
    <LAST_NAME>Brown</LAST_NAME>
    <SHORT_NAME>GORDONBROWN</SHORT_NAME>
    <APP_NO>00002</APP_NO>
    <STATUS>M</STATUS>
    <DATE_FROM>21-JUL-2006</DATE_FROM>
    <RESOURCE_TYPE>P</RESOURCE_TYPE>
    <TITLE>Mr</TITLE>
    <DATE_OF_BIRTH>28-DEC-1983</DATE_OF_BIRTH>
    <SOCIAL_SEC>AB128456A</SOCIAL_SEC>
    <PARTTIME_PCT>1</PARTTIME_PCT>
    <SEX>M</SEX>
    <ADDRESS_TYPE>1</ADDRESS_TYPE>
    <ADDRESS>A HOUSE SOMEWHERE HERE</ADDRESS>
    <ZIP_CODE>PE3 LLL</ZIP_CODE>
    <PLACE>BOROUGH</PLACE>
    <COUNTRY_CODE>UK</COUNTRY_CODE>
    <PROVINCE>UK</PROVINCE>
    <EMAIL>[email protected]</EMAIL>
    </RECRUIT>
    (FYI - there may be more than 1 employee in each file so all of the above will be repeated X amount of times)
    </RECRUITS>
    To make things simple we have created a table which mirrors the XML file completely to load the data into, the SQL i have used is thus:
CREATE TABLE RECRUITMENT (
    FIRST_NAME VARCHAR2(30),
    LAST_NAME VARCHAR2(30),
    SHORT_NAME VARCHAR2(30),
    APP_NO NUMBER,
    STATUS VARCHAR2(1),
    DATE_FROM DATE,
    RESOURCE_TYPE VARCHAR2(1),
    TITLE VARCHAR2(4),
    DATE_OF_BIRTH DATE,
    SOCIAL_SEC VARCHAR2(9),
    PARTTIME_PCT NUMBER,
    SEX VARCHAR2(1),
    ADDRESS_TYPE VARCHAR2(1),
    ADDRESS VARCHAR2(30),
    ZIP_CODE VARCHAR2(8),
    PLACE VARCHAR2(10),
    PROVINCE VARCHAR2(3),
EMAIL VARCHAR2(20)
);
Every method we have tried from the numerous documents and so-called "user guides" has failed. Please can somebody show me the PL/SQL I need to get this file's data into the above table?
We need to be able to do this purely through SQL*Plus. If we ever get it working manually, we hope to create a procedure that encapsulates everything so it can be run over and over again.
    The XML file is sitting in the XMLDIR and is called REC.XML.
    Please help : (

    Hi, I have got some material for inserting data into oracle table from xml file, this might help you.
    Create XML Document Table
create table XML_DOCUMENT_TABLE (
FILENAME varchar2(64),
XML_DOCUMENT XMLTYPE
);
(This will be as per your record details.)
    Inserting record to Oracle Table
declare
  XML_TEXT CLOB := '<smsnotification>
    <messageid> 256427844 </messageid>
    <protocolid> CO0NPS2KHQ </protocolid>
    <notifiedon> 1156123007416 </notifiedon>
    <status> 3PBI: Invalid </status>
    <additionalinfo> Customer account not active </additionalinfo>
    <carrierid> 1175 </carrierid>
  </smsnotification>';
begin
  insert into XML_DOCUMENT_TABLE values ('Receipt.xml', XMLTYPE(XML_TEXT));
end;
/
    Select Statement
    select extractValue(XML_DOCUMENT,'/smsnotification/messageid') Messageid,
    extractValue(XML_DOCUMENT,'/smsnotification/status') Status,
    extractValue(XML_DOCUMENT,'/smsnotification/carrierid') CarrierID
    from XML_DOCUMENT_TABLE;
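To go from that idea to the original requirement (loading REC.XML from the XMLDIR directory object into RECRUITMENT), something along these lines may work. This is only a sketch, assuming 10gR2, an AL32UTF8 file, and date values in DD-MON-YYYY format with an English date language; it shows just a few columns, so extend the COLUMNS list to cover the remaining elements:
-- Shred each <RECRUIT> element into a relational row and insert it
INSERT INTO recruitment (first_name, last_name, app_no, date_from, email)
SELECT x.first_name,
       x.last_name,
       x.app_no,
       TO_DATE(x.date_from, 'DD-MON-YYYY'),
       x.email
FROM   XMLTABLE('/RECRUITS/RECRUIT'
         PASSING XMLTYPE(BFILENAME('XMLDIR', 'REC.XML'), NLS_CHARSET_ID('AL32UTF8'))
         COLUMNS first_name VARCHAR2(30) PATH 'FIRST_NAME',
                 last_name  VARCHAR2(30) PATH 'LAST_NAME',
                 app_no     NUMBER       PATH 'APP_NO',
                 date_from  VARCHAR2(11) PATH 'DATE_FROM',
                 email      VARCHAR2(20) PATH 'EMAIL') x;
COMMIT;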

  • Migrating MySQL5 database to Oracle 10g - table with blob data fails

During the data migration phase from MySQL5 to Oracle, I noticed that tables with a record length greater than 64K fail. It just says FINISHED for those tables without migrating any data. This is especially happening in tables that have text and blob columns with more than 20 MB of data in the blob fields.
Is this a bug? If not, is there any configuration file to increase the record size? Does it use SQL*Loader underneath to transfer the data, or some other mechanism?

Thanks Spain, but using the offline scripts does not work for me, as my record length is around 28 MB and SQL*Loader allows a maximum record length of 20 MB.
    Is there any way to dump just the blob column data out of MySQL into a separate file?
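If it helps, one possible way to get a single BLOB out of MySQL into its own file is SELECT ... INTO DUMPFILE. This is a sketch only; the table, column, id, and path are hypothetical, and the MySQL server needs the FILE privilege and write access to the target directory:
-- Writes exactly one row's BLOB value, byte for byte, to a file on the MySQL server
SELECT blob_col
INTO DUMPFILE '/tmp/row_42_blob.bin'
FROM   big_table
WHERE  id = 42;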

  • License for Table compression

A license is required in Oracle 11g for Advanced Compression.
For Oracle 10g table compression, is a license required?

    The licensing guide of Oracle 11.1 makes a distinction between:
    Oracle Advanced Compression --> extra cost option on top of EE
    Direct-Load Table Compression --> No extra cost option, but requires EE
    The admin guide says:
    To enable compression for all operations you must use the COMPRESS FOR ALL OPERATIONS clause. To enable compression for direct-path inserts only, you use the COMPRESS FOR DIRECT_LOAD OPERATIONS clause. The keyword COMPRESS by itself is the same as the clause COMPRESS FOR DIRECT_LOAD OPERATIONS, and invokes the same compression behavior as previous database releases.
    So if you stick to the old way of compressing data (direct load), it is less efficient than the new way (all operations), but it does not require the extra license.
    Geert De Paep
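For illustration, the two clauses side by side on a hypothetical table (11g syntax; the first needs only EE, the second needs the Advanced Compression option):
-- No extra-cost option: compresses direct-path inserts only (same as plain COMPRESS)
CREATE TABLE sales_hist (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
) COMPRESS FOR DIRECT_LOAD OPERATIONS;
-- Advanced Compression option: compresses conventional DML as well
ALTER TABLE sales_hist COMPRESS FOR ALL OPERATIONS;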

  • Oracle 10g on Windows server 2008 R2 - error : Logon failed. Details: ADO

    Hello,
I have installed the Oracle 10g 64-bit client on a Windows 2008 R2 64-bit server. I have installed the Visual Studio frameworks 2.0 and 3.5 on the application server, i.e. the Windows 2008 R2 64-bit server.
But when I run the report from the server, it shows the following error message:
    *Logon failed. Details: ADO Error Code: 0x Source: ADODB.Connection Description: Provider cannot be found. It may not be properly installed. Error in File C:\Windows\TEMP\testreport {FE5A4BC0-DF74-4E58-87F1-0F203501A3FC}.rpt: Unable to connect: incorrect log on parameters.*
I created the report with a design-time connection to Oracle 10g tables. It runs on my development machine, but when I deployed it on the Windows 2008 R2 64-bit server, it raised the above error. After that, I created another report that uses a command instead of a design-time database connection to a table, and it runs on the server smoothly.
So please help me to get out of this problem.
If anyone has a solution for it, please share it.
    Thanks in Advance
    Keyur

Today I installed all the software in the following order:
1) First I installed the Oracle 10g 32-bit and 64-bit clients on the Windows 2008 R2 development server.
2) After the Oracle installation, I installed Visual Studio 2005 and tested the Crystal Reports on the server in debug mode. They were working completely.
(Both types of report: reports created with the help of a command in Crystal Reports, and reports created with a direct reference to a database table.)
Then I published my test project and set it up on the same server for client-side testing, but on the client machine it showed an error message,
that was: The type initializer for 'CrystalDecisions.CrystalReports.Engine.ReportDocument' threw an exception.
One more thing I observed: the report created with the help of a command in Crystal Reports was working on the client machine, but the report in which I added the table directly generated the above error.
Then I installed crredist2005_x86.msi and crredist2005_x64.msi from the Visual Studio install folder. After the installation, the error message on the client machine changed.
Error Message: Logon failed. Details: Error Code: 0x Source: ADODB.Connection Description: Provider cannot be found. It may not be properly installed. Error in File C:\Windows\TEMP\tablesource {EE65A074-2C4E-43DE-A3FC-1FE673093BF1}.rpt: Unable to connect: incorrect log on parameters
In debug mode on the server, the ASP.NET project still works with the Crystal Report. I think that when I run the ASP.NET project in debug mode on the server, it may run in 32-bit mode, and when I run the same project from the published setup on the client machine, it may run in 64-bit mode.
I can't understand how to solve this problem.
Please help me.
    Thanks in Advance
    Keyur

  • Table compression in 10g

    Hi,
If a table has compression enabled, the data is compressed only when there is a bulk/direct-path load. Is there a way we can find out which data was inserted using a simple insert statement (insert into table ... values ...)?
We just want to identify the candidate data for compression, i.e. data that was not inserted in the required way and so did not get compressed.
    Database version: 10.2.0.4
    Regards,

    Hi Santi,
Since you are using Oracle 10g, there is no feature that allows compressed DML; any DML uncompresses the block (in 10g).
Read the link below and the replies by Hemant and HJR:
    10g Data Compression old/new
    Regards
    Girish Sharma
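There is no dictionary view in 10g that flags individual rows or blocks as compressed. A pragmatic approach, sketched below with hypothetical table and index names, is simply to rebuild the segment so everything is stored compressed again and compare the segment size before and after:
-- Size before the rebuild
SELECT ROUND(bytes/1024/1024) mb FROM user_segments WHERE segment_name = 'BIG_TAB';
-- Rebuild the table via direct load so existing rows are stored compressed
ALTER TABLE big_tab MOVE COMPRESS;
-- Indexes are left UNUSABLE by the move and must be rebuilt
ALTER INDEX big_tab_pk REBUILD;
-- Size after the rebuild
SELECT ROUND(bytes/1024/1024) mb FROM user_segments WHERE segment_name = 'BIG_TAB';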

  • Nested tables and multiset operators in Oracle 10g

    Consider the following scenario:
    We have two identical relations R and S defined as:
    CREATE TABLE R(
    a INTEGER,
b table_typ)
    NESTED TABLE b STORE as b_1;
    CREATE TABLE S(
    a INTEGER,
b table_typ)
    NESTED TABLE b STORE as b_2;
    where table_typ is defined as
    CREATE TYPE table_typ AS TABLE OF VARCHAR2(8);
    Suppose we have two instances of R and S, each having one tuple as follows: R(1,table_typ('a','b')) and S(1,table_typ('b','c')).
    I would like to "merge" these two simple instances (e.g., achieve the effect of a simple SELECT * FROM R UNION SELECT * FROM S query) and obtain the following resulting instance: Result(1,table_typ('a','b','c')).
Would this be possible in Oracle 10g? A simple UNION does not work (I got an "inconsistent datatypes: expected - got SCOTT.TABLE_TYP" error). I also took a look at the MULTISET UNION operator over nested tables available in Oracle 10g, but it doesn't seem to get me anywhere. Any help on this would be greatly appreciated.
    Thank you,
    Laura

    Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    SQL> CREATE OR REPLACE TYPE table_type AS TABLE OF VARCHAR2 (8);
      2  /
    Type created.
    SQL> CREATE TABLE r(
      2    a INTEGER,
      3    b table_type)
      4    NESTED TABLE b STORE as b_1;
    Table created.
    SQL> CREATE TABLE s(
      2    a INTEGER,
      3    b table_type)
      4    NESTED TABLE b STORE as b_2;
    Table created.
    SQL> INSERT INTO r VALUES (1, table_type ('a', 'b'));
    1 row created.
    SQL> INSERT INTO s VALUES (1, table_type ('b', 'c'));
    1 row created.
    SQL> COLUMN c FORMAT A10;
    SQL> SELECT r.a, r.b MULTISET UNION DISTINCT s.b c
      2  FROM   r, s
      3  WHERE  r.a = s.a;
             A C
             1 TABLE_TYPE('a', 'b', 'c')
    SQL>

  • How to hide the data in particular table in oracle 10g

How do I hide the data in a particular table in Oracle 10g?
I want the steps.

If it's on a report you can always hide the column (Key Figure or Selection > Display > Hide)... Why do you want to have it on the report if it is to be hidden in the first place?

  • Retrieve data from a large table from ORACLE 10g

I am working on a Microsoft Visual Studio project that needs to retrieve data from a large table in an Oracle 10g database and export the data to the hard drive.
The problem is that I am not able to connect to the database directly because of a license issue, but I can use a third-party API to retrieve data from the database. This API has sufficient privileges/license permission on the database to retrieve data. So I am not able to use DTS/SSIS or another tool to import data by connecting to the database directly.
My approach is to first retrieve the data using the API into a .NET DataTable and then dump the records from it to the hard drive in a specific format (perhaps an Excel file or another SQL Server database).
When I try to retrieve the data from a large table with over 1.3 million (13 lakh) records (3-4 GB) into a DataTable in the Visual Studio project, I get an Out of memory exception.
Is there a better way to retrieve the records chunk by chunk and do the export without losing the state of the data in the table?
Any help on this problem will be highly appreciated.
    Thanks in advance...
    -Jahedur Rahman
    Edited by: Jahedur on May 16, 2010 11:42 PM

Girish... Thanks for your reply... But I am sorry for the confusion. Let me explain.
1. "export the data into another media into the hard drive."
What do you mean by this line, i.e. another media on the hard drive???
ANS: Sorry... I just want to write the data to a file or to a table in a SQL Server database.
2. "I am not able to connect to the database directly because of license issue"
Huh?? I have never heard of a user not being able to connect to the db because of a license. What error/message are you getting?
ANS: My company uses a 3rd-party application that uses Oracle 10g. My company is licensed to use the 3rd-party application (the app plus database is a package) and did not purchase an Oracle license to use the database directly. So I cannot connect to the database directly.
3. I am not sure which API you are talking about, but I am running an application with a Visual Studio data grid or similar controls, in which I can select (with a select query) as many rows as I need; no issue.
ANS: This API is provided by the 3rd-party application vendor. I can pass a query to it and it returns a DataTable.
4. "better way to retrieve the records chunk by chunk and do the export without losing the state of the data in the table?"
ANS: Because I get a system error (out of memory) when I select all rows into a DataTable at once, I want to retrieve the data in multiple phases, as in the query sketch below.
E.g.: 1 to 20,000 records in the 1st phase,
20,001 to 40,000 records in the 2nd phase,
40,001 to ...... records in the 3rd phase,
and so on...
Please let me know if this does not clear up your confusion... :)
    Thanks...
    -Jahedur Rahman
    Edited by: user13114507 on May 12, 2010 11:28 PM
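If the third-party API accepts arbitrary SQL, one way to page through the table in fixed-size chunks is an analytic ROW_NUMBER query (a sketch; the table name, key column, and chunk boundaries are hypothetical, and ordering by a unique key keeps the chunks stable between calls):
-- Rows 20,001 to 40,000, ordered by the primary key so each chunk is repeatable
SELECT *
FROM  (SELECT t.*, ROW_NUMBER() OVER (ORDER BY t.id) rn
       FROM   big_table t)
WHERE rn BETWEEN 20001 AND 40000;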

  • Oracle 10g - issue with "DELETE from TABLE WHERE ID in (1,2,3)" (cfqueryparam used)

    Hello, everyone.
    I am having issues with running a DELETE statement on an Oracle 10g database.
    DELETE
    FROM tableA
    WHERE ID in (1,2,3)
    If there is only one ID for the IN clause, it works.  But if more than one ID is supplied, I get an "SQL command not properly ended" error message.  Here is the query as CF:
    DELETE
    FROM TRAINING
    WHERE userID = <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#trim(form.userID)#">
         AND TRAINING_ID in <cfqueryparam value="#form.trainingIDs#" cfsqltype="CF_SQL_INTEGER" list="yes">
    Anyone work with Oracle that can help me with this?  I'm an experienced MS-SQL developer; Oracle is new to me.
    Thanks,
    ^_^

Never mind... a co-worker just told me that I still have to use parentheses around the values for the IN clause.

  • What Are the Tables in new oracle 10g express

    Hi guys,
I am new at Oracle and I just installed Oracle 10g Express. After logging in to Oracle with SQL Developer, I noticed there are lots of tables in the Tables folder, some of which come with $ and some without! Can you please let me know what these tables are for? Are they like system tables in SQL Server? If so, then what are the tables under
-->Other Users/System/Tables?
I am lost here and I can't figure out what the Other Users are. Are they like databases? I tried to Google them but I couldn't find anything.
    Thanks a lot

    http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/toc.htm
    http://download.oracle.com/docs/cd/E11882_01/server.112/e17110/toc.htm
    SELECT USERNAME FROM ALL_USERS;
A USER/SCHEMA is to Oracle what a database is to SQL Server (for example, see the query sketched below).
    Edited by: sb92075 on Apr 22, 2011 5:53 PM
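For example, a quick way to see which schemas ("other users") own tables, using the standard dictionary views:
-- One row per schema that owns at least one table
SELECT owner, COUNT(*) AS table_count
FROM   all_tables
GROUP  BY owner
ORDER  BY owner;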

  • Table space not reduce after delete in oracle 10g

    Hi..
Based on my system, I have found that my Oracle tablespace did not shrink after the delete query. Why? Could somebody help me? For your information, I am using Oracle 10g.
    Thank you,
    Baharin

After a delete, the space will not be set free and the high water mark will not be reset. To regain the space you need to reorganize the objects from which you deleted the data. This can be done in many ways (the full sequence for the first two is sketched below):
1) Move the objects.
ALTER TABLE temp MOVE; --> optionally a tablespace clause can be used. After this you need to rebuild the table's indexes.
2) With 10g the table can be shrunk or reorganized to free the space.
    alter table mytable enable row movement;
    alter table mytable shrink space;
    3) Export/Import
Export the objects, then drop and recreate them with import.
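For completeness, a rough sketch of the first two options end to end (table and index names are hypothetical; MOVE leaves indexes UNUSABLE, and SHRINK SPACE needs an ASSM tablespace plus row movement):
-- Option 1: rebuild the segment below the old high water mark
ALTER TABLE mytable MOVE;
ALTER INDEX mytable_pk REBUILD;
-- Option 2: shrink in place (10g, ASSM tablespace); CASCADE also shrinks the indexes
ALTER TABLE mytable ENABLE ROW MOVEMENT;
ALTER TABLE mytable SHRINK SPACE CASCADE;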
