Existing table compression in Oracle 11g

Hi,
We have a schema of 45 GB in Oracle 11g and need to compress its tables, as the data is rarely used.
Can you please tell me what options are available to compress the tables?
Thanks

Thanks for the update.
I was able to compress the other tables, which are quite small, and I found one LOB segment which is 37 GB,
so how can I use compression on the LOBSEGMENT? I could not find any documentation for this.
I found one document on Metalink which says:
"To achieve LOB compression, you need to specify LOB column storage as SECUREFILE.  With BASICFILE option, you can not use COMPRESSION for LOB column"
SQL> select owner, segment_name,segment_type , bytes/1024/1024 MB from dba_segments where owner='WEBSPR' order by 4 desc ;
OWNER                     SEGMENT_NAME              SEGMENT_TYPE                      MB
WEBSPR                    SYS_LOB0000012869C00002$$ LOBSEGMENT                     37760
My existing LOB segment's storage clause is as below:
LOB ("DATA") STORE AS (
  TABLESPACE "WEBSPR_DATA" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10
  NOCACHE LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT))
My LOB column is defined as neither SECUREFILE nor BASICFILE, so is there any other method to compress the existing LOBSEGMENT?
I appreciate any inputs. Thanks.
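The usual route is to migrate the BASICFILE LOB to a compressed SECUREFILE. A minimal sketch, assuming Advanced Compression is licensed and the tablespace uses ASSM; the table name below is a placeholder you can look up from DBA_LOBS:
-- find the table and column behind the 37 GB LOB segment
select owner, table_name, column_name
from   dba_lobs
where  segment_name = 'SYS_LOB0000012869C00002$$';

-- migrate the BASICFILE LOB to a compressed SECUREFILE; this rewrites the segment
-- (needs free space and time) and marks indexes on the table UNUSABLE, so rebuild them afterwards
alter table webspr.<your_table>
  move lob (data) store as securefile
  ( tablespace webspr_data
    compress medium );
If downtime on the table is a concern, DBMS_REDEFINITION can perform the same BASICFILE-to-SECUREFILE migration online.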

Similar Messages

  • Use or not to use table compression in Oracle 11g (11.2)?

    Hi All,
    I was trying to explore the difference between COMPRESS FOR ALL OPERATIONS, COMPRESS FOR DIRECT_LOAD OPERATIONS and NOCOMPRESS, for a table in Oracle 11.2.
    I know we can go through the documentation and make a decision.
    Still, I have run some very simple tests here.
    Case 1. Create a table with COMPRESS FOR DIRECT_LOAD OPERATIONS and then update a few records
    Case 2. Create a table with COMPRESS FOR ALL OPERATIONS and then update a few records
    Case 3. Create a table with NOCOMPRESS and update a few rows
    I know Case 1 is a bit of a dummy, but I still did it to see the difference between Case 1 and Case 2.
    --  ---------- CASE 1 --------
    SQL> create table aaa
      2  nologging
      3  compress for direct_load operations
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:02.00
    SQL> select count(*) from aaa ;
      COUNT(*)
         50317
    Elapsed: 00:00:00.11
    SQL> update aaa set created=sysdate where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:05.43
    SQL> commit;
    Commit complete.
    Elapsed: 00:00:00.04
    SQL>
    --  ---------- CASE 2 --------
    SQL>
    SQL> create table bbb
      2  nologging
      3  compress for all operations
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:02.01
    SQL> select count(*) from bbb ;
      COUNT(*)
         50318
    Elapsed: 00:00:00.20
    SQL> update bbb set created=sysdate  where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:05.31
    SQL> commit;
    Commit complete.
    Elapsed: 00:00:00.04
    SQL>
    SQL>
    --  ---------- CASE 3 --------
    SQL> create table ccc
      2  nologging
      3  nocompress
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:01.84
    SQL> select count(*) from ccc ;
      COUNT(*)
         50319
    Elapsed: 00:00:00.15
    SQL> update ccc set created=sysdate  where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:00.06
    Case 1 and Case 2 took 5.43 and 5.31 seconds respectively. Case 3 took 0.06 seconds.
    The difference is drastic.
    Am I doing the wrong kind of test (let's be honest)?
    Should we not use compression for OLTP systems (or any systems with a reasonable number of updates)?
    Apart from allowing a column to be dropped, what is the difference between COMPRESS FOR ALL OPERATIONS and COMPRESS FOR DIRECT_LOAD OPERATIONS? Where/how can I see that difference?
    Thoughts please.
    Thanks in advance.
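    One way to confirm what each table actually ended up with is a quick look at the 11.2 dictionary; a small sketch against the test tables above (COMPRESS_FOR shows BASIC or OLTP):
    select table_name, compression, compress_for
    from   user_tables
    where  table_name in ('AAA','BBB','CCC');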

    Hi,
    I have realised that I was using syntax which is deprecated in 11.2.
    So I am doing the same test with
    COMPRESS BASIC
    COMPRESS FOR OLTP
    instead of
    COMPRESS FOR DIRECT_LOAD OPERATIONS (deprecated)
    COMPRESS FOR ALL OPERATIONS (deprecated)
    But the results are the same. Even if I use COMPRESS FOR OLTP, my update takes 5.4 seconds, which is not very different from COMPRESS BASIC.
    -- --------- CASE 1 ---------------
    SQL> create table aaa
      2  nologging
      3  compress basic
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:02.46
    SQL>
    SQL> select count(*) from aaa ;
      COUNT(*)
         50318
    Elapsed: 00:00:00.11
    SQL>
    SQL> update aaa set created=sysdate where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:05.48
    -- ---------- CASE 2 ---------------
    SQL> create table bbb
      2  nologging
      3  compress for oltp
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:02.01
    SQL>
    SQL> select count(*) from bbb ;
      COUNT(*)
         50319
    Elapsed: 00:00:00.12
    SQL>
    SQL> update bbb set created=sysdate  where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:05.25
    -- ---------- CASE 3 ---------------
    SQL> create table ccc
      2  nologging
      3  nocompress
      4  as
      5  select * from all_objects ;
    Table created.
    Elapsed: 00:00:01.81
    SQL>
    SQL> select count(*) from ccc ;
      COUNT(*)
         50320
    Elapsed: 00:00:00.10
    SQL>
    SQL> update ccc set created=sysdate  where owner='SYS' and object_type='VIEW';
    3485 rows updated.
    Elapsed: 00:00:00.04
    Any thoughts??

  • Automatic table partitioning in Oracle 11g

    Hi All,
    I need to implement automatic (interval) table partitioning in Oracle 11g, but the partitioning interval should be daily (one partition for every day).
    I was able to do this for monthly and yearly intervals, but not daily.
    create table part
    (a date) PARTITION BY RANGE (a)
    INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
    (partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY')));
    Table created
    create table part
    (a date) PARTITION BY RANGE (a)
    INTERVAL (NUMTOYMINTERVAL(1,'YEAR'))
    (partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY')));
    Table created
    But if I use DD or DAY instead of YEAR or MONTH it fails. Please suggest how to perform this on a daily basis.
    SQL>
      1  create table part
      2  (a date)PARTITION BY RANGE (a)
      3  INTERVAL (NUMTOYMINTERVAL(1,'DAY'))
      4  (partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
      5* )
    SQL> /
    INTERVAL (NUMTOYMINTERVAL(1,'DAY'))
    ERROR at line 3:
    ORA-14752: Interval expression is not a constant of the correct type
    SQL> create table part
    (a date)PARTITION BY RANGE (a)
    INTERVAL (NUMTOYMINTERVAL(1,'DD'))
    (partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
    );  2    3    4    5
    INTERVAL (NUMTOYMINTERVAL(1,'DD'))
    ERROR at line 3:
    ORA-14752: Interval expression is not a constant of the correct type
    Please suggest how to resolve this ORA-14752 error when using DAY or DD or HH24.
    -Yasser

    Yes, for different partitions for different months:
    interval (numtoyminterval(1,'MONTH'))
    store in (TS1,TS2,TS3)
    This code will store data in partitions in tablespaces TS1, TS2, and TS3 in a round robin manner.
    For day-wise partitioning, yes, you can use one of the following (a complete example follows this list):
    INTERVAL (NUMTODSINTERVAL(1,'day')) or
    INTERVAL (NUMTODSINTERVAL(2,'day')) or
    INTERVAL (NUMTODSINTERVAL(3,'day')) or
    INTERVAL (NUMTODSINTERVAL(4,'day')) or
    INTERVAL (NUMTODSINTERVAL(5,'day')) or
    INTERVAL (NUMTODSINTERVAL(n,'day'))
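    Putting that together, a minimal daily-interval sketch; the earlier attempts failed because NUMTOYMINTERVAL only accepts YEAR and MONTH, while day-level intervals need NUMTODSINTERVAL:
    create table part
    (a date)
    partition by range (a)
    interval (numtodsinterval(1,'DAY'))
    (partition p1 values less than (TO_DATE('01-NOV-2007','DD-MON-YYYY')));

    -- a row beyond the highest boundary silently creates a new daily partition
    insert into part values (TO_DATE('05-NOV-2007','DD-MON-YYYY'));

    select partition_name, high_value
    from   user_tab_partitions
    where  table_name = 'PART';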

  • Advanced compression in Oracle 11g

    Hi,
    We are migrating databases from Oracle 10g to 11g and we are using Advanced Compression. I have a few questions, please help me understand:
    1. If I enable compression on tables, do the indexes also get compressed? If not, how can I enable compression on indexes?
    2. For table compression, I will take the DDL of the tables from the Oracle 10g databases and create the tables in Oracle 11g with COMPRESS FOR ALL OPERATIONS. Is this the right approach?
    I appreciate the inputs.
    Thanks

    Hi,
    I tried ALTER TABLE ... MOVE COMPRESS FOR ALL OPERATIONS on one of the tables after upgrading from 10g to 11g, and rebuilt the index.
    SQL> select index_name,COMPRESSION,STATUS from dba_indexes where table_name='POSITION_CUBE';
    INDEX_NAME                     COMPRESS STATUS
    TEST                           DISABLED VALID
    The COMPRESSION column in dba_indexes still shows DISABLED,
    so I need to compress the index as well - how can I achieve this?
    Thanks
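    Table compression and index compression are separate features; a rough sketch of enabling index key compression on the existing index and verifying it (the prefix length is optional and workload-dependent):
    -- rebuild the index with key (prefix) compression; ALTER TABLE ... MOVE only
    -- compresses the table data and marks indexes UNUSABLE, so a rebuild is needed anyway
    alter index test rebuild compress;

    -- optionally compress only the first n key columns, e.g. the leading column
    -- alter index test rebuild compress 1;

    select index_name, compression, prefix_length
    from   dba_indexes
    where  table_name = 'POSITION_CUBE';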

  • Locating user tables in an Oracle 11g database

    Excuse my ignorance on this subject
    But our company has an Oracle 11g database that drives one of our business applications. I am not an Oracle admin, and there is very little documentation on the application itself; however, the application seems to have its own set of explicit login credentials (username and password), so I am guessing they are hashed somewhere in the database tables.
    My question would be: are there any default Oracle tables where user credentials would typically be stored, or any tips on tracking down where the password hashes may be? Or can this differ from application to application? Any tips are welcome, and apologies for the naivety of the question. My goal is to identify which database accounts can query the table the hashes are in, as we have some users who can access the database for data analysis purposes, but I don't want them to have access to that table.

    user599292 wrote:
    EdStevens wrote:
    user599292 wrote:
    (original question quoted above)
    The information relative to the user accounts is revealed in the view DBA_USERS, which normal users should not have a need to see. However, the passwords are stored as a true hash: it cannot be used directly and cannot be reversed, so being able to see the hashed password does not in itself constitute a security risk.
    When a user is being authenticated, Oracle does NOT 'decrypt' the stored password to see if it matches the password presented by the user. Rather, the password presented by the user is hashed, and that hash value is compared against the stored value.
    My concern was that if they could extract those password hash values, there are many free password crackers; if they run dictionary values against those hashes and any of them match, they then have some passwords to gain perhaps elevated access in the application.
    Such a method would have to assume a password, know how Oracle 'salts' the password, hash the result, then compare that to the hashed values from the table. If you employ even a modicum of password complexity enforcement, I doubt that your developers are going to have access to the kind of computing capacity that would be required to get a positive result within your lifetime.
    You need to do three things
    First and foremost, adhere to the principle of 'least privilege'. Do not grant a user account any privileges that are not required for that account to complete its business task. That includes access to any tables or views. Be wary of any "ANY" privileges.
    Second, use the password complexity function to enforce a reasonable level of password complexity.
    Third, set the user's profile to expire the password after 'x' days and prevent reuse of a password until after 'y' iterations.
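    A rough sketch of the second and third points; the profile name, limits and username are illustrative only, and 11g ships the complexity function verify_function_11G via utlpwdmg.sql:
    -- @?/rdbms/admin/utlpwdmg.sql creates the supplied 11g complexity function verify_function_11G
    create profile app_users limit
      password_life_time       60                     -- expire passwords after 60 days
      password_reuse_max       5                      -- reuse allowed only after 5 password changes
      password_reuse_time      365                    -- ...and only after 365 days
      password_verify_function verify_function_11G;   -- enforce complexity

    alter user report_user profile app_users;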

  • Database growth following index key compression in Oracle 11g

    Hi,
    We have recently implemented index key compression in our SAP R/3 environments, but unexpectedly this has not resulted in any reduction of index growth rates.
    What I mean by this is that while the indexes have compressed on average 3-fold (over the entire DB), we are not seeing this reflected in the DB growth going forward.
    i.e. we were experiencing ~15 GB/month growth in our database prior to compression, but this figure doesn't seem to have changed much in the 2-3 months since we implemented compression in our production environments.
    Our trial with ACO compression seemed to yield a reduction of table growth rates that corresponded to the compression ratio (i.e. table data growth rates dropped to a third after compression), but we haven't seen this with index compression.
    Does anyone know whether a rebuild with index key compression will also compress future records inserted into the tables once compression is enabled (as I assumed), or does it only compress what's there already?
    Cheers
    Theo

    Hello Theo,
    Does anyone know whether a rebuild with index key compression will also compress future records inserted into the tables once compression is enabled (as I assumed), or does it only compress what's there already?
    I wrote a blog about index key compression internals a long time ago ([Oracle] Index key compression), but I now notice that one important statement is missing there. Yes, future entries are compressed too - index key compression is a "live compression" feature.
    We were experiencing ~15 GB/month growth in our database prior to compression, but this figure doesn't seem to have changed much in the 2-3 months since we implemented it in our production environments.
    Do you mean that your DB size overall still increases by ~15 GB per month, or just the index segments? It depends on which segment types are growing - maybe indexes are only a small part of your system's growth.
    If you have enabled compression and performed a reorg, you can run into one-time effects like 50/50 block splits due to fully packed blocks, etc. It also depends on how the data is inserted/updated and which indexes are compressed.
    Regards
    Stefan

  • Table Management in oracle 11g

    I am using an 11g database, and I have to release some space at the tablespace level.
    Here is the situation.
    One of the big tables (CAMPAIGN_REPORT_RAW) is, I guess, quite fragmented.
    It is created in the INCIH_DATA tablespace (which has 10 datafiles). The tablespace occupies 33 GB, of which 25 GB is used space and 8 GB is free space. We are using filesystem management.
    In that same filesystem we need to find 10 GB of space for creating a new tablespace. Unfortunately we don't have that much free space in the filesystem, so my plan is to shrink the CAMPAIGN_REPORT_RAW table and resize the datafile:
    ALTER TABLE CAMPAIGN_REPORT_RAW SHRINK SPACE;
    ALTER TABLE CAMPAIGN_REPORT_RAW SHRINK SPACE CASCADE;
    alter database datafile '<full_file_name>' resize <size>M;
    For that,
    I need your help with the commands to:
    1) find the size of the table
    2) find the used size of the table
    3) find the high water mark of the table
    4) find which datafile(s) this table occupies
    Thanks
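    For reference, a sketch of the kind of queries involved (the schema name is a placeholder); note that SHRINK SPACE requires row movement to be enabled and an ASSM tablespace:
    -- 1) allocated size of the table segment
    select segment_name, bytes/1024/1024 mb
    from   dba_segments
    where  owner = 'YOUR_SCHEMA' and segment_name = 'CAMPAIGN_REPORT_RAW';

    -- 2)/3) rough used size and blocks below the high water mark (needs fresh statistics)
    select blocks, empty_blocks, num_rows, avg_row_len
    from   dba_tables
    where  owner = 'YOUR_SCHEMA' and table_name = 'CAMPAIGN_REPORT_RAW';

    -- 4) datafiles that currently hold extents of the table
    select distinct f.file_name
    from   dba_extents e join dba_data_files f on f.file_id = e.file_id
    where  e.owner = 'YOUR_SCHEMA' and e.segment_name = 'CAMPAIGN_REPORT_RAW';

    -- shrink needs row movement; a datafile can only be resized down to
    -- the highest allocated extent in that file
    alter table your_schema.campaign_report_raw enable row movement;
    alter table your_schema.campaign_report_raw shrink space cascade;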

    Take guidance from http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/schema003.htm#ADMIN10161
    The size of a table can be seen with:
    select * from dba_segments where segment_name='<your table name>';
    The above query shows the allocated size of the table. The minimum unit of space allocation in Oracle is an 'extent'; even if 90% of an extent is empty you cannot reclaim those blocks, so there is little point in counting empty/used blocks of data.
    You can find out which tablespace the table belongs to. Knowing exactly which 'datafile' the table is in is rarely necessary - why do you want to know it?

  • Basic vs Advanced Compression in Oracle 11g

    Hi,
    We are going to install Oracle 11gR2 on a new database server. Since the database will be used for data warehouse purposes and our company has declined to pay for "Oracle Advanced Compression" licenses, I wanted to know which "basic compression" options are suitable for us if we want to compress the biggest tables in our environment.
    Thanks in advance for your feedback.
    Regards,
    Rubén

    Hi;
    OAC (Advanced Compression) means you need to pay extra money to Oracle. We are using basic compression in our environment. Please check this search; the first 4 links already explain how basic compression works and what it does.
    PS: This is an installation-related forum. For issues like this, please use Oracle Discussion Forums » Oracle Database » General Questions.
    Regards
    Helios
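    For what it's worth, a minimal sketch of the no-extra-license route (table names are placeholders); basic compression only compresses rows written through direct-path operations, which usually suits data warehouse loads:
    -- basic compression: included with Enterprise Edition, no Advanced Compression license
    create table sales_hist
    compress basic
    as select * from sales where 1 = 0;

    -- direct-path loads (APPEND hint, CTAS, SQL*Loader direct path) are compressed;
    -- conventional INSERT/UPDATE rows are stored uncompressed
    insert /*+ append */ into sales_hist
    select * from sales;
    commit;

    -- an existing table can be compressed in place with a move (indexes need a rebuild)
    alter table sales_hist move compress basic;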

  • Create Table Trigger to replicate data from MSSQL2K5 to Oracle 11G on Linux

    I am trying to create a trigger on my MSSQL 2k5 server so that when a record is inserted, a replicated record is created in a table on an Oracle 11g database on a Linux server (Oracle Linux 6).
    Creating the trigger is easy, but when I test it I am getting an error stating the following:
    .NetSqlClient Data Provider The operation could not be performed because OLE DB Provider 'OraOLEDB.Oracle' for linked server "<myserver>" was unable to begin the distributed transaction.
    OLEDB Provider "OraOLEDB.Oracle" for linked server "<myserver>" returned: "New transaction cannot enlist in the specified transaction coordinator"
    Here is the trigger (MSSQL):
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE PROCEDURE insert_aban8_state
        @an8 int,
        @st nvarchar(3)
    AS
    BEGIN
        SET NOCOUNT ON;
        declare @c numeric
        select @c = count(*) from [e9db]..[CRPDTA].[ABAN8_STATE$] where alan8=@an8 and aladds=@st
        if(@c =0)
         begin
            insert into [e9db]..[CRPDTA].[ABAN8_STATE$]
            values(@an8, @st)
         end
        END
    GO
    After reviewing the MS Transaction Coordinator, I am now totally confused. I checked the services and have the MS DTC enabled and running, but am not sure what to do on the Linux side.
    Does Oracle Services for Microsoft Transaction Server (OraMTS) work on Linux? I could only find references to it for Oracle 11g on Windows.
    What do I need to do to enable this replication via an MSSQL table trigger to Oracle 11g on Linux?

    nsidev wrote:
    While I would agree in part, it appears from the message that the trigger is requiring the Transaction Service to be enabled on both the host and target. The point of this post is to determine what, if anything, I need to do on my Oracle DB to allow the trigger to complete successfully.
    There are many posts found with Google concerning the OraMTS service on the Oracle system, but they all appear to be for Windows based systems. My question is, is this service part of the Linux based Oracle DB and if so, how do I initialize it?
    If I am mistaken and this is truly an issue with the MSSQL server, I will replicate the post in those forums. I am just looking for direction and help.
    1) I have NEVER heard that Oracle has, knows about, or supports any "Transaction Service".
    2) Consider what I previously posted regarding the flavor of client source.
    If your assertion about this mythical service were correct, then the Oracle DB would have to be able to "know" that this client connection was originated by SQL Server.
    I don't understand how or why Oracle should behave differently depending upon whether INSERT is done inside or outside a MS SQL Server trigger.
    Please explain & elaborate why Oracle should behave different depending upon the source of any INSERT statement.
    3) From Oracle DB standpoint an INSERT is an INSERT; regardless of the client.

  • Oracle 10g: Table Compress

    Guys,
    I was reading an article about table compression for data warehousing environments.
    http://www.oracle.com/technology/products/bi/db/10g/pdf/twp_data_compression_10gr2_0505.pdf
    I didn't understand a couple of things, like:
    "Oracle’s compression algorithm is based upon eliminating duplicate values in each block" - what does eliminating duplicate values in each block mean?
    ALTER TABLE ... MOVE COMPRESS works in 10g - what is its equivalent in Oracle 9i?
    Also, is there a concept of table compression in Oracle 9i?
    Any inputs/suggestions would help
    Thanks

    what does eliminating duplicate values in each block mean?
    That is the compression method: the same information is stored only once per block. It does not drop duplicate rows from the table. The important phrase here is:
    "Duplicate values in all the rows and columns in a block are stored once at the beginning of the block, in what is called a symbol table for that block. All occurrences of such values are replaced with a short reference to the symbol table."
    Also is there a concept of table compression in Oracle 9i?
    There is such a thing:
    http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_73a.htm#2128735
    Nicolas.
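    A small sketch of the syntax being discussed (table and index names are placeholders); the COMPRESS / MOVE COMPRESS syntax introduced in 9iR2 works the same way in 10g:
    -- compress the existing rows by rewriting the segment (9iR2 and 10g);
    -- the move marks indexes UNUSABLE, so rebuild them afterwards
    alter table big_fact_table move compress;
    alter index big_fact_table_pk rebuild;

    -- compare allocated space before and after the move
    select segment_name, bytes/1024/1024 mb
    from   user_segments
    where  segment_name = 'BIG_FACT_TABLE';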

  • Generating SQL Script for Existing Tables and DBs

    Hello,
    Is it possible to automatically generate a SQL script from an existing table or Oracle database?
    I want to export an existing table from an Oracle DB (11g), if possible with the data.
    Perhaps somebody could explain to me how to do this.
    I am using SQL Developer 2.1 and the Enterprise Manager console.
    I'm a rookie with these tools.
    Thank you for any information.
    N. Wylutzki

    If you want to export data, you should use the export utility. This is documented:
    http://tinyurl.com/23b7on
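    As a rough sketch of the two usual approaches (table, schema and directory names are placeholders): DBMS_METADATA generates the DDL script, and Data Pump exports the data:
    -- generate the CREATE TABLE statement for an existing table
    set long 100000 pagesize 0
    select dbms_metadata.get_ddl('TABLE', 'MY_TABLE', 'MY_SCHEMA') from dual;

    -- export the table including its data (run from the OS command line):
    -- expdp my_schema/password tables=MY_TABLE directory=DATA_PUMP_DIR dumpfile=my_table.dmp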

  • How to connect from Oracle 11g to SQL Server 2008 R2

    Hi,
    Is it possible to connect from Oracle 11g on AIX to SQL Server 2008 R2? If so, what is the preferred method?
    SQL Server has the original table. From Oracle 11g, we want to access the data that is in SQL Server in real time.
    Thank You
    Sarayu

    Hi,
    Have a look at these Oracle notes for the full information on the gateways -
    Master Note for Oracle Gateway Products (Doc ID 1083703.1)
    Functional Differences Between DG4ODBC and Specific Database Gateways (Doc ID 252364.1)
    Gateway and Generic Connectivity Licensing Considerations (Doc ID 232482.1)
    How to Setup DG4MSQL (Oracle Database Gateway for MS SQL Server) 64bit Unix OS (Linux, Solaris, AIX,HP-UX) (Doc ID 562509.1)
    How to Configure DG4ODBC on 64bit Unix OS (Linux, Solaris, AIX, HP-UX Itanium) to Connect to Non-Oracle Databases Post Install (Doc ID 561033.1)
    The Database Gateway for SQL*Server (DG4MSQL) needs a separate license but the Database Gateway for ODBC (DG4ODBC) is included in your RDBMS license. You only need to provide the third party ODBC driver needed by DG4ODBC.
    Regards,
    Mike

  • Simple way to connect Oracle 11g XE with MS SQL Server 2000

    Is there a simple way to access a SQL Server database/tables from Oracle 11g XE (Windows 32-bit) on the same machine? I am a novice, so kindly keep it simple. Thanks

    To connect to a SQL Server you need to use an Oracle product called Database Gateway for ODBC, which uses a 3rd-party ODBC driver to connect to the SQL Server.
    The easiest setup is to install DG4ODBC release 11.2 on the SQL Server machine. How to configure the Database Gateway for ODBC when you install it on a 32-bit Windows operating system is described in this note:
    How to Configure DG4ODBC (Oracle Database Gateway for ODBC) on Windows 32bit to Connect to Non-Oracle Databases Post Install [Document 466225.1]
    The instructions for a 64-bit Windows operating system can be found in this note:
    How to Configure DG4ODBC (Oracle Database Gateway for ODBC) on 64bit Windows Operating Systems to Connect to Non-Oracle Databases Post Install [Document 1266572.1]
    The Database Gateway for ODBC is available for free from here:
    http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html
    Please make sure you select the 32-bit or 64-bit Windows version depending on the platform where you've installed the SQL Server and on which you now install the gateway, and download the <win32/64>_11gR2_gateways.zip CD.
    Once downloaded, unzip it and install it using the Oracle Universal Installer. Make sure you select the product Database Gateway for ODBC (there is also a dedicated SQL Server gateway called Database Gateway for MS SQL Server - that gateway is NOT free and requires a separate license).
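    Once the gateway and its listener are configured, the Oracle side only needs a tnsnames.ora alias that points at the gateway listener with (HS=OK) and a database link over it - a minimal sketch where the alias, credentials and table name are placeholders:
    -- 'dg4odbc' is the tnsnames alias for the gateway entry (with HS=OK);
    -- SQL Server logins and object names are case-sensitive, hence the double quotes
    create database link mssql_link
      connect to "sql_user" identified by "sql_password"
      using 'dg4odbc';

    select * from "MyTable"@mssql_link;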

  • Create partition to existing table

    I have an existing table which is not partitioned. How can I partition my existing table?

    Oops... this is better:
    Partitioning an Existing Table
    http://www.oracle-base.com/articles/misc/PartitioningAnExistingTable.php
    Another method:
    (1) create new_table with one or more range partitions.
    (2) alter table new_table exchange partition with old_table.
    (3) rename or drop old_table.
    (4) rename new_table to old_table.
    (5) split or add partition.
    -- Examples
    --(1)
    create table part_tab
    partition by range (col)
    (partition p1 values less than (maxvalue))
    as select * from org_tab where 1=0;
    --(2)
    alter table part_tab
    exchange partition p1 with table org_tab
    without validation;
    --(3)
    rename org_tab to backup_table;
    --(4)
    rename part_tab to org_tab;
    --(5)
    alter table org_tab
    split partition p1 at (100)
    into (partition p1, partition p2);
    alter table org_tab
    split partition p2 at (200)
    into (partition p2, partition p3);
    -- Results
    SQL> select * from org_tab partition(p1);
    COL        VC
    99         abc
    SQL> c/p1/p2
      1* select * from org_tab partition(p2)
    SQL> /
    COL        VC
    199        def
    Original is written in Japanese language (OTN Japan)
    http://otn.oracle.co.jp/forum/message.jspa?messageID=3045618?

  • Oracle 11g imp erroneously tries to recreate existing tables with CLOBs?

    I have a shell script for loading database dumps from both Datapump and the older exp/imp.
    Often when loading dumps, I need to rename the schema owner and tablespace names (which is handled by REMAP_SCHEMA and REMAP_TABLESPACE in Datapump).
    However I have a whole bunch of dumps created with exp at this point and not that many Datapump dumps yet. As such the old style dumps are handled by the shell script in this way:
    1) A first pass imp is run using INDEXFILE to generate a file with the SQL to create tables and indexes. Options also include FROMUSER and TOUSER.
    2) A series of sed commands edits the SQL file to change the tablespace names (which are schema-owner specific in our case).
    3) The edited SQL file is run with sqlplus to create the tables and indexes.
    4) A second pass imp is run to load the table rows as well as triggers, stored procedures, views, etc. Options include FROMUSER, TOUSER, COMMIT=Y, IGNORE=Y, BUFFER, STATISTICS=NONE, CONSTRAINTS=N
    This shell script has been working great for loading exp dump files into Oracle 9 and Oracle 10 databases, but now that I'm trying to load these dumps into Oracle 11, it fails.
    The problem is in step 4: the imp program is trying to create some of the tables that were already created with sqlplus in step 3. The problematic tables all seem to have CLOB columns in them. The table creation fails because it tries to use the tablespace names from the dump file, which do not exist in the destination database. And when the table creation fails, imp then decides not to load the rows for those tables.
    This seems like a bug in the Oracle 11 imp program. I don't understand why it thinks it needs to recreate tables that already exist when those tables have CLOB columns. Is there something different about CLOB columns in Oracle 11 that I should know about that might be confusing imp into thinking that it needs to create tables when they already exist? Maybe I need to do something to those tables in SQL so that imp does not think it needs to recreate them?
    I know that the tables with the CLOBs were created correctly because I was trying to find some way to workaround this. For step 4, I tried using DATA_ONLY=Y, in which case imp does not try to create the tables and just loads the table rows. Of course using DATA_ONLY, I don't get a lot of other things like triggers, view and stored procedures. I started to try to get around that by doing 3 passes with imp, so that I could pick up the missing pieces by using an imp pass with ROWS=N, but strangely that has the same problem of trying to recreate the existing tables.
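    For comparison, once the dumps are taken with expdp, the whole four-step rename/retablespace process collapses into a single impdp call; a sketch with placeholder file, schema and tablespace names (table_exists_action=append loads rows into tables that already exist):
    impdp system/password directory=DATA_PUMP_DIR dumpfile=app.dmp logfile=app_imp.log remap_schema=OLD_OWNER:NEW_OWNER remap_tablespace=OLD_TS:NEW_TS table_exists_action=append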

    The only solution I've found so far as a workaround is rather convoluted.
    1. I took an export using datapump's expdp of SCHEMA1 (in 10g it will skip the table with the xmltype).
    2. I imported the data to my empty schema (SCHEMA2) using impdp. To avoid the error that the type already exists with another OID, I used the TRANSFORM=oid:n parameter e.g.
    impdp user/pwd dumpfile=noxmltable.dmp logfile=importallbutxmltable.log remap_schema=SCHEMA1:SCHEMA2 TRANSFORM=oid:n directory=MYDUMPDIR
    3. I then manually created my xmltype table in the SCHEMA2 and did a select into to load it (make sure you have the select privileges to do so):
    INSERT INTO SCHEMA2.XMLTABLE2 SELECT * FROM SCHEMA1.XMLTABLE1;
    4. I am still taking an export with exp of the xmltable as well even though I'm not sure I can do anything with it.
    Thanks!
