Export and import of a table with a LOB column using expdp/impdp

We have a table with a LOB column, and the LOB is currently about 550 GB.
As per our knowledge, LOB space cannot be reused, so we have already raised an SR on this,
and we have come to the conclusion that we need to take a backup of this table, truncate it, and then import the data back.
We need help on the points below.
1) We are taking the backup with expdp using PARALLEL=4. Will this backup complete successfully? Are there any other parameters we should set in expdp while taking the backup?
2) Once the truncate is done, will the import complete successfully?
Do we need to increase the SGA, the PGA, the undo tablespace size, or undo retention to complete the import successfully? This is a critical production database.
Current settings:
SGA: 2 GB
PGA: 398 MB
undo retention: 1800
undo tablespace: 6 GB
Please can anyone suggest how to perform this activity without errors, and also which parameters to use during expdp/impdp.
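For reference, below is a rough sketch of the commands we are planning to run (the directory object, dump file names, and owner.table name are placeholders, not our real objects):

expdp system/****** DIRECTORY=dp_dir DUMPFILE=lob_tab_%U.dmp LOGFILE=lob_tab_exp.log TABLES=app_owner.lob_tab PARALLEL=4 FILESIZE=20G

and, after the table has been truncated:

impdp system/****** DIRECTORY=dp_dir DUMPFILE=lob_tab_%U.dmp LOGFILE=lob_tab_imp.log TABLES=app_owner.lob_tab TABLE_EXISTS_ACTION=APPEND PARALLEL=4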
Thanks in advance.

Hi,
From my experience, be prepared for a long outage to do this - expdp is pretty quick at getting LOBs out but very slow at getting them back in again - a lot of the speed optimizations that make Data Pump so excellent for normal objects are not available for LOBs. You really need to test this somewhere first - can you not expdp from live and load into some test area? You don't need the whole database, just the table/LOB in question. You don't want to find out after you truncate the table that it's going to take 3 days to import back in....
You might want to consider DBMS_REDEFINITION instead?
Here you precreate a temporary table (with the same definition as the existing one), load the data into it from the existing table, and then do a dictionary switch to swap them over - giving you minimal downtime. I think this should work fine with LOBs at 10g but you should do some research and see if it works fine. You'll need a lot of extra tablespace (temporarily) for this approach though.
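A rough sketch of that approach, assuming the table has a usable primary key (the schema, table, and interim table names below are just placeholders, not your real objects):

BEGIN
  -- verify the table can be redefined using its primary key
  DBMS_REDEFINITION.CAN_REDEF_TABLE('APP_OWNER', 'BIG_LOB_TAB',
                                    DBMS_REDEFINITION.CONS_USE_PK);
END;
/
-- empty interim table with the same shape (add a LOB storage clause here
-- if you want the new LOB segment in a different tablespace)
CREATE TABLE app_owner.big_lob_tab_interim
  AS SELECT * FROM app_owner.big_lob_tab WHERE 1 = 0;

DECLARE
  num_errs PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE('APP_OWNER', 'BIG_LOB_TAB', 'BIG_LOB_TAB_INTERIM',
                                      options_flag => DBMS_REDEFINITION.CONS_USE_PK);
  -- copy indexes, constraints, triggers and grants onto the interim table
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('APP_OWNER', 'BIG_LOB_TAB', 'BIG_LOB_TAB_INTERIM',
                                          num_errors => num_errs);
  -- dictionary switch; the interim table then holds the old data and can be dropped
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP_OWNER', 'BIG_LOB_TAB', 'BIG_LOB_TAB_INTERIM');
END;
/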
Regards,
Harry

Similar Messages

  • Why is my Mozilla Version 6.0.1 having problems with Bookmark Export and Import (html) ?

    I have used this feature numerous times with no problems. Starting yesterday, when I export html bookmark file, I get a delay - and then the html file is extraordinarily large (16,400 KB versus 782 KB normal file size).
    Also, when I attempt to import a saved html file, the new file will not save. Again, when I hit the "Export" or "Import" button, there is a prolonged delay.
    Should I delete Mozilla and reinstall ?

    Hi mrpetesix,
    Thanks for the insight. I seem to have solved the problem and can now sync iCal with my phone's calendar as I used to, without using iCloud. I am running the same setup as you (iTunes 11.0.1, OS X 10.6.8, iOS 6.0.1) so, if you want, I'm sure you can do this too (although it seems you have now found a good solution yourself). Anyway, here it is for the archives:
    When my phone was connected to iTunes, iTunes was showing that my calendar was being updated over the air by iCloud. It was not giving me any other options to choose which calendars to sync etc. So...
    On your iPhone, go to Settings >> iCloud and turn Calendars OFF.
    Now when you connect it to iTunes you can see all the old options under Calendars, and syncing between the two devices is back to 'normal'.
    I don't know when this iCloud setting got switched on my phone (I didn't even know it existed before this episode), probably when I updated iOS/iTunes recently.
    Hope that's useful to somebody. Cheers.

  • When comparing database tables with lob columns via "Database diff" in different environments indexes are shown as different

    When using "Database diff" selecting other schemas only for compare own objects are shown too!Hi!
    For tables with lob columns (clob, blob, etc.) indexes with system names are automatically created per lob column.
    If I am on different database instances (e.g. dev/test) these system names can differ and are shown as differences, but this is a false positive.
    Unfortunately there is no way to influence the index names.
    Any chance to fix this in sql developer?
    Best regards
    Torsten

    Only the Sql Dev team can respond to that question.
    Such indexes should ONLY be created by Oracle and should NOT be part of any DDL that you, the user, maintains outside the database since they will be created by Oracle when the table is created and will be named at that time.
    It is up to the Sql Dev team to decide whether to deal with that issue and how to deal with it.

  • Cannot create temporary table having identity column

    Hi experts,
    I saw the above error msg while running the following statement:
           create local temporary column table #tmp_table (c1 int GENERATED by default AS IDENTITY (start with 1 increment by 1), c2 int)
         Could not execute 'create local temporary column table #tmp_table(c1 int GENERATED by default AS IDENTITY (start with ...'
         SAP DBTech JDBC: [7]: feature not supported: cannot create temporary table having identity column: C1: line 1 col 48 (at pos 47)
    I understand that normal column table creation with an identity column is supported, but I don't know why temporary column tables with identity columns are not supported. Is there any configuration that can enable it for temporary column tables? Or what can I do to support it indirectly, like writing a trigger or something else?
    If not, then is there any future plan for this feature?
    Regards,
    Hubery

    Hi Hubery,
    I've heard this trail of arguments before...
    Customer has a solution... they want it on HANA... but they don't want to change the solution.
    Well, fair call, I'd say.
    The problem here is: there's a mix-up of solution and implementation here.
    It should be clear now, that changing DBMS systems (in any direction) will require some effort in changing the implementation. Every DBMS works a bit different than the others, given "standard" SQL or not.
    So I don't agree with the notion of "we cannot change the implementation".
    In fact, you will have to change the implementation anyhow.
    Rather than imitating the existing solution implementation on ASE, implement it on SAP HANA.
    Filling up tons of temporary tables is not a great idea in SAP HANA - you would rather try to create calculation views that present the data ad hoc in the desired way.
    That's my 2 cts on that.
    - Lars

  • Oracle 11.2 - Perform parallel DML on a non partitioned table with LOB column

    Hi,
    Since I wanted to demonstrate new Oracle 12c enhancements on SecureFiles, I tried to use PDML statements on a non partitioned table with LOB column, in both Oracle 11g and Oracle 12c releases. The Oracle 11.2 SecureFiles and Large Objects Developer's Guide of January 2013 clearly says:
    Parallel execution of the following DML operations on tables with LOB columns is supported. These operations run in parallel execution mode only when performed on a partitioned table. DML statements on non-partitioned tables with LOB columns continue to execute in serial execution mode.
    INSERT AS SELECT
    CREATE TABLE AS SELECT
    DELETE
    UPDATE
    MERGE (conditional UPDATE and INSERT)
    Multi-table INSERT
    So I created and populated a simple table with a BLOB column:
    SQL> CREATE TABLE T1 (A BLOB);
    Table created.
    Then, I tried to see the execution plan of a parallel DELETE:
    SQL> EXPLAIN PLAN FOR
      2  delete /*+parallel (t1,8) */ from t1;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3718066193
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |  2048 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |  2048 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    PLAN_TABLE_OUTPUT
    Note
       - dynamic sampling used for this statement (level=2)
    And I finished by executing the statement.
    SQL> commit;
    Commit complete.
    SQL> alter session enable parallel dml;
    Session altered.
    SQL> delete /*+parallel (t1,8) */ from t1;
    2048 rows deleted.
    As we can see, the statement has been run as parallel:
    SQL> select * from v$pq_sesstat;
    STATISTIC                      LAST_QUERY SESSION_TOTAL
    Queries Parallelized                    1             1
    DML Parallelized                        0             0
    DDL Parallelized                        0             0
    DFO Trees                               1             1
    Server Threads                          5             0
    Allocation Height                       5             0
    Allocation Width                        1             0
    Local Msgs Sent                        55            55
    Distr Msgs Sent                         0             0
    Local Msgs Recv'd                      55            55
    Distr Msgs Recv'd                       0             0
    11 rows selected.
    Is this normal? It is not supposed to be supported on Oracle 11g with a non-partitioned table containing a LOB column....
    Thank you for your help.
    Michael

    Yes, I did. I tried with force parallel DML, and these are the results on my 12c DB, with the non-partitioned table and SecureFiles LOB column.
    SQL> explain plan for delete from t1;
    Explained.
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |     4 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |     4 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    The DELETE is not performed in Parallel.
    I tried with another statement :
    SQL> explain plan for
    2        insert into t1 select * from t1;
    Here are the results:
    11g
    | Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT         |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  LOAD TABLE CONVENTIONAL | T1       |       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR         |          |       |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)   | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR    |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL   | T1       |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    12c
    | Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT                   |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                    |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)              | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT                  | T1       |       |       |            |          |  Q1,00 | PCWP |            |
    |   4 |     OPTIMIZER STATISTICS GATHERING |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    |   5 |      PX BLOCK ITERATOR             |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    It seems that the DELETE statement has problems but not the INSERT AS SELECT !

  • ASSM and table with LOB column

    I have a tablespace created with the ASSM option. I've heard that tables with LOB columns can't take advantage of ASSM.
    I made a test: I created a table T with a BLOB column in an ASSM tablespace. I succeeded!
    Now I have some questions:
    1. Since the segments of table T can't use ASSM to manage their blocks, what is the actual approach? The traditional freelists?
    2. Will there be some bad impacts on the usage of the tablespace if table T becomes larger and larger and is used frequently?
    Thanks in advance.

    Can you explain what you mean by #1 because I believe it is incorrect and it does not make sense in my personal opinion. You can create a table in an ASSM tablespace that has a LOB column from 9iR2 on I believe (could be wrong). LOBs don't follow the traditional PCTFREE/PCTUSED scenario. They allocate data in what are called "chunks" that you can define at the time you create the table. In fact I think the new SECUREFILE LOBs actually require ASSM tablespaces.
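    For illustration, a rough sketch of setting the LOB chunk size at table-creation time in an ASSM tablespace (the table, column, and tablespace names are placeholders); on 11g you could ask for STORE AS SECUREFILE instead:

    CREATE TABLE demo_lob_tab (
      id  NUMBER,
      doc BLOB
    )
    TABLESPACE assm_ts
    LOB (doc) STORE AS (
      TABLESPACE assm_ts
      CHUNK 8192            -- allocation unit for LOB data
      ENABLE STORAGE IN ROW
    );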
    HTH!

  • Shrink table with LOB column

    Hello,
    I have a table with 1,000,000 BLOB records. I updated almost half of the records to NULL. Now I am trying to reclaim the free space using:
    ALTER TABLE table MODIFY LOB (column) (SHRINK SPACE);
    It has been running for some time, but what surprises me is that this operation generates a lot of redo (the full table was 30 GB, after the update it should be about 15 GB, and by now I already have about 8 GB of generated archive logs).
    Do you know why this operation generates redo logs?
    Thank you,
    Adrian

    The REDO stream that Oracle generates is full of physical addresses (i.e. ROWIDs). If you run an update statement
    UPDATE some_table
       SET some_column = 4
     WHERE some_key = 12345;
    Oracle actually records in the REDO the logical equivalent of
    UPDATE some_table
       SET some_column = 4
     WHERE ROWID = <<some ROWID>>;
    That is, Oracle converts your logical SQL statement into a series of updates to a series of physical addresses. That's a really helpful thing if the REDO has to be re-applied at a later date because Oracle doesn't have to do all the work of processing the logical SQL statement again (this would be particularly useful if your UPDATE statement were running a bunch of queries that took minutes or hours to return).
    But that means that if you are physically moving rows around, you have to record that fact in the redo stream. Otherwise, if you had to re-apply the redo information (or undo information) in the future, the physical addresses stored in the redo logs may not match the physical addresses in the database. That is, if you move the row with SOME_KEY = 12345 from ROWID A to ROWID B and move the row with SOME_KEY = 67890 from ROWID C to ROWID A, you have to record both of those moves in the redo stream so that the statement
    UPDATE some_table
       SET some_column = 4
     WHERE ROWID = <<ROWID A>>;
    updates the correct row.
    Justin

  • Export table with LOB column

    Hi!
    I have to export a table with a LOB column (the LOB segment is 3 GB in size) and then drop that LOB column from the table. The table has about 350k rows.
    (I was thinking) - I have to:
    1. create new tablespace
    2. create copy of my table with CTAS in new tablespace
    3. alter new table to be NOLOGGING
    4. insert all rows from original table with APPEND hint
    5. export copy of table using transport tablespace feature
    6. drop newly created tablespace
    7. drop lob column and rebuild original table
    DB is Oracle 9.2.0.6.0.
    The UNDO tablespace is limited to 2 GB with retention 10800 secs.
    When I tried to insert rows into the new table with the /*+ append */ hint, the operation was very, very slow, so I canceled it.
    How much time should I expect for this operation to complete?
    Is my UNDO sufficient enough to avoid snapshot too old?
    What do you think?
    Thanks for your answers!
    Regards,
    Marko Sutic

    I had seen that document before I posted this question.
    Still, I don't know what I should do. Look at this document - Doc ID: 281461.1
    From that document:
    FIX
    Although the performance of the export cannot be improved directly, possible alternative solutions are:
    1. If not required, do not use LOB columns.
    or:
    2. Use Transport Tablespace export instead of full/user/table level export.
    or:
    3. Upgrade to Oracle10g and use Export DataPump and Import DataPump.
    I just have to speed up the CTAS a little more somehow (maybe using parallel processing).
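    For illustration, a rough sketch of a parallel, nologging CTAS (the table and tablespace names are placeholders; parallel DDL must be enabled in the session):

    ALTER SESSION ENABLE PARALLEL DDL;

    CREATE TABLE new_lob_copy
      TABLESPACE new_ts
      NOLOGGING
      LOB (lob_col) STORE AS (TABLESPACE new_ts)
      PARALLEL 4
      AS SELECT * FROM original_tab;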
    Anyway thanks for suggestion.
    Regards,
    Marko

  • Table with LOB column

    Hi,
    I have a problem. How do I move a table with a LOB column? And how do I create a table with a LOB column, specifying another tablespace for the LOB column?
    Please help me.
    Regards,
    Mathew

    What is it that you are not able to find?
    The link that I provided was answer to your second question.
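    For reference, a rough sketch of the usual syntax for both cases (the table, column, and tablespace names are placeholders):

    -- move an existing table and relocate its LOB segment to another tablespace
    ALTER TABLE my_tab MOVE TABLESPACE data_ts
      LOB (lob_col) STORE AS (TABLESPACE lob_ts);

    -- create a new table with the LOB segment in a separate tablespace
    CREATE TABLE my_new_tab (
      id      NUMBER,
      lob_col CLOB
    )
    TABLESPACE data_ts
    LOB (lob_col) STORE AS (TABLESPACE lob_ts);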

  • Protected memory exception during bulkcopy of table with LOB columns

    Hi,
    I'm using ADO BulkCopy to transfer data from a SqlServer database to Oracle. In some cases, and it seems to only happen on some tables with LOB columns, I get the following exception:
    System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
    at Oracle.DataAccess.Client.OpsBC.Load(IntPtr opsConCtx, OPOBulkCopyValCtx* pOPOBulkCopyValCtx, IntPtr pOpsErrCtx, Int32& pBadRowNum, Int32& pBadColNum, Int32 IsOraDataReader, IntPtr pOpsDacCtx, OpoMetValCtx* pOpoMetValCtx, OpoDacValCtx* pOpoDacValCtx)
    at Oracle.DataAccess.Client.OracleBulkCopy.PerformBulkCopy()
    at Oracle.DataAccess.Client.OracleBulkCopy.WriteDataSourceToServer()
    at Oracle.DataAccess.Client.OracleBulkCopy.WriteToServer(IDataReader reader)
    I'm not sure exactly what conditions trigger this exception; perhaps only when the LOB data is large enough?
    I'm using Oracle 11gR2.
    Has anyone seen this or have an idea how to solve it?
    If I catch the exception and attempt row-by-row copying, I then get "ILLEGAL COMMIT" exceptions.
    Thanks,
    Ben

    From the doc:
    Data Types Supported by Bulk Copy
    The data types supported by Bulk Copy are:
    ORA_SB4
    ORA_VARNUM
    ORA_FLOAT
    ORA_CHARN
    ORA_RAW
    ORA_BFLOAT
    ORA_BDOUBLE
    ORA_IBDOUBLE
    ORA_IBFLOAT
    ORA_DATE
    ORA_TIMESTAMP
    ORA_TIMESTAMP_TZ
    ORA_TIMESTAMP_LTZ
    ORA_INTERVAL_DS
    ORA_INTERVAL_YM
    I can't find any documentation on these datatypes (I'm guessing these are external datatype constants used by OCI??). This list suggests ADO.NET bulk copy of LOBs isn't supported at all (although it works fine most of the time), unless I'm misreading it.
    The remaining paragraphs don't appear to apply to me.
    Thanks,
    Ben

  • How to export and import LOB

    In my USER_OBJECTS I have objects of type LOB. How do I export and import objects of this type?

    If you are on 10g, try using Data Pump; then you don't need to worry about anything.
    If you can only use exp/imp, you can export as usual. When importing, either have a tablespace with the exact same name precreated for the LOB storage,
    or, if you want to change the LOB storage tablespace, precreate the tables that have the LOB columns.
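    For illustration, a rough sketch of the Data Pump route, which can remap the LOB tablespace at import time (the schema, table, directory, and tablespace names are placeholders):

    expdp scott/****** DIRECTORY=dp_dir DUMPFILE=lob_tab.dmp TABLES=scott.lob_tab
    impdp scott/****** DIRECTORY=dp_dir DUMPFILE=lob_tab.dmp TABLES=scott.lob_tab REMAP_TABLESPACE=old_lob_ts:new_lob_ts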

  • Export and import XMLType table

    Hi ,
    I want to export one table which contains an XMLType column from Oracle 11.2.0.1.0 and import it into an 11.2.0.2.0 database.
    I got the following error when I exported the table with the exp and imp utilities:
    EXP-00107: Feature (BINARY XML) of column ZZZZ in table XXXX.YYYY is not supported. The table will not be exported.
    Then I tried Data Pump export and import. The export works; the following is the log:
    Export: Release 11.2.0.1.0 - Production on Wed Oct 17 17:53:41 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    ;;; Legacy Mode Active due to the following parameters:
    ;;; Legacy Mode Parameter: "log=<xxxxx>Oct17.log" Location: Command Line, Replaced with: "logfile=T<xxxxx>_Oct17.log"
    ;;; Legacy Mode has set reuse_dumpfiles=true parameter.
    Starting "<xxxxx>"."SYS_EXPORT_TABLE_01": <xxxxx>/******** DUMPFILE=<xxxxx>Oct172.dmp TABLES=<xxxxx>.<xxxxx> logfile=<xxxxx>Oct17.log reusedumpfiles=true
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 13.23 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "<xxxxx>"."<xxxxx>" 13.38 GB 223955 rows
    Master table "<xxxxx>"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
    Dump file set for <xxxxx>.SYS_EXPORT_TABLE_01 is:
    E:\ORACLEDB\ADMIN\LOCALORA11G\DPDUMP\<xxxxx>OCT172.DMP
    Job "<xxxxx>"."SYS_EXPORT_TABLE_01" successfully completed at 20:30:14
    I got an error when I imported the dump using the following command:
    impdp sys_dba/***** dumpfile=XYZ_OCT17_2.DMP logfile=import_vmdb_XYZ_Oct17_2.log FROMUSER=XXXX TOUSER=YYYY CONTENT=DATA_ONLY TRANSFORM=oid:n TABLE_EXISTS_ACTION=append;
    The error is:
    KUP-11007: conversion error loading table "CC_DBA"."XXXX"
    ORA-01403: no data found
    ORA-31693: Table data object "XXX_DBA"."XXXX" failed to load/unload and is being skipped due to error:
    Please help me find a solution to this.

    CREATE UNIQUE INDEX "XXXXX"."XXXX_XBRL_XMLINDEX_IX" ON "CCCC"."XXXX_XBRL" (EXTRACTVALUE(SYS_MAKEXML(128,"SYS_NC00014$"),'/xbrl'))
    The above index was created by us because we are storing files like the following:
    <?xml version="1.0" encoding="UTF-8"?>
    <xbrl xmlns="http://www.xbrl.org/2003/instance" xmlns:AAAAAA="http://www.AAAAAA.COM/AAAAAA" xmlns:ddd4217="http://www.xbrl.org/2003/ddd4217" xmlns:link="http://www.xbrl.org/2003/linkbase" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <link:schemaRef xlink:href="http://www.fsm.AAAAAA.COM/AAAAAA-xbrl/v1/AAAAAA-taxonomy-2009-v2.11.xsd" xlink:type="simple" />
    <context id="Company_Current_ForPeriod"> ...I tried to export pump with and without using DATA_OPTIONS=XML_CLOBS too.Both time exporting was success but import get same KUP-11007: conversion error loading table "tab_owner"."bbbbb_XBRL" error.
    I tried the import in different ways
    1. Create table first then import data_only (CONTENT=DATA_ONLY)
    2. Import all table
    In both way table ddl is created successfully ,but it fail on import data.Following is the log when i importing
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "aaaaa"."SYS_IMPORT_TABLE_02" successfully loaded/unloaded
    Starting "aaaaa"."SYS_IMPORT_TABLE_02":  aaaaa/********@vm_dba TABLES=tab_owner.bbbbb_XBRL dumpfile=bbbbb_XBRL_OCT17_2.DMP logfile=import_vmdb
    bbbbb_XBRL_Oct29.log DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    *KUP-11007: conversion error loading table "tab_owner"."bbbbb_XBRL"*
    *ORA-01403: no data found*
    ORA-31693: Table data object "tab_owner"."bbbbb_XBRL" failed to load/unload and is being skipped due to error:
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-26062: Can not continue from previous errors.
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Job "aaaaa"."SYS_IMPORT_TABLE_02" completed with 1 error(s) at 18:42:26

  • Exporting and importing just table definitions

    Hi,
    I have a production database that has a huge amount of data in it. I was asked to set up a test database based on the exact same schema as the live database. When I tried to do an export (from live) and import (to test) with the parameters rows=N and compress=y, the data files in the target (test) database still grow enormously, presumably because of the huge number of extents already allocated to the tables in the live database. My test database, of course, has limited hard-disk space.
    Is there a way to export and import the table definitions without having the target database experiencing a huge growth in the size of the tablespace?
    Thanks,
    Chris.

    If an export with compress=n is still creating initial extents that are too large, you can still build with the import file, but it will take a little work.
    run imp with indexfile=somefile.sql
    when imp is finished, edit somefile.sql by:
    1. remove all the REM statements.
    2. remove all the storage clauses (tables and indexes)
    Make sure your tablespaces have a small (say 1k) default initial extent.
    run imp again with rows=n
    All your tables and indexes will be created with the default tablespace initial extent.
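    For illustration, a rough sketch of that sequence (the file, user, and tablespace names are placeholders; the ALTER TABLESPACE form shown applies to dictionary-managed tablespaces):

    imp system/****** FILE=prod_full.dmp FULL=Y INDEXFILE=somefile.sql
    -- edit somefile.sql: remove the REM statements and every STORAGE (...) clause
    ALTER TABLESPACE users DEFAULT STORAGE (INITIAL 1K NEXT 1K);
    imp system/****** FILE=prod_full.dmp FULL=Y ROWS=N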

  • Internal table export and import in ECC 5.0 version

    Hi friends,
    I am trying to export and import internal table from one program to other program.
    The below… export and import commands are not working when I run the program in background (using SUBMIT zxxxx via JOB name NUMBER number…..)
    EXPORT ITAB TO MEMORY id 'ZMATERIAL_CREATE'.
    IMPORT ItAB FROM MEMORY ID 'ZMATERIAL_CREATE'.
    Normally it should work. Since it's not working, I am trying another alternative,
    i.e EXPORT (ptab) INTERNAL TABLE itab.
    My sap version is ECC 5.0….
    For your information, here I am forwarding sap help. Pls have a look and explain how to declare ptab internal table.
    Extract from SAP help:
    In the dynamic case the parameter list is specified in an index table ptab with two columns. These columns can have any name and have to be of the type "character". In the first column of ptab, you have to specify the names of the parameters and in the second column the data objects. If the second column is initial, then the name of the parameter in the first column has to match the name of a data object. The data object is then stored under its name in the cluster. If the first column of ptab is initial, an uncatchable exception will be raised.
    Outside of classes you can also use a single-column internal table for parameter_list for the dynamic form. In doing so, all data objects are implicitly stored under their name in the data cluster.
    My internal table has around 45 columns.
    pls help me.
    Thanks in advance
    raghunath

    The export/import should work the way you are using it. Just make sure you are using the same memory ID and make sure it's unique - meaning you are using it only for this itab purpose and not overwriting it with other values. Check that itab is not initial before you export it in program 1 - then import it in program 2 with the same memory ID... also check the case; I am not sure if it's case sensitive...
    Here is how you use the second variant...
    Two fields with two different identifications "P1" and "P2" with the dynamic variant of the cluster definition are written to the ABAP Memory. After execution of the statement IMPORT, the contents of the fields text1 and text2 are interchanged.
    TYPES:
      BEGIN OF tab_type,
        para TYPE string,
        dobj TYPE string,
      END OF tab_type.
    DATA:
      id    TYPE c LENGTH 10 VALUE 'TEXTS',
      text1 TYPE string VALUE `IKE`,
      text2 TYPE string VALUE `TINA`,
      line  TYPE tab_type,
      itab  TYPE STANDARD TABLE OF tab_type.
    line-para = 'P1'.
    line-dobj = 'TEXT1'.
    APPEND line TO itab.
    line-para = 'P2'.
    line-dobj = 'TEXT2'.
    APPEND line TO itab.
    EXPORT (itab)     TO MEMORY ID id.
    IMPORT p1 = text2
           p2 = text1 FROM MEMORY ID id.

  • Export and import of data not table and data ????

    Hi brothers and sisters,
    Please, I have a question about export and import of data in Oracle Forms.
    I have created two buttons, one for export; its trigger is like this:
    declare
    alrt number;
    v_directory varchar2(200) := 'c:\backup'; --- adjust this if C: is not the drive where Windows is installed.
    path varchar2(100):='back_up'
    ||to_char(sysdate,'dd_mm_yyyy-hh24_mi_ss');
    v_exp varchar2(200) := 'exp hamada/hamada2013@orcl file = '
    ||v_directory
    ||'\'
    ||path
    ||'.dmp';
    begin
    host(v_exp);
    alrt:=show_alert('MSG');
    end;
    and the import button is like this:
    declare
    alrt number ;
    v_ixp varchar2(200) := 'imp userid=hamada/hamada2013@orcl file =c:\backup2\back.dmp full=yes';
    begin
    host(v_ixp);
    alrt:=show_alert('MSG');
    ref_list;
    end;
    I have just one table, "phone".
    This code works, but it exports not only the data but also the creation of the table. For example, I do an export and everything is fine; I find the .dmp file in the backup folder. But when I delete all the data from my app and try to import this .dmp, it shows me an error telling me that the table "phone" is already created.
    So please help: I want to export just the data of "phone", not the table creation plus the data. Or how can I import just the data from this .dmp?

    Pl post OS and database versions.
    You will likely need to use the IGNORE flag - http://docs.oracle.com/cd/E11882_01/server.112/e22490/original_import.htm#sthref2201
    Imp utility (by default) will try and create the table first. Since the table already exists, imp reports an error. Use the IGNORE flag in your imp command to not report such errors.
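    For illustration, a rough sketch of your import string with that flag added (the connection string and file path are taken from your own example):

    v_ixp varchar2(200) := 'imp userid=hamada/hamada2013@orcl file=c:\backup2\back.dmp full=yes ignore=y';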
    HTH
    Srini
