Tablespace DDL

Hi,
I have to create tablespaces for a user named MCG whose data is spread across 200 tablespaces, so I need to recreate those 200 tablespaces on the new server before importing the data.
Now I want the script (i.e. the CREATE TABLESPACE... command) for each of those 200 tablespaces.
I did a full export, cancelled it in between and got the CREATE TABLESPACE statements, but the issue is that they contain the STORAGE clause, which I don't want, as we have a strict policy not to specify a storage clause when creating tablespaces on the new server.
Can anybody guide me on how to obtain the CREATE TABLESPACE script for all 200 tablespaces without the storage clause?
Thanks

Which STORAGE clause are you referring to?
You can generate the DDL statements using DBMS_METADATA, for example:
set long 100000
select dbms_metadata.get_ddl('TABLESPACE', tablespace_name) from dba_tablespaces;
Jens Petersen
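If the goal is specifically to drop the STORAGE clause, one option is a text-level strip of the generated DDL. This is only a minimal sketch, not from the thread: it assumes 10.2 or later (so REGEXP_REPLACE accepts the CLOB that GET_DDL returns) and that the unwanted text is the DEFAULT STORAGE clause of dictionary-managed tablespaces; review the spooled script before running it on the new server.
set long 100000 pagesize 0 linesize 200 trimspool on
exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);
spool create_ts.sql
select regexp_replace(
         dbms_metadata.get_ddl('TABLESPACE', tablespace_name),
         'DEFAULT STORAGE[[:space:]]*\([^)]*\)', '')
  from dba_tablespaces
 where tablespace_name not in ('SYSTEM', 'SYSAUX');
spool off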

Similar Messages

  • How to generate a create tablespace ddl?

    Hi guys
    I would like to know where in Designer I can generate DDL for JUST the tablespaces. It looks like every time I try to generate it from the DbAdmin tab I get
    this " CDS-11312 Warning: Tablespace 'MYTABLESPACE_DATA' property COMPLETE is 'N' " warning.
    Can anybody help with this?
    Thanks
    -A

    I got it. You need to set the "Complete" attribute to Yes in the Property Palette (F4).
    Thanks

  • View create tablespace DDL?

    Is there a way to see the DDL used to create a tablespace in 9i?

    How about reading up on the same dbms_metadata?
    SQL>  select dbms_metadata.get_ddl('TABLESPACE', 'USERS') from dual;
    DBMS_METADATA.GET_DDL('TABLESPACE','USERS')
      CREATE TABLESPACE "USERS" DATAFILE
    <rest skipped for security purposes ;)>
    Gints Plivna
    http://www.gplivna.eu

  • Using Data Pump to get a copy of all the CREATE TABLESPACE DDL in a database

    I have been looking through the documentation for a way to export just the tablespace DDL.
    I was thinking I could get the DDL using the parameter file below and then do an IMPDP with SQLFILE=yyy.sql
    cat expddl.sql
    DIRECTORY=dpdata1
    JOB_NAME=EXP_JOB
    CONTENT=METADATA_ONLY
    FULL=yes
    DUMPFILE=export_${ORACLE_SID}.dmp
    LOGFILE=export_${ORACLE_SID}.log
    I would have thought you could just do an INCLUDE on the tablespace, but it does not like that.
    10.2.0.3

    I would like to know why you want the CREATE TABLESPACE DDLs. If you have to import on the same machine (or a machine with similar mount points), the import should do that automatically for you. And if you have to do it on some other machine, you can always use the SQLFILE option to generate the DDL from the export dump.
    Regards,
    Amardeep Sidhu
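    A sketch of the SQLFILE approach (names are placeholders taken from the parfile above; whether INCLUDE=TABLESPACE is accepted depends on the import mode, so if it is rejected you can simply generate the full SQLFILE and pull the CREATE TABLESPACE statements out of it):
    impdp system DIRECTORY=dpdata1 DUMPFILE=export_${ORACLE_SID}.dmp FULL=y \
          INCLUDE=TABLESPACE SQLFILE=tablespace_ddl.sql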

  • I can't import a table containing a BLOB column from one user to another user.

    1) I create two users, both with the CONNECT role, and each with its own tablespace. DDL:
    create user d2zd identified by d2zd default tablespace d2zd quota unlimited on d2zd account unlock;
    grant connect to d2zd;
    create user d3zd identified by d3zd default tablespace d3zd quota unlimited on d3zd account unlock;
    grant connect to d3zd;
    2) Then connect to Oracle as d2zd, create a table containing a BLOB column and insert data into the table.
    3) Export d2zd as follows:
    exp d2zd/d2zd file=d2zd.dmp
    4) Import into d3zd as follows:
    imp d3zd/d3zd fromuser=d2zd touser=d3zd file=d2zd.dmp
    The problem is that the table with the BLOB column can't be imported;
    it says: no privileges on tablespace d2zd.
    How can I import a table containing a BLOB column from one user to another user?

    Hi - the reason, as our friend already told you, is that a BLOB can be stored outside of the table segment, in another tablespace. This is done for performance reasons.
    So you would need quota on two tablespaces:
    the one which holds the table segment and the one which holds the BLOB segment.
    Regards
    Carl
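    A minimal sketch of the fix (assuming, as the error message suggests, that the exported DDL places the LOB segment explicitly in tablespace D2ZD): either give the importing user quota there, or pre-create the table under d3zd with the LOB redirected to the d3zd tablespace and run the import with ignore=y.
    -- run as a DBA; gives d3zd the quota the LOB segment needs
    ALTER USER d3zd QUOTA UNLIMITED ON d2zd;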

  • Autoextend on next size

    hi experts,
    I would like to know what the best tablespace "autoextend on next" size is for the performance of the DB.
    Oracle 11gR2 on Red Hat.
    users.dbf (the only tablespace for tables) is 330 GB in size.
    Daily DB growth is between 1 GB and 2 GB.
    The tablespace DDL is:
    CREATE BIGFILE TABLESPACE "USERS" DATAFILE
    'path/users01.dbf' SIZE 268435456000
    AUTOEXTEND ON NEXT 1310720 MAXSIZE 33554431M
    LOGGING ONLINE PERMANENT BLOCKSIZE 8192
    EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT NOCOMPRESS SEGMENT SPACE MANAGEMENT AUTO
    ALTER DATABASE DATAFILE
    'path/users01.dbf' RESIZE 351655690240
    Do I understand it right that the unit of "AUTOEXTEND ON NEXT 1310720" is bytes?
    This means that the autoextend increment here is 1.25 MB.
    What is the maximum autoextend size per autoextend operation? I haven't found any info on this in the documentation.
    How often should the autoextend operation happen for good performance in a DB with a lot of DML?
    Thanks in advance!

    920748 wrote:
    1) there is a SYSTEM EVENT trigger available for extent allocation?
    No.
    2) the next (maybe increasing) extent size is predictable for segments in LM tablespaces with ALLOCATION_TYPE=SYSTEM?
    Yes, it is predictable. The algorithm isn't published and potentially changes in different versions of Oracle. You can easily enough, though, create a test table, fill it with tons of data, and see the sizes of the extents. However, at the point that you want to do this, you're probably at a point where you shouldn't be using automatic extent allocation. If you want to be able to predict the size of the next extent, use uniform extents.
    Here's an example from AskTom that discusses the algorithm in Oracle 9.2.
    3) Oracle will allocate smaller extents if the requested size is not available contiguously?
    No.
    I would like to check remaining tablespace after every extent allocation
    It seems highly unlikely that you really want to do this sort of thing every time a new extent is allocated. Presumably, you know roughly how quickly your application's tables are supposed to grow. So, presumably, it's not terribly hard to ensure that you have, say, a month's worth of growth allocated as free space in the tablespace. If you keep a reasonable amount of free space around in general, you can then monitor the free space on a periodic basis (daily, maybe even hourly) via a scheduled job and then add more free space at a convenient time whenever your free space drops below whatever threshold you set (in this example, a month). That should give you plenty of time between the threshold being tripped and the system actually running short of space, and give you plenty of buffer in case you get swamped by a legitimate increase in activity.
    Justin
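    A sketch of the kind of periodic free-space check Justin describes (thresholds and scheduling are up to you; note that for autoextensible files dba_free_space only reports space inside the currently allocated file size):
    SELECT df.tablespace_name,
           ROUND(df.bytes / 1024 / 1024 / 1024, 1)         AS size_gb,
           ROUND(NVL(fs.bytes, 0) / 1024 / 1024 / 1024, 1) AS free_gb
    FROM  (SELECT tablespace_name, SUM(bytes) bytes
           FROM dba_data_files GROUP BY tablespace_name) df
    LEFT JOIN
          (SELECT tablespace_name, SUM(bytes) bytes
           FROM dba_free_space GROUP BY tablespace_name) fs
      ON df.tablespace_name = fs.tablespace_name
    ORDER BY free_gb;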

  • Peoplesoft EPM 9.1 biee certification

    Hello,
    Is BIEE 11.1.1.6 certified with PeopleSoft EPM 9.1? I know it is certified with 11.1.1.5.0.
    Thanks
    BR

    Whatever the module, the steps to create the database are exactly the same. Of course, you should take care which tablespace DDL script you are running, and maybe follow some additional configuration steps given in the supplemental installation guide that is provided with every module.
    Nicolas.

  • How to Generate DDL Statement for PSAPROLL Tablespace on Oracle9i-HP_UX

    Hi,
    I am having SAP R3 46C installed with Oracle 9i on HP_UX 11i system.
    I want to generate the required DDL statements for the PSAPROLL tablespace/rollback segments for backup/recovery purposes, before converting it into PSAPUNDO.
    It's easy to find and store the required DDL statements using OEM, but I am not able to locate the oemapp application (under the $ORACLE_HOME/bin directory) in the installation.
    The current patch level of installed oracle is
    I want to start OEM on the HP-UX system, if possible;
    otherwise,
    I want to know the alternatives for generating the required DDL statements.
    Thanks in advance.
    Regards,
    Bhavik G. Shroff

    Hi Stefan,
    On the Windows platform it is very easy to get DDL info for any database object.
    But I was not able to start OEM on the HP-UX system.
    I completed the job using this handy Oracle9i package, as follows:
    SQL> SET heading off
    SQL> SET long 10000
    SQL> SET pages 100
    SQL> SELECT dbms_metadata.get_ddl('TABLESPACE','PSAPROLL')
    FROM DBA_TABLESPACES;
    O/P:
    CREATE TABLESPACE "PSAPROLL" DATAFILE
    '/oracle/RQ1/sapdata1/roll_1/roll.data1' SIZE 681574400 REUSE ,
    '/oracle/RQ1/sapdata1/roll_2/roll.data2' SIZE 3145719808 REUSE
    LOGGING ONLINE PERMANENT BLOCKSIZE 8192
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576 SEGMENT SPACE MANAGEMENT MANUAL
    SQL> SELECT dbms_metadata.get_ddl('ROLLBACK_SEGMENT','PRS_0')
    FROM DBA_SEGMENTS;
    CREATE ROLLBACK SEGMENT "PRS_0" TABLESPACE "PSAPROLL"
    STORAGE(INITIAL 10485760 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 32765
    Also follow this useful links:
    [DDL Generation--Oracle's Answer to Save You Time and Money|http://www.dbasupport.com/oracle/ora9i/DDLgen2.shtml]
    [DDL Generation--Oracle's Answer to Save You Time and Money|http://www.dbasupport.com/oracle/ora9i/DDLgen3.shtml]
    Good.
    Thanks a lot.

  • Undo tablespace/redo logs/ DML /DDL/ truncate/ delete

    1st scenario: DELETE
    10 rows in a table
    Delete 5 rows
    5 rows in the table
    savepoint sp1
    Delete 3 rows
    2 rows in the table
    rollback to savepoint sp1
    5 rows in the table
    So all values affected by DML are recorded in the undo tablespace and remain there until a commit is issued, and the redo logs also record the DML statements. Am I right?
    2nd scenario: TRUNCATE
    10 rows in table
    savepoint sp1
    truncate
    0 rows in table
    rollback to savepoint sp1 gives this error
    ORA-01086: savepoint 'SP2' never established
    So is truncate [are all DDL statements] recorded in the undo tablespace, and is it recorded in the redo logs?
    I know that DML is not autocommitted and DDL is autocommitted.

    When you issue a delete, is the data really deleted?
    When you issue a delete, there is a before image of the data recorded to the undo area. (And that undo information itself is forwarded to the redo.) Then the data is actually deleted from the 'current' block as represented in memory.
    Therefore, the data is actually deleted, but can be recovered by rolling back until a commit occurs.
    It can also be recovered using flashback techniques, which simply rebuild from the undo.
    When you issue a truncate, is the data really deleted, or is the high water mark pointer for a data block lost?
    The data is not deleted. Therefore there is no undo record of the data 'removal' to be rolled back.
    The high water mark pointer is reset. Its old value is in the undo, but truncate is a DDL command, and it is preceded and followed by an implicit commit, voiding any potential rollback request.
    I mean you can always roll back a delete and not roll back a truncate?
    Correct - using standard techniques, deletes can be rolled back and truncates cannot.
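    A quick demo of the difference, roughly matching the two scenarios above (table and savepoint names are only illustrative):
    CREATE TABLE t (n NUMBER);
    INSERT INTO t SELECT ROWNUM FROM dual CONNECT BY LEVEL <= 10;
    COMMIT;
    -- Scenario 1: the DELETE is undone by rolling back to the savepoint.
    SAVEPOINT sp1;
    DELETE FROM t WHERE ROWNUM <= 3;
    ROLLBACK TO SAVEPOINT sp1;
    SELECT COUNT(*) FROM t;          -- 10 rows again
    -- Scenario 2: TRUNCATE commits implicitly, so the savepoint is gone.
    SAVEPOINT sp2;
    TRUNCATE TABLE t;
    ROLLBACK TO SAVEPOINT sp2;       -- raises ORA-01086: savepoint 'SP2' never established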

  • Index on Partitioned Table with Some ReadOnly Tablespaces

    We have a warehouse with fact tables range partitioned on date - daily partitions, with each month's worth of partitions put into a specific monthly tablespace. Each month, we set the prior month's tablespace to READ ONLY. So our table ends up having data in read-only and read-write tablespaces.
    We now have a change we need to make to one of the fact tables - we need to add a new column AND add an index on that column. But because we have partitions in read-only state, Oracle doesn't let us create the index, and it also doesn't let us update the local unique key (unique index).
    Is there a way we can do this without having to put the tablespaces in read-write mode? As importantly, what happens when we offline or drop some of the older tablespaces (for archiving purposes)? We need to find a way to add the index on just the read-write partitions.
    Thanks.

    Hi,
    Improvements in Oracle 10g maintain local partitioned indexes when you use partition DDL commands:
    add partition, split partition, merge partition, move partition.
    ALSO, the associated indexes NO LONGER have to be stored in the same tablespace as the table (i.e. the answer to your question).
    On Oracle 9i: local indexes are recommended on data warehouse platforms. In an OLTP system, global indexes are more common. On a data warehouse, problems can be isolated to one partition, the partitions moved, made r/o (like yours), and no local indexes need to be rebuilt.
    Regarding your issue:
    We now have a change we need to make to one of the fact tables - we need to add a new column AND add an index on that column.
    To maintain the simplicity and functionality of your DW configuration, I think you need to change the tablespaces to R/W, update the objects, then alter them back to R/O.
    fyi
    http://www.oracle.com/technology/deploy/availability/htdocs/online_ops.html
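    A rough outline of that cycle (tablespace, table and index names below are placeholders; repeat the ALTER TABLESPACE pair for each monthly tablespace involved):
    ALTER TABLESPACE fact_2007_01 READ WRITE;
    ALTER TABLESPACE fact_2007_02 READ WRITE;
    ALTER TABLE fact_sales ADD (new_col NUMBER);
    CREATE INDEX fact_sales_newcol_ix ON fact_sales (new_col) LOCAL;
    ALTER TABLESPACE fact_2007_01 READ ONLY;
    ALTER TABLESPACE fact_2007_02 READ ONLY;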

  • XML to DDL

    HI
    I'm using DBMS_METADATA.GET_SXML to retrieve the DDL as XML. It's in XML because I need a way to process each table's columns, constraints etc. individually, and XML nodes help do this. Once processed, the node(s) need to be transformed back into DDL format. There can be many variations with regard to the data content within the nodes, e.g. in the XML below "COUNTRY_ID" has a NOT NULL constraint and "COUNTRY_NAME" has no constraints; this means the XSLT has to be aware of these factors. Perhaps someone has a more efficient approach; any suggestions and tips will be helpful. Thanks
    <TABLE xmlns="http://xmlns.oracle.com/ku" version="1.0">
    <SCHEMA>HR</SCHEMA>
    <NAME>COUNTRIES</NAME>
    <RELATIONAL_TABLE>
    <COL_LIST>
    <COL_LIST_ITEM>
    <NAME>COUNTRY_ID</NAME>
    <DATATYPE>CHAR</DATATYPE>
    <LENGTH>2</LENGTH>
    <NOT_NULL>
    <NAME>COUNTRY_ID_NN</NAME>
    </NOT_NULL>
    </COL_LIST_ITEM>
              <COL_LIST_ITEM>
    <NAME>COUNTRY_NAME</NAME>
    <DATATYPE>VARCHAR2</DATATYPE>
    <LENGTH>40</LENGTH>
    </COL_LIST_ITEM>
    <PRIMARY_KEY_CONSTRAINT_LIST>
    <PRIMARY_KEY_CONSTRAINT_LIST_ITEM>
    <NAME>COUNTRY_C_ID_PK</NAME>
    <COL_LIST>
    <COL_LIST_ITEM>
    <NAME>COUNTRY_ID</NAME>
    </COL_LIST_ITEM>
    </COL_LIST>
    </PRIMARY_KEY_CONSTRAINT_LIST_ITEM>
    </PRIMARY_KEY_CONSTRAINT_LIST>
    <FOREIGN_KEY_CONSTRAINT_LIST>
    <FOREIGN_KEY_CONSTRAINT_LIST_ITEM>
    <NAME>COUNTR_REG_FK</NAME>
    <COL_LIST>
    <COL_LIST_ITEM>
    <NAME>REGION_ID</NAME>
    </COL_LIST_ITEM>
    </COL_LIST>
    <REFERENCES>
    <SCHEMA>HR</SCHEMA>
    <NAME>REGIONS</NAME>
    <COL_LIST>
    <COL_LIST_ITEM>
    <NAME>REGION_ID</NAME>
    </COL_LIST_ITEM>
    </COL_LIST>
    </REFERENCES>
    </FOREIGN_KEY_CONSTRAINT_LIST_ITEM>
    </FOREIGN_KEY_CONSTRAINT_LIST>

    Hi,
    Not sure what you're asking.
    Do you want to convert the XML back to DDL once you've made your changes?
    If so, the Metadata APIs already provide such functionality, provided the structure is still valid of course.
    For example :
    SQL> DECLARE
      2    -- handles
      3    h            number;
      4    th           number;
      5
      6    source_doc   XMLType := XMLType(dbms_metadata.get_sxml('TABLE', 'COUNTRIES', 'HR'));
      7    target_doc   XMLType;
      8
      9    ddl          clob;
    10
    11  BEGIN
    12
    13   -- this increases the length of VARCHAR2 columns by 10 :
    14   select xmlquery(
    15   'declare default element namespace "http://xmlns.oracle.com/ku"; (: :)
    16    copy $d := /TABLE
    17    modify (
    18      for $i in $d/RELATIONAL_TABLE/COL_LIST/COL_LIST_ITEM
    19      where $i/DATATYPE = "VARCHAR2"
    20      return replace value of node $i/LENGTH with xs:integer($i/LENGTH)+10
    21    )
    22    return $d'
    23    passing source_doc
    24    returning content
    25    )
    26    into target_doc
    27    from dual;
    28
    29    h := dbms_metadata.openw('TABLE');
    30    th := dbms_metadata.add_transform(h, 'SXMLDDL');
    31
    32    dbms_lob.createtemporary(ddl, false);
    33    dbms_metadata.convert(h, target_doc, ddl);
    34    dbms_metadata.close(h);
    35
    36    dbms_output.put_line(ddl);
    37    dbms_lob.freetemporary(ddl);
    38
    39  END;
    40  /
    CREATE TABLE "HR"."COUNTRIES"
       (    "COUNTRY_ID" CHAR(2) CONSTRAINT "COUNTRY_ID_NN" NOT NULL ENABLE,
            "COUNTRY_NAME" VARCHAR2(50),
            "REGION_ID" NUMBER,
            CONSTRAINT "COUNTRY_C_ID_PK" PRIMARY KEY
    ("COUNTRY_ID") ENABLE,
            CONSTRAINT "COUNTR_REG_FK" FOREIGN KEY ("REGION_ID")
             REFERENCES "HR"."REGIONS" ("REGION_ID") ENABLE
       ) ORGANIZATION INDEX NOCOMPRESS PCTFREE 10 INITRANS 2 NOLOGGING
    STORAGE( INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE
    "EXAMPLE"
      PCTTHRESHOLD 50
    PL/SQL procedure successfully completed.

  • Large DDL causes ORA-06502 in several Applications

    I Created a Table with
    CREATE TABLE large_table (
    c001 VARCHAR(2) NOT NULL
    );
    and added columns with
    ALTER TABLE large_table ADD c002 VARCHAR(2) NOT NULL;
    ...
    ALTER TABLE large_table ADD c060 VARCHAR(2) NOT NULL;
    The DDL of this table cannot be executed, e.g. in SQL Developer, SQL*Plus, ..., and it cannot be imported from an exp dump! Importing causes:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining options
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in WE8ISO8859P15 character set and AL16UTF16 NCHAR character set
    . importing RLODEVEL's objects into RLODEVEL
    . importing RLODEVEL's objects into RLODEVEL
    IMP-00017: following statement failed with ORACLE error 604:
    "CREATE TABLE "LARGE_TABLE" ("C001" VARCHAR2(2) NOT NULL ENABLE, "C002" VARC"
    "HAR2(2) NOT NULL ENABLE, "C003" VARCHAR2(2) NOT NULL ENABLE, "C004" VARCHAR"
    "2(2) NOT NULL ENABLE, "C005" VARCHAR2(2) NOT NULL ENABLE, "C006" VARCHAR2(2"
    ") NOT NULL ENABLE, "C007" VARCHAR2(2) NOT NULL ENABLE, "C008" VARCHAR2(2) N"
    "OT NULL ENABLE, "C009" VARCHAR2(2) NOT NULL ENABLE, "C010" VARCHAR2(2) NOT "
    "NULL ENABLE, "C011" VARCHAR2(2) NOT NULL ENABLE, "C012" VARCHAR2(2) NOT NUL"
    "L ENABLE, "C013" VARCHAR2(2) NOT NULL ENABLE, "C014" VARCHAR2(2) NOT NULL E"
    "NABLE, "C015" VARCHAR2(2) NOT NULL ENABLE, "C016" VARCHAR2(2) NOT NULL ENAB"
    "LE, "C017" VARCHAR2(2) NOT NULL ENABLE, "C018" VARCHAR2(2) NOT NULL ENABLE,"
    " "C019" VARCHAR2(2) NOT NULL ENABLE, "C020" VARCHAR2(2) NOT NULL ENABLE, "C"
    "021" VARCHAR2(2) NOT NULL ENABLE, "C022" VARCHAR2(2) NOT NULL ENABLE, "C023"
    "" VARCHAR2(2) NOT NULL ENABLE, "C024" VARCHAR2(2) NOT NULL ENABLE, "C025" V"
    "ARCHAR2(2) NOT NULL ENABLE, "C027" VARCHAR2(2) NOT NULL ENABLE, "C026" VARC"
    "HAR2(2) NOT NULL ENABLE, "C028" VARCHAR2(2) NOT NULL ENABLE, "C029" VARCHAR"
    "2(2) NOT NULL ENABLE, "C030" VARCHAR2(2) NOT NULL ENABLE, "C031" VARCHAR2(2"
    ") NOT NULL ENABLE, "C032" VARCHAR2(2) NOT NULL ENABLE, "C033" VARCHAR2(2) N"
    "OT NULL ENABLE, "C034" VARCHAR2(2) NOT NULL ENABLE, "C035" VARCHAR2(2) NOT "
    "NULL ENABLE, "C036" VARCHAR2(2) NOT NULL ENABLE, "C037" VARCHAR2(2) NOT NUL"
    "L ENABLE, "C038" VARCHAR2(2) NOT NULL ENABLE, "C039" VARCHAR2(2) NOT NULL E"
    "NABLE, "C040" VARCHAR2(2) NOT NULL ENABLE, "C041" VARCHAR2(2) NOT NULL ENAB"
    "LE, "C042" VARCHAR2(2) NOT NULL ENABLE, "C043" VARCHAR2(2) NOT NULL ENABLE,"
    " "C044" VARCHAR2(2) NOT NULL ENABLE, "C045" VARCHAR2(2) NOT NULL ENABLE, "C"
    "046" VARCHAR2(2) NOT NULL ENABLE, "C047" VARCHAR2(2) NOT NULL ENABLE, "C048"
    "" VARCHAR2(2) NOT NULL ENABLE, "C049" VARCHAR2(2) NOT NULL ENABLE, "C050" V"
    "ARCHAR2(2) NOT NULL ENABLE, "C051" VARCHAR2(2) NOT NULL ENABLE, "C052" VARC"
    "HAR2(2) NOT NULL ENABLE, "C053" VARCHAR2(2) NOT NULL ENABLE, "C054" VARCHAR"
    "2(2) NOT NULL ENABLE, "C055" VARCHAR2(2) NOT NULL ENABLE, "C056" VARCHAR2(2"
    ") NOT NULL ENABLE, "C057" VARCHAR2(2) NOT NULL ENABLE, "C058" VARCHAR2(2) N"
    "OT NULL ENABLE, "C059" VARCHAR2(2) NOT NULL ENABLE, "C060" VARCHAR2(2) NOT "
    "NULL ENABLE) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL"
    " 65536 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USERS"
    "" LOGGING NOCOMPRESS"
    IMP-00003: ORACLE error 604 encountered
    ORA-00604: error occurred at recursive SQL level 1
    ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ORA-06512: at line 25
    Import terminated successfully with warnings.
    I'm quite sure this problem is well known, but I did not find a hint on Metalink or anywhere else on the internet.

    Sorry, the example was not tested well :(
    But I just had to double the number of columns to get the error in SQL Developer, SQL*Plus and (still) imp. That SQL*Plus and imp have different limits on DDL size confuses me even more :-/
    CREATE TABLE LARGE_TABLE (
    C001 VARCHAR2(2) NOT NULL , C002 VARCHAR2(2) NOT NULL , C003 VARCHAR2(2) NOT NULL , C004 VARCHAR2(2) NOT NULL ,
    C005 VARCHAR2(2) NOT NULL , C006 VARCHAR2(2) NOT NULL , C007 VARCHAR2(2) NOT NULL , C008 VARCHAR2(2) NOT NULL ,
    C009 VARCHAR2(2) NOT NULL , C010 VARCHAR2(2) NOT NULL , C011 VARCHAR2(2) NOT NULL , C012 VARCHAR2(2) NOT NULL ,
    C013 VARCHAR2(2) NOT NULL , C014 VARCHAR2(2) NOT NULL , C015 VARCHAR2(2) NOT NULL , C016 VARCHAR2(2) NOT NULL ,
    C017 VARCHAR2(2) NOT NULL , C018 VARCHAR2(2) NOT NULL , C019 VARCHAR2(2) NOT NULL , C020 VARCHAR2(2) NOT NULL ,
    C021 VARCHAR2(2) NOT NULL , C022 VARCHAR2(2) NOT NULL , C023 VARCHAR2(2) NOT NULL , C024 VARCHAR2(2) NOT NULL ,
    C025 VARCHAR2(2) NOT NULL , C027 VARCHAR2(2) NOT NULL , C026 VARCHAR2(2) NOT NULL , C028 VARCHAR2(2) NOT NULL ,
    C029 VARCHAR2(2) NOT NULL , C030 VARCHAR2(2) NOT NULL , C031 VARCHAR2(2) NOT NULL , C032 VARCHAR2(2) NOT NULL ,
    C033 VARCHAR2(2) NOT NULL , C034 VARCHAR2(2) NOT NULL , C035 VARCHAR2(2) NOT NULL , C036 VARCHAR2(2) NOT NULL ,
    C037 VARCHAR2(2) NOT NULL , C038 VARCHAR2(2) NOT NULL , C039 VARCHAR2(2) NOT NULL , C040 VARCHAR2(2) NOT NULL ,
    C041 VARCHAR2(2) NOT NULL , C042 VARCHAR2(2) NOT NULL , C043 VARCHAR2(2) NOT NULL , C044 VARCHAR2(2) NOT NULL ,
    C045 VARCHAR2(2) NOT NULL , C046 VARCHAR2(2) NOT NULL , C047 VARCHAR2(2) NOT NULL , C048 VARCHAR2(2) NOT NULL ,
    C049 VARCHAR2(2) NOT NULL , C050 VARCHAR2(2) NOT NULL , C051 VARCHAR2(2) NOT NULL , C052 VARCHAR2(2) NOT NULL ,
    C053 VARCHAR2(2) NOT NULL , C054 VARCHAR2(2) NOT NULL , C055 VARCHAR2(2) NOT NULL , C056 VARCHAR2(2) NOT NULL ,
    C057 VARCHAR2(2) NOT NULL , C058 VARCHAR2(2) NOT NULL , C059 VARCHAR2(2) NOT NULL , C060 VARCHAR2(2) NOT NULL ,
    C101 VARCHAR2(2) NOT NULL , C102 VARCHAR2(2) NOT NULL , C103 VARCHAR2(2) NOT NULL , C104 VARCHAR2(2) NOT NULL ,
    C105 VARCHAR2(2) NOT NULL , C106 VARCHAR2(2) NOT NULL , C107 VARCHAR2(2) NOT NULL , C108 VARCHAR2(2) NOT NULL ,
    C109 VARCHAR2(2) NOT NULL , C110 VARCHAR2(2) NOT NULL , C111 VARCHAR2(2) NOT NULL , C112 VARCHAR2(2) NOT NULL ,
    C113 VARCHAR2(2) NOT NULL , C114 VARCHAR2(2) NOT NULL , C115 VARCHAR2(2) NOT NULL , C116 VARCHAR2(2) NOT NULL ,
    C117 VARCHAR2(2) NOT NULL , C118 VARCHAR2(2) NOT NULL , C119 VARCHAR2(2) NOT NULL , C120 VARCHAR2(2) NOT NULL ,
    C121 VARCHAR2(2) NOT NULL , C122 VARCHAR2(2) NOT NULL , C123 VARCHAR2(2) NOT NULL , C124 VARCHAR2(2) NOT NULL ,
    C125 VARCHAR2(2) NOT NULL , C127 VARCHAR2(2) NOT NULL , C126 VARCHAR2(2) NOT NULL , C128 VARCHAR2(2) NOT NULL ,
    C129 VARCHAR2(2) NOT NULL , C130 VARCHAR2(2) NOT NULL , C131 VARCHAR2(2) NOT NULL , C132 VARCHAR2(2) NOT NULL ,
    C133 VARCHAR2(2) NOT NULL , C134 VARCHAR2(2) NOT NULL , C135 VARCHAR2(2) NOT NULL , C136 VARCHAR2(2) NOT NULL ,
    C137 VARCHAR2(2) NOT NULL , C138 VARCHAR2(2) NOT NULL , C139 VARCHAR2(2) NOT NULL , C140 VARCHAR2(2) NOT NULL ,
    C141 VARCHAR2(2) NOT NULL , C142 VARCHAR2(2) NOT NULL , C143 VARCHAR2(2) NOT NULL , C144 VARCHAR2(2) NOT NULL ,
    C145 VARCHAR2(2) NOT NULL , C146 VARCHAR2(2) NOT NULL , C147 VARCHAR2(2) NOT NULL , C148 VARCHAR2(2) NOT NULL ,
    C149 VARCHAR2(2) NOT NULL , C150 VARCHAR2(2) NOT NULL , C151 VARCHAR2(2) NOT NULL , C152 VARCHAR2(2) NOT NULL ,
    C153 VARCHAR2(2) NOT NULL , C154 VARCHAR2(2) NOT NULL , C155 VARCHAR2(2) NOT NULL , C156 VARCHAR2(2) NOT NULL ,
    C157 VARCHAR2(2) NOT NULL , C158 VARCHAR2(2) NOT NULL , C159 VARCHAR2(2) NOT NULL , C160 VARCHAR2(2) NOT NULL
    )
    PCTFREE 10
    PCTUSED 40
    INITRANS 1
    MAXTRANS 255
    STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE USERS
    LOGGING
    NOCOMPRESS
    CREATE TABLE LARGE_TABLE (
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ORA-06512: at line 25
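    One thing worth checking, though it is only a guess and not something raised in the thread: a recursive ORA-06502 at a fixed line number often comes from a DDL or system event trigger in the target database with an undersized VARCHAR2 buffer. Listing such triggers is cheap:
    SELECT owner, trigger_name, trigger_type, triggering_event, status
    FROM   dba_triggers
    WHERE  base_object_type IN ('DATABASE', 'SCHEMA');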

  • What are the differences between the target tablespace and the source tablespace?

    The IMPDP command produces so many errors, but the EXAMPLE tablespace is transported to the target database successfully. It seems that the transported tablespace is no different from the source tablespace.
    Why are so many errors created?
    How can these errors be avoided?
    What are the differences between the target tablespace and the source tablespace?
    Is this Data Pump action really successful?
    The following is the log output:
    [oracle@hostp ~]$ impdp system/oracle dumpfile=user_dir:demo02.dmp tablespaces=example remap_tablespace=example:example
    Import: Release 10.2.0.1.0 - Production on Sunday, 28 September, 2008 18:08:31
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Master table "SYSTEM"."SYS_IMPORT_TABLESPACE_01" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_TABLESPACE_01": system/******** dumpfile=user_dir:demo02.dmp tablespaces=example remap_tablespace=example:example
    Processing object type TABLE_EXPORT/TABLE/TABLE
    ORA-39117: Type needed to create table is not included in this operation. Failing sql is:
    CREATE TABLE "OE"."CUSTOMERS" ("CUSTOMER_ID" NUMBER(6,0), "CUST_FIRST_NAME" VARCHAR2(20) CONSTRAINT "CUST_FNAME_NN" NOT NULL ENABLE, "CUST_LAST_NAME" VARCHAR2(20) CONSTRAINT "CUST_LNAME_NN" NOT NULL ENABLE, "CUST_ADDRESS" "OE"."CUST_ADDRESS_TYP" , "PHONE_NUMBERS" "OE"."PHONE_LIST_TYP" , "NLS_LANGUAGE" VARCHAR2(3), "NLS_TERRITORY" VARCHAR2(30), "CREDIT_LIMIT" NUMBER(9,2), "CUST_EMAIL" VARCHAR2(30), "ACCOUNT_MGR_ID" NU
    ORA-39117: Type needed to create table is not included in this operation. Failing sql is:
    ORA-39117: Type needed to create table is not included in this operation. Failing sql is:
    CREATE TABLE "IX"."ORDERS_QUEUETABLE" ("Q_NAME" VARCHAR2(30), "MSGID" RAW(16), "CORRID" VARCHAR2(128), "PRIORITY" NUMBER, "STATE" NUMBER, "DELAY" TIMESTAMP (6), "EXPIRATION" NUMBER, "TIME_MANAGER_INFO" TIMESTAMP (6), "LOCAL_ORDER_NO" NUMBER, "CHAIN_NO" NUMBER, "CSCN" NUMBER, "DSCN" NUMBER, "ENQ_TIME" TIMESTAMP (6), "ENQ_UID" VARCHAR2(30), "ENQ_TID" VARCHAR2(30), "DEQ_TIME" TIMESTAMP (6), "DEQ_UID" VARCHAR2(30), "DEQ_
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "SH"."CUSTOMERS" 9.850 MB 55500 rows
    . . imported "SH"."SUPPLEMENTARY_DEMOGRAPHICS" 695.9 KB 4500 rows
    . . imported "OE"."PRODUCT_DESCRIPTIONS" 2.379 MB 8640 rows
    . . imported "SH"."SALES":"SALES_Q4_2001" 2.257 MB 69749 rows
    . . imported "SH"."SALES":"SALES_Q1_1999" 2.070 MB 64186 rows
    . . imported "SH"."SALES":"SALES_Q3_2001" 2.129 MB 65769 rows
    . . imported "SH"."SALES":"SALES_Q1_2000" 2.011 MB 62197 rows
    . . imported "SH"."SALES":"SALES_Q1_2001" 1.964 MB 60608 rows
    . . imported "SH"."SALES":"SALES_Q2_2001" 2.050 MB 63292 rows
    . . imported "SH"."SALES":"SALES_Q3_1999" 2.166 MB 67138 rows
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."REGIONS" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."REGIONS" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."COUNTRIES" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."COUNTRIES" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."LOCATIONS" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."LOCATIONS" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."DEPARTMENTS" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."DEPARTMENTS" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."JOBS" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."JOBS" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."EMPLOYEES" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."EMPLOYEES" TO "EXAM_03"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USER1' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."JOB_HISTORY" TO "USER1"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'EXAM_03' does not exist
    Failing sql is:
    GRANT SELECT ON "HR"."JOB_HISTORY" TO "EXAM_03"
    ORA-39112: Dependent object type OBJECT_GRANT:"OE" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type OBJECT_GRANT:"OE" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    ORA-39112: Dependent object type INDEX:"OE"."CUSTOMERS_PK" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type INDEX:"OE"."CUST_ACCOUNT_MANAGER_IX" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type INDEX:"OE"."CUST_LNAME_IX" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type INDEX:"OE"."CUST_EMAIL_IX" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type INDEX:"PM"."PRINTMEDIA_PK" skipped, base object type TABLE:"PM"."PRINT_MEDIA" creation failed
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    ORA-39112: Dependent object type CONSTRAINT:"OE"."CUSTOMER_CREDIT_LIMIT_MAX" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type CONSTRAINT:"OE"."CUSTOMER_ID_MIN" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type CONSTRAINT:"OE"."CUSTOMERS_PK" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type CONSTRAINT:"PM"."PRINTMEDIA__PK" skipped, base object type TABLE:"PM"."PRINT_MEDIA" creation failed
    ORA-39112: Dependent object type CONSTRAINT:"IX"."SYS_C005192" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"OE"."CUSTOMERS_PK" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"OE"."CUST_ACCOUNT_MANAGER_IX" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"OE"."CUST_LNAME_IX" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"OE"."CUST_EMAIL_IX" creation failed
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"PM"."PRINTMEDIA_PK" creation failed
    Processing object type TABLE_EXPORT/TABLE/COMMENT
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    ORA-39112: Dependent object type REF_CONSTRAINT:"OE"."CUSTOMERS_ACCOUNT_MANAGER_FK" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39083: Object type REF_CONSTRAINT failed to create with error:
    ORA-00942: table or view does not exist
    Failing sql is:
    ALTER TABLE "OE"."ORDERS" ADD CONSTRAINT "ORDERS_CUSTOMER_ID_FK" FOREIGN KEY ("CUSTOMER_ID") REFERENCES "OE"."CUSTOMERS" ("CUSTOMER_ID") ON DELETE SET NULL ENABLE
    ORA-39112: Dependent object type REF_CONSTRAINT:"PM"."PRINTMEDIA_FK" skipped, base object type TABLE:"PM"."PRINT_MEDIA" creation failed
    Processing object type TABLE_EXPORT/TABLE/TRIGGER
    ORA-39082: Object type TRIGGER:"HR"."SECURE_EMPLOYEES" created with compilation warnings
    ORA-39082: Object type TRIGGER:"HR"."SECURE_EMPLOYEES" created with compilation warnings
    ORA-39082: Object type TRIGGER:"HR"."UPDATE_JOB_HISTORY" created with compilation warnings
    ORA-39082: Object type TRIGGER:"HR"."UPDATE_JOB_HISTORY" created with compilation warnings
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    ORA-39112: Dependent object type INDEX:"OE"."CUST_UPPER_NAME_IX" skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type INDEX:"OE"."CUST_UPPER_NAME_IX" creation failed
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"OE"."CUSTOMERS" creation failed
    ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"PM"."PRINT_MEDIA" creation failed
    ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"PM"."PRINT_MEDIA" creation failed
    ORA-39112: Dependent object type TABLE_STATISTICS skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    Processing object type TABLE_EXPORT/TABLE/INDEX/DOMAIN_INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/POST_INSTANCE/PROCACT_INSTANCE
    ORA-39112: Dependent object type PROCACT_INSTANCE skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    ORA-39083: Object type PROCACT_INSTANCE failed to create with error:
    ORA-01403: no data found
    ORA-01403: no data found
    Failing sql is:
    BEGIN
    SYS.DBMS_AQ_IMP_INTERNAL.IMPORT_SIGNATURE_TABLE('AQ$_ORDERS_QUEUETABLE_G');COMMIT; END;
    Processing object type TABLE_EXPORT/TABLE/POST_INSTANCE/PROCDEPOBJ
    ORA-39112: Dependent object type PROCDEPOBJ:"IX"."AQ$_ORDERS_QUEUETABLE_V" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    ORA-39112: Dependent object type PROCDEPOBJ:"IX"."ORDERS_QUEUE_N" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    ORA-39112: Dependent object type PROCDEPOBJ:"IX"."ORDERS_QUEUE_R" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    ORA-39112: Dependent object type PROCDEPOBJ:"IX"."AQ$_ORDERS_QUEUETABLE_E" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    ORA-39112: Dependent object type PROCDEPOBJ:"IX"."ORDERS_QUEUE" skipped, base object type TABLE:"IX"."ORDERS_QUEUETABLE" creation failed
    Job "SYSTEM"."SYS_IMPORT_TABLESPACE_01" completed with 63 error(s) at 18:09:14

    Short of trying to then reverse-engineer the objects that are in the dump file (I believe Data Pump export files contain some XML representations of DDL in addition to various binary bits, making it potentially possible to try to scan the dump file for the object definitions), I would tend to assume that the export didn't include those type definitions.
    Since it looks like you're trying to set up the sample schemas, is there a reason that you wouldn't just run the sample schema setup scripts on the destination database? Why are you using Data Pump in the first place?
    Justin
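    If the aim really is to move those schemas rather than a single tablespace, a schema-mode export would carry the object types along with the tables that need them (a sketch only; the schema list and file names are placeholders):
    expdp system SCHEMAS=OE,PM,IX,SH,HR DIRECTORY=user_dir DUMPFILE=demo_schemas.dmp
    impdp system SCHEMAS=OE,PM,IX,SH,HR DIRECTORY=user_dir DUMPFILE=demo_schemas.dmp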

  • How can i get the SQL of a tablespace from the database

    Hello All,
    I am using Oracle 11g R2. I want to get the SQL of some tablespaces in my database, in the same way I get the DDL of a table using the GET_DDL function.
    How can I get that?
    Regards,

    try this please
    select dbms_metadata.get_ddl('TABLESPACE', tb.tablespace_name) from dba_tablespaces tb;
    or
    select 'create tablespace ' || df.tablespace_name || chr(10)
    || ' datafile ''' || df.file_name || ''' size ' || df.bytes
    || decode(autoextensible,'N',null, chr(10) || ' autoextend on maxsize '
    || maxbytes)
    || chr(10)
    || 'default storage ( initial ' || initial_extent
    || decode (next_extent, null, null, ' next ' || next_extent )
    || ' minextents ' || min_extents
    || ' maxextents ' ||  decode(max_extents,'2147483645','unlimited',max_extents)
    || ') ;'
    from dba_data_files df, dba_tablespaces t
    where df.tablespace_name = t.tablespace_name;
    Mahir M. Quluzade
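    One small usage note for the DBMS_METADATA variant: GET_DDL returns a CLOB, which SQL*Plus truncates under its default LONG setting, so a typical session setup (an assumption about your client, not from the thread) is:
    set long 100000 longchunksize 100000 pagesize 0 linesize 200
    select dbms_metadata.get_ddl('TABLESPACE', tb.tablespace_name) from dba_tablespaces tb;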

  • Extract DDL for Partitioned tables in Oracle 8i

    Hi
    I am currently working on an Oracle 8i database. I need to extract the DDL of the existing tables & indexes. I don't need a schema-level DDL extract; I just need it for a couple of tables and the corresponding indexes. I am currently using PL/SQL Developer, a third-party tool which is okay for extracting DDL for non-partitioned tables, but when it comes to getting the DDL for PARTITIONED tables, it doesn't give me the partition information or the tablespace information. We don't have a license for Toad or any other tools to get the DDLs. I also don't have export/import privileges on the DB. I need freeware that can give me the DDL for the existing partitioned tables, or at least a query that I can run against the regular DBA views, which can give me the DDL along with the storage clause, the tablespace, indexes, grants & constraints.
    Thanks in Advance
    Chandra

    I also dont have the export/import privs on the DB. I need a
    free ware that can give me the DDL for the existing
    partitioned tables or atleast a query that I can run
    against the regular DBA views, which can give me the
    DDL along with Storage clause, the tablespace,
    indexes, grants & constraints.
    But you (or the owner of the tables you connect as) should have export/import privileges on your own tables (i.e. the two tables). So use the USER views instead of the DBA views:
    USER_TABLES, USER_TAB_PARTITIONS etc.
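    For the partition and storage details the third-party tool leaves out, a query along these lines works on 8i (the table name is a placeholder; DBMS_METADATA itself only arrived in 9i):
    SELECT table_name, partition_name, tablespace_name,
           initial_extent, next_extent, pct_free
    FROM   user_tab_partitions
    WHERE  table_name = 'MY_PARTITIONED_TABLE'
    ORDER  BY partition_position;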
