DDL extract

Hi ALL,
SQL> select username from dba_users where username='PREMIER';
USERNAME
PREMIER
SQL> select table_name,owner from dba_tables where table_name='PWS_API' and owner='PREMIER';
TABLE_NAME OWNER
PWS_API PREMIER
SQL> SELECT DBMS_METADATA.get_ddl ('TABLE', PWS_API, PREMIER) FROM dba_tables
2 WHERE owner = 'PREMIER'
3 and
4 table_name = 'PWS_API';
SELECT DBMS_METADATA.get_ddl ('TABLE', PWS_API, PREMIER) FROM dba_tables
ERROR at line 1:
ORA-00904: "PREMIER": invalid identifier
What's wrong? Why am I getting this error?

The object name and schema must be passed as string literals. Unquoted, PWS_API and PREMIER are parsed as column names, which is why you get ORA-00904:
SELECT DBMS_METADATA.get_ddl ('TABLE', 'PWS_API', 'PREMIER') FROM dual
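One more SQL*Plus detail worth knowing: GET_DDL returns a CLOB, which SQL*Plus truncates at the default LONG size, so the corrected query is usually run with settings like:

```sql
set long 100000
set pagesize 0
SELECT DBMS_METADATA.get_ddl ('TABLE', 'PWS_API', 'PREMIER') FROM dual;
```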

Similar Messages

  • How ddl extract works?

    Hi,
I wonder how the extract captures the DDLs; I didn't find anything about this in the PDFs.

It does capture DDL. DML is changes to data in existing objects; DDL involves the objects themselves, so it has to be tracked differently: it impacts the data dictionary, not necessarily the data within a table. Read through the ddl_setup.sql script and you'll see all the things that are checked in the trigger (which fires BEFORE DDL ON DATABASE).
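As a rough sketch of that mechanism (the real trigger is installed by ddl_setup.sql and is far more involved; the trigger name and body here are purely illustrative):

```sql
CREATE OR REPLACE TRIGGER ddl_capture_trg   -- hypothetical name
BEFORE DDL ON DATABASE
BEGIN
  -- the real trigger inspects event attributes such as ora_sysevent,
  -- ora_dict_obj_owner and ora_dict_obj_name and records the statement
  -- in GoldenGate's history tables for the Extract to pick up
  NULL;  -- placeholder body
END;
/
```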

  • DDL extraction problem

    Every time I click on the 'SQL' tab to see the DDL for an object the first two lines are:
    -- DBMS_METADATA was unable to generate SQL. Now using internal DDL generator
    -- To Test DBMS_METADATA : select dbms_metadata.get_ddl('TABLE','DUAL','SYS') from dual;
Executing the suggested test statement gives:
    ORA-31603: object "DUAL" of type TABLE not found in schema "SYS"
    However "select 1 from dual" and "select 1 from sys.dual" both work fine.

I think it is a privilege issue. I can run that statement without problems when I connect with near-DBA permissions on 9iR2 and 10g XE, but it gives the same error when I execute it as a normal user.
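If it is a privilege issue, the usual fix is the catalog role, which a non-privileged user needs before DBMS_METADATA will generate DDL for other schemas' objects (user name below is a placeholder; check your security policy first):

```sql
GRANT SELECT_CATALOG_ROLE TO some_user;
```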

  • Extract DDL for Partitioned tables in Oracle 8i

    Hi
I am currently working on an Oracle 8i database. I need to extract the DDL of some existing tables and indexes. I don't need a schema-level DDL extract; I just need it for a couple of tables and the corresponding indexes. I am currently using PL/SQL Developer, a third-party tool which is okay for extracting DDL for non-partitioned tables, but when it comes to getting the DDL for PARTITIONED tables it doesn't give me the partition information nor the tablespace information. We don't have a license for Toad or any other tools to get the DDLs. I also don't have export/import privileges on the DB. I need freeware that can give me the DDL for the existing partitioned tables, or at least a query that I can run against the regular DBA views, which can give me the DDL along with the storage clause, the tablespace, indexes, grants and constraints.
    Thanks in Advance
    Chandra

    I also dont have the export/import privs on the DB. I need a
    free ware that can give me the DDL for the existing
    partitioned tables or atleast a query that I can run
    against the regular DBA views, which can give me the
    DDL along with Storage clause, the tablespace,
indexes, grants & constraints.
But you (or the owner of the tables you connect with) should have privileges on your own tables (i.e. the two tables). So use the USER views instead of the DBA views:
    USER_TABLES, USER_TAB_PARTITIONS etc
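For example, a starting point against the USER views on 8i might be (table name is a placeholder):

```sql
SELECT partition_name,
       high_value,        -- the VALUES LESS THAN boundary (a LONG column)
       tablespace_name
FROM   user_tab_partitions
WHERE  table_name = 'MY_PART_TABLE'
ORDER  BY partition_position;
```

Similar queries against USER_INDEXES, USER_IND_PARTITIONS and USER_TAB_PRIVS_MADE cover the indexes and grants the same way.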

  • Goldengate Extracts reads slow during Table Data Archiving and Index Rebuilding Operations.

We have configured OGG on a near-DR server. The extracts are configured to work in ALO mode.
During the day the extracts work as expected and stay in sync, but during any daily maintenance task the extracts start lagging and read the same archives very slowly.
    This usually happens during Table Data Archiving (DELETE from prod tables, INSERT into history tables) and during Index Rebuilding on those tables.
    Points to be noted:
    1) The Tables on which Archiving is done and whose Indexes are rebuilt are not captured by GoldenGate Extract.
2) The extracts are configured to capture DML operations. Only INSERT and UPDATE operations are captured; DELETEs are ignored by the extracts. DDL extraction is not configured either.
    3) There is no connection to PROD or DR Database
    4) System functions normally all the time, but just during table data archiving and index rebuild it starts lagging.
Q 1. As mentioned above, even though the tables are not part of the capture, the extract lags. What are the possible reasons for the lag?
Q 2. I understand that an index rebuild is a DDL operation, yet it still induces lag into the system. How?
Q 3. We have been trying to find a way to overcome the lag, which ideally shouldn't have arisen. Is there any extract parameter or some workaround for this situation?

    Hi Nick.W,
    The amount of redo logs generated is huge. Approximately 200-250 GB in 45-60 minutes.
I agree that the extract has to parse the extra object IDs. During the day there is a redo switch every 2-3 minutes. The source is a 3-node RAC, so approximately 80-90 archives are generated in an hour.
The reason for mentioning this was that while reading those archives the extract would also be parsing extra object IDs, as we are capturing data for only 3 tables. The effect of parsing extra object IDs should therefore have been seen during the day as well: the archive size is the same, the amount of data is the same, and the number of records to be scanned is the same.
The extract slows down and reads at half the speed. If it would normally take 45-50 secs to read an archive log from normal day-to-day functioning, it takes approx 90-100 secs to read the archives from the mentioned activities.
    Regarding the 3rd point,
    a. The extract is a classic extract, the archived logs are on local file system. No ASM, NO SAN/NAS.
    b. We have added  "TRANLOGOPTIONS BUFSIZE" parameter in our extract. We'll update as soon as we see any kind of improvements.
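For reference, the relevant parts of such an Extract parameter file might look like the sketch below (group, schema and table names and the BUFSIZE value are illustrative only, not a recommendation):

```
EXTRACT ext1
-- archived-log-only (ALO) capture; BUFSIZE value is illustrative
TRANLOGOPTIONS ARCHIVEDLOGONLY
TRANLOGOPTIONS BUFSIZE 4096000
GETINSERTS
GETUPDATES
IGNOREDELETES
TABLE src.table1;
TABLE src.table2;
TABLE src.table3;
```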

  • Import DDL

    Is there a method to import a file containing DDL? I've been given a file of DDL extracted from DB2 and been asked to work with it. It has multiple schemas interspersed in it. Any suggestions or recommendations would be appreciated.
    Thanks,

DDL for DB2 we don't support yet, but we aim to start on it soon. We do allow this for SQL Server, Sybase and Microsoft Access right now.
    Barry

  • Replacing "" from scripts which has table DDLs

    HI,
    I use an oracle version of Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production.
I have a few scripts consisting of DDL for tables, and almost all of the CREATE TABLE statements use double quotes around the column names and table name, as in the example below:
    After architecture review, we came to know that we need to remove all the double quotes from all scripts for column names and table names making sure other double quote usage are not impacted in any part of those scripts. But doing it manually by going through all scripts/tables seems to be huge effort. So am seeking suggestion if there is any way we can achieve this in a simpler way.
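If the standard sticks anyway, a crude first pass can be scripted. The sketch below (file names are hypothetical) strips quotes only around all-uppercase names, which is the one safe case: "EMP" and EMP resolve to the same object, whereas unquoting a mixed-case name would change its meaning. It still cannot understand SQL, so review the result by hand.

```shell
# Strip double quotes around ALL-UPPERCASE identifiers only.
# create_tables.sql / create_tables_unquoted.sql are placeholder names.
sed -E 's/"([A-Z][A-Z0-9_#$]*)"/\1/g' create_tables.sql > create_tables_unquoted.sql
```

Restricting the pattern to uppercase names is deliberate: quoted mixed-case identifiers (which genuinely need their quotes) are left untouched.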
    Thanks

    After architecture review, we came to know that we need to remove all the double quotes from all scripts for column names and table names making sure other double quote usage are not impacted in any part of those scripts. But doing it manually by going through all scripts/tables seems to be huge effort. So am seeking suggestion if there is any way we can achieve this in a simpler way.
    Yes, there are at least two much simpler ways:
    1. Replace the architecture review team with people who use logic and reason as the basis for the standards that they adopt.
    2. Replace the 'remove all the double quotes' requirement from the standards list.
If you plan to adopt a standard then you should adopt an industry standard. If you use any other standard then there should not only be a rational justification for NOT using the industry standard (in this case Oracle's) but there should also be some rational, and demonstrable, value to the standard that you adopt. I cannot think of any justification for NOT using the Oracle standard, nor am I aware of any value whatsoever in removing all double quotes from scripts.
    In addition, in almost 30 years of working in the field I am not aware of ANY organization that has adopted such a standard.
    Oracle both provides, and uses, extensive functionality for creating and using DDL. The DBMS_METADATA package is used by Oracle for both export and import and the standard used by that package is to enclose object names in double quotes. The reasons for this standard have already been stated by others. One example is the possible use of mixed-case for object names. But mixed-case is also used for path names in DIRECTORY objects, TABLESPACE objects, database links and for extracted passwords.
Any extraction of DDL your org does will likely either use the DBMS_METADATA package or a tool that follows the Oracle standard.
So your team's 'standard' not only contravenes the industry standard but imparts a substantial penalty in terms of the resources needed to constantly identify and correct any DDL extracted from your systems.
    Here is a trivial example of using the Oracle package to extract DDL for the SCOTT.EMP table.
    select dbms_metadata.get_ddl('TABLE', 'EMP', 'SCOTT') from dual
    CREATE TABLE "SCOTT"."EMP"
    ( "EMPNO" NUMBER(4,0),
    "ENAME" VARCHAR2(10),
    "JOB" VARCHAR2(9),
    "MGR" NUMBER(4,0),
    "HIREDATE" DATE,
    "SAL" NUMBER(7,2),
    "COMM" NUMBER(7,2),
    "DEPTNO" NUMBER(2,0),
    CONSTRAINT "PK_EMP" PRIMARY KEY ("EMPNO")
    USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "USERS" ENABLE,
    CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO")
    REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE
    ) SEGMENT CREATION IMMEDIATE
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "USERS"
If this were me I would use ALL of my political capital to fight the battle against this nonsensical 'standard'. There is simply no rhyme or reason for it, it will waste a lot of valuable resources, and the cost/benefit ratio simply can't justify it.
    Those resources would be MUCH better used on other tasks.

  • Experts: ****Urgent**** help needed.

    We need to give a demo to a customer tomorrow.
    We need to store data in the Oracle 10g DB in Persian.
    Our development DB which we inserted data in English has a WE8ISO8859P1 character set. There is no data in Persian in this DB.
    We created a new 10 DB with char set AL32UTF8.
    Now we need to import data (From WE8ISO8859P1 dev DB which has data only in English) to AL32UTF8 DB.
    We will start entering in Persian only in the AL32UTF8 DB.
    All our columns in tables are just default VARCHAR2 columns.
    Will we have to change the columns of our tables in AL32UTF8 DB to NVARCHAR2?
    Will there be any problems exp/imp???
    Will there be any problems in entering data in Persian in AL32UTF8 DB?
    Will a column in WE8ISO8859P1 which is VARCHAR2(10), allow to enter 10 Persian characters in the AL32UTF8 DB?

    Thanks Experts for the answers.
I finally got hold of an AL32UTF8 DB, changed my machine's language to Farsi and started inserting records, and discovered the things which you also have mentioned.
    (1.) One Farsi character will take up 2 bytes.
    (2.) In AL32UTF8 if you declare a field as VARCHAR2(1 CHAR) it will allocate 4 bytes for that. If you just say VARCHAR2(1) only one byte will be allocated.
    (3.) You can declare a field as (maximum) VARCHAR2(4000 CHAR). Then maximum bytes you can store is 4000. i.e. only 2000 Farsi characters. If you enter 2001 Farsi characters it will give "value too large for column" error.
(4.) So, suppose you have a WE8ISO8859P1 DB and a table which has a VARCHAR2(10) field. You take the DDL script from WE8ISO8859P1 and run it just as it is in AL32UTF8. Now, if you EXPORT from WE8ISO8859P1 and IMPORT into AL32UTF8 there won't be any problem, since WE8ISO8859P1 is a single-byte character set.
(5.) Here is the problem. There is a NAME VARCHAR2(100) field in AL32UTF8. In AL32UTF8 you will only be able to enter 50 Farsi characters into it. Solutions:
    (5.1) In AL32UTF8 recreate the column as NAME VARCHAR2(100 CHAR). I did it this way for the demo.
(5.2) When creating the AL32UTF8 DB, create it with NLS_LENGTH_SEMANTICS set to CHAR. Then you can just use the DDL extracted from WE8ISO8859P1 (i.e. NAME VARCHAR2(100)) in AL32UTF8, since in AL32UTF8 100 will be treated as 100 CHAR. This way you don't have to modify the DDL scripts. The only drawback is for VARCHAR2 fields which need more than 2000 Farsi characters of input: you will have to make those VARCHAR2 fields CLOBs, i.e. any field which is VARCHAR2(2000) or greater in WE8ISO8859P1 will have to be changed to CLOB in AL32UTF8.
I find this method (5.2) the better method. But I read somewhere that it is NOT ADVISABLE to set NLS_LENGTH_SEMANTICS to CHAR at DB creation time.
Is this true? If so, why?
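The two workarounds can be written down concisely; both statements below use standard syntax, with the table and column name invented for illustration:

```sql
-- (5.1) per-column: 100 characters regardless of how many bytes each takes
CREATE TABLE customers (name VARCHAR2(100 CHAR));

-- (5.2)-style default, settable per session as well as at instance level;
-- subsequent plain VARCHAR2(100) declarations then mean 100 CHAR
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
```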

  • Schema refresh.

I have a database server that supports a number of separate applications, each application having its own schema and tablespace(s). To refresh the test database from live (or dev from test) we currently use transportable tablespaces to copy the data, and a Data Pump export/import to take care of objects held in the data dictionary (e.g. packages/procedures).
Recently we hit a problem as the developers have started to use Oracle Text. When retesting our refresh method we found that the users' stop lists do not get recreated in the database being refreshed. This appears to be true of any similar information held in the CTXSYS schema. As the current usage of this feature is quite limited, this isn't a problem at the moment: the stop lists can be recreated manually. What I would like to know is whether anyone has come across similar features where a Data Pump export/import will not transfer the objects/information.
    Chris

    Hi ,
    The main theme of the Initial Load is to Synchronize the Source and Target data.
    Before executing an initial load, disable DDL extraction and replication. DDL processing is controlled by the DDL parameter in the Extract and Replicat parameter files.
Please refer to the link below; the prerequisites are clearly stated in it.
http://docs.oracle.com/goldengate/1212/gg-winux/GWUAD/wu_initsync.htm
Check the subtopic 16.1.2, Prerequisites for Initial Load.
    Regards,
    Veera

  • Three questions on Partitioning

    11.2.0.3 on Solaris 10
    Question 1.
    For RANGE partitioning or LIST partitioning or a composite of RANGE and LIST partitioning is it recommended to have global Indexes ?
    Question 2.
    For the below table we have created an index named IDX_NVH_CRTN_CLT_DTL_P as shown below in the CREATE INDEX statement.
    This is called "Local index" because for each table partition an Index segment will be created. Right?
    Question 3.
In the table's DDL (extracted using Toad), I can see that the first partition is user-defined because of its name (NVH_CRTN_CLT_DTL_P1). The names of the next partitions seem to be system-generated names like SYS_. Why is that?
    -- Table with Range partitioning based on Month_id Column
create table NVH_CRTN_CLT_DTL_P
(
  month_id                 NUMBER(6),
  client_id                VARCHAR2(400 CHAR),
  client_subprofile_id     NUMBER(15),
  client_mgmt_country_code VARCHAR2(5 CHAR),
  product_area_code        VARCHAR2(25 CHAR),
  rolling_rvn_amt          NUMBER(20,10),
  avg_rwa_amt              NUMBER(20,10),
  group_id                 VARCHAR2(25 CHAR),
  csb_id                   VARCHAR2(400 CHAR),
  nvh_rgn_cde              VARCHAR2(8 CHAR)
)
partition by range (MONTH_ID)
(
  partition NVH_CRTN_CLT_DTL_P1 values less than (201101)
    tablespace WMNH_TBS
    pctfree 10
    initrans 1
    maxtrans 255
    storage
    (
      initial 64K
      next 1M
      minextents 1
      maxextents unlimited
    ),
  partition SYS_P4592 values less than (201102)
    tablespace WMNH_TBS
    pctfree 10
    initrans 1
    maxtrans 255
    storage
    (
      initial 64K
      next 1M
      minextents 1
      maxextents unlimited
    ),
  partition SYS_P4590 values less than (201103)
    tablespace WMNH_TBS
    pctfree 10
    initrans 1
    maxtrans 255
    storage
    (
      initial 64K
      next 1M
      minextents 1
      maxextents unlimited
    )
);
    --- Index is based on MONTH_ID and GROUP_ID columns
    create index IDX_NVH_CRTN_CLT_DTL_P on NVH_CRTN_CLT_DTL_P (MONTH_ID, GROUP_ID) local;

    >
    Question 1.
    For RANGE partitioning or LIST partitioning or a composite of RANGE and LIST partitioning is it recommended to have global Indexes ?
    >
    Contrary to previous advice global indexes can be used for all table types, not just RANGE.
    You should review the VLDB and Partitioning Guide for information about global and local indexes
    http://docs.oracle.com/cd/B28359_01/server.111/b32024/partition.htm
    See the Overview of Partitioned Indexes and the Local Partitioned Indexes sections
    >
    Question 2.
    For the below table we have created an index named IDX_NVH_CRTN_CLT_DTL_P as shown below in the CREATE INDEX statement.
    This is called "Local index" because for each table partition an Index segment will be created. Right?
    >
    Yes - this is explained in the Local Partitioned Indexes section I referred to above
    >
    Local partitioned indexes are easier to manage than other types of partitioned indexes. They also offer greater availability and are common in DSS environments. The reason for this is equipartitioning: each partition of a local index is associated with exactly one partition of the table. This enables Oracle to automatically keep the index partitions in sync with the table partitions, and makes each table-index pair independent. Any actions that make one partition's data invalid or unavailable only affect a single partition.
    Local partitioned indexes support more availability when there are partition or subpartition maintenance operations on the table. A type of index called a local nonprefixed index is very useful for historical databases. In this type of index, the partitioning is not on the left prefix of the index columns.
    >
    Note the 'equipartition' mentioned in the third sentence: one table partition => one index partition
    >
    Question 3.
In the table's DDL (extracted using Toad), I can see that the first partition is user-defined because of its name (NVH_CRTN_CLT_DTL_P1). The names of the next partitions seem to be system-generated names like SYS_. Why is that?
    >
    If you create a partitioned table and do not provide names for some or all of the partitions they will be given system-generated names. If you split a partition and don't name the new partition it will also be given a system-generated name.
    Odds are that your table was created using code similar to this example
create table NVH_CRTN_CLT_DTL_P
(
  month_id                 NUMBER(6),
  client_id                VARCHAR2(400 CHAR),
  client_subprofile_id     NUMBER(15),
  client_mgmt_country_code VARCHAR2(5 CHAR),
  product_area_code        VARCHAR2(25 CHAR),
  rolling_rvn_amt          NUMBER(20,10),
  avg_rwa_amt              NUMBER(20,10),
  group_id                 VARCHAR2(25 CHAR),
  csb_id                   VARCHAR2(400 CHAR),
  nvh_rgn_cde              VARCHAR2(8 CHAR)
)
partition by range (MONTH_ID)
(
  partition NVH_CRTN_CLT_DTL_P1 values less than (201101),
  partition values less than (201102),
  partition values less than (201103)
);
Note that the last two partition clauses do not provide names so, on my system, they were named SYS_P910 and SYS_P911.
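If the system-generated names are a nuisance, partitions can be renamed after the fact; sticking with the partition names from the question:

```sql
ALTER TABLE NVH_CRTN_CLT_DTL_P RENAME PARTITION SYS_P4592 TO NVH_CRTN_CLT_DTL_P2;
ALTER TABLE NVH_CRTN_CLT_DTL_P RENAME PARTITION SYS_P4590 TO NVH_CRTN_CLT_DTL_P3;
```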

  • Why SQL Developer (aka Raptor)?

    Do we really need another graphical IDE for Oracle? Oracle itself has JDeveloper, OEM variants.
    There are tons of very capable tools out there, some of them even free (Allround Automation PL/SQL Developer, etc)
    So, why another one? TOAD is pretty feature rich, the latest TOAD version has so many features I stopped counting.
What can it possibly offer that is different? The same schema/object browser, DDL extract, run/edit scripts, etc., the same old stuff. Heck, even HTML DB 2.0 has pretty good IDE features in its SQL Workshop.
    Thanks.

    I guess my question really boils down to the age-old "fight" between the CLI and GUI paradigms.
    IMHO, the biggest advantage CLI has over a GUI is that it can be used in ways
    that the GUI designer didn't intend. A GUI is designed to have a pre-determined
    number of menus, drop downs, commands, tabs, etc. Fixed at design-time,
    unchangeable by the end-users.
    [Unfortunately, the Firefox XUL concept doesn't seem to have caught on much outside of Mozilla]
    On the other hand, Unix commands and scripts have a different philosophy (more
    powerful than a GUI, IMHO). Expose a set of low-level tools that can
    be combined and mix-n-match in various ways.
    For example, show me the 10 largest tables in the following schema and generate
    DDL for them. If I had (hypothetical) scripts for each of the above components
    like, for instance,
    $ user_segments -sort=size schema.% | head -10 | xargs -n1 get_ddl
    Clearly, this is much more powerful than any GUI can ever be. Given the right set of low-level scripts (building blocks), and the power of Unix, the productivity of this approach is phenomenal.
    Just my 2c.
    Message was edited by:
    Vikas

  • How to extract a ddl from a dump database.

    Hi
I'm new to Oracle. I got a dump file from an Oracle database. I need to get the DDL from it to recreate the database on another Oracle server.
How do I do it? Please show me detailed steps.
    Thanks.

    If you are in UNIX, save this code to a script and try it.
# impshow2sql   Tries to convert the output of an IMP SHOW=Y command into a
#               usable SQL script.
    # To use:
    #               Start a Unix script session and import with show=Y thus:
    #               $ imp user/password file=exportfile show=Y log=/tmp/showfile
    #               You now have the SHOW=Y output in /tmp/showfile .
    #               Run this script against this file thus:
    #               $ ./impshow2sql /tmp/showfile > /tmp/imp.sql
    #               The file /tmp/imp.sql should now contain the main SQL for
    #               the IMPORT.
    #               You can edit this as required.
    # Note:         This script may split lines incorrectly for some statements
    #               so it is best to check the output.
    # CONSTRAINT "" problem:
    #               You can use this script to help get the SQL from an export
    #               then correct it if it includes bad SQL such as CONSTRAINT "".
    #               Eg:
    #                Use the steps above to get a SQL script and then
    #                $ sed -e 's/CONSTRAINT ""//' infile > outfile
    #               Now precreate all the objects and import the export file.
    # Extracting Specific Statements only:
    #     It is fairly easy to change the script to extract certain statements
    #     only. For statements you do NOT want to extract change N=1 to N=0
    #      Eg: To extract CREATE TRIGGER statements only:
    #          a) Change all lines to set N=0.
    #              Eg: / \"CREATE /    { N=0; }
    #                 This stops CREATE statements being output.
    #          b) Add a line (After the general CREATE line above):
    #               / \"CREATE TRIGGER/     { N=1; }
    #             This flags that we SHOULD output CREATE TRIGGER statements.
    #          c) Run the script as described to get CREATE TRIGGER statements.
awk 'BEGIN { prev=";" }
     / \"CREATE /  { N=1; }
     / \"ALTER /   { N=1; }
     / \"ANALYZE / { N=1; }
     / \"GRANT /   { N=1; }
     / \"COMMENT / { N=1; }
     / \"AUDIT /   { N=1; }
     N==1 { printf "\n/\n\n"; N++ }
     /\"$/ { prev=""
             if (N==0) next;
             s=index( $0, "\"" );
             if ( s!=0 ) {
                 printf "%s", substr( $0, s+1, length( substr($0,s+1) )-1 )
                 prev=substr( $0, length($0)-1, 1 );
             }
             if (length($0)<78) printf( "\n" );
           }' $*

  • Extract DDL

    how to extract the DDL of a database object in oracle 9201?
    Thankx in advance for help...

Hi,
how to extract the DDL of a database object in oracle 9201?
Well, I think GET_DDL would do the job for you:
    SQL> set pagesize 0
    SQL> set long 90000
    SQL> set feedback off
    SQL> set echo off
    SQL> select DBMS_METADATA.GET_DDL('TABLE', 'SMALL_TABLE', 'REPLICA') from dual;
    CREATE TABLE "REPLICA"."SMALL_TABLE"
    ( "ID" NUMBER NOT NULL ENABLE,
    "NAME" VARCHAR2(4000),
    "AGE" NUMBER,
    PRIMARY KEY ("ID")
    USING INDEX PCTFREE 0 INITRANS 2 MAXTRANS 255
    STORAGE(INITIAL 204800 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "INDX" ENABLE
    ) PCTFREE 15 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 204800 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
    TABLESPACE "USERS"
    SQL>
    Cheers,
    Marcello M.

  • Extract DDL from Database

    Hi all,
I am trying to extract the DDL statements for my database indexes. I am using Oracle 8i. The two options:
(1) You could run the export utility with ROWS=NO, but the output is hard to re-use because of quoted strings.
(2) DBMS_METADATA.GET_DDL, but it is only available in Oracle 9i and above, so it does me no good.
Are there any other options besides (1) that I can use in Oracle 8i? Any suggestion would be appreciated.
    Thanks,
    Ravi

You could run the export utility with ROWS=NO, but the output was hard to re-use because of quoted strings.
You haven't mentioned the OS.
You can create the indexfile (DDL file) and remove the quoted strings using a global replacement, and you would have a clean SQL file with the DDLs.
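On 8i the import utility itself can produce that file: the INDEXFILE parameter writes the index DDL (with the table DDL as comments) to a script instead of importing anything. File names below are placeholders:

```
imp scott/tiger file=exp.dmp indexfile=create_indexes.sql full=y
```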

  • Extract DDL for all objects separately !

    Hi All
I have a huge DB with 3000 tables (with indexes, triggers, synonyms), 1200 functions/procedures, 1000 views etc.
    I need to extract only metadata and put into a version control repository.
    I need to extract all objects (tables,procs,functions,views) in separate files, with the grants for each procs/func/table/view in the respective procs/func/table/view file only.
    Indexes should be in respective table file only. Each object (procs/func/table/view etc) should have DROP command in the beginning.
I tried SQL Developer, but it gives separate files for grants, indexes, triggers, drops etc., and also gives the option to generate one script for all tables, one for all views, one for all indexes and so on, so it doesn't satisfy my requirement.
    Please suggest me a good tool (preferably FREE one) to extract the metadata in said fashion.
    Thanks.

    AnkitV wrote:
    Hi All
    I have a huge DB having 3000 tables (having indexes,triggers,synonym), 1200 functions/procedures, 1000 views etc.
    I need to extract only metadata and put into a version control repository.
    I need to extract all objects (tables,procs,functions,views) in separate files, with the grants for each procs/func/table/view in the respective procs/func/table/view file only.
Indexes should be in respective table file only. Each object (procs/func/table/view etc) should have DROP command in the beginning.
The hard part is the requirement of separating the items into different files.
sb2075's answers are your best option. Write a PL/SQL script on the server to use DBMS_METADATA.GET_DDL or whatever equivalent extraction routine works for you. Loop through the objects you need from the data dictionary and write the data using UTL_FILE. The simplified logic should look something like:
    foreach table
      get the ddl
      generate filename
      open file
      write ddl
  close file
I tried SQL Developer, but it gives separate files for grants, indexes, triggers, drops etc., and also gives the option to generate one script for all tables, one for all views, one for all indexes and so on, so it doesn't satisfy my requirement.
You don't want to manually use SQL Developer to do 3000 extractions anyway. The script should do it all for you.
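A minimal sketch of that loop, assuming a DIRECTORY object named DDL_DIR already exists and the user has write access to it; error handling and the non-table object types are omitted:

```sql
DECLARE
  f    UTL_FILE.FILE_TYPE;
  ddl  CLOB;
  pos  PLS_INTEGER;
  len  PLS_INTEGER;
BEGIN
  FOR t IN (SELECT table_name FROM user_tables ORDER BY table_name) LOOP
    ddl := DBMS_METADATA.GET_DDL('TABLE', t.table_name);
    f   := UTL_FILE.FOPEN('DDL_DIR', t.table_name || '.sql', 'w', 32767);
    UTL_FILE.PUT_LINE(f, 'DROP TABLE ' || t.table_name || ';');
    -- write the CLOB in chunks, since UTL_FILE works with VARCHAR2
    pos := 1;
    len := DBMS_LOB.GETLENGTH(ddl);
    WHILE pos <= len LOOP
      UTL_FILE.PUT(f, DBMS_LOB.SUBSTR(ddl, 32000, pos));
      pos := pos + 32000;
    END LOOP;
    UTL_FILE.NEW_LINE(f);
    UTL_FILE.FCLOSE(f);
  END LOOP;
END;
/
```

The same loop pattern works for the other object types (PROCEDURE, FUNCTION, VIEW), and DBMS_METADATA.GET_DEPENDENT_DDL can pull the grants and indexes into the same file.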
