Datapump - expdp.open creates tables in schema

Hi,
I am using Data Pump in Oracle 10g to archive old partitions from the main schema to another schema.
I notice that when dbms_datapump.open is called, a new table is created by dbms_datapump for internal purposes. This is confirmed in the Oracle documentation:
http://docs.oracle.com/cd/B12037_01/appdev.101/b10802/d_datpmp.htm#997383
Usage Notes
When the job is created, a master table is created for the job under the caller's schema within the caller's default tablespace. A handle referencing the job is returned that attaches the current session to the job. Once attached, the handle remains valid until either an explicit or implicit detach occurs. The handle is only valid in the caller's session. Other handles can be attached to the same job from a different session by using the ATTACH procedure.
Does anybody know whether this table can be removed by a "cleanup" dbms_datapump call, or whether it has to be cleaned up manually?

I can confirm that this is what we do:
v_job_handle := DBMS_DATAPUMP.OPEN('EXPORT', 'TABLE', NULL, v_job_name);
-- Set parallelism to 1 and add the dump file
DBMS_DATAPUMP.SET_PARALLEL(v_job_handle, 1);
DBMS_DATAPUMP.ADD_FILE(v_job_handle, v_job_name || '_' || v_partition.partition_name || '.dmp', 'PARTITION_DUMPS');
-- Apply filters to process only a certain partition of the table
DBMS_DATAPUMP.METADATA_FILTER(v_job_handle, 'SCHEMA_EXPR', 'IN (''SIS_MAIN'')');
DBMS_DATAPUMP.METADATA_FILTER(v_job_handle, 'NAME_EXPR', 'LIKE ''' || t_archive_list(i) || '''');
DBMS_DATAPUMP.DATA_FILTER(v_job_handle, 'PARTITION_EXPR', 'IN (''' || v_partition.partition_name || ''')', t_archive_list(i), 'SIS_MAIN');
-- Use statistics (rather than blocks) to estimate time
DBMS_DATAPUMP.SET_PARAMETER(v_job_handle, 'ESTIMATE', 'STATISTICS');
-- Start the job. An exception is raised if something is not set up properly.
DBMS_DATAPUMP.START_JOB(v_job_handle);
-- The export job should now be running. Loop until it has finished.
v_percent := 0;
v_job_state := 'UNDEFINED';
WHILE (v_job_state != 'COMPLETED') AND (v_job_state != 'STOPPED') LOOP
    DBMS_DATAPUMP.GET_STATUS(v_job_handle,
        DBMS_DATAPUMP.ku$_status_job_error + DBMS_DATAPUMP.ku$_status_job_status + DBMS_DATAPUMP.ku$_status_wip,
        -1, v_job_state, sts);
    js := sts.job_status;
    -- As the percentage complete changes, record the new value.
    IF js.percent_done != v_percent THEN
        v_percent := js.percent_done;
    END IF;
END LOOP;
-- When the job finishes, log the status before detaching from the job.
PRC_LOG(f1, t_archive_list(i) || ': Export complete with status: ' || v_job_state);
-- DBMS_DATAPUMP.DETACH(v_job_handle);
-- Use STOP_JOB instead of DETACH, otherwise the master table created when OPEN is called is not removed.
DBMS_DATAPUMP.STOP_JOB(v_job_handle, 0, 0);
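If you want to double-check that no master table has been left behind once the job has ended (for example after a failed run), a quick query against the Data Pump job view helps. A minimal sketch, assuming you have access to the DBA views:
-- Orphaned master tables show up as NOT RUNNING jobs with no attached sessions
SELECT owner_name, job_name, state, attached_sessions
FROM   dba_datapump_jobs
WHERE  state = 'NOT RUNNING';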

Similar Messages

  • Error while creating tables in schema since trigger already enabled

    Hi All,
    I am trying to log newly created objects in userA, but I am not able to insert the "ora_dict_obj_type" value into the target table.
    Please see my error and script below:
    Error:
    Error report:
    SQL Error: ORA-00604: error occurred at recursive SQL level 1
    ORA-01858: a non-numeric character was found where a numeric was expected
    ORA-06512: at line 2
    00604. 00000 - "error occurred at recursive SQL level %s"
    *Cause:    An error occurred while processing a recursive SQL statement
    (a statement applying to internal dictionary tables).
    *Action:   If the situation described in the next error on the stack
    can be corrected, do so; otherwise contact Oracle Support.
    Script: I need to insert newly created objects into this table. The object type is important because, based on the object type, I am going to create another trigger that grants access automatically to other users.
    create table access_table
    (owner_name varchar2(30),
    object_name varchar2(30),
    object_type varchar2(30),
    created_time timestamp default current_timestamp);
    create or replace
    trigger access_trigger
    after create on schema
    begin
    insert into access_table values
    (ora_dict_obj_owner,
    ora_dict_obj_name,
    ora_dict_obj_type,
    current_timestamp);
    end access_trigger;
    Thanks
    Edited by: 983419 on Feb 9, 2013 7:20 AM

    insert into access_table values
    (ora_dict_obj_owner, --VARCHAR(30)
    ora_dict_obj_name, --VARCHAR(30)
    ora_dict_obj_type, --VARCHAR(20)
    current_timestamp --"TIMESTAMP WITH TIME ZONE"
    );
    The Oracle CURRENT_TIMESTAMP function returns the current timestamp with time zone for the current session's time zone.
    --example
    SQL> select CURRENT_TIMESTAMP from dual;
    CURRENT_TIMESTAMP
    09-FEB-13 02.53.33.753000 PM 03:00
    You are inserting CURRENT_TIMESTAMP (a TIMESTAMP WITH TIME ZONE value) into a column with the TIMESTAMP datatype.
    So what? Can you please work through what you said and post it here?
    Try this:
    create table access_table
    (owner_name varchar2(30),
    object_name varchar2(30),
    object_type varchar2(30),
    created_time TIMESTAMP WITH TIME ZONE default CURRENT_TIMESTAMP);
    NO NEED.
    Check this -
    ranit@XE11GR2>> create table access_table
      2  (owner_name varchar2(30),
      3  object_name varchar2(30),
      4  object_type varchar2(30),
      5  created_time timestamp default current_timestamp);
    Table created.
    Elapsed: 00:00:00.07
    ranit@XE11GR2>> insert into access_table(owner_name,object_name,object_type,created_time)
      2  values('rr','bb','xx',current_timestamp); -- "CURRENT_TIMESTAMP with TIME ZONE inserted"
    1 row created.
    Elapsed: 00:00:00.00
    ranit@XE11GR2>> insert into access_table(owner_name,object_name,object_type)
      2  values('rr2','bb2','xx2');
    1 row created.
    Elapsed: 00:00:00.01
    ranit@XE11GR2>> select *
      2  from access_table;
    OWNER_NAME                     OBJECT_NAME                    OBJECT_TYPE                    CREATED_TIME
    rr                             bb                             xx                             09-FEB-13 01.47.56.490000 PM
    rr2                            bb2                            xx2                            09-FEB-13 01.48.20.187000 PM
    Elapsed: 00:00:00.01
    Edited by: ranit B on Feb 10, 2013 3:20 AM
    -- code added

  • Problems on creating table under schema for BPM Demo project

    hi,
    I do not have much experience with database concepts, but I can create, insert and modify tables and data using SQL queries in Oracle Database 10g Express Edition.
    The reason behind this query is that I want to create the database tables under the "quote" schema, which are necessary to store and retrieve data for my BPM demo project.
    Two important SQL script files, create_user.sql and quote.sql, are used to create the database tables under the "quote" schema and are available in e:/bpm/sql.
    I could create the "quote" schema with this command on the command line:
    >cd e:/bpm/sql
    >sqlplus sys/welcome as sysdba @create_user.sql quote quote
    but when creating the database tables under the "quote" schema, I got an "SP2-0734: unknown command" error.
    On the command line I ran this to create the tables:
    >sqlplus quote quote @quote.sql
    Can anyone familiar with database query concepts help me create the tables in the "quote" schema?
    thanks in advance

    Hello,
    Please find enclosed a complete chapter about the CREATE TABLE statement:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7002.htm#i2095331
    At the end there are many examples.
    Hope this helps.
    Best regards,
    Jean-Valentin

  • Datapump API: Import all tables in schema

    Hi,
    How can I import all tables using a wildcard in the Data Pump API?
    Thanks in advance,
    tensai

    _tensai_ wrote:
    Thanks for the links, but I already know them...
    My problem is that I couldn't find an example which shows how to perform an import via the API which imports all tables, but nothing else.
    Can someone please help me with a code example?
    I'm not sure what you mean by "imports all tables, but nothing else". It could mean that you only want to import the tables, but not the data, and/or not the statistics, etc.
    Using the samples provided in the manuals:
    DECLARE
      ind NUMBER;              -- Loop index
      h1 NUMBER;               -- Data Pump job handle
      percent_done NUMBER;     -- Percentage of job complete
      job_state VARCHAR2(30);  -- To keep track of job state
      le ku$_LogEntry;         -- For WIP and error messages
      js ku$_JobStatus;        -- The job status from get_status
      jd ku$_JobDesc;          -- The job description from get_status
      sts ku$_Status;          -- The status object returned by get_status
      spos NUMBER;             -- String starting position
      slen NUMBER;             -- String length for output
    BEGIN
    -- Create a (user-named) Data Pump job to do a "schema" import
      h1 := DBMS_DATAPUMP.OPEN('IMPORT','SCHEMA',NULL,'EXAMPLE8');
    -- Specify the single dump file for the job (using the handle just returned)
    -- and directory object, which must already be defined and accessible
    -- to the user running this procedure. This is the dump file created by
    -- the export operation in the first example.
      DBMS_DATAPUMP.ADD_FILE(h1,'example1.dmp','DATA_PUMP_DIR');
    -- A metadata remap will map all schema objects from one schema to another.
      DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA','RANDOLF','RANDOLF2');
    -- Include and exclude
      dbms_datapump.metadata_filter(h1,'INCLUDE_PATH_LIST','''TABLE''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/C%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/F%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/G%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/I%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/M%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/P%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/R%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/TR%''');
      dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/STAT%''');
    -- no data please
      DBMS_DATAPUMP.DATA_FILTER(h1, 'INCLUDE_ROWS', 0);
    -- If a table already exists in the destination schema, skip it (leave
    -- the preexisting table alone). This is the default, but it does not hurt
    -- to specify it explicitly.
      DBMS_DATAPUMP.SET_PARAMETER(h1,'TABLE_EXISTS_ACTION','SKIP');
    -- Start the job. An exception is returned if something is not set up properly.
      DBMS_DATAPUMP.START_JOB(h1);
    -- The import job should now be running. In the following loop, the job is
    -- monitored until it completes. In the meantime, progress information is
    -- displayed. Note: this is identical to the export example.
      percent_done := 0;
      job_state := 'UNDEFINED';
      while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
        dbms_datapump.get_status(h1,
               dbms_datapump.ku$_status_job_error +
               dbms_datapump.ku$_status_job_status +
               dbms_datapump.ku$_status_wip,-1,job_state,sts);
        js := sts.job_status;
    -- If the percentage done changed, display the new value.
         if js.percent_done != percent_done
        then
          dbms_output.put_line('*** Job percent done = ' ||
                               to_char(js.percent_done));
          percent_done := js.percent_done;
        end if;
    -- If any work-in-progress (WIP) or Error messages were received for the job,
    -- display them.
           if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
        then
          le := sts.wip;
        else
          if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
          then
            le := sts.error;
          else
            le := null;
          end if;
        end if;
        if le is not null
        then
          ind := le.FIRST;
          while ind is not null loop
            dbms_output.put_line(le(ind).LogText);
            ind := le.NEXT(ind);
          end loop;
        end if;
      end loop;
    -- Indicate that the job finished and gracefully detach from it.
      dbms_output.put_line('Job has completed');
      dbms_output.put_line('Final job state = ' || job_state);
      dbms_datapump.detach(h1);
    exception
      when others then
        dbms_output.put_line('Exception in Data Pump job');
        dbms_datapump.get_status(h1,dbms_datapump.ku$_status_job_error,0,
                                  job_state,sts);
        if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
        then
          le := sts.error;
          if le is not null
          then
            ind := le.FIRST;
            while ind is not null loop
              spos := 1;
              slen := length(le(ind).LogText);
              if slen > 255
              then
                slen := 255;
              end if;
              while slen > 0 loop
                dbms_output.put_line(substr(le(ind).LogText,spos,slen));
                spos := spos + 255;
                slen := length(le(ind).LogText) + 1 - spos;
              end loop;
              ind := le.NEXT(ind);
            end loop;
          end if;
        end if;
        -- dbms_datapump.stop_job(h1);
        dbms_datapump.detach(h1);
    END;
    /
    This should import nothing but the tables (excluding the data and the table statistics) from a schema export (including a remapping, shown here). You can play around with the EXCLUDE_PATH_EXPR expressions; check the serveroutput generated for possible values to use in EXCLUDE_PATH_EXPR.
    Use the DBMS_DATAPUMP.DATA_FILTER procedure if you want to exclude the data.
    For more samples, refer to the documentation:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_api.htm#i1006925
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • How to create table from one to another schema?

    Hi,
    There are two schemas, A and B. Schema A wants to create a table in schema B. Which privilege do we need to grant, and how do we create the table?
    thanks in advance
    Thanks,

    user2017273 wrote:
    Hi,
    There are two schemas, A and B. Schema A wants to create a table in schema B. Which privilege do we need to grant, and how do we create the table?
    thanks in advance
    Thanks,
    If you grant CREATE ANY TABLE to A, then user A can create tables in any schema. Alternatively, you can create a stored procedure in schema B that creates the table, and GRANT EXECUTE on that procedure to A.
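    A minimal sketch of the second approach (the procedure and table names here are made up for illustration); the procedure runs with B's rights, so A never needs CREATE ANY TABLE. Note that B needs the CREATE TABLE privilege granted directly, not only through a role:
    -- Connected as B:
    CREATE OR REPLACE PROCEDURE create_b_table (p_ddl IN VARCHAR2) AUTHID DEFINER AS
    BEGIN
      EXECUTE IMMEDIATE p_ddl;  -- runs with B's privileges, so the table is owned by B
    END;
    /
    GRANT EXECUTE ON create_b_table TO a;
    -- Connected as A:
    BEGIN
      b.create_b_table('CREATE TABLE demo_tab (id NUMBER)');
    END;
    /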

  • Problem create table

    I am a beginner in 10g. My question is: if I create a table at the SQL command line, can I open this table in the Database Home Page, and how can I insert rows from the Home Page into the table I created at the SQL command line? Furthermore, is there any tutorial for 10g?
    khawaja

    Welcome!
    Yes, you can open the created table in the web front end. Log in to the Home Page, go to SQL Workshop -> Object Browser, and select your table.
    If you need to insert a record, click Data -> Insert Row.
    Regards

  • How to create table for XML schema-based Interface form

    Hi All,
    Using transaction SFP to create an XML schema-based interface form, how can I create a defined table so that it is listed in the "Data View"?
    Just like with an ABAP Dictionary-based interface form, where we can drag a defined table from the data view onto the panel.

    Hi,
    Just follow these steps:
    1. Create interactive form UI element in your view.
    2. Now provide Datasource and PDFSOURCE to it in form properties.
    3. Now give a template name prefix with 'Z' or 'Y'.
    4. Double click on it. It will prompt for interface name.
    5. Provide interface name prefixed with 'Z' or 'Y'.
    6. Click on Context button in the Pop up window and provide the node you have selected as DATASOURCE.
    7. Click ok and it will open the form designer.
    8. In this way you can create an XML schema-based form.
    9. Activate the interface and design the form providing layout type and other details.
    Hope it will help.
    Regards,
    Vaibhav

  • Expdp fail and create table SYS_EXPORT_SCHEMA_20

    Hi Gurus
    I am using Oracle 10.2.0.3 in AIX env
    My database size is around 1600 GB. Sometimes my expdp fails and creates tables like SYS_EXPORT_SCHEMA_20 and SYS_EXPORT_SCHEMA_05. As I run expdp as the SYSTEM user, I notice that it creates this type of table in the SYSTEM tablespace, and each time it consumes around 5 GB of space. My SYSTEM tablespace is now 68 GB.
    Can I drop those tables? If I drop them, will it cause any problems? This is my production database.
    Regards
    Rabi

    user13134974 wrote:
    Hi Gurus
    I am using Oracle 10.2.0.3 in AIX env
    Regards
    Rabi
    Those tables you mention, SYS_EXPORT_SCHEMA_nn, are the Data Pump master tables used for Data Pump jobs; their purpose is to hold the details of the job.
    Once the job has finished, the table should be dropped, but in case of a job failure the table remains, so every new Data Pump job must create a new SYS_EXPORT_SCHEMA_nn table, incrementing nn relative to the last master table left behind by a failed job.
    Cleaning up those tables can be done with the dbms_datapump STOP_JOB procedure; check the docs for details:
    http://download.oracle.com/docs/cd/B12037_01/server.101/b10825/dp_export.htm
    You can also check My Oracle Support for examples and instructions on cleaning your database of those dismissed master tables:
    How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS ? [ID 336014.1]
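    A minimal sketch of that cleanup (the job name and owner are simply the ones mentioned in the post; check DBA_DATAPUMP_JOBS for the real values first):
    DECLARE
      h NUMBER;
    BEGIN
      -- Attach to the orphaned job, then stop it and drop its master table (keep_master => 0)
      h := DBMS_DATAPUMP.ATTACH(job_name => 'SYS_EXPORT_SCHEMA_20', job_owner => 'SYSTEM');
      DBMS_DATAPUMP.STOP_JOB(h, immediate => 1, keep_master => 0);
    END;
    /
    If the job can no longer be attached to, the My Oracle Support note above also covers simply dropping the leftover SYS_EXPORT_SCHEMA_nn tables.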

  • Exporting tables from schema by expdp

    I created user test with read/write rights on directory mydir (at the OS level). There are 1000 tables in my test user schema, and I want to export to mydir (using expdp) those tables whose names do not contain '9'.
    I tried the attempts below, which did not work:
    $expdp userid=test/test directory=mydir dumpfile=tables_not9.dat query=tab:"where tname not like '%9%'"
    $expdp userid=tet/test directory=mydir dumpfile=tables_not9.da schema query=user_tables:"where table_name not like '%9%'"
    Please guide me to the correct solution.

    915415 wrote:
    I created user test with read/write rights on directory mydir (at the OS level). There are 1000 tables in my test user schema, and I want to export to mydir (using expdp) those tables whose names do not contain '9'.
    I tried the attempts below, which did not work:
    $expdp userid=test/test directory=mydir dumpfile=tables_not9.dat query=tab:"where tname not like '%9%'"
    $expdp userid=tet/test directory=mydir dumpfile=tables_not9.da schema query=user_tables:"where table_name not like '%9%'"
    Please guide me to the correct solution.
    See below; this exports all the objects of schema SCOTT except the EMP table:
    C:\Users\bn2676>expdp system/manager directory=data_pump_dir dumpfile=exp_scott2.dmp logfile=exp_scott2.log schemas=scott EXCLUDE=TABLE:\"LIKE \'EMP%\'\"
    Export: Release 11.2.0.1.0 - Production on Tue Feb 21 09:16:44 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/******** directory=data_pump_dir dumpfile=exp_scott2.dmp logfile=exp_scott2.log schemas=scott EXCLUDE=TABLE:"LIKE \'EMP%\'"
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 128 KB
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "SCOTT"."DEPT"                              5.937 KB       4 rows
    . . exported "SCOTT"."SALGRADE"                          5.867 KB       5 rows
    . . exported "SCOTT"."BONUS"                                 0 KB       0 rows
    Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
    Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
      C:\ORACLE\ADMIN\ORCL\DPDUMP\EXP_SCOTT2.DMP
    Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 09:17:04
    C:\Users\bn2676>
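    Applied back to the original question, the same EXCLUDE technique should work. To avoid the shell-escaping headaches visible above, it is usually easiest to put the filter in a parameter file; a hedged sketch (file names made up, using the directory and credentials from the post):
    # tables_not9.par
    directory=mydir
    dumpfile=tables_not9.dmp
    logfile=tables_not9.log
    schemas=test
    exclude=TABLE:"LIKE '%9%'"
    $ expdp test/test parfile=tables_not9.par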

  • XML Schema Collection (SQL Server 2012): How to create an XML Schema Collection that can be used to Validate a field name (column title) of an existing dbo Table of a Database in SSMS2012?

    Hi all,
    I used the following code to create a new Database (ScottChangDB) and a new Table (marvel) in my SQL Server 2012 Management Studio (SSMS2012) successfully:
    -- ScottChangDB.sql saved in C://Documents/SQL Server XQuery_MacLochlainns Weblog_code
    -- 14 April 2015 09:15 AM
    USE master
    IF EXISTS
    (SELECT 1
    FROM sys.databases
    WHERE name = 'ScottChangDB')
    DROP DATABASE ScottChangDB
    GO
    CREATE DATABASE ScottChangDB
    GO
    USE ScottChangDB
    CREATE TABLE [dbo].[marvel] (
    [avenger_name] [char] (30) NULL, [ID] INT NULL)
    INSERT INTO marvel
    (avenger_name,ID)
    VALUES
    ('Hulk', 1),
    ('Iron Man', 2),
    ('Black Widow', 3),
    ('Thor', 4),
    ('Captain America', 5),
    ('Hawkeye', 6),
    ('Winter Soldier', 7),
    ('Iron Patriot', 8);
    SELECT avenger_name FROM marvel ORDER BY ID For XML PATH('')
    DECLARE @x XML
    SELECT @x=(SELECT avenger_name FROM marvel ORDER BY ID FOR XML PATH('Marvel'))--,ROOT('root'))
    SELECT
    person.value('Marvel[4]', 'varchar(100)') AS NAME
    FROM @x.nodes('.') AS Tbl(person)
    ORDER BY NAME DESC
    --Or if you want the completed element
    SELECT @x.query('/Marvel[4]/avenger_name')
    DROP TABLE [marvel]
    Now I am trying to create my first XML Schema Collection to do validation on the field names (column titles) of the "marvel" table. I have studied Chapter 4, XML SCHEMA COLLECTIONS, of the book "Pro SQL Server 2008 XML" written by Michael Coles (published by Apress) and some beginning pages of the XQuery Language Reference, SQL Server 2012 Books Online (published by Microsoft). I mimicked Coles' Listing 04-05 and wanted to execute the following first-draft SQL in my SSMS2012:
    -- Reference [Scott Chang modified Listing04-05.sql of Pro SQL Server 2008 XML by Michael Coles (Apress)]
    -- [shcColes04-05.sql saved in C:\\Documents\XML_SQL_Server2008_code_Coles_Apress]
    -- [executed: 2 April 2015 15:04 PM]
    -- shcXMLschemaTableValidate1.sql in ScottChangDB of SQL Server 2012 Management Studio (SSMS2012)
    -- saved in C:\Documents\XQuery-SQLServer2012
    -- tried to run: 15 April 2015 ??? AM
    USE ScottChangDB;
    GO
    CREATE XML SCHEMA COLLECTION dbo.ComplexTestSchemaCollection_all
    AS
    N'<?xml version="1.0"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <xsd:element name="marvel">
    <xsd:complexType>
    <xsd:all>
    <xsd:element name="avenger_name" />
    <xsd:element name="ID" />
    </xsd:all>
    </xsd:complexType>
    </xsd:element>
    </xsd:schema>';
    GO
    DECLARE @x XML (dbo.ComplexTestSchemaCollection_all);
    SET @x = N'<?xml version="1.0"?>
    <marvel>
    <avenger_name>Thor</avenger_name>
    <ID>4</ID>
    </marvel>';
    SELECT @x;
    GO
    DROP XML SCHEMA COLLECTION dbo.ComplexTestSchemaCollection_all;
    GO
    I feel this drafted SQL is very shaky and it needs the SQL Server XML experts to modify it to make it work for me. Please kindly help: examine the coding of my shcXMLschemaTableValidate1.sql and modify it to work.
    Thanks in advance,
    Scott Chang

    Hi Scott,
    2) Yes, the FOR XML PATH clause converts relational data to XML format with a specific structure for the "marvel" table. Regarding validating all the avenger_names, please see the sample below.
    DECLARE @x XML
    SELECT @x=(SELECT ID ,avenger_name FROM marvel FOR XML PATH('Marvel'))
    SELECT @x
    SELECT
    n.value('avenger_name[1]','VARCHAR(99)') avenger_name,
    n.value('ID[1]','INT') ID
    FROM @x.nodes('//Marvel') Tab(n)
    WHERE n.value('ID[1]','INT') = 1 -- specify the ID here
    --FOR XML PATH('Marvel')  --uncommented this line if you want the result as element type
    3) i. Check the XML schema content.
    --find xml schema collection
    SELECT ss.name,xsc.name collection_name FROM sys.xml_schema_collections xsc JOIN sys.schemas ss ON xsc.schema_id= ss.schema_id
    select * from sys.schemas
    --check the schema content,use the name,collection_name from the above query
    SELECT xml_schema_namespace(N'name',N'collection_name')
    3) ii. A view can be treated as a virtual table. Use a view to list the XML schema content.
    CREATE VIEW XSDContentView
    AS
    SELECT ss.name,xsc.name collection_name,cat.content
    FROM sys.xml_schema_collections xsc JOIN sys.schemas ss ON xsc.schema_id= ss.schema_id
    CROSS APPLY(
    SELECT xml_schema_namespace(ss.name,xsc.name) AS content
    ) AS cat
    WHERE xsc.name<>'sys'
    GO
    SELECT * FROM XSDContentView
    By the way, it would be appreciated if you could split your questions into separate posts. For any question, feel free to let me know.
    Eric Zhang
    TechNet Community Support

  • Creating a new schema or adding some more tables in existing schema ??

    Hi All,
    We have a new requirement from the client asking us to include a few new functionalities (pages; in effect, a whole new application has to be embedded into our application). By adding these new functionalities, the number of hits to our application could double.
    Given the above requirement, I would like to know whether to create a new schema or to create some new tables in the existing database for the new functionalities.
    Which way lets me handle the number of requests better: creating a new schema, or creating new tables for the new functionalities?
    I would also like to know which factors differ between creating a new schema and creating new tables for the new functionalities in the existing schema.
    FYI, we are using ColdFusion as the front end and Oracle 9i as the back end.

    This is the forum for Oracle's SQL Developer tool. You will get better answers in the Database - General forum.
    The short answer is that from a performance point of view there will be no difference. The issues are more to do with maintenance.

  • How can i create table between different servers schema

    Hi,
    Can anyone advise me how I can create a table in a remote Oracle schema?
    Thanks in advance
    Faheem Latif

    Here is what I know about remote table creation: it is impossible in Oracle, if you trust the documentation of course.
    ORA-02021: DDL operations are not allowed on a remote database
    Cause:     An attempt was made to use a DDL operation on a remote database. For example, "CREATE TABLE tablename@remotedbname ...".
    Action:     To alter the remote database structure, you must connect to the remote database with the appropriate privileges.
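    One commonly used workaround (not mentioned in the reply above, so treat this as a hedged sketch; the link name remote_db and the helper procedure are made up) is to run the DDL on the remote database itself, by calling a remote procedure over a database link:
    -- On the remote database, in the schema that should own the table:
    CREATE OR REPLACE PROCEDURE run_ddl (p_stmt IN VARCHAR2) AS
    BEGIN
      EXECUTE IMMEDIATE p_stmt;  -- executes locally on the remote database
    END;
    /
    -- On the local database:
    BEGIN
      run_ddl@remote_db('CREATE TABLE demo_remote (id NUMBER)');
    END;
    /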

  • Create tables in different database schemas using EJB 3 Entity Persistent

    Hi All,
    I would like to find out how to get the following tasks done using EJB 3.0 Java Persistence entities:
    ( i ) Create tables in different schemas, such as table STUDENT under the EDUCATION schema and table PATIENT in the HOSPITAL schema. We can then reference them in SQL as EDUCATION.STUDENT and HOSPITAL.PATIENT.
    ( ii ) Reference these tables uniquely once they are created.
    There are no pre-existing tables or naming conventions that need to be adhered to in this situation.
    I have no problem creating tables in the current schema with EJB 3.0 Java Persistence entities.
    Any suggestions would be appreciated.
    Thanks,
    Jack

    Use the schema attribute of the Table annotation:
    package javax.persistence;
    @Target({TYPE}) @Retention(RUNTIME)
    public @interface Table {
        String name() default "";
        String catalog() default "";
        String schema() default "";
        UniqueConstraint[] uniqueConstraints() default {};
    }

  • CREATE TABLE for another OWNER/SCHEMA and in another TABLESPACE?

    I am logged in as the SYSTEM user. Now I want to create a table aaa. The owner of this table should not be SYSTEM but user KARL, and the tablespace should not be SYSTEM but the (existing) tablespace tttt.
    As far as I know I can achieve this by issuing the following command:
    CREATE TABLE KARL.aaa ( a INTEGER, .......) TABLESPACE tttt;
    Regarding the TABLESPACE clause I am not sure: is it possible to allocate a table for user KARL in a tablespace which is not assigned to him?
    Furthermore I have an additional problem.
    I have a script with hundreds of CREATE TABLE + ALTER TABLE + CREATE INDEX DDL statements.
    None of them are prefixed with a schema/owner or a TABLESPACE clause.
    Can I somehow put one single instruction at the top of the script which tells Oracle
    to use
    - owner KARL as the schema/owner for all subsequent DDL statements
    - tablespace tttt as the tablespace for all subsequent DDL statements
    In MySQL there is a "use <database>" statement. Is there something similar for Oracle?
    Thank you
    Peter

    Yes... you can do that. Take for example a user A who has a secret password that you don't wish to give out, like an application schema. User B needs to make tables/objects in schema A and you want to track what user B is doing. First set up Oracle's Fine-Grained Auditing, and then grant "connect through" from user A to user B as follows:
    SQL> create user b identified by abc123
      2    quota unlimited on users;
    User created.
    SQL> grant create session to b;
    SQL> create user a identified by abc123
      2    quota unlimited on users;
    User created.
    SQL> grant create table
      2      , create session
      3     to a;
    Grant succeeded.
    SQL> alter user a grant connect through b;
    User altered.
    SQL> connect b[a]/abc123@a486
    Connected.
    SQL> show user
    USER is "A"
    SQL> create table a.my_proxy_table
      2  ( c1 number
      3  , c2 varchar2(50)
      4  , c3 date
      5  );
    Table created.
    -----
    See that I connected using the "b[a]" proxy syntax. Also notice that user B is connected by proxy as user A, illustrated by my "show user" command. The connect-through can be granted/revoked as needed without divulging A's password to B. In addition, your audit tables or XML logs will track the fact that user B created table a.my_proxy_table.
    Hope this helps,
    John
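    Coming back to the tablespace part of the original question: the statement from the question should work, under one assumption worth stating explicitly, namely that KARL has quota on the target tablespace (otherwise the CREATE fails with ORA-01950). A minimal sketch:
    -- As SYSTEM (or another DBA): give KARL quota on tablespace tttt
    ALTER USER karl QUOTA UNLIMITED ON tttt;
    -- Then the statement from the question works as written:
    CREATE TABLE karl.aaa (a INTEGER) TABLESPACE tttt;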

  • How to create table in another schema of same database

    Hi..
    I have a database DB1
    with 2 schemas/users in it,
    Usr1 and Usr2.
    I created a TEMP table in the Usr1 schema,
    then tried the following statement in the Usr2 schema:
    CREATE TABLE TEMP AS SELECT * FROM Usr1.TEMP;
    It gives this error:
    ORA-00942: table or view does not exist
    What is the reason for that?
    Thank you

    josh1612 wrote:
    What other grants do I need to give so as to replicate the primary keys also?
    That's not a matter of grants. It's the way the CREATE TABLE AS SELECT statement works. It does not copy over indexes, primary key constraints, unique constraints, foreign key constraints, etc.
    If you want to copy all that over, you would probably want to get the DDL from the original table (using the DBMS_METADATA package if you're in a recent version), modify the DDL with the new schema name, create the table, indexes, and constraints, and then do an INSERT ... SELECT to populate the data. Or do an export & import of the table from one schema to another.
    Justin
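    As for the ORA-00942 in the original post: Usr2 gets that error because it has no SELECT privilege on Usr1's table. A minimal sketch of the missing grant, using the user and table names from the question:
    -- Connected as Usr1 (or a DBA):
    GRANT SELECT ON usr1.temp TO usr2;
    -- Then, connected as Usr2, the CTAS works:
    CREATE TABLE temp AS SELECT * FROM usr1.temp;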
