Gather stats on every table in a schema

Hi,
I have a CRM application running on a 10g R2 database. It has 5000 tables, of which less than 10% are dynamic. The automatic gather-stats job runs successfully every day at 2am.
I was monitoring the statistics views (DBA_TABLES, DBA_TAB_MODIFICATIONS, DBA_TAB_STATISTICS) and noticed that only 28 tables in the CRM schema get updated stats each day, and they are mostly the same tables. During query tuning I found that some tables have stale stats, yet they are not flagged in the STALE_STATS column of DBA_TAB_STATISTICS, even though DBA_TAB_MODIFICATIONS shows rows inserted and updated for them.
My question: is there any drawback in gathering stats for all the tables every day, regardless of whether 10% of the data has changed, while skipping tables with no rows?

Thanks for the quick response, it was helpful.
On the application vendor's recommendation, stats were disabled for some tables and optimizer parameters were changed, which prevents dynamic sampling from being used for some queries against those no-stats tables. Per the documentation, dynamic sampling should calculate stats on the fly when a query touches a table whose stats have not been gathered.
For now I am not gathering stats manually for this schema, since the automatic job is scheduled. I will verify whether the job really updates stats once 10% of the data has been loaded; if not, I may manually gather stats for just those tables.
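For anyone checking the same thing, here is a minimal sketch (the schema name CRM and the use of the DBA views are assumptions) that makes the monitoring data current, lists what the auto job currently considers stale, and gathers stats for only those tables:

    -- make DBA_TAB_MODIFICATIONS / STALE_STATS current
    EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
    -- which tables does the auto job consider stale right now?
    SELECT table_name, stale_stats, last_analyzed
    FROM   dba_tab_statistics
    WHERE  owner = 'CRM'
    AND    stale_stats = 'YES';
    -- gather stats only for the stale tables, leaving the rest alone
    BEGIN
        DBMS_STATS.gather_schema_stats(
            ownname => 'CRM',
            options => 'GATHER STALE');
    END;
    /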

Similar Messages

  • Scheduled Job to gather stats for multiple tables - Oracle 11.2.0.1.0

    Hi,
    My Oracle DB Version is:
    BANNER Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE 11.2.0.1.0 Production
    TNS for Linux: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    In our application, users upload files whose records are inserted into a table. A file can contain anywhere from 10,000 to 1 million records.
    I have written a procedure that bulk-inserts these records using BULK COLLECT with the LIMIT clause. After the insert, I noticed my queries run slowly against these tables when huge files are uploaded simultaneously. After gathering stats, the cost drops and the queries execute faster.
    We have 2 such tables that grow based on user file uploads. Apart from the nightly automated Oracle job, I would like to schedule a job that gathers stats on these two tables during an off-peak hour.
    Is there a better way to do this?
    I plan to execute the below procedure as a scheduled job using DBMS_SCHEDULER.
    --Procedure
    create or replace
    PROCEDURE p_manual_gather_table_stats AS
        TYPE ttab IS TABLE OF VARCHAR2(30) INDEX BY PLS_INTEGER;
        ltab ttab;
    BEGIN
        ltab(1) := 'TAB1';
        ltab(2) := 'TAB2';
        FOR i IN ltab.first .. ltab.last
        LOOP
            dbms_stats.gather_table_stats(
                ownname          => USER,
                tabname          => ltab(i),
                estimate_percent => dbms_stats.auto_sample_size,
                method_opt       => 'for all indexed columns size auto',
                degree           => dbms_stats.auto_degree,
                cascade          => TRUE);
        END LOOP;
    END p_manual_gather_table_stats;
    --Scheduled Job
    BEGIN
        -- Job defined entirely by the CREATE JOB procedure.
        DBMS_SCHEDULER.create_job ( job_name => 'MANUAL_GATHER_TABLE_STATS',
        job_type => 'PLSQL_BLOCK',
        job_action => 'BEGIN p_manual_gather_table_stats; END;',
        start_date => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=12;BYMINUTE=45;BYSECOND=0',
        end_date => NULL,
        enabled => TRUE,
        comments => 'Job to manually gather stats for tables: TAB1,TAB2. Runs at 12:45 Daily.');
    END;
    Thanks,
    Somiya

    The question was, is there a better way, and you partly answered it.
    Somiya, you have to be sure the queries have appropriate statistics when the queries are being run. In addition, if the queries are being run while data is being loaded, that is going to slow things down regardless, for several possible reasons, such as resource contention, inappropriate statistics, and having to maintain a read consistent view for each query.
    The default collection job decides for each table based on changes it perceives in the data. You probably don't want the default collection job to deal with those tables. You probably do want to do what Dan suggested with the statistics. But it's hard to tell from your description. Is the data volume and distribution volatile? You surely want representative statistics available when each query is started. You may want to use all the plan stability features available to tell the optimizer to do the right thing (see for example http://jonathanlewis.wordpress.com/2011/01/12/fake-baselines/ ). You may want to just give up and use dynamic sampling, I don't know, entire books, blogs and papers have been written on the subject. It's sufficiently advanced technology to appear as magic.
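    If the default job should leave those two volatile tables alone entirely, one approach (a sketch only, reusing TAB1/TAB2 from the procedure above) is to lock their stats so the nightly job skips them, and let your own scheduled job unlock, gather, and re-lock:
    -- one-off: lock stats so the automatic job skips these tables
    BEGIN
        DBMS_STATS.lock_table_stats(USER, 'TAB1');
        DBMS_STATS.lock_table_stats(USER, 'TAB2');
    END;
    /
    -- inside the scheduled job, per table: unlock, gather, re-lock
    BEGIN
        DBMS_STATS.unlock_table_stats(USER, 'TAB1');
        DBMS_STATS.gather_table_stats(
            ownname          => USER,
            tabname          => 'TAB1',
            estimate_percent => dbms_stats.auto_sample_size,
            cascade          => TRUE);
        DBMS_STATS.lock_table_stats(USER, 'TAB1');
    END;
    /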

  • Triggers on every table in schema

    Hi All,
    I want every DML operation that anyone performs on any table in schema A to be logged into a table.
    Here is what I have attempted:
    --Create log table
    CREATE TABLE audit_rst(
    user_id VARCHAR2(30),
    session_id NUMBER,
    host varchar2(30),
    ip_add varchar2(30),
    action CHAR(3),
    date_executed DATE);
    --Create a row-based trigger for every table available
    --this example shows only one table
    CREATE OR REPLACE TRIGGER AUDIT_cc_2008
    AFTER INSERT OR DELETE OR UPDATE ON cc_2008 FOR EACH ROW
    DECLARE
    v_operation VARCHAR2(10) := NULL;
    BEGIN
    IF INSERTING THEN
       v_operation := 'I';
    ELSIF UPDATING THEN
       v_operation := 'U';
    ELSE
       v_operation := 'D';
    END IF;
    IF INSERTING OR UPDATING THEN
      INSERT INTO audit_rst(
      USER_ID,
      SESSION_ID,
      HOST,
      IP_ADD,
      ACTION,
      DATE_EXECUTED)
      VALUES (
      user,
      sys_context('USERENV','SESSIONID'),
      sys_context('USERENV','HOST'),
      sys_context('USERENV','IP_ADDRESS'),
      v_operation, SYSDATE);
    ELSE
      INSERT INTO audit_rst (
      USER_ID,
      SESSION_ID,
      HOST,
      ACTION,
      DATE_EXECUTED)
      VALUES (
      user,
      sys_context('USERENV','SESSIONID'),
      sys_context('USERENV','HOST'),
      v_operation, SYSDATE);
    END IF;
    END;
    Do I need to create this trigger for every table in the schema, or is there a schema-level trigger that can watch the whole schema and log any DML change on any table into the audit_rst table?

    fyi
    Schema-Level Events for Trigger Definitions:
    SERVERERROR : Oracle fires the trigger whenever a server error message is logged.
    LOGON : Oracle fires the trigger after a client application logs on to the database successfully.
    LOGOFF : Oracle fires the trigger before a client application logs off the database.
    CREATE : Oracle fires the trigger whenever a CREATE statement adds a new database object to the schema.
    DROP : Oracle fires the trigger whenever a DROP statement removes an existing database object from the schema.
    ALTER : Oracle fires the trigger whenever an ALTER statement modifies an existing database object in the schema.
    Note that none of these are DML events, so there is no schema-level trigger that fires on INSERT/UPDATE/DELETE; you still need a row-level trigger per table.
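    If you do need one trigger per table, here is a minimal sketch that generates them from USER_TABLES (it assumes the audit_rst table above, and that AUDIT_ plus each table name stays within the 30-character identifier limit):
    BEGIN
        FOR t IN (SELECT table_name FROM user_tables
                  WHERE table_name <> 'AUDIT_RST')
        LOOP
            EXECUTE IMMEDIATE
                'CREATE OR REPLACE TRIGGER AUDIT_' || t.table_name ||
                ' AFTER INSERT OR DELETE OR UPDATE ON ' || t.table_name ||
                ' FOR EACH ROW ' ||
                'BEGIN ' ||
                '  INSERT INTO audit_rst(user_id, session_id, host, ip_add, action, date_executed) ' ||
                '  VALUES (user, sys_context(''USERENV'',''SESSIONID''), ' ||
                '          sys_context(''USERENV'',''HOST''), ' ||
                '          sys_context(''USERENV'',''IP_ADDRESS''), ' ||
                '          CASE WHEN INSERTING THEN ''I'' WHEN UPDATING THEN ''U'' ELSE ''D'' END, ' ||
                '          SYSDATE); ' ||
                'END;';
        END LOOP;
    END;
    /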

  • How to gather stats on the target table

    Hi
    I am using OWB 10gR2.
    I have created a mapping with a single target table.
    I have checked the mapping configuration 'Analyze Table Statements'.
    I have set target table property 'Statistics Collection' to 'MONITORING'.
    My requirement is to gather stats on the target table, after the target table is loaded/updated.
    According to Oracle's OWB 10gR2 User Document (B28223-03, Page#. 24-5)
    Analyze Table Statements
    If you select this option, Warehouse Builder generates code for analyzing the target
    table after the target is loaded, if the resulting target table is double or half its original
    size.
    My issue is that when the target table neither doubles nor halves in size, the target table DOES NOT get analyzed.
    I am looking for a way or setting in OWB 10gR2 to gather stats on my target table after every load/update, regardless of its size.
    Thanks for your help in advance...
    ~Salil

    Hi
    Unfortunately we had to disable automatic stats gathering on the 10g database.
    My requirement is to extract data from one database, load it into my TEMP tables, process it, and finally load it into my data warehouse tables.
    So I need to make sure my TEMP tables are analyzed after they are truncated, loaded, and subsequently updated, before I process the data and load it into the data warehouse tables.
    I also need to truncate all TEMP tables after the load completes, to save space on my target database.
    If we keep automatic stats gathering ON for the target 10g database, it might gather stats on those TEMP tables while they are empty.
    Any ideas to overcome this issue are appreciated.
    Thanks
    Salil
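    One way to handle this (a sketch, with TEMP_TAB standing in for each actual TEMP table name) is to gather stats explicitly right after each load and then lock them, so the automatic job never records the empty state:
    -- after the TEMP table is loaded (e.g. at the end of the load procedure)
    BEGIN
        DBMS_STATS.unlock_table_stats(USER, 'TEMP_TAB');  -- no-op if not locked
        DBMS_STATS.gather_table_stats(USER, 'TEMP_TAB', cascade => TRUE);
        DBMS_STATS.lock_table_stats(USER, 'TEMP_TAB');    -- auto job now skips it
    END;
    /
    With the stats locked, truncating the table afterwards no longer matters: the optimizer keeps the representative post-load statistics until the next load re-gathers them.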

  • Gather Stats on Newly Partitioned Table

    I partitioned an existing table containing 92 million rows. The method was dbms_redefinition: I started the redefinition and added the indexes and constraints last. After partitioning, I did not gather stats on any of the partitions that were created, and I did not analyze any of the indexes. Then I loaded an additional 4 million records into one of the partitions of the newly partitioned table. I ran dbms_stats on this particular partition and it took over 15 hours; normally it takes only 4 hours to gather stats on an individual partition, so I stopped it after 15 hours. While monitoring the run, it looked like it was spending a very long time gathering stats on the indexes. Is this normal for a newly partitioned table? Is there something I can do to prevent it from taking so long when I run gather stats? Oracle Version 10.2.0.4

    -- Gather PARTITION Statistics
    SYS.DBMS_STATS.gather_table_stats(
        ownname          => upper(v_table_owner),
        tabname          => upper(v_table_name),
        partname         => v_table_partition_name,
        estimate_percent => 20,
        cascade          => FALSE,
        granularity      => 'PARTITION');
    -- Gather GLOBAL INDEX Statistics
    FOR i IN (SELECT * FROM sys.dba_indexes
              WHERE table_owner = upper(v_table_owner)
              AND   table_name  = upper(v_table_name)
              AND   partitioned = 'NO'
              ORDER BY index_name)
    LOOP
        SYS.DBMS_STATS.gather_index_stats(
            ownname          => upper(v_table_owner),
            indname          => i.index_name,
            estimate_percent => 20,
            degree           => NULL);
    END LOOP;
    -- Gather SUB-PARTITION Statistics
    SYS.DBMS_STATS.gather_table_stats(
        ownname          => upper(v_table_owner),
        tabname          => upper(v_table_name),
        partname         => v_table_subpartition_name,
        estimate_percent => 20,
        cascade          => TRUE,
        granularity      => 'ALL');

  • Skip gather stats for a partition | table from the gather_stats_job in 10g

    Hi all,
    Can we skip gathering statistics for a table, or for one partition of a partitioned table, in the GATHER_STATS_JOB in Oracle 10g?
    (That partition is stored in an offline datafile, so the scheduled GATHER_STATS_JOB runs into errors.)
    Thanks.
    Edited by: user8710247 on Nov 26, 2011 6:41 PM

    GATHER_TABLE_STATS will default to GRANULARITY 'AUTO' which will include Global and Partition Statistics. Global Statistics have to be across all the Partitions -- so Oracle will attempt to read all the partitions for this !
    You need to run GATHER_TABLE_STATS with GRANULARITY 'PARTITION' and naming the other Partitions --- i.e. run it for each of the online partitions.
    See :
    SQL> create table XYZ (col_1  number, col_2 varchar2(5))
      2  partition by range (col_1)
      3  (partition P1 values less than (10) tablespace HEMANT,
      4  partition P2 values less than (100) tablespace USERS)
      5  /
    Table created.
    SQL> insert into XYZ values (5,'Five');
    1 row created.
    SQL> insert into XYZ values (50,'Fifty');
    1 row created.
    SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,granularity=>'ALL');
    PL/SQL procedure successfully completed.
    SQL> select partition_name, tablespace_name , num_rows, sample_size from user_tab_partitions
      2  where table_name = 'XYZ'
      3  /
    PARTITION_NAME                 TABLESPACE_NAME                  NUM_ROWS  SAMPLE_SIZE
    P1                             HEMANT                                  1            1
    P2                             USERS                                   1            1
    SQL>
    SQL> exec dbms_stats.lock_table_stats('','XYZ');
    PL/SQL procedure successfully completed.
    SQL> alter tablespace HEMANT offline;
    Tablespace altered.
    SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,granularity=>'ALL');
    BEGIN dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,granularity=>'ALL'); END;
    ERROR at line 1:
    ORA-20005: object statistics are locked (stattype = ALL)
    ORA-06512: at "SYS.DBMS_STATS", line 13159
    ORA-06512: at "SYS.DBMS_STATS", line 13179
    ORA-06512: at line 1
    SQL>
    SQL> exec dbms_stats.unlock_table_stats('','XYZ');
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.lock_partition_stats('','XYZ','P1');
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,granularity=>'ALL');
    BEGIN dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,granularity=>'ALL'); END;
    ERROR at line 1:
    ORA-00376: file 2 cannot be read at this time
    ORA-01110: data file 2:
    '/usr/oracle/oradata/MONDB/datafile/o1_mf_hemant_7d6m8zkx_.dbf'
    ORA-06512: at "SYS.DBMS_STATS", line 13159
    ORA-06512: at "SYS.DBMS_STATS", line 13179
    ORA-06512: at line 1
    SQL>
    SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,partname=>'P2');
    BEGIN dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,partname=>'P2'); END;
    ERROR at line 1:
    ORA-00376: file 2 cannot be read at this time
    ORA-01110: data file 2:
    '/usr/oracle/oradata/MONDB/datafile/o1_mf_hemant_7d6m8zkx_.dbf'
    ORA-06512: at "SYS.DBMS_STATS", line 13159
    ORA-06512: at "SYS.DBMS_STATS", line 13179
    ORA-06512: at line 1
    SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,partname=>'P2',granularity=>'GLOBAL AND PARTITION');
    BEGIN dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,partname=>'P2',granularity=>'GLOBAL AND PARTITION'); END;
    ERROR at line 1:
    ORA-00376: file 2 cannot be read at this time
    ORA-01110: data file 2:
    '/usr/oracle/oradata/MONDB/datafile/o1_mf_hemant_7d6m8zkx_.dbf'
    ORA-06512: at "SYS.DBMS_STATS", line 13159
    ORA-06512: at "SYS.DBMS_STATS", line 13179
    ORA-06512: at line 1
    SQL>
    SQL> exec dbms_stats.gather_table_stats('','XYZ',estimate_percent=>100,partname=>'P2',granularity=>'PARTITION');
    PL/SQL procedure successfully completed.
    SQL>
    Hemant K Chitale

  • How to find accurate number of Rows, and size of all the tables of a Schema

    Hi,
    How can I find the accurate number of rows in, and the size of, every table in a schema?
    Thanks.

    SELECT t.table_name AS "Table Name",
    t.num_rows AS "Rows",
    t.avg_row_len AS "Avg Row Len",
    Trunc((t.blocks * p.value)/1024) AS "Size KB",
    t.last_analyzed AS "Last Analyzed"
    FROM dba_tables t,
    v$parameter p
    WHERE t.owner = Decode(Upper('&1'), 'ALL', t.owner, Upper('&1'))
    AND p.name = 'db_block_size'
    ORDER by 4 desc nulls last;
    ## Gather schema stats
    begin
    dbms_stats.gather_schema_stats(ownname=>'SYSLOG');
    end;
    ## Gather a particular table stats of a schema
    begin
    DBMS_STATS.gather_table_stats(ownname=>'syslog',tabname=>'logs');
    end;
    http://www.oradev.com/create_statistics.jsp
    Hope this will work.
    Regards
    Asif Kabir
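    Worth noting: NUM_ROWS in DBA_TABLES is only as current as LAST_ANALYZED, so for truly exact counts each table has to be counted. A minimal sketch (the SYSLOG owner is carried over from the example above; counting every table can be slow on a large schema):
    -- exact row counts per table via dynamic SQL
    DECLARE
        v_cnt NUMBER;
    BEGIN
        FOR t IN (SELECT owner, table_name FROM dba_tables
                  WHERE owner = 'SYSLOG' ORDER BY table_name)
        LOOP
            EXECUTE IMMEDIATE
                'SELECT COUNT(*) FROM "' || t.owner || '"."' || t.table_name || '"'
                INTO v_cnt;
            DBMS_OUTPUT.put_line(t.table_name || ': ' || v_cnt);
        END LOOP;
    END;
    /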

  • Refreshing mview is hanging after a database level gather stats

    hi guys,
    can you please help me identify the root cause of this issue.
    the scenario is this:
    1. We have a scheduled Unix job that refreshes an mview every day, Tuesday through Saturday.
    2. Database maintenance is done on weekends (Sundays), gathering stats at the database level.
    3. The refresh-mview Unix job apparently hangs every Tuesday.
    4. Our workaround is to kill the job, request a schema-level gather stats, then re-run the job; after that the mview refresh succeeds.
    5. For the rest of the week, through Saturday, the mview refresh has no problems.
    During testing we confirmed that the mview refresh fails in exactly the scenario where stats have just been gathered at the database level;
    after gathering stats at the schema level, the refresh is successful.
    Can you please help me understand why the mview refresh fails after we gather stats at the database level?
    we are using oracle 9i
    the creation of the mview goes something like below:
    create materialized view hanging_mview
    build deferred
    refresh on demand
    query rewrite disabled
    appreciate all your help.
    thanks a lot in advance.

    1. We have a scheduled Unix job that refreshes an mview every day, Tuesday through Saturday.
    2. Database maintenance is done on weekends (Sundays), gathering stats at the database level.
    3. The refresh-mview Unix job apparently hangs every Tuesday.
    4. Our workaround is to kill the job, request a schema-level gather stats, then re-run the job; after that the mview refresh succeeds.
    5. For the rest of the week, through Saturday, the mview refresh has no problems.
    You know Tuesday's MV refresh "hangs".
    You don't know why it does not complete.
    You want a solution so that it completes.
    You don't really know what it is doing on Tuesdays, but hope an automagical solution will be offered here.
    The ONLY way I know how to possibly get some clues is SQL_TRACE.
    Only after knowing where time is being spent will you have a chance to take corrective action.
    The ball is in your court.
    Enjoy your mystery!
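    For reference, a sketch of what turning on that trace looks like (the SID/SERIAL# values are placeholders you would look up in V$SESSION; DBMS_SYSTEM is undocumented, so treat this as an assumption to verify on your release):
    -- in the session that runs the refresh, before the refresh starts
    ALTER SESSION SET events '10046 trace name context forever, level 8';
    EXEC DBMS_MVIEW.REFRESH('HANGING_MVIEW');
    -- or, from another session, trace the already-hanging session
    EXEC DBMS_SYSTEM.set_ev(123, 4567, 10046, 8, '');
    Level 8 includes wait events in the trace file, which is usually what reveals where a "hang" is spending its time.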

  • What privileges for GATHER_TABLE_STATS on table in other schema

    Hi,
    I'm trying to gather some histogram data for a table in another schema and run into the following error:
    begin
    DBMS_STATS.GATHER_TABLE_STATS('NGM101','NGG_BASISCOMPONENT', METHOD_OPT => 'FOR COLUMNS SIZE 75 tre_id_o');
    end;
    ORA-20000: Unable to analyze TABLE "NGM101"."NGG_BASISCOMPONENT", insufficient privileges or does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 13172
    ORA-06512: at "SYS.DBMS_STATS", line 13202
    ORA-06512: at line 2
    The statement works fine for the SYSTEM user, so I guess I need some privilege to do this. But which privileges do I need? (9i and 10g)
    thanks Rene

    I am getting the same error.
    User SYSADM has the ANALYZE ANY privilege; in fact, the user has the DBA role. But I still get the same error. What could be the issue here?
    SQL> select * from user_role_privs where username='SYSADM';
    USERNAME   GRANTED_ROLE   ADM  DEF  OS_
    SYSADM     CONNECT        NO   YES  NO
    SYSADM     DBA            NO   YES  NO
    SYSADM     PSADMIN        NO   YES  NO
    SYSADM     RESOURCE       NO   YES  NO
    SQL> connect sysadm/sysadm
    Connected.
    SQL> exec DBMS_STATS.GATHER_TABLE_STATS (ownname=>'sysadm', tabname => 'PS_SJT_PERSON');
    BEGIN DBMS_STATS.GATHER_TABLE_STATS (ownname=>'sysadm', tabname => 'PS_SJT_PERSON'); END;
    ERROR at line 1:
    ORA-20000: Unable to analyze TABLE "SYSADM"."PS_SJT_PERSON", insufficient
    privileges or does not exist
    ORA-06512: at "SYS.DBMS_STATS", line 13046
    ORA-06512: at "SYS.DBMS_STATS", line 13076
    ORA-06512: at line 1
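    For the original question: gathering stats on a table in another schema normally needs the ANALYZE ANY system privilege. A sketch (GATHER_USER is a placeholder for the account doing the gathering; note that if the call happens inside a definer-rights procedure, the privilege must be granted directly rather than through a role):
    GRANT ANALYZE ANY TO gather_user;
    -- then, connected as GATHER_USER:
    BEGIN
        DBMS_STATS.GATHER_TABLE_STATS(
            ownname    => 'NGM101',
            tabname    => 'NGG_BASISCOMPONENT',
            method_opt => 'FOR COLUMNS SIZE 75 tre_id_o');
    END;
    /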

  • Displaying the sizes of every table in a database

    Hello All,
    I need to get the table sizes for each and every table in a database.
    I would have checked manually, but there are a large number of tables.
    I am using SQL Server 2008 R2.
    Thanks and regards.

    Hi,
    Please make a habit of searching the net before posting, especially if you don't know how to write a query. A simple search would lead you to
    http://stackoverflow.com/questions/7892334/get-size-of-all-tables-in-database
    SELECT
    t.NAME AS TableName,
    s.Name AS SchemaName,
    p.rows AS RowCounts,
    SUM(a.total_pages) * 8 AS TotalSpaceKB,
    SUM(a.used_pages) * 8 AS UsedSpaceKB,
    (SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB
    FROM
    sys.tables t
    INNER JOIN
    sys.indexes i ON t.OBJECT_ID = i.object_id
    INNER JOIN
    sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
    INNER JOIN
    sys.allocation_units a ON p.partition_id = a.container_id
    LEFT OUTER JOIN
    sys.schemas s ON t.schema_id = s.schema_id
    WHERE
    t.NAME NOT LIKE 'dt%'
    AND t.is_ms_shipped = 0
    AND i.OBJECT_ID > 255
    GROUP BY
    t.Name, s.Name, p.Rows
    ORDER BY
    t.Name

  • How to delete all rows in all tables of a schema in Oracle?

    Hi all,
    I want to delete all records from all tables in a schema. I think there should be a statement for this, but I don't know how. Can you help?
    Edited by: user8105261 on Nov 25, 2009 11:06 PM

    user8105261 wrote:
    Hi all,
    I want to delete all records from all tables in a schema. I think there should be a statement for this, but I don't know how. Can you help?
    Edited by: user8105261 on Nov 25, 2009 11:06 PM
    A typical way to reset a schema (e.g. to recreate a schema on a test database) is totally different:
    1) Drop the user.
    2) Recreate the user, including the table scripts, from
    2a) your version control system, or
    2b) an export dump file, using the "nodata" option while importing.
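    If the goal really is just to empty every table in place rather than rebuild the schema, a sketch (run as the schema owner; referential constraints must be disabled first, or TRUNCATE fails on parent tables):
    BEGIN
        -- disable FK constraints so TRUNCATE is not blocked
        FOR c IN (SELECT table_name, constraint_name FROM user_constraints
                  WHERE constraint_type = 'R')
        LOOP
            EXECUTE IMMEDIATE 'ALTER TABLE ' || c.table_name ||
                              ' DISABLE CONSTRAINT ' || c.constraint_name;
        END LOOP;
        -- empty every table
        FOR t IN (SELECT table_name FROM user_tables)
        LOOP
            EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || t.table_name;
        END LOOP;
        -- re-enable the FK constraints
        FOR c IN (SELECT table_name, constraint_name FROM user_constraints
                  WHERE constraint_type = 'R')
        LOOP
            EXECUTE IMMEDIATE 'ALTER TABLE ' || c.table_name ||
                              ' ENABLE CONSTRAINT ' || c.constraint_name;
        END LOOP;
    END;
    /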

  • How do I move a table from one schema to another schema on Oracle XE?

    How do I move a table from one schema to another schema on Oracle XE?

    Hi,
    I tried the insert/select statement you gave, but it did not work.
    The error was ORA-00913: too many values.
    What I finally did was go into the SYSTEM schema where the table was, generate the DDL through the utilities, and then import it into the schema I am currently working in. That solved the problem!
    However, I am still curious why the insert/select statement did not work. Do you know any site/tutorial that shows a real example?
    Thank you
    Skye
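    Regarding the ORA-00913: it generally means the SELECT supplies more values than the INSERT's target column list expects, e.g. when the two tables do not have identical columns. A small illustration with made-up tables:
    -- target has 2 columns, source has 3: SELECT * supplies too many values
    CREATE TABLE src (a NUMBER, b NUMBER, c NUMBER);
    CREATE TABLE tgt (a NUMBER, b NUMBER);
    INSERT INTO tgt SELECT * FROM src;            -- ORA-00913: too many values
    INSERT INTO tgt (a, b) SELECT a, b FROM src;  -- works: explicit, matching lists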

  • How do I move a table from one schema to another schema?

    How do I move a table from one schema to another schema?

    Grant access to the table from the source schema to the destination schema:
      GRANT SELECT ON <TABLE_NAME> TO <DESTINATION_SCHEMA>;
    A simple way would be to use CREATE TABLE ... AS SELECT (in the destination schema):
      CREATE TABLE <TABLE_NAME> AS SELECT * FROM <SOURCE_SCHEMA>.<TABLE_NAME>;
    However, you would be in trouble when the table has indexes, constraints and triggers.
    So you are better off grabbing the DDL statement of the table (and any additional components) and then creating the table in the destination schema. You can use SQL Developer, Toad or Apex's Object Browser for this.
    After the table is created, insert the records using SELECT:
      INSERT INTO <TABLE_NAME> SELECT * FROM <SOURCE_SCHEMA>.<TABLE_NAME>;
    This question is discussed in great detail in this AskTom thread.

  • Sql statement in a table not accepting variable

    I have the following problem on 10.1.0.3.0 with a bind variable in an EXECUTE IMMEDIATE statement.
    Here is the code I am using:
    declare
    remote_data_link varchar2(25) := 'UDE_DATATRANSFER_LINK';
    FROM_SCHEMA VARCHAR2(40) := 'UDE_OLTP';
    l_last_process_date date := to_date(to_char(sysdate,'mm-dd-yyyy hh:mi:ss'),'mm-dd-yyyy hh:mi:ss') - 1;
    stmt varchar2(4000) := 'MERGE into applicant_adverseaction_info theTarget USING (select * from '||FROM_SCHEMA||'.applicant_adverseaction_info@'||remote_data_link||' where last_activity > :l_last_process_date ) theSource ON(theTarget.applicant_id = theSource.applicant_id) WHEN MATCHED THEN UPDATE SET theTarget.cb_used = theSource.cb_used, theTarget.cb_address = theSource.cb_address, theTarget.scoredmodel_id = theSource.scoredmodel_id, theTarget.last_activity = theSource.last_activity WHEN NOT MATCHED THEN INSERT(CB_USED, CB_ADDRESS, SCOREDMODEL_ID, APPLICANT_ID, LAST_ACTIVITY) values(theSource.cb_used, theSource.cb_address, theSource.scoredmodel_id, theSource.applicant_id, theSource.last_activity)';
    stmt2 varchar2(4000) := 'MERGE into edm_application theTarget USING (select * from '||from_schema||'.edm_application@'||remote_data_link||' where last_activity > :l_last_process_date) theSource ON (theTarget.edm_appl_id = theSource.edm_appl_id) WHEN MATCHED THEN UPDATE SET theTarget.APP_REF_KEY = theSource.APP_REF_KEY, theTarget.IMPORT_REF_KEY = theSource.IMPORT_REF_KEY, theTarget.LAST_ACTIVITY = theSource.LAST_ACTIVITY WHEN NOT MATCHED THEN INSERT (EDM_APPL_ID, APP_REF_KEY, IMPORT_REF_KEY, LAST_ACTIVITY) values(theSource.EDM_APPL_ID, theSource.APP_REF_KEY, theSource.IMPORT_REF_KEY, theSource.LAST_ACTIVITY)';
    v_error varchar2(4000);
    T_MERGE VARCHAR2(4000);
    stmt3 varchar2(4000);
    BEGIN
    select merge_sql
    INTO T_MERGE
    from transfertables
    where table_name= 'edm_application';
    remote_data_link:= 'UDE_DATATRANSFER_LINK';
    FROM_SCHEMA := 'UDE_OLTP';
    --DBMS_OUTPUT.PUT_LINE(SUBSTR(stmt2,1,200));
    --STMT2 := T_MERGE;
    dbms_output.put_line(from_schema||' '||remote_data_link||' '||l_last_process_date);
    EXECUTE IMMEDIATE stmt2 using l_last_process_date;
    --execute immediate stmt3 ;
    dbms_output.put_line(from_schema||' '||remote_data_link||' '||l_last_process_date);
    dbms_output.put_line(substr(stmt2,1,200));
    commit;
    EXCEPTION
    WHEN OTHERS THEN
    V_ERROR := SQLCODE||' '||SQLERRM;
    v_ERROR := V_ERROR ||' '||SUBSTR(stmt2,1,200);
    DBMS_OUTPUT.PUT_LINE(V_ERROR);
    --dbms_output.put_line(substr(stmt2,1,200));
    END;
    This works perfectly.
    But if I change it to fetch the same statement from a database table:
    declare
    remote_data_link varchar2(25) := 'UDE_DATATRANSFER_LINK';
    FROM_SCHEMA VARCHAR2(40) := 'UDE_OLTP';
    l_last_process_date date := to_date(to_char(sysdate,'mm-dd-yyyy hh:mi:ss'),'mm-dd-yyyy hh:mi:ss') - 1;
    stmt varchar2(4000) := 'MERGE into applicant_adverseaction_info theTarget USING (select * from '||FROM_SCHEMA||'.applicant_adverseaction_info@'||remote_data_link||' where last_activity > :l_last_process_date ) theSource ON(theTarget.applicant_id = theSource.applicant_id) WHEN MATCHED THEN UPDATE SET theTarget.cb_used = theSource.cb_used, theTarget.cb_address = theSource.cb_address, theTarget.scoredmodel_id = theSource.scoredmodel_id, theTarget.last_activity = theSource.last_activity WHEN NOT MATCHED THEN INSERT(CB_USED, CB_ADDRESS, SCOREDMODEL_ID, APPLICANT_ID, LAST_ACTIVITY) values(theSource.cb_used, theSource.cb_address, theSource.scoredmodel_id, theSource.applicant_id, theSource.last_activity)';
    stmt2 varchar2(4000) := 'MERGE into edm_application theTarget USING (select * from '||from_schema||'.edm_application@'||remote_data_link||' where last_activity > :l_last_process_date) theSource ON (theTarget.edm_appl_id = theSource.edm_appl_id) WHEN MATCHED THEN UPDATE SET theTarget.APP_REF_KEY = theSource.APP_REF_KEY, theTarget.IMPORT_REF_KEY = theSource.IMPORT_REF_KEY, theTarget.LAST_ACTIVITY = theSource.LAST_ACTIVITY WHEN NOT MATCHED THEN INSERT (EDM_APPL_ID, APP_REF_KEY, IMPORT_REF_KEY, LAST_ACTIVITY) values(theSource.EDM_APPL_ID, theSource.APP_REF_KEY, theSource.IMPORT_REF_KEY, theSource.LAST_ACTIVITY)';
    v_error varchar2(4000);
    T_MERGE VARCHAR2(4000);
    stmt3 varchar2(4000);
    BEGIN
    select merge_sql
    INTO T_MERGE
    from transfertables
    where table_name= 'edm_application';
    remote_data_link:= 'UDE_DATATRANSFER_LINK';
    FROM_SCHEMA := 'UDE_OLTP';
    --DBMS_OUTPUT.PUT_LINE(SUBSTR(stmt2,1,200));
    STMT2 := T_MERGE;
    dbms_output.put_line(from_schema||' '||remote_data_link||' '||l_last_process_date);
    EXECUTE IMMEDIATE stmt2 using l_last_process_date;
    --execute immediate stmt3 ;
    dbms_output.put_line(from_schema||' '||remote_data_link||' '||l_last_process_date);
    dbms_output.put_line(substr(stmt2,1,200));
    commit;
    EXCEPTION
    WHEN OTHERS THEN
    V_ERROR := SQLCODE||' '||SQLERRM;
    v_ERROR := V_ERROR ||' '||SUBSTR(stmt2,1,200);
    DBMS_OUTPUT.PUT_LINE(V_ERROR);
    --dbms_output.put_line(substr(stmt2,1,200));
    END;
    I get ORA-00900: invalid SQL statement.
    Can somebody explain why this happens?
    Thanks

    I agree with jan and anthony. Your post is too long and ill-formatted. However, here's my understanding of your problem (with examples, though slightly different ones).
    1- I have a function that returns the number of records in any given table.
      1  CREATE OR REPLACE FUNCTION get_count(p_table varchar2)
      2     RETURN NUMBER IS
      3     v_cnt number;
      4  BEGIN
      5    EXECUTE IMMEDIATE('SELECT count(*) FROM '||p_table) INTO v_cnt;
      6    RETURN v_cnt;
      7* END;
    SQL> /
    Function created.
    SQL> SELECT get_count('emp')
      2  FROM dual
      3  /
    GET_COUNT('EMP')
                  14
    2- I decide to move the statement to a database table and recreate my function.
    SQL> CREATE TABLE test
      2  (stmt varchar2(2000))
      3  /
    Table created.
    SQL> INSERT INTO test
      2  VALUES('SELECT count(*) FROM p_table');
    1 row created.
    SQL> CREATE OR REPLACE FUNCTION get_count(p_table varchar2)
      2     RETURN NUMBER IS
      3     v_cnt number;
      4     v_stmt varchar2(4000);
      5  BEGIN
      6     SELECT stmt INTO v_stmt
      7     FROM test;
      8     EXECUTE IMMEDIATE(v_stmt) INTO v_cnt;
      9     RETURN v_cnt;
    10  END;
    11  /
    Function created.
    SQL> SELECT get_count('emp')
      2  FROM dual
      3  /
    SELECT get_count('emp')
    ERROR at line 1:
    ORA-00942: table or view does not exist
    ORA-06512: at "SCOTT.GET_COUNT", line 8
    ORA-06512: at line 1
    --The p_table in the stored statement is literal text and has nothing to do with the p_table parameter of the function. Since there is no table called p_table in my schema, the function raises an error on execution. I suppose this is what you mean by "sql statement in a table not accepting variable".
    3- I rectify the problem by recreating the function.
      1  CREATE OR REPLACE FUNCTION get_count(p_table varchar2)
      2     RETURN NUMBER IS
      3     v_cnt number;
      4     v_stmt varchar2(4000);
      5  BEGIN
      6     SELECT replace(stmt,'p_table',p_table) INTO v_stmt
      7     FROM test;
      8     EXECUTE IMMEDIATE(v_stmt) INTO v_cnt;
      9     RETURN v_cnt;
    10* END;
    SQL> /
    Function created.
    SQL> SELECT get_count('emp')
      2  FROM dual
      3  /
    GET_COUNT('EMP')
                  14
    Hope this gives you some idea.
    -----------------------
    Anwar
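    One more thing worth ruling out for the original ORA-00900 (an assumption on my part, since the contents of transfertables.merge_sql aren't shown): SQL saved in a table often carries a trailing semicolon, which is fine in SQL*Plus but not in EXECUTE IMMEDIATE. Stripping it before executing is cheap insurance:
    DECLARE
        v_stmt VARCHAR2(200) := 'SELECT count(*) FROM dual;';  -- note the semicolon
        v_cnt  NUMBER;
    BEGIN
        -- EXECUTE IMMEDIATE v_stmt INTO v_cnt;  -- would fail on the ';'
        EXECUTE IMMEDIATE rtrim(v_stmt, '; ' || chr(10) || chr(13)) INTO v_cnt;
        DBMS_OUTPUT.put_line(v_cnt);
    END;
    /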

  • Indices and constraints on XML Tables/Columns (with Schema)

    Hi,
    I've read a lot of documents by now, but the more I read, the more confused I got. So I hope you can help me.
    First, my Oracle server version: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit
    I've managed to create a table with a column of type SYS.XMLTYPE with the storage model "Object Relational" and an XML Schema.
    I can insert data and I can execute XQuery statements.
    I can also create an XMLTYPE table with my schema, although the tool SQL Developer keeps telling me that the one column generated within the table is of type CLOB instead of object relational (which is what I defined).
    The query for that is:
    CREATE TABLE ENTRY_XML OF XMLTYPE
    XMLTYPE STORE AS OBJECT RELATIONAL
    XMLSCHEMA "BBMRI_Entry.xsd" ELEMENT "Entry";
    That's where I am, now my questions:
    1. What's the difference? I'm aware of the obvious difference, that with the first approach I can add relational columns as well, but apart from that? If I only want to store the XML data, is the second approach always better (especially in regard to my next question)?
    2. My schema contains elements with attributes (which contain IDs) that have to be unique. So I tried to add a UNIQUE constraint, but failed. I found this (http://www.oracle.com/technology/sample_code/tech/java/codesnippet/xmldb/constraints/Specify_Constraints.html), but it just doesn't work.
    Query: "ALTER TABLE ENTRY_XML CONSTRAINT ENTRY_XML_SUBID_UNQIUE UNIQUE (xmldata."SubId");"
    Error: "ORA-01735: invalid ALTER TABLE option"
    3. I haven't tried yet, but I need to specify foreign keys within the XML as well (which is explained in the link in question 2). I suppose the solution to question 2 will make this possible as well.
    4. Once I can create a UNIQUE constraint on attributes (well, I can't yet, but I hope that will change soon), I wondered whether it would be possible to realize something like auto-increment. Although I see the problem of validating the XML input if the IDs are empty. Any suggestions on that problem? Do I have to query for the highest (free) ID before I can insert new documents?
    Well, that's enough for today, I hope someone can help me!
    Greetings, Florian

    I've read through all the literature (again) and found out that I did most of the stuff right in the first place. I had just misinterpreted the generated tables for the types and wondered why they only contained one column. Well, at least one mystery solved.
    But now, to make it simpler, just one question, which might solve all my problems:
    How can I create UNIQUE constraints and FOREIGN KEYS on a table or column of type XMLType with a schema, using the object-relational storage method?
    I found a solution http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14259/xdb05sto.htm#i1042421 (Example 5-12), but it just does not work for me.
    I removed the FOREIGN KEY and tried it again and the UNIQUE Key works.
    So basically the question is how to retrieve the "AId" Attribute. "XMLDATA"."AId", "XMLDATA"."Attribute"."AId" and "XMLDATA"."Subject"."Attribute"."AId" all do not work.
    I've added my schema declarations at the bottom (which I've already successfully registred and used without foreign keys and constraints, so they work).
    After I registered the two schema files, 3 types and 11 tables were created: one type for the attribute, one for the study type, and one presumably to link the attributes to the study types. The tables are one for the attribute, 4 for the content*-elements, 2 for the study type (I don't really know why two), and 4 with strange names starting with "SYS_NT" followed by what looks like a base64-encoded string.
    The query I am trying to use to create the table is below. (The table "Attribute" already exists and contains a field "ID", which is its PK.)
    CREATE TABLE STUDYTYPE_XML
    OF XMLType (UNIQUE ("XMLDATA"."STId"),
    FOREIGN KEY ("XMLDATA"."AId") REFERENCES ATTRIBUTE(ID))
    XMLTYPE STORE AS OBJECT RELATIONAL
    ELEMENT "StudyType.xsd#StudyType";
    The error I get is:
    Error starting at line 1 in command:
    CREATE TABLE STUDYTYPE_XML
    OF XMLType (UNIQUE ("XMLDATA"."STId"),
    FOREIGN KEY ("XMLDATA"."AId") REFERENCES ATTRIBUTE(ID))
    ELEMENT "StudyType.xsd#StudyType"
    Error at Command Line:3 Column:37
    Error report:
    SQL Error: ORA-22809: nonexistent attribute
    22809. 00000 - "nonexistent attribute"
    Cause: An attempt was made to access a non-existent attribute of an
    object type.
    Action: Check the attribute reference to see if it is valid. Then retry
    the operation.
    Attribute-Schema (Attribute.xsd):
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" attributeFormDefault="unqualified">
         <xs:attribute name="AId">
              <xs:simpleType>
                   <xs:restriction base="xs:integer">
                        <xs:minInclusive value="0" />
                   </xs:restriction>
              </xs:simpleType>
         </xs:attribute>
         <xs:attribute name="Name" type="xs:string" />
         <xs:element name="ContentString" type="xs:string" />
         <xs:element name="ContentInteger" type="xs:integer" />
         <xs:element name="ContentDouble" type="xs:decimal" />
         <xs:element name="ContentDate" type="xs:date" />
         <xs:element name="Attribute">
              <xs:complexType>
                   <xs:choice minOccurs="0" maxOccurs="1">
                        <xs:element ref="ContentString" />
                        <xs:element ref="ContentInteger" />
                        <xs:element ref="ContentDouble" />
                        <xs:element ref="ContentDate" />
                   </xs:choice>
                   <xs:attribute ref="AId" use="required" />
                   <xs:attribute ref="Name" use="optional" />
              </xs:complexType>
         </xs:element>
    </xs:schema>
    Study Type Schema (StudyType.xsd):
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" attributeFormDefault="unqualified">
         <xs:include schemaLocation="Attribute.xsd" />
         <xs:attribute name="STId">
              <xs:simpleType>
                   <xs:restriction base="xs:integer">
                        <xs:minInclusive value="0" />
                   </xs:restriction>
              </xs:simpleType>
         </xs:attribute>
         <xs:element name="StudyType">
              <xs:complexType>
                   <xs:sequence>
                        <xs:element ref="Attribute" minOccurs="1" maxOccurs="unbounded" />
                        <xs:element ref="StudyType" minOccurs="0" maxOccurs="unbounded" />
                   </xs:sequence>
                   <xs:attribute ref="STId" use="required"/>
              </xs:complexType>
         </xs:element>
    </xs:schema>
    Edited by: alwaysspam on Sep 8, 2010 5:35 PM
