Statistics on a table to be collected daily.

Hi,
I have a scenario where a table gets truncated every day at 10 PM before being populated with a billion or two records. At 12 AM, the table will be used by the warehousing system. How do we gather statistics for this table on a daily basis so that Oracle accesses this table in the best possible way?
I know that a job something like this...
DECLARE
JOB USER_JOBS.JOB%TYPE;
BEGIN
DBMS_JOB.SUBMIT (JOB, 'BEGIN DBMS_STATS.GATHER_TABLE_STATS ('TABLE_NAME')', to_date ('24/11/2009 23:00', 'dd/mm/yyyy hh24:mi'), sysdate+1);
END;
will get the statistics for me.
Is there any other approach??
Thanks,
Aswin.

PROCEDURE GATHER_TABLE_STATS
Argument Name                  Type                    In/Out Default?
OWNNAME                        VARCHAR2                IN   
TABNAME                        VARCHAR2                IN   
PARTNAME                       VARCHAR2                IN     DEFAULT
ESTIMATE_PERCENT               NUMBER                  IN     DEFAULT
BLOCK_SAMPLE                   BOOLEAN                 IN     DEFAULT
METHOD_OPT                     VARCHAR2                IN     DEFAULT
DEGREE                         NUMBER                  IN     DEFAULT
GRANULARITY                    VARCHAR2                IN     DEFAULT
CASCADE                        BOOLEAN                 IN     DEFAULT
STATTAB                        VARCHAR2                IN     DEFAULT
STATID                         VARCHAR2                IN     DEFAULT
STATOWN                        VARCHAR2                IN     DEFAULT
NO_INVALIDATE                  BOOLEAN                 IN     DEFAULT
STATTYPE                       VARCHAR2                IN     DEFAULT
FORCE                          BOOLEAN                 IN     DEFAULT
Your post contains an error.
Provide all mandatory arguments (only OWNNAME and TABNAME have no default).
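Since only OWNNAME and TABNAME have no default, both must be supplied, and single quotes inside the 'what' string must be doubled. A minimal corrected sketch (the schema name is illustrative; the interval re-runs the job daily at 23:00):
DECLARE
  job BINARY_INTEGER;
BEGIN
  DBMS_JOB.SUBMIT(
    job       => job,
    what      => 'BEGIN DBMS_STATS.GATHER_TABLE_STATS(ownname => ''SCOTT'', tabname => ''TABLE_NAME''); END;',
    next_date => TO_DATE('24/11/2009 23:00', 'dd/mm/yyyy hh24:mi'),
    interval  => 'TRUNC(SYSDATE + 1) + 23/24');  -- next run: tomorrow at 23:00
  COMMIT;  -- the job only becomes visible to the job queue after commit
END;
/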

Similar Messages

  • Gathering daily statistics of a table using DBMS_STATS..!!

    Hi all,
Can somebody please help me with how to collect daily statistics on a table?
Executing DBMS_STATS is fine, but when I assign it as a job using OEM the job never starts. It just shows the status as 'job submitted' but never executes.
Is there any other way to execute DBMS_STATS daily?

In 10g, the automatic statistics gathering job runs daily at 22:00 by default.
Otherwise, you can execute something like
begin
  dbms_job.isubmit(job       => 1,
                   what      => 'BEGIN
                                   DBMS_STATS.GATHER_DATABASE_STATS (
                                     estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
                                     block_sample     => TRUE,
                                     method_opt       => ''FOR ALL COLUMNS SIZE AUTO'',
                                     degree           => 6,
                                     granularity      => ''ALL'',
                                     cascade          => TRUE,
                                     options          => ''GATHER STALE'');
                                 END;',
                   next_date => sysdate + 2,
                   interval  => 'TRUNC(SysDate + 1) + 22/24',
                   no_parse  => TRUE);
end;
/
commit;
Make sure you commit.
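On 10g you can also use DBMS_SCHEDULER instead of DBMS_JOB; a minimal sketch, assuming you want a job that gathers stale statistics daily at 22:00 (the job name is illustrative):
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'GATHER_STALE_STATS_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_DATABASE_STATS(options => ''GATHER STALE''); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=22; BYMINUTE=0',
    enabled         => TRUE);
END;
/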

  • Help: why does brconnect not collect statistics for the MSEG table?

    I found that the MSEG table's statistics are too old,
    so I checked the logs in DB13, and the scheduled job does not collect statistics for MSEG.
    Then I executed manually: brconnect -c -u system/system -f stats -t mseg -p 4
    This command still does not collect statistics for mseg.
    KS1DSDB1:oraprd 2> brconnect -c -u system/system -f stats -t mseg –f collect -p 4
    BR0801I BRCONNECT 7.00 (46)
    BR0154E Unexpected option value '–f' found at position 8
    BR0154E Unexpected option value 'collect' found at position 9
    BR0806I End of BRCONNECT processing: ceenwjre.log 2010-11-12 08.41.38
    BR0280I BRCONNECT time stamp: 2010-11-12 08.41.38
    BR0804I BRCONNECT terminated with errors
    KS1DSDB1:oraprd 3> brconnect -c -u system/system -f stats -t mseg -p 4
    BR0801I BRCONNECT 7.00 (46)
    BR0805I Start of BRCONNECT processing: ceenwjse.sta 2010-11-12 08.42.04
    BR0484I BRCONNECT log file: /oracle/PRD/sapcheck/ceenwjse.sta
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.11
    BR0813I Schema owners found in database PRD: SAPPRD*, SAPPRDSHD+
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.12
    BR0807I Name of database instance: PRD
    BR0808I BRCONNECT action ID: ceenwjse
    BR0809I BRCONNECT function ID: sta
    BR0810I BRCONNECT function: stats
    BR0812I Database objects for processing: MSEG
    BR0851I Number of tables with missing statistics: 0
    BR0852I Number of tables to delete statistics: 0
    BR0854I Number of tables to collect statistics without checking: 0
    BR0855I Number of indexes with missing statistics: 0
    BR0856I Number of indexes to delete statistics: 0
    BR0857I Number of indexes to collect statistics: 0
    BR0853I Number of tables to check (and collect if needed) statistics: 1
    Owner SAPPRD: 1
    MSEG     
    BR0846I Number of threads that will be started in parallel to the main thread: 4
    BR0126I Unattended mode active - no operator confirmation required
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
    BR0817I Number of monitored/modified tables in schema of owner SAPPRD: 1/1
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
    BR0877I Checking and collecting table and index statistics...
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
    BR0879I Statistics checked for 1 table
    BR0878I Number of tables selected to collect statistics after check: 0
    BR0880I Statistics collected for 0/0 tables/indexes
    BR0806I End of BRCONNECT processing: ceenwjse.sta 2010-11-12 08.42.16
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.17
    BR0802I BRCONNECT completed successfully
    The log says:
    Number of tables selected to collect statistics after check: 0
    Could you give some advice? Thanks a lot.

    Hello,
    If you would like to force the creation of the stats for table MSEG, you need to use the '-f collect' (force) option.
    If you leave out the force option, the stats_change_threshold parameter is applied, as you correctly said:
    [http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/02/0ae0c6395911d5992200508b6b8b11/content.htm|http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/02/0ae0c6395911d5992200508b6b8b11/content.htm]
    [http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/cb/f1e33a5bd8e934e10000000a114084/content.htm|http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/cb/f1e33a5bd8e934e10000000a114084/content.htm]
    You tried to do this in your second example:
    ==> brconnect -c -u system/system -f stats -t mseg –f collect -p 4
    Therefore you received:
    BR0154E Unexpected option value '–f' found at position 8
    BR0154E Unexpected option value 'collect' found at position 9
    The statement was correct; however, the character in front of the 'f' switch is an en dash (–), not a hyphen.
    Try again with the following statement (-f instead of –f) and you will see that it works:
    ==> brconnect -c -u system/system -f stats -t mseg -f collect -p 4
    I hope this can help you.
    Regards.
    Wim

  • Statistics Analysis on Tables that are often empty

    Right now I'm dealing with a user application that was originally developed on Oracle 10g. Recently the database was upgraded to Oracle 11g, and the schema and data were imported successfully.
    However, since the users started on 11g, some of their applications have been running slower and slower. I'm wondering if the problem could be due to statistics.
    The application has several tables which contain temporary data. Usually these tables are empty, although when a user application runs they are populated and queried against, and at the end the data is deleted. (It's this program that's running slower and slower.)
    I'm just wondering if the problem could be a problem with the statistics on those tables.
    When I look at the LAST_ANALYZED field in USER_TABLES, the date goes back to the date of the last import. I know Oracle regularly updates statistics, so what I suspect is happening is that, by luck, Oracle has only been gathering statistics when the tables are empty. (And since the tables are empty, the statistics are of no help to the optimizer.)
    Am I on the right track?
    And if so, is there a way to automatically trigger a statistics gather job when a table gets above a certain size?
    System details:
    Oracle: 11gR2 (64 bit) Standard version
    File System: ASM (GRID infrastructure)

    Usually these tables are empty, although when a user application runs they are populated, and queried against, and then at the end the data is deleted
    You have three options, sketched in code after this reply (and depending on how the data changes, you might find that not all temporary tables work best with the same option):
    1. Load representative data into the temporary table, collect statistics (including any histograms that you identify as necessary) and then lock the statistics
    2. Modify the job to re-gather statistics immediately after a temporary table is populated
    3. Delete statistics and then lock the statistics and check the results (execution plan and performance) when the optimizer uses dynamic sampling
    Note : It is perfectly reasonable to create indexes on temporary tables -- provided that you DO create the correct indexes. If jobs are querying the temporary tables for the full data set (all rows), indexes are a hindrance. If there are many separate queries against the temporary table, each query retrieving a small set of rows, an index or two may be beneficial. Also, some designs use unique indexes to enforce uniqueness when the tables are loaded.
    Hemant K Chitale
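    A minimal sketch of options 1 and 3 above, assuming a temporary table named MY_TEMP_TAB in the current schema (the name is illustrative):
    -- Option 1: gather statistics while the table holds representative data, then lock them
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_TEMP_TAB');
      DBMS_STATS.LOCK_TABLE_STATS(USER, 'MY_TEMP_TAB');
    END;
    /
    -- Option 3: delete and lock the statistics so the optimizer falls back to dynamic sampling
    BEGIN
      DBMS_STATS.DELETE_TABLE_STATS(USER, 'MY_TEMP_TAB');
      DBMS_STATS.LOCK_TABLE_STATS(USER, 'MY_TEMP_TAB');
    END;
    /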

  • Calculating Statistics on DR$ tables

    I have several DR$ tables in my database. I think they come from my use of text indexes. This is a third-party application. When I run DBMS_STATS to gather statistics on the entire schema, it never calculates statistics on the DR$ tables. Somewhere I had read that Oracle uses the rule-based optimizer (yes, the rule-based optimizer) for these tables. I am on Oracle 10.2.0.4. When I rebuild the text indexes, statistics are not computed on these tables either.
    What is strange is that some of my queries involving text indexes run slow, yet Database Control (the database tuning advisor) is recommending gathering statistics on these tables!!! Can someone provide their thoughts on this?

    You may want to review 'Performance Tuning' chapter in the Text Application Developer's Guide.
    http://docs.oracle.com/cd/B12037_01/text.101/b10729/aoptim.htm
    It discusses those tables and how to optimize the stats.
    It will also tell you this:
    7.1.1 Collecting Statistics
    By default, Oracle Text uses the cost-based optimizer to determine the best execution plan for a query. To allow the optimizer to better estimate costs, you can calculate the statistics on the table you query. To do so, issue the following statement:
    ANALYZE TABLE <table_name> COMPUTE STATISTICS;   
    Did you note that it says 'cost-based optimizer'?
    As for those DR$ tables:
    7.7.7 What tables are involved in queries?
    Answer: All queries look at the index token table. Its name has the form DR$indexname$I. This contains the list of tokens (column TOKEN_TEXT) and the information about the row and word positions where the token occurs (column TOKEN_INFO).
    The row information is stored as internal DOCID values. These must be translated into external ROWID values. The table used for this depends on the type of lookup: For functional lookups, the $K table, DR$indexname$K, is used. This is a simple Index Organized Table (IOT) which contains a row for each DOCID/ROWID pair.
    For indexed lookups, the $R table, DR$indexname$R, is used. This holds the complete list of ROWIDs in a BLOB column.
    Hence we can easily find out whether a functional or indexed lookup is being used by examining a SQL trace, and looking for the $K or $R tables.
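    For reference, the statement the guide recommends and its DBMS_STATS equivalent would look like this (the table name is illustrative; note the guide gathers statistics on the table you query, so treat gathering on the DR$ internal tables themselves as an assumption to test against your Text version):
    ANALYZE TABLE my_base_table COMPUTE STATISTICS;
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_BASE_TABLE', cascade => TRUE);
    END;
    /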

  • Run optimizer statistics for one table

    dear all,
    I noticed an error in DB16: "Missing statistics for a table" for SAPGRP.MC03BF0SETUP.
    How do I run/generate the statistics for this table?

    Dear Somckit
    here is the error msg.  from DB16.
    Description         Table: SAPGRP.MC03BF0SETUP # Table or index has no optimizer statistics
    Correction Type     D
    Corrective Action   Collect optimizer statistics
    Check Log           /oracle/GRP/sapcheck/cdwqhqnz.chk
    Single Messages                                                                      
    No.   Description                                                                    
    1     Table: SAPGRP.MC03BF0SETUP # Table or index has no optimizer statistics        
    2     Table: SAPGRP.MC03BX0SETUP # Table or index has no optimizer statistics        
    3     Table: SAPGRP.MC03UM0SETUP # Table or index has no optimizer statistics        
    4     Index: SAPGRP.MC03BF0SETUP~0 # Table or index has no optimizer statistics      
    5     Index: SAPGRP.MC03BX0SETUP~0 # Table or index has no optimizer statistics      
    6     Index: SAPGRP.MC03UM0SETUP~0 # Table or index has no optimizer statistics                                                                               
    Thank u.

  • 10g: parallel pipelined table func. using table(cast(SQL collect.))?

    Hi,
    I am trying to distribute SQL data objects, stored in a SQL type TABLE OF <object-type>, to multiple (parallel) instances of a table function,
    by passing a CURSOR(...) to the table function, which selects from the TABLE OF storage via "select * from TABLE(CAST(<storage> as <storage-type>))".
    But Oracle always uses only a single table function instance :-(
    whatever hints I provide or settings I use for the parallel table function (parallel_enable ...).
    Could it be that this is because my data is not globally available, but exists only in the main session?
    Can someone confirm that it is not possible to start multiple parallel table functions
    selecting from a SQL TABLE OF <object> storage?
    Here's an example sqlplus program to show the issue:
    -------------------- snip ---------------------------------------------
    set serveroutput on;
    drop table test_table;
    drop type ton_t;
    drop type test_list;
    drop type test_obj;
    create table test_table (
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    create or replace type test_obj as object (
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    );
    /
    create or replace type test_list as table of test_obj;
    /
    create or replace type ton_t as table of number;
    /
    create or replace package test_pkg
    as
         type test_rec is record (
              a number(19,0),
              b timestamp with time zone,
              c varchar2(256)
         );
         type test_tab is table of test_rec;
         type test_cur is ref cursor return test_rec;
         function TF(mycur test_cur)
    return test_list pipelined
    parallel_enable(partition mycur by hash(a));
    end;
    /
    create or replace package body test_pkg
    as
         function TF(mycur test_cur)
    return test_list pipelined
    parallel_enable(partition mycur by hash(a))
    is
              sid number;
              counter number(19,0) := 0;
              myrec test_rec;
              mytab test_tab;
              mytab2 test_list := test_list();
         begin
              select userenv('SID') into sid from dual;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): enter');
              loop
                   fetch mycur into myRec;
                   exit when mycur%NOTFOUND;
                   mytab2.extend;
                   mytab2(mytab2.last) := test_obj(myRec.a, myRec.b, myRec.c);
              end loop;
              for i in mytab2.first..mytab2.last loop
                   -- attention: saves own SID in test_obj.a for indication to caller
                   --     how many sids have been involved
                   pipe row(test_obj(sid, mytab2(i).b, mytab2(i).c));
                   counter := counter + 1;
              end loop;
              dbms_output.put_line('test_pkg.TF( sid => '''|| sid || ''' ): exit, piped #' || counter || ' records');
         end;
    end;
    /
    declare
         myList test_list := test_list();
         myList2 test_list := test_list();
         sids ton_t := ton_t();
    begin
         for i in 1..10000 loop
              myList.extend; myList(myList.last) := test_obj(i, sysdate, to_char(i+2));
         end loop;
         -- save into the real table
         insert into test_table select * from table(cast (myList as test_list));
         dbms_output.put_line(chr(10) || 'copy ''mylist'' to ''mylist2'' by streaming via table function...');
         select test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from table(cast (myList as test_list)) tab)));
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line('worker thread''s sid list:');
         for i in sids.first..sids.last loop
              dbms_output.put_line('sid #' || sids(i));
         end loop;
         dbms_output.put_line(chr(10) || 'copy physical ''test_table'' to ''mylist2'' by streaming via table function:');
         select test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(CURSOR(select /*+ parallel(tab,10) */ * from test_table tab)));
         dbms_output.put_line('... saved #' || myList2.count || ' records');
         select distinct(tab.a) bulk collect into sids from table(cast (myList2 as test_list)) tab;
         dbms_output.put_line('worker thread''s sid list:');
         for i in sids.first..sids.last loop
              dbms_output.put_line('sid #' || sids(i));
         end loop;
    end;
    /
    -------------------- snap ---------------------------------------------
    Here's the output:
    -------------------- snip ---------------------------------------------
    copy 'mylist' to 'mylist2' by streaming via table function...
    test_pkg.TF( sid => '98' ): enter
    test_pkg.TF( sid => '98' ): exit, piped #10000 records
    ... saved #10000 records
    worker thread's sid list:
    sid #98 -- ONLY A SINGLE SID HERE!
    copy physical 'test_table' to 'mylist2' by streaming via table function:
    ... saved #10000 records
    worker thread's sid list:
    sid #128 -- A LIST OF SIDS HERE!
    sid #141
    sid #85
    sid #125
    sid #254
    sid #101
    sid #124
    sid #109
    sid #142
    sid #92
    PL/SQL procedure successfully completed.
    -------------------- snap ---------------------------------------------
    I posted this to the newsgroup comp.databases.oracle.server
    (summary: "10g: parallel pipelined table functions with cursor selecting from table(cast(SQL collection)) doesn't work"),
    but I didn't get a response.
    There i also wrote some background information about my application:
    -------------------- snip ---------------------------------------------
    My application selects data in two steps/stages.
    A first select fetches minimal context base data,
    mainly to identify the due driving data records.
    A second select fetches all the "real" data needed to process a context
    (joining many more tables here, which I don't want to do for non-due records).
    So it does the stage #1 select first, then the stage #2 select, based on the stage #1 results.
    The first implementation of the application did the stage #1 select in the main session of the PL/SQL code.
    For the stage #2 select, it dispatched to multiple parallel table functions (in multiple worker sessions) for the "real work".
    That worked.
    However, there was a flaw:
    between records from the stage #1 selection and records from the stage #2 selection there is a 1:n relation (via a key / foreign-key relation).
    That is, for one resulting record from stage #1, there are x records from stage #2.
    That forced me to use "cluster curStage2 by (theKey)",
    because a worker session needs to evaluate the overall status for a context of one record from stage #1 and x records from stage #2
    (so it needs to have the x records of stage #2 together).
    This resulted in a delay in starting up the worker sessions (I didn't find a way to get rid of this).
    So I wanted to shift the invocation of the worker sessions to the stage #1 selection;
    then I don't need the "cluster curStage2 by (theKey)" anymore!
    But I also need to update the primary driving data,
    so the stage #1 select is a 'select ... for update ...',
    and you can't use that in a CURSOR for table functions (which I can understand).
    So I have to do my stage #1 selection in two steps:
    1. 'select for update' in the main session, collecting the result in a SQL collection.
    2. Pass the collected data to parallel table functions.
    And for step 2, I noticed that it doesn't start multiple parallel table function instances.
    As a workaround,
    if it's just not possible to start multiple parallel pipelined table functions dispatching from 'select * from TABLE(CAST(... as ...))',
    I would need to select again from the base tables, driven by the SQL collection data.
    But before I do that, I wanted to verify that it's really not possible.
    Maybe I'm just missing a special Oracle hint or something else "out of the box" :-)
    -------------------- snap ---------------------------------------------
    - many thanks!
    rgds,
    Frank
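    A hedged sketch of the workaround described at the end of the post: stage the collection in a real (here: global temporary) table, so the parallel coordinator gets an ordinary row source it can partition. Names follow the example script; the assumption that PX can split a scan of a staged table, where it cannot split TABLE(CAST(...)), should be verified:
    create global temporary table test_stage (
         a number(19,0),
         b timestamp with time zone,
         c varchar2(256)
    ) on commit preserve rows;
    declare
         myList  test_list := test_list();
         myList2 test_list;
    begin
         -- ... fill myList as in the example above ...
         insert into test_stage select * from table(cast(myList as test_list));
         select test_obj(a, b, c) bulk collect into myList2
         from table(test_pkg.TF(cursor(
              select /*+ parallel(tab,10) */ * from test_stage tab)));
    end;
    /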


  • Disable Statistics for specific Tables

    Is it possible to disable statistics for specific tables???

    If you want to stop gathering statistics for certain tables, you would simply not call DBMS_STATS.GATHER_TABLE_STATS on those particular tables (I'm assuming that is how you are gathering statistics at the moment). The old statistics will remain around for the CBO, but they won't be updated. Is that really what you want?
    If you are currently using GATHER_SCHEMA_STATS to gather statistics, you would have to convert to calling GATHER_TABLE_STATS on each table. You'll probably want to have a table set up that lists what tables to exclude and use that in the procedure that calls GATHER_TABLE_STATS.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
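    A minimal sketch of that approach, assuming an exclusion table STATS_EXCLUDE with a TABLE_NAME column (both names are illustrative):
    BEGIN
      FOR t IN (SELECT table_name
                FROM   user_tables
                WHERE  table_name NOT IN (SELECT table_name FROM stats_exclude))
      LOOP
        DBMS_STATS.GATHER_TABLE_STATS(USER, t.table_name);
      END LOOP;
    END;
    /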

  • Import Table Statistics to another table

    Hi,
    I'd just like to know if I can use dbms_stats.import_table_stats to import table statistics to another table.
    Scenario:
    I exported the table statistics of the table (T1) using the command below.
    exec dbms_stats.export_table_stats('<user>','T1',NULL,'<stat table>');
    PL/SQL procedure successfully completed.
    And then I have another table named T2; T1 and T2 are identical tables. T2 does not have statistics (I intentionally did not gather statistics). I am wondering
    if I can import the table statistics from T1 to T2 using the dbms_stats package.
    From what I understand, statistics can be imported back into the same table (T1) but not into another table using the dbms_stats package. If I am wrong, please correct me.
    Thanks

    Hi,
    just try ;-) you probably lose nothing.
    Check the LAST_ANALYZED value for that table in USER_TABLES afterwards;
    if something is wrong, run regular stats.
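    One commonly suggested approach, to be verified on your version: a DBMS_STATS statistics table stores the table name in its C1 column, so you can rename the exported rows from T1 to T2 inside the stat table and then import them for T2. A sketch, reusing the stat table from the example:
    BEGIN
      DBMS_STATS.EXPORT_TABLE_STATS(USER, 'T1', NULL, 'STAT_TAB');
    END;
    /
    -- Assumption: C1 holds the table name in the standard stat-table layout
    UPDATE stat_tab SET c1 = 'T2' WHERE c1 = 'T1';
    COMMIT;
    BEGIN
      DBMS_STATS.IMPORT_TABLE_STATS(USER, 'T2', NULL, 'STAT_TAB');
    END;
    /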

  • Create new CBO statistics for the tables

    Dear All,
    I am facing bad performance on the server. In SM50 I see that the read and delete processes on table D010LINC take
    a long time. How do I create new CBO statistics for the tables D010TAB and D010INC? Please suggest.
    Regards,
    Kumar

    Hi,
    I am facing a problem when saving/activating, so SAP has told me to create new CBO statistics for the tables D010TAB and D010INC.
    Now, as you suggested, in transaction DB20:
    Table D010LINC
    the error "Table D010LINC does not exist in the ABAP Dictionary" comes up.
    Table D010TAB
         Statistics are current (|Changes| < 50 %)
    New Method           C
    New Sample Size
    Old Method           C                       Date                 10.03.2010
    Old Sample Size                              Time                 07:39:37
    Old Number                51,104,357         Deviation Old -> New       0  %
    New Number                51,168,679         Deviation New -> Old       0  %
    Inserted Rows                160,770         Percentage Too Old         0  %
    Changed Rows                       0         Percentage Too Old         0  %
    Deleted Rows                  96,448         Percentage Too New         0  %
    Use                  O
    Active Flag          P
    Analysis Method      C
    Sample Size
    Please suggest
    Regards,
    Kumar

  • How to Update the statistics of a table

    Dear Experts,
    I want to update the statistics of the table D010INC. How can I update it?
    Please provide me detailed steps.
    Regards,
    Farook.
    Edited by: farook shaik on Dec 15, 2008 1:04 PM

    Check this SAP help:
    http://help.sap.com/saphelp_nw04/Helpdata/EN/df/455e97747111d6b25100508b6b8a93/content.htm
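    On the database side, the brconnect syntax from the MSEG thread above should apply here as well; a hedged one-liner (run as the appropriate OS user, with the password and parallel degree adjusted to your system):
    brconnect -c -u system/<password> -f stats -t D010INC -f collect -p 4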

  • How to build statistics for a table, it's urgent, points will be rewarded

    Hi friends,
    I want to create statistics for the MSEG table in the production server, because they are not up to date.
    The total number of entries in the table is 230,000.
    My O/S is Windows 2003 (cluster) / Oracle 9.2 / ECC 5.
    I need a step-by-step procedure to build or create statistics using DB20.
    Regards,
    s.senthil kumar

    Hi Stefan,
    Thanks for your reply.
    I need some more clarification on DB20.
    What values do I have to give in the following fields:
    new method and new sample size?
    Are there any fields or check boxes I need to fill in before creating new statistics?
    My current screen shows the following values:
    new method = 'C'.  new sample size = '  '.
    old method  = 'C'. old sample size = '   '.
    Old Number   5,965         Deviation Old -> New   3,669
    New Number  24,834       Deviation New -> Old      97-
    Inserted Rows 0              Percentage Too Old         0
    Changed Rows  0            Percentage Too Old         0
    Deleted Rows     0           Percentage Too New         0
    Use                  A         
    Active Flag        A
    Analysis Method      C
    and the history check box is marked.
    Can I run DB20 while the server is running, or at peak times?
    Regards,
    s.senthil kumar

  • Unable to generate statistics for the table

    I have a staging table of more than 600 columns which has range partitioning. The size of the table is 4 GB. The average size of a row is around 3 MB. I have created a functional index on one of the columns, ABC VARCHAR2(50), which holds only number values. Now when I try generating statistics for this table through ANALYZE or DBMS_STATS, it gives an "invalid number" error, but when I drop this index and try analyzing, it works.
    I executed a TO_NUMBER(ABC) query on the table and it works fine.
    I have functional indexes on other tables too, but I don't get this problem with those tables. I tried dropping the index and re-creating it, but that didn't work either.
    I suspected data block corruption, so I checked the alert log and trace files but found nothing.
    So what is this magic called?

    I am using TO_NUMBER on the column.
    I have checked MetaLink for the same problem but could not find anything. I still suspect a datafile error that I am unable to see in the alert log or trace files. So I am going to try it this way:
    1) Create new Tablespace with New Datafile
    2) Transfer the table from existing Tablespace to new Tablespace
    3) Create the functional index in the same new Tablespace
    4) Try generating the statistics.
    5) If it works, then create separate tablespaces for data and index.
    Hope it works !!
    Thanks for the reply guys.
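    Before moving tablespaces, it may be worth locating the rows that TO_NUMBER chokes on during stats gathering; a hedged check (the table name is illustrative, and REGEXP_LIKE requires 10g or later):
    -- Rows where ABC is not purely numeric would make the functional index's TO_NUMBER fail
    SELECT rowid, abc
    FROM   staging_table
    WHERE  abc IS NOT NULL
    AND    NOT REGEXP_LIKE(abc, '^[0-9]+$');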

  • Display statistics history of tables and indexes

    hi,
    In the display of statistics history for tables and indexes I can see only one year of table history. How can we check the table history for more than one year?
    Thanks,
    Nithin

    Welcome to SDN,
    Please search help.sap.com or SDN before you post your query, to avoid duplicate threads.
    I think the link below will clear up your doubt: help.sap.com/saphelp_nw70ehp1/helpdata/en/a6/8c933bf687e85de10000000a11402f/frameset.htm

  • Statistics on temporary table issue

    We have a temporary table that sometimes causes a performance issue because the table statistics are not correct. The solution might be to compute the object statistics just before the several select statements that use the temporary table are executed (this is currently not the case). I think there can still be issues with this approach:
    step 1.) Job 1 fills temporary table and gathers statistics on the table
    step 2.) Job 1 executes first sql on temporary table
    step 3.) Job 2 fills temporary table and gathers statistics on the table
    step 4.) Job 2 executes first sql on temporary table using the new statistics
    step 5.) Job 1 executes second sql on the table - but has now new (= wrong) table statistics of job 2
    Job 1 executes for mandant 1 and job 2 for mandant 2, etc. Some of the "heap-organized" tables are partitioned by mandant.
    How can we solve this problem? We consider to partition the temporary table into partitions by mandant, too. Are there other / better solutions?
    (Oracle 10.2.0.4)

    Hello,
    If you don't have statistics on your temporary table, Oracle will use dynamic sampling instead.
    So, you may try to delete the statistics on this table and then lock them, as follows:
    execute dbms_stats.delete_table_stats('schema','temporary_table');
    execute dbms_stats.lock_table_stats('schema','temporary_table');
    You may want to check the performance afterwards, as dynamic sampling is also resource-consuming.
    Hope this help.
    Best regards,
    Jean-Valentin
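    The same commands as a runnable block, plus the per-query hint if you want to steer the sampling depth (the schema, table name, and level 4 are illustrative):
    BEGIN
      DBMS_STATS.DELETE_TABLE_STATS('SCHEMA', 'TEMPORARY_TABLE');
      DBMS_STATS.LOCK_TABLE_STATS('SCHEMA', 'TEMPORARY_TABLE');
    END;
    /
    -- Optional: raise the dynamic sampling level for a specific query
    SELECT /*+ dynamic_sampling(t 4) */ COUNT(*)
    FROM   temporary_table t;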
