Statistics Collection to a Table

Hello,
I need to gather statistics for a few tables from a schema into a separate table; basically, I have to gather the stats into a newly created table. My question is: how can I do this? I know how to generate statistics using the DBMS_STATS package, but I have no idea how to gather these statistics into a separate table.
Could anyone shed some light here?
Thanks.

Hello,
I created the table using the following DBMS_STATS call.
BEGIN
  DBMS_STATS.create_stat_table (
    ownname  => 'TEST',
    stattab  => 'stats_table',
    tblspace => 'users');
END;
/
When this table was created using the above script, it was created with the following columns:
STATID VARCHAR2(30 BYTE),
TYPE CHAR(1 BYTE),
VERSION NUMBER,
FLAGS NUMBER,
C1 VARCHAR2(30 BYTE),
C2 VARCHAR2(30 BYTE),
C3 VARCHAR2(30 BYTE),
C4 VARCHAR2(30 BYTE),
C5 VARCHAR2(30 BYTE),
N1 NUMBER,
N2 NUMBER,
N3 NUMBER,
N4 NUMBER,
N5 NUMBER,
N6 NUMBER,
N7 NUMBER,
N8 NUMBER,
N9 NUMBER,
N10 NUMBER,
N11 NUMBER,
N12 NUMBER,
D1 DATE,
R1 RAW(32),
R2 RAW(32),
CH1 VARCHAR2(1000 BYTE)
How can I interpret which column is for what? For example, C1 to C5, N1 to N12, or R1 and CH1: what do they mean?
Also, will I be able to get any information regarding the cost of the SQL statements that were run? If not, how can I obtain that information?
Then I gathered statistics for one table to test.
exec dbms_stats.gather_table_stats(ownname=>'TEST',
estimate_percent=>10,
statown=>'TEST',
tabname=>'ord_receipts',
stattab=>'STATS_TABLE',
statid=>'TESTING');
The data was inserted into STATS_TABLE, but I could not make sense of it.
I need to run these statistics on a daily basis and save them to a different table each day. For example, 01/01/2012 should have its own table for the collected stats, and 01/02/2012 should have another table to collect the statistics.
Is there any way, when gathering the statistics, to dynamically create the STATS_TABLE name concatenated with SYSDATE each day, so I can analyze the statistics?
Thanks..
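A minimal sketch of one way to do this, assuming the TEST schema and the ORD_RECEIPTS table from the example above; the dated name suffix (YYYYMMDD) is just an illustrative convention, not something the thread prescribes. The stats are gathered into the dictionary as usual and then copied into the day's table with DBMS_STATS.EXPORT_TABLE_STATS:
DECLARE
  l_suffix VARCHAR2(8)  := TO_CHAR(SYSDATE, 'YYYYMMDD');
  l_tab    VARCHAR2(30) := 'STATS_TABLE_' || l_suffix;   -- e.g. STATS_TABLE_20120101
BEGIN
  -- create today's statistics table
  DBMS_STATS.create_stat_table(ownname  => 'TEST',
                               stattab  => l_tab,
                               tblspace => 'USERS');
  -- gather fresh statistics into the dictionary as usual
  DBMS_STATS.gather_table_stats(ownname          => 'TEST',
                                tabname          => 'ORD_RECEIPTS',
                                estimate_percent => 10);
  -- copy the just-gathered dictionary statistics into today's table
  DBMS_STATS.export_table_stats(ownname => 'TEST',
                                tabname => 'ORD_RECEIPTS',
                                stattab => l_tab,
                                statid  => 'DAILY_' || l_suffix,
                                statown => 'TEST');
END;
/
Note that the STATTAB parameter of GATHER_TABLE_STATS, as used in the post above, is meant to save the statistics that existed before the gather (so they can be restored later); exporting after the gather, as in this sketch, captures the newly gathered values.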

Similar Messages

  • Automatic Optimizer Statistics Collection Enabled still tables not analyzed

    Hello,
    We have Oracle 11g R1 database. Our automatic Optimizer Statistics Collection settings are enabled, still I don't see the tables being analyzed, any suggestions if I am missing any settings. All tables do get analyzed if I do manual statistics gathering.
    SQL> select CLIENT_NAME ,STATUS from DBA_AUTOTASK_CLIENT;
    CLIENT_NAME STATUS
    auto optimizer stats collection ENABLED
    auto space advisor ENABLED
    sql tuning advisor ENABLED
    Thanks,
    SK

    user599845 wrote:
    still I don't see the tables being analyzed
    Post the SQL and results that led you to this conclusion.
    Realize that statistics can be "collected" without the LAST_ANALYZED column being updated.
    If the data within a table does not change, then nothing would be gained by "updating" the statistics to the same values as before.
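    One quick way to check this yourself (a sketch; replace TEST with the schema you are looking at) is to compare LAST_ANALYZED and the staleness flag against the DML counters Oracle tracks for the automatic job:
    -- which tables the automatic job would consider missing or stale
    select table_name, last_analyzed, stale_stats
    from   dba_tab_statistics
    where  owner = 'TEST';
    -- flush the in-memory DML monitoring counters, then look at the tracked changes
    exec dbms_stats.flush_database_monitoring_info
    select table_owner, table_name, inserts, updates, deletes, timestamp
    from   dba_tab_modifications
    where  table_owner = 'TEST';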

  • How to disable automatic statistics collections on tables

    Hi
    I am using Oracle 10g and we have a few tables which are frequently truncated and have new rows added to them. Oracle automatically analyzes the tables by some means, which collects statistics on them, but at the wrong time (when the table is empty). This makes my query do a full table scan rather than use indexes, since the statistics were collected when the table was empty. Could anyone please let me know how to disable the automatic statistics collection feature of Oracle?
    Cheers
    Anantha PV

    First of all I think it's important that you understand why Oracle collects statistics on these tables: Because it considers the statistics of the object to be missing or stale. So if you just disable the statistics gathering on these tables then you won't have statistics at all or outdated statistics.
    So as said by the previous posts you should gather the statistics manually yourself anyway. If you do so right after loading the data into the truncated table, you don't need to disable the automatic statistics gathering as it only processes objects that are stale or don't have statistics at all.
    If you still think that you need to disable it there are several ways to accomplish it:
    As already mentioned, for particular objects you can lock the statistics using DBMS_STATS.LOCK_TABLE_STATS, or for a complete schema using DBMS_STATS.LOCK_SCHEMA_STATS. Then these statistics won't be touched by the automatic gathering job. You can still gather statistics on them using the FORCE=>TRUE option of the GATHER_*_STATS procedures.
    If you want to change the automatic gathering job that it only gathers statistics on objects owned by Oracle (data dictionary, AWR etc.), then you can do so by calling DBMS_STATS.SET_PARAM('AUTOSTATS_TARGET', 'ORACLE'). This is the recommended method.
    If you disable the scheduled job as mentioned in the documentation, by calling DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB'), then no statistics at all will be gathered automatically, causing your data dictionary statistics to become stale over time, which could lead to suboptimal performance of queries on the data dictionary.
    All this applies to Oracle 10.2, some of the features mentioned might not be available in Oracle 10.1 (as you haven't mentioned your version of 10g).
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
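    A minimal sketch of the two options described above (the schema and table names are placeholders):
    -- option 1: lock statistics on the volatile table so the automatic job skips it
    exec dbms_stats.lock_table_stats(ownname => 'APP', tabname => 'VOLATILE_TAB')
    -- gather manually right after the reload; FORCE => TRUE overrides the lock
    exec dbms_stats.gather_table_stats(ownname => 'APP', tabname => 'VOLATILE_TAB', force => TRUE)
    -- option 2: restrict the automatic job to Oracle-owned objects only
    exec dbms_stats.set_param('AUTOSTATS_TARGET', 'ORACLE')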

  • Managing statistics for object collections used as table types in SQL

    Hi All,
    Is there a way to manage statistics for collections used as table types in SQL.
    Below is my test case
    Oracle Version :
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL>
    Original Query:
    SELECT
         9999,
         tbl_typ.FILE_ID,
         tf.FILE_NM,
         tf.MIME_TYPE,
         dbms_lob.getlength(tfd.FILE_DATA)
    FROM
         TG_FILE tf,
         TG_FILE_DATA tfd,
         (
              SELECT *
              FROM
                   TABLE(
                        SELECT
                             CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                             OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                        FROM
                             dual
                   )
         ) tbl_typ
    WHERE
         tf.FILE_ID  = tfd.FILE_ID
    AND  tf.FILE_ID  = tbl_typ.FILE_ID
    AND  tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:02.90
    Execution Plan
    Plan hash value: 3970072279
    | Id  | Operation                                | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                         |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  1 |  HASH JOIN                               |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  2 |   HASH JOIN                              |              |  8168 |   287K|   695   (3)| 00:00:09 |
    |   3 |    VIEW                                  |              |  8168 |   103K|    29   (0)| 00:00:01 |
    |   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|              |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   5 |      FAST DUAL                           |              |     1 |       |     2   (0)| 00:00:01 |
    |   6 |    TABLE ACCESS FULL                     | TG_FILE      |   565K|    12M|   659   (2)| 00:00:08 |
    |   7 |   TABLE ACCESS FULL                      | TG_FILE_DATA |   852K|   128M|  3863   (1)| 00:00:47 |
    Predicate Information (identified by operation id):
       1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
    Statistics
              7  recursive calls
              0  db block gets
          16783  consistent gets
          16779  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Indexes are present on column FILE_ID in both tables (TG_FILE, TG_FILE_DATA).
    select
         index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
    from
         all_indexes
    where table_name in ('TG_FILE','TG_FILE_DATA');
    INDEX_NAME                     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR     NUM_ROWS SAMPLE_SIZE
    TG_FILE_PK                          2        2160        552842             21401       552842      285428
    TG_FILE_DATA_PK                     2        3544        852297             61437       852297      852297
    Ideally the view should have used a NESTED LOOP join, so that the indexes are used, since the number of rows coming from the object collection is only 2.
    But the optimizer takes the default estimate of 8168 rows, leading to a HASH join between the tables and full table scans.
    So my question is: is there any way I can change the statistics while using collections in SQL?
    I can use hints to force index use, but I am trying to avoid that for now. Currently the time shown in the explain plan is not accurate.
    Modified query with hints :
    SELECT    
        /*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
        9999,
        tbl_typ.FILE_ID,
        tf.FILE_NM ,
        tf.MIME_TYPE ,
        dbms_lob.getlength(tfd.FILE_DATA)
    FROM
        TG_FILE tf,
        TG_FILE_DATA tfd,
        (
            SELECT *
            FROM
                TABLE(
                    SELECT
                        CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                        OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                    FROM
                        dual
                )
        ) tbl_typ
    WHERE
        tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 1670128954
    | Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                          |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   1 |  NESTED LOOPS                             |                 |       |       |            |          |
    |   2 |   NESTED LOOPS                            |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   3 |    NESTED LOOPS                           |                 |  8168 |  1363K| 16379   (1)| 00:03:17 |
    |   4 |     VIEW                                  |                 |  8168 |   103K|    29   (0)| 00:00:01 |
    |   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   6 |       FAST DUAL                           |                 |     1 |       |     2   (0)| 00:00:01 |
    |   7 |     TABLE ACCESS BY INDEX ROWID           | TG_FILE_DATA    |     1 |   158 |     2   (0)| 00:00:01 |
    |*  8 |      INDEX UNIQUE SCAN                    | TG_FILE_DATA_PK |     1 |       |     1   (0)| 00:00:01 |
    |*  9 |    INDEX UNIQUE SCAN                      | TG_FILE_PK      |     1 |       |     1   (0)| 00:00:01 |
    |  10 |   TABLE ACCESS BY INDEX ROWID             | TG_FILE         |     1 |    23 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
           filter("TF"."FILE_ID"="TFD"."FILE_ID")
    Statistics
              0  recursive calls
              0  db block gets
             16  consistent gets
              8  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Thanks,
    B

    Thanks Tubby,
    While searching I found that we can use the CARDINALITY hint to set statistics for a TABLE function.
    But I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it in my first post.
    http://www.oracle-developer.net/display.php?id=427
    Going through that article, it mentions the following ways to set statistics:
    1) CARDINALITY (undocumented)
    2) OPT_ESTIMATE (undocumented)
    3) DYNAMIC_SAMPLING (documented)
    4) Extensible Optimizer
    I tried it out with the different hints and it works as expected,
    i.e. CARDINALITY and OPT_ESTIMATE take the value set in the hint,
    but the DYNAMIC_SAMPLING hint provides the most accurate estimate of the rows (which is 2 in this particular case).
    With CARDINALITY hint
    SELECT
         /*+ cardinality( e, 5) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     5 |    10 |    29   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With OPT_ESTIMATE hint
    SELECT
         /*+ opt_estimate(table, e, scale_rows=0.0006) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e;
    Execution Plan
    Plan hash value: 4043204977
    | Id  | Operation                              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                       |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   1 |  VIEW                                  |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   2 |   COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   3 |    FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With DYNAMIC_SAMPLING hint
    SELECT
         /*+ dynamic_sampling( e, 5) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     2 |     4 |    11   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     2 |     4 |    11   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement (level=2)
    I will be testing the last option, "Extensible Optimizer", and will put my findings here.
    I hope Oracle, in future releases, improves statistics gathering for collections used in DML, rather than just defaulting based on the block size.
    By the way, are you aware why it uses the default based on the block size? Is it because that is the smallest granular unit Oracle provides?
    Regards,
    B

  • Doubt regarding automatic statistics collection in Oracle 10g

    I am using Oracle 10g in Linux
    Does statistics collection for the tables throughout the database happen automatically, or should we manually analyze the tables using the ANALYZE command or the DBMS_STATS package?
    AWR collects statistics (snapshots) every hour, but does that mean it collects only session- and database-related statistics and not table-related statistics?

    I am using Oracle 10g in Linux
    Which exact version, and which OS name and version?
    AWR collects statistics(snapshots) every 1 hr but
    Those are performance-related statistics. Read about data gathering and AWR.
    Note that AWR is an extra-cost feature, licensed through the Management Packs.
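    For the table (optimizer) statistics specifically, a quick check, sketched for 10g where the automatic job is called GATHER_STATS_JOB ('SCOTT' is just an example schema):
    -- is the automatic optimizer statistics job present and enabled?
    select job_name, enabled, state
    from   dba_scheduler_jobs
    where  job_name = 'GATHER_STATS_JOB';
    -- when were the tables in a given schema last analyzed?
    select table_name, last_analyzed, num_rows
    from   dba_tables
    where  owner = 'SCOTT';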

  • How to stop the automatic statistics collection job after the maintenance window

    Hi,
    we are looking for a solution to stop the automatic statistics collection job once the maintenance window has finished.
    We disable all jobs except the automatic statistics collection, because this is the only one we want to run. Then we define specific values for the interval and duration parameters of the maintenance window to customize this task.
    But for these systems it is very important that this job/task stops immediately when the window is closed.
    So, how can we ensure this behavior?
    For Oracle 10g it is easy, because the statistics job always exists: we can set its duration and create an additional event-based job which kills any jobs that run past their duration.
    In Oracle 11g the statistics job is created by the system while the maintenance window is open.
    We are not able to modify the parameters of this system job. After the maintenance window closes the job is still running - only with another resource priority - but it is running.
    Please help me in this scenario
    Thanks&Regards
    Prem

    So basically you are saying that if none of the tables are changed then GATHER_STATS_JOB will not run; but I see that tables are updated and still the job is not running. I did query dba_scheduler_jobs and the job is enabled and scheduled. Please see my previous post for the output.
    Am I missing anything here? Should I look at some parameter settings?
    So basically you are saying is if none of the tables are changed then GATHER_STATS_JOB will not run,
    GATHER_STATS_JOB will run, and if there is any table with a 10 percent or greater change in its data, it will gather statistics on that table. If no table's data has changed by at least 10 percent, it will not gather statistics.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i41282
    Hope this helps.
    -Anantha
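    For the 11g case described in the question, one possible approach (a sketch only; the window names are examples) is to control the task per maintenance window with DBMS_AUTO_TASK_ADMIN rather than trying to modify the system-generated job:
    BEGIN
      -- keep the automatic optimizer statistics task out of a specific window
      DBMS_AUTO_TASK_ADMIN.disable(
        client_name => 'auto optimizer stats collection',
        operation   => NULL,
        window_name => 'SATURDAY_WINDOW');
      -- shorten the window itself so the task has less room to overrun
      DBMS_SCHEDULER.set_attribute(
        name      => 'SYS.MONDAY_WINDOW',
        attribute => 'DURATION',
        value     => numtodsinterval(2, 'hour'));
    END;
    /
    This does not kill a task that is already running when the window closes, but it does keep the task out of windows in which it must not run.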

  • (10g) Automatic Optimizer Statistics Collection

    Product: ORACLE SERVER
    Written: 2006-07-21
    PURPOSE
    This document introduces the Automatic Optimizer Statistics Collection feature that is new in 10g and describes how it works.
    Explanation
    1. Overview
    Optimizer statistics are gathered automatically by GATHER_STATS_JOB. This job is owned by SYS and its OBJECT_TYPE is JOB.
    The job gathers statistics for all objects in the database whose statistics are missing or stale.
    2. Configuration and mechanism of automatic statistics collection
    1) STATISTICS_LEVEL = TYPICAL | ALL
    2) Statistics are gathered by the predefined GATHER_STATS_JOB.
    3) When the job runs, it determines:
    - which objects have missing or stale statistics;
    - the appropriate sampling percentage needed to produce good statistics;
    - the appropriate columns that need histograms, and the histogram sizes;
    - the degree of parallelism for the statistics collection;
    - the priority order in which the objects' statistics are gathered.
    3. Description of GATHER_STATS_JOB
    This job is created at database creation time and is managed by the Scheduler.
    GATHER_STATS_JOB gathers statistics by calling the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC procedure.
    This procedure behaves very much like the DBMS_STATS.GATHER_DATABASE_STATS procedure with the 'GATHER AUTO' option. The difference is that
    GATHER_DATABASE_STATS_JOB_PROC prioritizes the objects whose statistics need to be gathered and processes them in that order; that is, the objects whose statistics most need updating are processed first.
    This ensures that the most-needed statistics are gathered before the maintenance window closes.
    4. Statistics on Dictionary Objects
    1) Starting with Oracle Database 10g, statistics can also be gathered on the dictionary tables to obtain optimal performance.
    At any time, it is possible to gather statistics on the dictionary tables using the DBMS_STATS.GATHER_SCHEMA_STATS procedure,
    with the GATHER_SYS argument set to TRUE.
    2) A new procedure called DBMS_STATS.GATHER_DICTIONARY_STATS is also available. Using it requires the new system privilege ANALYZE ANY DICTIONARY.
    This privilege allows a user who does not have the SYSDBA privilege to analyze dictionary objects and fixed objects.
    3) The GATHER_DATABASE_STATS procedure has a new argument called GATHER_FIXED. Its default value is FALSE; that is, by default
    statistics are not gathered on the fixed tables.
    Analyzing the fixed tables once, while a typical system workload is running, is usually sufficient.
    4) It is also possible to gather statistics on fixed tables using the GATHER_FIXED_OBJECTS_STATS procedure, to delete statistics
    on all fixed tables, and to export or import statistics for fixed tables.
    Example
    none
    Reference Documents
    <Note:266040.1>

    Hi,
    Please see here,
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#i41448
    If the tables are changing very frequently, it is better to gather the stats manually; otherwise the volatile tables would keep coming up in the stats job again and again.
    As for system statistics and data dictionary statistics, they are not collected by default, so there is no choice but to gather them manually.
    Aman....
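    A short sketch of the dictionary-related calls mentioned above (run as a suitably privileged user):
    BEGIN
      -- statistics on the data dictionary (requires ANALYZE ANY DICTIONARY or SYSDBA)
      DBMS_STATS.gather_dictionary_stats;
      -- statistics on fixed (X$) tables, ideally while a representative workload is running
      DBMS_STATS.gather_fixed_objects_stats;
      -- system (CPU/IO) statistics, which are not collected by default
      DBMS_STATS.gather_system_stats;
    END;
    /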

  • CDMC - Activate Statistics Collection

    Hi,
    I have a question related to CDMC Statistics Collection. It is not clear to me: after activating the collection, what are the periods to evaluate and the period type?
    Please can you clarify?
    Best Regards,
    Zsuzsanna

    Hi Zsuzsanna,
    As shown in the screenshot in my first reply, it takes the given period and period type as input and fetches usage statistics data for that time period. To schedule the same run every three months, you need to provide input in the next pop-up screen; there you can schedule three different jobs for the future as per your requirement. According to the period and period type given, these jobs will be scheduled and run in the statistics/production system and will update the usage information in the CDMC table in the statistics system.
    According to the given input it will schedule the jobs and run them in the statistics/production system; if you give 1 day, the job will run daily and will store the usage statistics data in the CDMC database table in the statistics system itself, not in the Solution Manager system.
    This data will only get imported into Solution Manager when you create and execute a new Clearing Analysis project and execute its activities.
    If you are expecting the latest usage results then yes, you need to create a new CDMC Clearing Analysis project every time, as this usage information will not get updated automatically in the CDMC project.
    Please let me know if you have any doubts.
    Best Regards,
    Pritish

  • SAP database tables - Column statistics not found for table in DB02 - During import, inconsistent tables were found - Some open conversion requests still exist in the ABAP dictionary.

    Hi Experts,
    I'm implementing SAP note 1990492 which requires manual implementation. Implementation includes modifying standard tables (i.e. append
    the structure FIEU_S_APP_H to table FIEUD_FIDOC_H). After this I've adjusted the table in SE14 (Database Utility). I've done checks in SE14 and it shows the table is consistent.
    But during DB02 -> Space folder -> Single table analysis -> Input table name -> Indexes tab -> Upon clicking statistics, there is a warning "Column statistics not found for table".
    Our basis team is implementing an Add-On in the development system related to RWD Context Sensitive Help. They cannot proceed due to the following inconsistencies found in the table.
    Screenshot error from the activity:
    Screenshot from DB02:
    I'm an ABAP developer and have no other ideas on what to do. Thanks in advance.

    Hi All,
    We were able to fix the issue through the following:
    1. Call transaction SE14.
    2. Enter the name of the table and choose "Edit".
    3. Choose "Indexes".
    4. Select the index and choose "Choose (F2)".
    5. If you choose "Activate and adjust", the system creates the index again and it is consistent.
    6. Check the object log of this activation.
    7. If an error occurs, eliminate the cause and reactivate the index.

  • Merging a collection into a table

    I have a table with some blank columns.
    I take all the data from the table in a collection and then process the data to fill those empty columns.
    Now i have the collection with all the valid data.
    Is it possible to cast this same collection into a table and merge it again into the original table?
    I am trying to do so, but there is an error saying "cant use local collection"...
    Any help on this would be appreciated.

    Thanks a lot for your suggestion.
    I would like to give you an elaborate process.
    I have a table ABC. Some columns of that table are empty. I need to get those filled using some other tables.
    Firstly, using a cursor I fetch the records in a collection say V_ABC.
    Now in a FOR loop, I am fetching the missing data from other tables for every record in the collection and setting it in the empty places in the collection V_ABC itself.
    So finally i have the collection V_ABC which has complete data.
    The FOR loop is now ended.
    Now I try to merge this V_ABC into table ABC like this:
    MERGE INTO ABC abc
    USING TABLE(CAST(V_ABC AS TT_ABC)) z
    ON (abc.random_column_name = z.random_column_name)
    WHEN MATCHED THEN
    UPDATE
    SET abc.initially_empty_column = z.initially_empty_column
    The exact error is "Error: PLS-00642: local collection types not allowed in SQL statements".
    I am forced to use PL-SQL because of performance issues.
    Could you provide an alternate way?
    I guess one way would be to create another empty collection, EXTEND it, and build a replica of V_ABC in it; then probably it would allow me to merge. But I am not sure.
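    PLS-00642 is raised because the collection type is declared locally in PL/SQL. A sketch of the usual fix, assuming TT_ABC is re-created as a schema-level (SQL) object/collection type so that TABLE() over the variable becomes legal; the table and column names simply mirror the placeholder names used above:
    -- the element and collection types must exist at schema level, not inside PL/SQL
    CREATE OR REPLACE TYPE t_abc_row AS OBJECT (
      random_column_name     NUMBER,
      initially_empty_column VARCHAR2(100)
    );
    /
    CREATE OR REPLACE TYPE tt_abc AS TABLE OF t_abc_row;
    /
    DECLARE
      v_abc tt_abc;
    BEGIN
      -- load the collection (the enrichment loop described above is omitted here)
      SELECT t_abc_row(a.random_column_name, a.initially_empty_column)
      BULK COLLECT INTO v_abc
      FROM abc a;
      -- ... fill the empty columns in v_abc ...
      MERGE INTO abc a
      USING (SELECT * FROM TABLE(v_abc)) z   -- no CAST needed once TT_ABC is a SQL type
      ON (a.random_column_name = z.random_column_name)
      WHEN MATCHED THEN UPDATE
        SET a.initially_empty_column = z.initially_empty_column;
    END;
    /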

  • How to check the Statistics generated for a table through DBMS_STATS.

    Hi,
    How to check the statistics generated for a Table through DBMS_STATS.GATHER_TABLE_STATS procedure ?
    Please let me know.
    Thanks !
    Regards,
    Rajasekhar

    Rajasekhar wrote:
    How to check the statistics generated for a Table through DBMS_STATS.GATHER_TABLE_STATS procedure ?
    Query ALL_TABLES.
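    A small sketch of what that looks like in practice (the table name EMP is just an example); column-level detail is in the *_TAB_COL_STATISTICS views:
    -- table-level statistics written by GATHER_TABLE_STATS
    select table_name, num_rows, blocks, avg_row_len, sample_size, last_analyzed
    from   all_tables
    where  table_name = 'EMP';
    -- column-level statistics
    select column_name, num_distinct, num_nulls, density, histogram
    from   all_tab_col_statistics
    where  table_name = 'EMP';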

  • Using the collection as a Table

    Hi,
    I want to use a collection as a table. Can anyone give me a code sample for that?
    I am getting a collection through an IN parameter and want to join its rows with another table to produce another collection as output. I want this because if I do the operation in a loop, taking the elements one by one and comparing them with the DB table records, the performance will be degraded.
    Please give me a solution for using a collection as a table so that it can be joined with another table.
    Thanks in advance
    Guljeet

    Something like this probably will do for you.
    SQL> create table t1 as
      2  select 1 col1 from dual union all
      3  select 2 from dual union all
      4  select 3 from dual;
    Table created
    SQL> create or replace type typ_row as object (col1 number, col2 varchar2(10));
      2  /
    Type created
    SQL> create or replace type typ_table as table of typ_row
      2  /
    Type created
    SQL> set serveroutput on
    SQL>
    SQL> declare
      2    tab_test typ_table := typ_table(typ_row(1, 'A'), typ_row(3, 'C'));
      3    cursor cur1 is
      4      select t1.col1, t2.col2
      5        from t1
      6        join table(tab_test) t2 on t1.col1 = t2.col1; --use table() and put your collection in it then join
      7  begin
      8    for rec in cur1 -- sample execution just for displaying the join performed
      9    loop
    10      dbms_output.put_line(rec.col1 || ' ' || rec.col2);
    11    end loop;
    12  end;
    13  /
    1 A
    3 C
    PL/SQL procedure successfully completed
    SQL>
    Or, after creating the structures, simply run something like:
    SQL> select t1.col1, t2.col2
      2        from t1
      3        join table(typ_table(typ_row(1, 'A'), typ_row(3, 'C'))) t2 on t1.col1 = t2.col1;
          COL1 COL2
             1 A
             3 C
    SQL>

  • Optimizer Statistics collection after upgrade from 8i to 10R2

    I just upgraded a database from 8.1.7 to 10gR2.
    What would be the best approach for optimizer statistics collection? We would like to open the database for testing, but I am afraid some queries are going to run slowly without up-to-date stats. Should I gather them manually, or let Oracle run its default stats collection job later on?
    Any suggestions?

    user594143
    You really need a strategy before an upgrade like this, but you have two options -
    a) try to make the 10g stats collection identical to the 8i stats collection. Check the code you used to run, check the 8i default values for the parameters in your current dbms_stats() calls, and write them in explicitly when you run the code under 10g.
    OR
    b) do a full 10g conversion. Get rid of your own collection code, clear out most of the old settings you had in your parameter file for fiddling with the optimizer, do a 'gather_schema_stats' then leave 10g to do its default thing and fix any problems that appear.
    If you have testing time on a non-production system, then (b) is the strategic option - although personally I think it tends to collect too many histograms and still needs some refinement; if you don't have any testing time and you're going straight into production then (a) is the least threatening option (and if someone's made you do that, you might also set the optimizer_features_enable to 8.1.7 until you can do some proper upgrade tests).
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
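    A sketch of what option (b) might look like for a single application schema (the schema name APP is an example; the AUTO settings leave sample sizes and histograms to 10g):
    BEGIN
      DBMS_STATS.gather_schema_stats(
        ownname          => 'APP',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
        cascade          => TRUE);
    END;
    /
    -- and for option (a), the fallback mentioned above until proper upgrade testing is done:
    ALTER SYSTEM SET optimizer_features_enable = '8.1.7' SCOPE = SPFILE;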

  • How to check the progress of statistics gathering on a table?

    Hi,
    I have started the statistics gathering on a few big tables in my database.
    How to check the progress of statistics gathering on a table? Are there any data dictionary views or tables to monitor the progress of stats gathering?
    Regds,
    Kunwar

    Hi all,
    You can check with this small script.
    It lists the SID details for a long-running session, such as:
    when it started,
    when it last updated,
    how much time is still left,
    and the session status (ACTIVE/INACTIVE).
    -- Author               : Syed Kaleemuddin_
    -- Script_name          : sid_long_ops.sql
    -- Description          : list the sid details for long running session like when it started when last update how much time still left.
    set lines 200
    col OPNAME for a25
    Select
    a.sid,
    a.serial#,
    b.status,
    a.opname,
    to_char(a.START_TIME,' dd-Mon-YYYY HH24:mi:ss') START_TIME,
    to_char(a.LAST_UPDATE_TIME,' dd-Mon-YYYY HH24:mi:ss') LAST_UPDATE_TIME,
    a.time_remaining as "Time Remaining Sec" ,
    a.time_remaining/60 as "Time Remaining Min",
    a.time_remaining/60/60 as "Time Remaining HR"
    From v$session_longops a, v$session b
    where a.sid = b.sid
    and a.sid =&sid
    And time_remaining > 0;
    Sample output:
    SQL> @sid_long_ops
    Enter value for sid: 474
    old 13: and a.sid =&sid
    new 13: and a.sid =474
    SID SERIAL# STATUS OPNAME START_TIME LAST_UPDATE_TIME Time Remaining Sec Time Remaining Min Time Remaining HR
    474 2033 ACTIVE Gather Schema Statistics 06-Jun-2012 20:10:49 07-Jun-2012 01:35:24 572 9.53333333 .158888889
    Thanks & Regards
    Syed Kaleemuddin.
    Oracle Apps DBA
    Mobile: +91 9966270072
    Email: [email protected]

  • Why does the snmpcoll process always start up even if SNMP Statistics Collection is turned off?

    Why does the snmpcoll process always start up even if SNMP
    Statistics Collection is turned off?
    The SNMP Statistics Collection field on the Server Status|SNMP
    Subagent Configuration form affects only the SNMP subagent, which is a separate entity from
    the collector process (snmpcoll). Currently, snmpcoll does not look
    at the SNMP configuration data when it starts up. It is started by
    the dispatcher and terminates when the dispatcher terminates. Many
    of the statistics are supposed to reflect cumulative values recorded since MTA
    initialization, such as the total number of messages sent and received.
    As a convenience, the collector process collects information in the
    background even while the SNMP subagent is turned off; this way,
    the values are available when the SNMP subagent is configured and
    turned on.

    Collections fall off on the Date of First Delinquency 7-7.5 years later; that is supposed to be reported as the first time you went late (1-30 days) on the OC (original creditor) in the chain of events leading up to its being farmed out for collection, as I understand it. The DOFD, though, sometimes doesn't get reported right, and by default it will be the date the collection is added to the bureaus. As an example, I have an awkward collection (not sure how I got it; looking at the payment history I thought I'd cancelled that account, but TWC billing apparently didn't think so) which I went late on in 2009 but which didn't get farmed out until late 2010. The DOFD is 6/09 looking at it, and it's expected to come off according to the bureaus 6/16 as a result. Different negatives have their own rules though.
