DB statistics for BW DSO table

Hello,
We have performance problems with one of our DSO tables, which contains 16 million records of CATS data.
The ABAP program reads from the DSO table via a secondary index (the trace shows that the index is used), but we still see about 500k CPU cost and 280 I/O cost.
I looked at the statistics for this table and two things caught my attention:
1. The statistics are about one year old (last collected 17.06.2009).
2. The deviation is 24% over the last year (3.2m changed / 1.6m added rows).
Are the statistics causing the problems with a select on the index, and why doesn't brconnect update the statistics automatically? (It runs every day and does update quite a lot of other tables.)

> ---------------------------------------------------------------------------------------
> | Id  | Operation                   | Name               | Rows  | Bytes | Cost (%CPU)|
> ---------------------------------------------------------------------------------------
> |   0 | SELECT STATEMENT            |                    |  2302 | 66758 |   269   (0)|
> |   1 |  TABLE ACCESS BY INDEX ROWID| /BIC/AEAHRO00400   |  2302 | 66758 |   269   (0)|
> |*  2 |   INDEX RANGE SCAN          | /BIC/AEAHRO0040003 |  2302 |       |    13   (0)|
> ---------------------------------------------------------------------------------------
>
> Predicate Information (identified by operation id):
> ---------------------------------------------------
>
>    2 - access("EMPLOYEE"=:A0)

> The secondary index contains only EMPLOYEE.
>
> We use the internal table in the rest of the program, so DB access is minimal. We used to do single selects for certain days to get the hours, but it didn't make much difference.
>
> Since the execution plan looks good (correct me if I'm wrong, please), the statistics were my last resort.
Please - think about this last sentence for a while!
What are the statistics for?
What is their sole purpose?
To provide the cost based optimizer with information about the data in your tables so that it can figure out the best way to execute your query.
Now you say: the execution plan is good, but maybe the statistics are making the performance bad!?
You certainly see the logical mistake here.
The important next question is (and I posted it before): what is the actual work to be done for this query?
Are really just 2302 rows read?
Are really just 269 logical IOs done?
Depending on what the answers to these questions are, other tuning options may turn up.
regards,
Lars
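
A quick way to check how stale the statistics actually are is to ask the dictionary directly. A hedged sketch (the table name is taken from the plan above, the owner must be adjusted, and DBA_TAB_MODIFICATIONS is only populated while table monitoring is active):

SELECT table_name, num_rows, last_analyzed
FROM   dba_tables
WHERE  table_name = '/BIC/AEAHRO00400';

-- Rows inserted/updated/deleted since the last statistics collection:
SELECT inserts, updates, deletes, timestamp
FROM   dba_tab_modifications
WHERE  table_name = '/BIC/AEAHRO00400';

If LAST_ANALYZED really is a year old, the deviation shown in DB20 and the stats_change_threshold setting (see the brconnect threads below) decide whether the daily run touches the table at all.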

Similar Messages

  • How to create an explain plan with row source statistics for a complex query that includes multiple table joins?

    1. How do I create an explain plan with row source statistics for a complex query that includes multiple table joins? When multiple tables are involved and the actual number of rows returned is more than what the explain plan estimates, how can I find out what change is needed in the statistics?
    2. Do row source statistics give some kind of understanding of extended stats?

    You can get row source statistics only *after* the SQL has been executed. An Explain Plan by itself cannot give you row source statistics.
    To get row source statistics, either set STATISTICS_LEVEL='ALL' in the session that executes the SQL, OR use the hint "gather_plan_statistics" in the SQL being executed.
    Then use dbms_xplan.display_cursor.
    Hemant K Chitale
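
    A minimal sketch of the second option Hemant describes (the table name EMP and the predicate are placeholders):

    -- Run the statement once with row source statistics enabled ...
    SELECT /*+ gather_plan_statistics */ *
    FROM   emp
    WHERE  deptno = 10;

    -- ... then compare estimated (E-Rows) and actual (A-Rows) per plan step:
    SELECT *
    FROM   TABLE(DBMS_XPLAN.display_cursor(NULL, NULL, 'ALLSTATS LAST'));

    A large gap between E-Rows and A-Rows on a join step is usually the spot where the statistics (or extended statistics) need attention.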

  • Disable Statistics for specific Tables

    Is it possible to disable statistics for specific tables?

    If you want to stop gathering statistics for certain tables, you would simply not call DBMS_STATS.GATHER_TABLE_STATS on those particular tables (I'm assuming that is how you are gathering statistics at the moment). The old statistics will remain around for the CBO, but they won't be updated. Is that really what you want?
    If you are currently using GATHER_SCHEMA_STATS to gather statistics, you would have to convert to calling GATHER_TABLE_STATS on each table. You'll probably want to have a table set up that lists what tables to exclude and use that in the procedure that calls GATHER_TABLE_STATS.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
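
    A hedged sketch of the approach Justin describes, assuming a hypothetical exclusion table STATS_EXCLUDE(table_name) that lists the tables to skip:

    BEGIN
      FOR t IN (SELECT table_name
                FROM   user_tables
                WHERE  table_name NOT IN (SELECT table_name FROM stats_exclude))
      LOOP
        -- Gather statistics only for tables not on the exclusion list.
        DBMS_STATS.gather_table_stats(ownname => USER, tabname => t.table_name);
      END LOOP;
    END;
    /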

  • Help: why does brconnect not collect statistics for the MSEG table?

    I found "MSEG" table`s statistics is too old.
    so i check logs in db13,and the schedule job do not collect statistics for "MSEG".
    Then i execute manually: brconnect -c -u system/system -f stats -t mseg  -p 4
    this command still do not collect for mseg.
    KS1DSDB1:oraprd 2> brconnect -c -u system/system -f stats -t mseg –f collect -p 4
    BR0801I BRCONNECT 7.00 (46)
    BR0154E Unexpected option value '–f' found at position 8
    BR0154E Unexpected option value 'collect' found at position 9
    BR0806I End of BRCONNECT processing: ceenwjre.log 2010-11-12 08.41.38
    BR0280I BRCONNECT time stamp: 2010-11-12 08.41.38
    BR0804I BRCONNECT terminated with errors
    KS1DSDB1:oraprd 3> brconnect -c -u system/system -f stats -t mseg -p 4
    BR0801I BRCONNECT 7.00 (46)
    BR0805I Start of BRCONNECT processing: ceenwjse.sta 2010-11-12 08.42.04
    BR0484I BRCONNECT log file: /oracle/PRD/sapcheck/ceenwjse.sta
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.11
    BR0813I Schema owners found in database PRD: SAPPRD*, SAPPRDSHD+
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.12
    BR0807I Name of database instance: PRD
    BR0808I BRCONNECT action ID: ceenwjse
    BR0809I BRCONNECT function ID: sta
    BR0810I BRCONNECT function: stats
    BR0812I Database objects for processing: MSEG
    BR0851I Number of tables with missing statistics: 0
    BR0852I Number of tables to delete statistics: 0
    BR0854I Number of tables to collect statistics without checking: 0
    BR0855I Number of indexes with missing statistics: 0
    BR0856I Number of indexes to delete statistics: 0
    BR0857I Number of indexes to collect statistics: 0
    BR0853I Number of tables to check (and collect if needed) statistics: 1
    Owner SAPPRD: 1
    MSEG     
    BR0846I Number of threads that will be started in parallel to the main thread: 4
    BR0126I Unattended mode active - no operator confirmation required
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
    BR0817I Number of monitored/modified tables in schema of owner SAPPRD: 1/1
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
    BR0877I Checking and collecting table and index statistics...
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
    BR0879I Statistics checked for 1 table
    BR0878I Number of tables selected to collect statistics after check: 0
    BR0880I Statistics collected for 0/0 tables/indexes
    BR0806I End of BRCONNECT processing: ceenwjse.sta 2010-11-12 08.42.16
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.17
    BR0802I BRCONNECT completed successfully
    the log says:
    Number of tables selected to collect statistics after check: 0
    Could you give some advice? Thanks a lot.

    Hello,
    If you would like to force the collection of stats for table MSEG, you need to use the -f (force) switch.
    If you leave out the -f switch, the stats_change_threshold parameter applies, as you correctly said:
    [http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/02/0ae0c6395911d5992200508b6b8b11/content.htm|http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/02/0ae0c6395911d5992200508b6b8b11/content.htm]
    [http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/cb/f1e33a5bd8e934e10000000a114084/content.htm|http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/cb/f1e33a5bd8e934e10000000a114084/content.htm]
    You tried to do this in your second example:
    ==> brconnect -c -u system/system -f stats -t mseg –f collect -p 4
    Therefore you received:
    BR0154E Unexpected option value '–f' found at position 8
    BR0154E Unexpected option value 'collect' found at position 9
    The statement itself is correct; however, the dash in front of the f switch is not a plain hyphen (it is an en-dash).
    Try again with the following statement (-f instead of –f) and you will see that it works:
    ==> brconnect -c -u system/system -f stats -t mseg -f collect -p 4
    I hope this can help you.
    Regards.
    Wim

  • Managing statistics for object collections used as table types in SQL

    Hi All,
    Is there a way to manage statistics for collections used as table types in SQL?
    Below is my test case
    Oracle Version :
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL>
    Original Query:
    SELECT
         9999,
         tbl_typ.FILE_ID,
         tf.FILE_NM ,
         tf.MIME_TYPE ,
         dbms_lob.getlength(tfd.FILE_DATA)
    FROM
         TG_FILE tf,
         TG_FILE_DATA tfd,
          ( SELECT *
            FROM
                 TABLE(
                      SELECT
                           CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                           OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                      FROM
                           dual
                 )
          ) tbl_typ
    WHERE
         tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:02.90
    Execution Plan
    Plan hash value: 3970072279
    | Id  | Operation                                | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                         |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  1 |  HASH JOIN                               |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  2 |   HASH JOIN                              |              |  8168 |   287K|   695   (3)| 00:00:09 |
    |   3 |    VIEW                                  |              |  8168 |   103K|    29   (0)| 00:00:01 |
    |   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|              |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   5 |      FAST DUAL                           |              |     1 |       |     2   (0)| 00:00:01 |
    |   6 |    TABLE ACCESS FULL                     | TG_FILE      |   565K|    12M|   659   (2)| 00:00:08 |
    |   7 |   TABLE ACCESS FULL                      | TG_FILE_DATA |   852K|   128M|  3863   (1)| 00:00:47 |
    Predicate Information (identified by operation id):
       1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
    Statistics
              7  recursive calls
              0  db block gets
          16783  consistent gets
          16779  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Indexes are present in both tables (TG_FILE, TG_FILE_DATA) on column FILE_ID.
    select
         index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
    from
         all_indexes
    where table_name in ('TG_FILE','TG_FILE_DATA');
    INDEX_NAME                     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR     NUM_ROWS SAMPLE_SIZE
    TG_FILE_PK                          2        2160        552842             21401       552842      285428
    TG_FILE_DATA_PK                     2        3544        852297             61437       852297      852297
    Ideally the view should have used a NESTED LOOP, making use of the indexes, since the number of rows coming from the object collection is only 2.
    But it takes the default of 8168, leading to a HASH join between the tables and thus to FULL TABLE access.
    So my question is: is there any way I can influence the statistics when using collections in SQL?
    I could use hints to force the indexes, but I am trying to avoid that for now. Currently the time shown in the explain plan is not accurate.
    Modified query with hints :
    SELECT    
        /*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
        9999,
        tbl_typ.FILE_ID,
        tf.FILE_NM ,
        tf.MIME_TYPE ,
        dbms_lob.getlength(tfd.FILE_DATA)
    FROM
        TG_FILE tf,
        TG_FILE_DATA tfd,
        ( SELECT *
          FROM
               TABLE(
                    SELECT
                         CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                         OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                    FROM
                         dual
               )
        ) tbl_typ
    WHERE
        tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 1670128954
    | Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                          |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   1 |  NESTED LOOPS                             |                 |       |       |            |          |
    |   2 |   NESTED LOOPS                            |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   3 |    NESTED LOOPS                           |                 |  8168 |  1363K| 16379   (1)| 00:03:17 |
    |   4 |     VIEW                                  |                 |  8168 |   103K|    29   (0)| 00:00:01 |
    |   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   6 |       FAST DUAL                           |                 |     1 |       |     2   (0)| 00:00:01 |
    |   7 |     TABLE ACCESS BY INDEX ROWID           | TG_FILE_DATA    |     1 |   158 |     2   (0)| 00:00:01 |
    |*  8 |      INDEX UNIQUE SCAN                    | TG_FILE_DATA_PK |     1 |       |     1   (0)| 00:00:01 |
    |*  9 |    INDEX UNIQUE SCAN                      | TG_FILE_PK      |     1 |       |     1   (0)| 00:00:01 |
    |  10 |   TABLE ACCESS BY INDEX ROWID             | TG_FILE         |     1 |    23 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
           filter("TF"."FILE_ID"="TFD"."FILE_ID")
    Statistics
              0  recursive calls
              0  db block gets
             16  consistent gets
              8  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Thanks,
    B

    Thanks Tubby,
    While searching I found out that we can use the CARDINALITY hint to set statistics for a TABLE function.
    But I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it when posting the first time.
    http://www.oracle-developer.net/display.php?id=427
    Going through that article, it mentions the following options for setting statistics:
    1) CARDINALITY (undocumented)
    2) OPT_ESTIMATE (undocumented)
    3) DYNAMIC_SAMPLING (documented)
    4) Extensible Optimiser
    I tried them out and they work as expected,
    i.e. cardinality and opt_estimate take the value set in the hint,
    but the dynamic_sampling hint provides the most correct estimate of the rows (which is 2 in this particular case).
    With CARDINALITY hint
    SELECT
        /*+ cardinality( e, 5) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     5 |    10 |    29   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With OPT_ESTIMATE hint
    SELECT
         /*+ opt_estimate(table, e, scale_rows=0.0006) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Execution Plan
    Plan hash value: 4043204977
    | Id  | Operation                              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                       |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   1 |  VIEW                                  |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   2 |   COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   3 |    FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With DYNAMIC_SAMPLING hint
    SELECT
        /*+ dynamic_sampling( e, 5) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     2 |     4 |    11   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     2 |     4 |    11   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement (level=2)
    I will be testing the last option, the "Extensible Optimizer", and will put my findings here.
    I hope Oracle improves statistics gathering for collections used in DML in future releases, instead of just using the default based on the block size.
    By the way, do you know why it uses the default block size? Is it because that is the smallest granular unit Oracle provides?
    Regards,
    B

  • Create new CBO statistics for the tables

    Dear All,
    I am facing bad performance on the server. In SM50 I see that the read and delete processes on table D010LINC take
    a long time. How do I create new CBO statistics for the tables D010TAB and D010INC? Please suggest.
    Regards,
    Kumar

    Hi,
    I am facing a problem when saving/activating anything, so SAP has told me to create new CBO statistics for the tables D010TAB and D010INC.
    Now, as you suggested, in transaction DB20:
    Table D010LINC
    gives the error "Table D010LINC does not exist in the ABAP Dictionary".
    Table D010TAB
         Statistics are current (|Changes| < 50 %)
    New Method           C
    New Sample Size
    Old Method           C                       Date                 10.03.2010
    Old Sample Size                              Time                 07:39:37
    Old Number                51,104,357         Deviation Old -> New       0  %
    New Number                51,168,679         Deviation New -> Old       0  %
    Inserted Rows                160,770         Percentage Too Old         0  %
    Changed Rows                       0         Percentage Too Old         0  %
    Deleted Rows                  96,448         Percentage Too New         0  %
    Use                  O
    Active Flag          P
    Analysis Method      C
    Sample Size
    Please suggest
    Regards,
    Kumar
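
    A hedged sketch for collecting the statistics, mirroring the brconnect syntax from the MSEG thread above (the comma-separated table list and the system password are assumptions; DB20 on each table is the transaction-based alternative):

    brconnect -c -u system/<password> -f stats -t d010tab,d010inc -f collect -p 4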

  • How to build statistics for a table (urgent, points will be rewarded)

    Hi friends,
    I want to create statistics for the MSEG table in our production server, because they are not up to date.
    The total number of entries in the table is 230,000.
    My OS is Windows 2003 (cluster), with Oracle 9.2 and ECC 5.
    I need a step-by-step procedure to build or create statistics using DB20.
    Regards,
    s.senthil kumar

    Hi Stefan,
    Thanks for your reply.
    I need some more clarification on DB20.
    What values do I have to enter in the following fields:
    new method and new sample size?
    Are there any fields or checkboxes I need to fill in before creating new statistics?
    My current screen shows the following values:
    new method = 'C'.  new sample size = '  '.
    old method  = 'C'. old sample size = '   '.
    Old Number   5,965         Deviation Old -> New   3,669
    New Number  24,834       Deviation New -> Old      97-
    Inserted Rows 0              Percentage Too Old         0
    Changed Rows  0            Percentage Too Old         0
    Deleted Rows     0           Percentage Too New         0
    Use                  A         
    Active Flag        A
    Analysis Method      C
    and the history checkbox is marked.
    Can I run DB20 while the server is running, even at peak time?
    Regards,
    s.senthil kumar

  • Unable to generate statistics for the table

    I have a staging table with more than 600 columns and range partitioning. The size of the table is 4 GB and the average row size is around 3 MB. I created a function-based index on a column ABC VARCHAR2(50) which contains only number values. Now, when I try to generate statistics for this table through ANALYZE or DBMS_STATS, I get an "invalid number" error; but when I drop this index and analyze again, it works.
    I executed a TO_NUMBER(ABC) query on the table and it works fine.
    I have function-based indexes on other tables as well, but I don't get this problem with those tables. I tried dropping the index and re-creating it, but that didn't work either.
    I suspected data block corruption, so I checked the ALERT LOG and TRACE FILES but found nothing.
    So what is this magic called?

    I am using TO_NUMBER on the column.
    I checked MetaLink for the same problem but could not find anything. I still suspect a datafile error which I am unable to see in the ALERT LOG or TRACE FILES. So I am going to try it this way:
    1) Create a new tablespace with a new datafile
    2) Transfer the table from the existing tablespace to the new tablespace
    3) Create the function-based index in the same new tablespace
    4) Try generating the statistics
    5) If it works, create separate tablespaces for data and index
    Hope it works!!
    Thanks for the reply guys.
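
    For what it is worth, a hedged way to double-check whether any ABC value actually fails numeric conversion (assumes Oracle 10g+ for REGEXP_LIKE, a placeholder table name, and that only plain digits are legitimate):

    -- List rows whose ABC value is not purely numeric; adjust the
    -- pattern if signs or decimals are legitimate in this column.
    SELECT rowid, abc
    FROM   staging_table              -- placeholder name
    WHERE  abc IS NOT NULL
    AND    NOT REGEXP_LIKE(TRIM(abc), '^[0-9]+$');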

  • AUTOMATIC UPDATE STATISTICS for VB* tables turns ON again automatically

    Hello All,
    We reviewed our ECC system with SAP and they recommended turning OFF AUTOMATIC UPDATE STATISTICS for VBDATA, VBHDR and VBMOD.
    We executed the script EXEC sp_autostats '<tablename>', 'OFF', but the status goes back to ON after a while.
    I checked SAP Note 771352 but did not get a proper idea from it.
    The MS SQL database used is 2008.
    Can someone suggest and share his/her experience.
    Regards,
    Mohit

    Hi Mohit,
    Did you run the sap_z* script?
    What is your SP level? Did you check SAP Note 1702325 (Alerts appear for VB tables)?
    Also, I asked you to use NORECOMPUTE; have you tried that?
    Regards
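
    A hedged sketch of the NORECOMPUTE approach on SQL Server (table names from the question; note that any later UPDATE STATISTICS run without NORECOMPUTE, e.g. from a maintenance job, switches auto-update back ON, which may be why the setting appears to revert):

    -- Refresh the statistics once and mark them so they are not auto-recomputed:
    UPDATE STATISTICS VBHDR  WITH FULLSCAN, NORECOMPUTE;
    UPDATE STATISTICS VBMOD  WITH FULLSCAN, NORECOMPUTE;
    UPDATE STATISTICS VBDATA WITH FULLSCAN, NORECOMPUTE;

    -- Equivalent per-table switch:
    EXEC sp_autostats 'VBHDR', 'OFF';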

  • Null statistics for tables, but still an optimizer problem

    Hi,
    We have a batch application on a 10.2.0.4 Oracle database that does a lot of deletes and inserts when it runs. Nightly statistics gathering was not enough, so I deleted and locked the statistics for all tables. Now they have null statistics, so the optimizer is supposed to use dynamic sampling instead. At first the query executions were OK; however, after the application had run for a while, it still chose a very bad execution plan (with a cost over ten times higher than normal).
    What is happening here? Are null statistics still not good enough for a highly volatile database? Is there anything else I can do?
    Thanks in advance.

    Provide information such as the structure of your tables (including indexes), your compatibility and optimizer settings, your SQL statement, and a sample explain plan of the SQL.

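
    For reference, a minimal sketch of the delete-and-lock approach the question describes, plus a per-statement dynamic sampling override (the schema and table names are placeholders):

    BEGIN
      -- Remove existing statistics and lock them so nightly jobs cannot regather:
      DBMS_STATS.delete_table_stats(ownname => 'SCOTT', tabname => 'BATCH_WORK');
      DBMS_STATS.lock_table_stats  (ownname => 'SCOTT', tabname => 'BATCH_WORK');
    END;
    /

    -- If the default dynamic sample is too small, raise the level per statement:
    SELECT /*+ dynamic_sampling(t 4) */ *
    FROM   batch_work t
    WHERE  status = 'NEW';
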
  • Run optimizer statistics for one table

    dear all,
    I noticed an error in DB16: "Missing statistics for a table" for SAPGRP.MC03BF0SETUP.
    How do I run/generate the statistics for this table?

    Dear Somckit
    here is the error msg.  from DB16.
                                                                                    Description         Table: SAPGRP.MC03BF0SETUP # Table or index has no optimizer
    Correction Type     D                                                           
    Corrective Action   Collect optimizer statistics                                
    Check Log           /oracle/GRP/sapcheck/cdwqhqnz.chk                           
    Single Messages                                                                      
    No.   Description                                                                    
    1     Table: SAPGRP.MC03BF0SETUP # Table or index has no optimizer statistics        
    2     Table: SAPGRP.MC03BX0SETUP # Table or index has no optimizer statistics        
    3     Table: SAPGRP.MC03UM0SETUP # Table or index has no optimizer statistics        
    4     Index: SAPGRP.MC03BF0SETUP~0 # Table or index has no optimizer statistics      
    5     Index: SAPGRP.MC03BX0SETUP~0 # Table or index has no optimizer statistics      
    6     Index: SAPGRP.MC03UM0SETUP~0 # Table or index has no optimizer statistics                                                                               
    Thank u.
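
    The same brconnect force-collect pattern as in the MSEG thread above should apply here (a hedged sketch; the comma-separated table list and the password are assumptions):

    brconnect -c -u system/<password> -f stats -t MC03BF0SETUP,MC03BX0SETUP,MC03UM0SETUP -f collect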

  • Why can we not see the New Data Table for a standard DSO in SE16?

    Hi,
    A standard DSO is said to have three tables (New Data Table, Active Data Table and Change Log Table). Why, then, can we not see the New Data Table of a standard DSO in SE16?
    Regards,
    Sushant

    Hi Sushant,
    It is possible to see the data of all three DSO tables through SE16. Maybe you do not have the authorization to see data through SE16.
    Sankar Kumar

  • Are these key fields in 'Direct Update' DSO table in BW on HANA (BW740SP7) correct?

    Please see below for a very strange behaviour of all active tables of type 'Direct Update' DSO in our system.
    Is this a bug, or is it somehow supposed to be like this (i.e. a feature)?
    DSO definition: (screenshot)
    Active table of type 'Column Store' (screenshot) - note that the fields marked as 'Key' differ from the DSO definition.
    The DSO table content: (screenshot)
    In ABAP, executing the following select on the DSO table: (screenshot)
    The result is garbage (the content of QQQPARST0 lands in /BIC/QQQPAFL0 and QQQSCPRO lands in /BIC/QQQPARA0).

    Dear Sundaresan,
    Thanks for this suggestion, but I have posted the question in the following forums:
    SAP Business Warehouse: What is this with the 'Direct' DSO table in BW on HANA (BW740SP7)
    SAP BW powered by HANA: Direct Update DSO keyfields in RSA1 <> SE11 & SELECT returns garbage
    ABAP for SAP HANA: Direct Update DSO keyfields in RSA1 <> SE11 & SELECT returns garbage
    So far no real luck, but maybe soon.
    Greetings

  • Information about activation-statistics from a DSO.

    Hello,
    After activating a request in a DSO, I can see three key figures in the log window:
    1) Number of inserted records
    2) Number of updated records
    3) Number of deleted records
    Unfortunately I see this information per data package, but I need it for the whole request.
    Because I have many DSOs and very many data packages, it is very hard to add up this data manually.
    Is this information stored in any table, so that I can get it with the Data Browser (SE16)?
    Thanks
    Armin

    Hi Alice,
    In the table you mentioned, I can see only the active records.
    But I will try to analyse the change log of the DSO; maybe that will provide the needed information.
    (The reason for all of this effort is to find out whether it is necessary to implement delta handling for a DSO or not.)
    Armin
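
    If the change log route works, a hedged sketch for aggregating one request across all data packages (the change-log table name /BIC/B0001234000, the request ID, and the field names REQUEST/RECORDMODE are assumptions; check the real names in SE11):

    -- Count change-log records per record mode for one request
    -- ('N' = new, 'X' = before image, ' ' = after image, 'D' = delete, 'R' = reverse).
    SELECT recordmode, COUNT(*)
    FROM   "/BIC/B0001234000"
    WHERE  request = 'REQU_ABC123'
    GROUP  BY recordmode;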

  • Statistics Collection to a Table

    Hello,
    I need to gather statistics for a few tables from a schema into a separate table; basically I have to gather the stats into a newly created table. My question is: how can I do this? I know how to generate statistics using DBMS_STATS, but I have no idea how to gather them into a separate table.
    Could anyone shed some light here?
    Thanks..

    Hello,
    I created a stats table using the following DBMS_STATS call:
    BEGIN
    DBMS_STATS.create_stat_table (
    ownname => 'TEST',
    stattab => 'stats_table',
    tblspace => 'users');
    END;
    The table created by the above script has the following columns:
    STATID VARCHAR2(30 BYTE),
    TYPE CHAR(1 BYTE),
    VERSION NUMBER,
    FLAGS NUMBER,
    C1 VARCHAR2(30 BYTE),
    C2 VARCHAR2(30 BYTE),
    C3 VARCHAR2(30 BYTE),
    C4 VARCHAR2(30 BYTE),
    C5 VARCHAR2(30 BYTE),
    N1 NUMBER,
    N2 NUMBER,
    N3 NUMBER,
    N4 NUMBER,
    N5 NUMBER,
    N6 NUMBER,
    N7 NUMBER,
    N8 NUMBER,
    N9 NUMBER,
    N10 NUMBER,
    N11 NUMBER,
    N12 NUMBER,
    D1 DATE,
    R1 RAW(32),
    R2 RAW(32),
    CH1 VARCHAR2(1000 BYTE)
    How can I interpret which column is for what? For example C1 to C5, N1 to N12, or R1 and CH1: what do they mean?
    Also, will I be able to get any info regarding the cost of the SQL statements that ran? If not, how can I obtain that info?
    Then I gathered statistics for one table to test.
    exec dbms_stats.gather_table_stats(ownname=>'TEST',
    estimate_percent=>10,
    statown=>'TEST',
    tabname=>'ord_receipts',
    stattab=>'STATS_TABLE',
    statid=>'TESTING');
    The data was inserted into STATS_TABLE, but I could not understand it.
    I need to run these statistics on a daily basis and save them to a different table each day; for example, 01/01/2012 should have its own stats table and 01/02/2012 another one.
    Is there any way to dynamically create the STATS_TABLE name concatenated with sysdate when gathering the statistics?
    Thanks..
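
    The layout of a DBMS_STATS statistics table (C1..C5, N1..N12, R1, CH1) is deliberately undocumented and may change between releases, so rather than creating one table per day it is usually safer to keep a single STATS_TABLE and key each daily snapshot by STATID. A hedged sketch (owner, table and STATID format are assumptions):

    DECLARE
      -- One snapshot ID per day, e.g. 'D20120101':
      v_statid VARCHAR2(30) := 'D' || TO_CHAR(SYSDATE, 'YYYYMMDD');
    BEGIN
      DBMS_STATS.gather_table_stats(
        ownname          => 'TEST',
        tabname          => 'ORD_RECEIPTS',
        estimate_percent => 10,
        stattab          => 'STATS_TABLE',
        statown          => 'TEST',
        statid           => v_statid);
    END;
    /

    Note that a stats table does not record SQL statement costs; for that, look at V$SQL (OPTIMIZER_COST) or AWR rather than DBMS_STATS.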
