BW Statistics for BW Accelerator

Hi Everyone,
Is there any standard Business Content to collect statistics for the BW Accelerator (BWA)?
If not, can you please suggest how to collect BWA statistics?
Thanks
Kind Regards
Anukul

OSS Note 1163732 ("BWA 7.00: BWA statistics index") explains it.

Similar Messages

  • WAAS statistics for SSL accelerated services

      Hi all,
    The customer has configured two SSL accelerated services on the core WAVEs. He would like to monitor these two services separately. He uses the SSL acceleration report, but it shows only summary statistics for both services combined. Is it possible to create an application per SSL service for statistics collection? For example, with two SSL accelerated services, ssl1 and ssl2, is it possible to monitor statistics for ssl1 and for ssl2 separately?
    Thank you
    Roman

    I don't think you can associate both certificates with the same wildcard domain and port. You can use only one at a time.

  • How to create an explain plan with row source statistics for a complex query that includes multiple table joins?

    1. How do I create an explain plan with row source statistics for a complex query that includes multiple table joins? When multiple tables are involved and the actual number of rows returned is higher than what the explain plan estimates, how can I find out what change is needed in the statistics?
    2. Do row source statistics give some kind of understanding of extended statistics?

    You can get row source statistics only *after* the SQL has been executed; an explain plan alone cannot give you row source statistics.
    To get row source statistics, either set STATISTICS_LEVEL='ALL' in the session that executes the SQL, or use the hint "gather_plan_statistics" in the SQL being executed.
    Then use dbms_xplan.display_cursor, as in the sketch below.
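    A minimal sketch of the hint-based approach (the table name EMP and the predicate are illustrative):
        SELECT /*+ gather_plan_statistics */ *
        FROM   emp
        WHERE  deptno = 10;
        -- In the same session, display the last cursor with estimated (E-Rows)
        -- and actual (A-Rows) row counts side by side:
        SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
    Comparing E-Rows with A-Rows line by line shows where the optimizer's estimates go wrong.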
    Hemant K Chitale

  • GTX 750 Ti sufficient for GPU Acceleration in CC 2014?

    Hi all,
    Upgrading from a HD 5870 hopefully - just wanted to know if the GTX 750 Ti with its CUDA technologies would be sufficient for GPU accelerated applications like Pr, Ae, Sg, Ps, Lr and so on? Asking in the Pr forums because that's the application I use the most.
    System specs:
    ASUS P8Z68-V PRO/GEN3
    Core i5 2500K @ 4.3GHz
    Arctic Cooling Freezer 13
    16GB RipJaws-X 1648MHz
    ATI Sapphire Radeon HD 5870
    OCZ Vertex 4 128GB | WD Green 2TB | WD Green 3TB
    LG BH16NS40 Blu-Ray Burner
    OCZ ZS 650W
    NZXT Lexa S
    Windows 8.1 Pro x64
    Not too bothered about gaming performance. I don't want to spend a lot of money and the 750 Ti at around £100 seems a good balance between value for money and performance.
    The 5870 did work fine with OpenCL and Adobe CC 2014 but for various reasons I'd like to upgrade and go back to NVIDIA.
    Thanks all

    I have just completed running the entire PPBM8 script with the GTX 750 Ti, and compared it to the results that I had obtained over two weeks ago with the older GTX 560 card.
    GTX 750 Ti on CC 2014.8.2 (1TB Samsung F3 as project disk):
    GTX 560 on CC 2014.8.1 (1TB Western Digital Black WD1002FAEX as project drive):
    It appears that the first-generation Maxwell (GM107) GPU somehow improved the H.264 rendering/encoding performance compared to the older Fermi (GF114) GPU. The MPEG-2 rendering/encoding performance is practically equal with both of these particular GPUs.
    Verdict? The GTX 750 Ti is the right choice for a PC that's equipped (however less than ideally) with a higher-end i5 without hyperthreading or a quad-core i7 that cannot be overclocked much if at all (and this is assuming that that PC has a sufficiently fast disk subsystem).
    By the way, the GT 740 that was suggested for the OP's system (given the "Green" drives) is not a Maxwell-generation GPU at all - but a Kepler-generation GPU (in this case, based on the GK107) instead. The GT 730 with GDDR5 memory that I recommended as an alternative to the GT 740 DDR3 is based on the GK208 GPU. (And I do not recommend most GT 730s on the market as they are based on an old Fermi-generation GPU - the GF108 that debuted with the GT 430 back in 2010.)

  • How can I see visitor statistics for a web page hosted on OS X Lion Server

    Hello
    How can I see visitor statistics for a web page hosted on OS X Lion Server?
    Thanks
    Adrian

    Just click inside the URL address bar; the full URL address will appear highlighted.
    Best.

  • Disable Statistics for specific Tables

    Is it possible to disable statistics gathering for specific tables?

    If you want to stop gathering statistics for certain tables, you would simply not call DBMS_STATS.GATHER_TABLE_STATS on those particular tables (I'm assuming that is how you are gathering statistics at the moment). The old statistics will remain around for the CBO, but they won't be updated. Is that really what you want?
    If you are currently using GATHER_SCHEMA_STATS to gather statistics, you would have to convert to calling GATHER_TABLE_STATS on each table. You'll probably want to set up a table that lists which tables to exclude and use that in the procedure that calls GATHER_TABLE_STATS, as in the sketch below.
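    A hedged sketch of that approach (the exclusion table STATS_EXCLUDE and the current-schema scope are illustrative assumptions):
        -- One-time setup: a list of tables to skip
        -- CREATE TABLE stats_exclude (table_name VARCHAR2(30) PRIMARY KEY);
        BEGIN
          FOR t IN (SELECT table_name
                    FROM   user_tables
                    WHERE  table_name NOT IN (SELECT table_name FROM stats_exclude))
          LOOP
            DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => t.table_name);
          END LOOP;
        END;
        /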
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • To get Run time Statistics for a Data target

    Hello All,
    I need to collect one month of data (i.e. the start and end times of the cube loads) for documentation work. Could someone help me find the easiest way to get this data in the BW production system?
    Please guide me to the query name to get the runtime statistics for the cube.
    Thanks in advance,
    Anjali

    It will fetch the data only if the BI statistics are turned on for that cube.
    Please check these links:
    http://help.sap.com/saphelp_nw04s/helpdata/en/8c/131e3b9f10b904e10000000a114084/content.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/43/15c54048035a39e10000000a422035/frameset.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/e3/e60138fede083de10000009b38f8cf/frameset.htm

  • Help: why does brconnect not collect statistics for the MSEG table?

    I found "MSEG" table`s statistics is too old.
    so i check logs in db13,and the schedule job do not collect statistics for "MSEG".
    Then i execute manually: brconnect -c -u system/system -f stats -t mseg  -p 4
    this command still do not collect for mseg.
    KS1DSDB1:oraprd 2> brconnect -c -u system/system -f stats -t mseg –f collect -p 4
    BR0801I BRCONNECT 7.00 (46)
    BR0154E Unexpected option value '–f' found at position 8
    BR0154E Unexpected option value 'collect' found at position 9
    BR0806I End of BRCONNECT processing: ceenwjre.log 2010-11-12 08.41.38
    BR0280I BRCONNECT time stamp: 2010-11-12 08.41.38
    BR0804I BRCONNECT terminated with errors
    KS1DSDB1:oraprd 3> brconnect -c -u system/system -f stats -t mseg -p 4
    BR0801I BRCONNECT 7.00 (46)
    BR0805I Start of BRCONNECT processing: ceenwjse.sta 2010-11-12 08.42.04
    BR0484I BRCONNECT log file: /oracle/PRD/sapcheck/ceenwjse.sta
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.11
    BR0813I Schema owners found in database PRD: SAPPRD*, SAPPRDSHD+
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.12
    BR0807I Name of database instance: PRD
    BR0808I BRCONNECT action ID: ceenwjse
    BR0809I BRCONNECT function ID: sta
    BR0810I BRCONNECT function: stats
    BR0812I Database objects for processing: MSEG
    BR0851I Number of tables with missing statistics: 0
    BR0852I Number of tables to delete statistics: 0
    BR0854I Number of tables to collect statistics without checking: 0
    BR0855I Number of indexes with missing statistics: 0
    BR0856I Number of indexes to delete statistics: 0
    BR0857I Number of indexes to collect statistics: 0
    BR0853I Number of tables to check (and collect if needed) statistics: 1
    Owner SAPPRD: 1
    MSEG     
    BR0846I Number of threads that will be started in parallel to the main thread: 4
    BR0126I Unattended mode active - no operator confirmation required
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
    BR0817I Number of monitored/modified tables in schema of owner SAPPRD: 1/1
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
    BR0877I Checking and collecting table and index statistics...
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
    BR0879I Statistics checked for 1 table
    BR0878I Number of tables selected to collect statistics after check: 0
    BR0880I Statistics collected for 0/0 tables/indexes
    BR0806I End of BRCONNECT processing: ceenwjse.sta 2010-11-12 08.42.16
    BR0280I BRCONNECT time stamp: 2010-11-12 08.42.17
    BR0802I BRCONNECT completed successfully
    The log says:
    Number of tables selected to collect statistics after check: 0
    Could you give some advice? Thanks a lot.

    Hello,
    If you would like to force the collection of statistics for table MSEG, you need to use the -f collect (force) option.
    If you leave out the force option, the stats_change_threshold parameter decides whether statistics are collected, exactly as you observed:
    http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/02/0ae0c6395911d5992200508b6b8b11/content.htm
    http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/cb/f1e33a5bd8e934e10000000a114084/content.htm
    You have tried to do this in your second example :
    ==> brconnect -c -u system/system -f stats -t mseg –f collect -p 4
    Therefore you received:
    BR0154E Unexpected option value '–f' found at position 8
    BR0154E Unexpected option value 'collect' found at position 9
    The statement is otherwise correct; however, the dash in front of the second f option is not a real hyphen (it is an en-dash, –).
    Try again with the following statement (-f instead of –f) and you will see that it works:
    ==> brconnect -c -u system/system -f stats -t mseg -f collect -p 4
    I hope this can help you.
    Regards.
    Wim

  • Which event classes should I use for finding good indexes and statistics for queries in an SP?

    Dear all,
    I am trying to use Profiler to create a trace, so that it can be used as the workload in "Database Engine Tuning Advisor" for optimization of one stored procedure.
    Please tell me about the event classes which I should use in the trace.
    The stored proc contains three insert queries which insert data into a table variable.
    Finally a select query is used on the same table variable, with one union of the same table variable, to generate a sequence for records based on certain conditions on a few columns.
    There are three cases where I am using the above structure of the SP, so there are three SPs; out of the three, I will choose one based on their performance.
    1) There is only one table with three inserts which go into a table variable, with a final sequence creation block.
    2) There are 15 tables with 45 inserts which go into a table variable, with a final sequence creation block.
    3) There are 3 tables with 9 inserts which go into a table variable, with a final sequence creation block.
    In all the above cases the number of records will be around 5 lakhs (500,000).
    The purpose is optimization of the queries in the SP,
    i.e. which event classes I should use for finding good indexes and statistics for the queries in the SP.
    Yours sincerely

    "Database Engine Tuning Advisor" for optimization of one stored procedure.
    Please tel me about the Event classes which i  should use in trace.
    You can use the "Tuning" template to capture the workload to a trace file that can be used by the DETA. See
    http://technet.microsoft.com/en-us/library/ms190957(v=sql.105).aspx
    If you are capturing the workload of a production server, I suggest you not do that directly from Profiler, as that can impact server performance. Instead, start/stop the Profiler Tuning template against a test server and then script the trace definition (File --> Export --> Script Trace Definition). You can then customize the script (e.g. the file name) and run it against the prod server to capture the workload to the specified file. Stop and remove the trace after the workload is captured with sp_trace_setstatus:
    DECLARE @TraceID int = <trace id returned by the trace create script>
    EXEC sp_trace_setstatus @TraceID, 0; --stop trace
    EXEC sp_trace_setstatus @TraceID, 2; --remove trace definition
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Managing statistics for object collections used as table types in SQL

    Hi All,
    Is there a way to manage statistics for collections used as table types in SQL?
    Below is my test case
    Oracle Version :
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    Original Query:
    SELECT
         9999,
         tbl_typ.FILE_ID,
         tf.FILE_NM ,
         tf.MIME_TYPE ,
         dbms_lob.getlength(tfd.FILE_DATA)
    FROM
         TG_FILE tf,
         TG_FILE_DATA tfd,
          (
               SELECT *
               FROM
                    TABLE(
                         SELECT
                              CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                              OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                         FROM
                              dual
                    )
          )     tbl_typ
    WHERE
         tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:02.90
    Execution Plan
    Plan hash value: 3970072279
    | Id  | Operation                                | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                         |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  1 |  HASH JOIN                               |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  2 |   HASH JOIN                              |              |  8168 |   287K|   695   (3)| 00:00:09 |
    |   3 |    VIEW                                  |              |  8168 |   103K|    29   (0)| 00:00:01 |
    |   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|              |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   5 |      FAST DUAL                           |              |     1 |       |     2   (0)| 00:00:01 |
    |   6 |    TABLE ACCESS FULL                     | TG_FILE      |   565K|    12M|   659   (2)| 00:00:08 |
    |   7 |   TABLE ACCESS FULL                      | TG_FILE_DATA |   852K|   128M|  3863   (1)| 00:00:47 |
    Predicate Information (identified by operation id):
       1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
    Statistics
              7  recursive calls
              0  db block gets
          16783  consistent gets
          16779  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Indexes are present on column FILE_ID in both tables (TG_FILE, TG_FILE_DATA).
    select
         index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
    from
         all_indexes
    where table_name in ('TG_FILE','TG_FILE_DATA');
    INDEX_NAME                     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR     NUM_ROWS SAMPLE_SIZE
    TG_FILE_PK                          2        2160        552842             21401       552842      285428
    TG_FILE_DATA_PK                     2        3544        852297             61437       852297      852297
    Ideally the view should have used a NESTED LOOPS join so the indexes could be used, since the number of rows coming from the object collection is only 2.
    But the optimizer takes the default cardinality of 8168, leading to a HASH join between the tables and FULL TABLE scans.
    So my question is: is there any way to change the statistics when using collections in SQL?
    I can use hints to force the indexes, but I am planning to avoid that for now. Currently the time shown in the explain plan is not accurate.
    Modified query with hints :
    SELECT    
        /*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
        9999,
        tbl_typ.FILE_ID,
        tf.FILE_NM ,
        tf.MIME_TYPE ,
        dbms_lob.getlength(tfd.FILE_DATA)
    FROM
        TG_FILE tf,
        TG_FILE_DATA tfd,
        (
             SELECT *
             FROM
                 TABLE(
                          SELECT
                               CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                               OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                          FROM
                               dual
                 )
        ) tbl_typ
    WHERE
        tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 1670128954
    | Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                          |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   1 |  NESTED LOOPS                             |                 |       |       |            |          |
    |   2 |   NESTED LOOPS                            |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   3 |    NESTED LOOPS                           |                 |  8168 |  1363K| 16379   (1)| 00:03:17 |
    |   4 |     VIEW                                  |                 |  8168 |   103K|    29   (0)| 00:00:01 |
    |   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   6 |       FAST DUAL                           |                 |     1 |       |     2   (0)| 00:00:01 |
    |   7 |     TABLE ACCESS BY INDEX ROWID           | TG_FILE_DATA    |     1 |   158 |     2   (0)| 00:00:01 |
    |*  8 |      INDEX UNIQUE SCAN                    | TG_FILE_DATA_PK |     1 |       |     1   (0)| 00:00:01 |
    |*  9 |    INDEX UNIQUE SCAN                      | TG_FILE_PK      |     1 |       |     1   (0)| 00:00:01 |
    |  10 |   TABLE ACCESS BY INDEX ROWID             | TG_FILE         |     1 |    23 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
           filter("TF"."FILE_ID"="TFD"."FILE_ID")
    Statistics
              0  recursive calls
              0  db block gets
             16  consistent gets
              8  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Thanks,
    B

    Thanks Tubby,
    While searching I had found that we can use the CARDINALITY hint to set statistics for a TABLE function.
    But I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it when posting the first time.
    http://www.oracle-developer.net/display.php?id=427
    Going through that document, it mentions three hints (plus the Extensible Optimizer interface) for setting the statistics:
    1) CARDINALITY (undocumented)
    2) OPT_ESTIMATE (undocumented)
    3) DYNAMIC_SAMPLING (documented)
    4) Extensible Optimizer
    I tried it out with the different hints and it works as expected,
    i.e. cardinality and opt_estimate take the value set in the hint,
    but the dynamic_sampling hint provides the most accurate estimate of the rows (which is 2 in this particular case).
    With CARDINALITY hint
    SELECT
        /*+ cardinality( e, 5) */*
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     5 |    10 |    29   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With OPT_ESTIMATE hint
    SELECT
         /*+ opt_estimate(table, e, scale_rows=0.0006) */*
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Execution Plan
    Plan hash value: 4043204977
    | Id  | Operation                              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                       |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   1 |  VIEW                                  |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   2 |   COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   3 |    FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With DYNAMIC_SAMPLING hint
    SELECT
        /*+ dynamic_sampling( e, 5) */*
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     2 |     4 |    11   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     2 |     4 |    11   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement (level=2)
    I will be testing the last option, the "Extensible Optimizer", and will put my findings here.
    I hope Oracle in future releases improves the statistics gathering for collections that can be used in DML, instead of just using the default derived from the block size.
    By the way, do you know why it uses the block-size-based default? Is it because that is the smallest granular unit Oracle provides?
    Regards,
    B

  • Oracle 11g upgrade: How to update stale statistics for sys and sysman?

    Hi,
    I am in the process of testing an Oracle 11g upgrade from Oracle 10.2.0.3. I have run utlu111i.sql on the 10g database.
    The utility utlu111i.sql reports stale statistics for the SYS and SYSMAN components.
    I executed dbms_stats.gather_dictionary_stats, dbms_stats.gather_schema_stats('SYS'), and dbms_stats.gather_schema_stats('SYSMAN').
    After that, utlu111i.sql still reports stale statistics for SYS and SYSMAN. Does anyone know how to get rid of this warning successfully?
    Thanks,
    Sreekanth

    > Does anyone know how to get rid of this warning successfully?
    Just ignore the warnings. Check "The Utlu111i.Sql Pre-Upgrade Script Reports Stale Sys Statistics" (note 803774.1) on Metalink.
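    A hedged way to verify for yourself whether those statistics are actually stale before ignoring the warning (DBA_TAB_STATISTICS and its STALE_STATS column are available from 10g onward):
        SELECT owner, table_name, stale_stats, last_analyzed
        FROM   dba_tab_statistics
        WHERE  owner IN ('SYS', 'SYSMAN')
        AND    stale_stats = 'YES';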

  • Create new CBO statistics for the tables

    Dear All,
    I am facing bad performance on the server. In SM50 I see that the read and delete processes on table D010LINC take
    a long time. How do I create new CBO statistics for the tables D010TAB and D010INC? Please suggest.
    Regards,
    Kumar

    Hi,
    I am facing a problem when saving/activating, so SAP has told me to create new CBO statistics for the tables D010TAB and D010INC.
    Now, as you suggested, in transaction DB20:
    Table D010LINC
    gives the error: Table D010LINC does not exist in the ABAP Dictionary.
    Table D010TAB
         Statistics are current (|Changes| < 50 %)
    New Method           C
    New Sample Size
    Old Method           C                       Date                 10.03.2010
    Old Sample Size                              Time                 07:39:37
    Old Number                51,104,357         Deviation Old -> New       0  %
    New Number                51,168,679         Deviation New -> Old       0  %
    Inserted Rows                160,770         Percentage Too Old         0  %
    Changed Rows                       0         Percentage Too Old         0  %
    Deleted Rows                  96,448         Percentage Too New         0  %
    Use                  O
    Active Flag          P
    Analysis Method      C
    Sample Size
    Please suggest
    Regards,
    Kumar
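    A hedged sketch, reusing the brconnect syntax from the MSEG thread above to force fresh CBO statistics for the two tables that do exist (connect string and parallel degree are illustrative):
        brconnect -c -u system/system -f stats -t d010tab,d010inc -f collect -p 4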

  • Cisco LMS Prime: Device Center does not show Port statistics for Routers

    Hello,
    I am wondering why the port statistics for routers are not showing in the Device Center Port Status section. Is this normal behaviour?
    thanks
    Alex

    Hi Afroj,
    Data Collection as well as Usertracking ran successfully.
    regards
    Alex

  • SQL 2008 R2 Best Practices for Updating Statistics for a 1.5 TB VLDB

    We currently have a ~1.5 TB VLDB (SQL 2008 R2) that services both OLTP and DSS workloads pretty much on a 24x7x365 basis. For many years we have been updating statistics (full scan, 100% sample size) for this VLDB once a week on the weekend, which is currently taking up to 30 hours to complete.
    Somewhat recently we have been experiencing intermittent issues while statistics are being updated, which I doubt is just a coincidence. I'd like to understand exactly why the process of updating statistics can cause these issues (timeouts/errors). My theory is that the optimizer is forced to choose an inferior execution plan while the needed statistics are in "limbo" (stuck between the "old" and the "new"), but that is just a theory. I'm somewhat surprised that the "old" statistics couldn't continue to be used while the new/current statistics are being generated (like the process for rebuilding indexes online), but I don't know all the facts behind this mechanism yet, so that may not even apply here.
    I understand that we have the option of reducing the sample percentage/size for updating statistics, which is currently set at 100% (full scan). Reducing the sample percentage/size will reduce the total processing time, but it's also my understanding that doing so will leave the optimizer with less than optimal statistics for choosing the best execution plans. This seems to be a classic case of not being able to have one's cake and eat it too.
    So in a nutshell, I'm looking to fully understand why the process of updating statistics can cause access issues, and I'm also looking for best practices in general for updating statistics of such a VLDB. Thanks in advance.
    Bill Thacker

    I'm with you. "Yikes" is exactly right with regard to suspending all index optimizations for so long. I'll probably start a separate forum thread about that in the near future, but for now let's stick to the best practices for updating statistics.
    I'm a little disappointed that multiple people haven't already chimed in and offered up some viable solutions. Like I said previously, I can't be the first person in need of such a thing. This database has 552 tables, with many more statistics objects associated with those tables. The metadata has to be there for determining which statistics objects can go (not utilized much, if at all, so delete them; also produce an actual script to delete the useless ones identified) and what the proper sample percentage/size should be for updating the remaining, utilized statistics (again, also produce a script that executes the appropriate UPDATE STATISTICS command for each table based on cardinality; a rough sketch of such a script follows below).
    The above solution would be much more ideal IMO than just issuing a single update statistics command that samples the same percentage/size for every table (e.g. 10%). That's what we're doing today at 100% (full scan).
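    A hedged sketch of the cardinality-based script (sys.dm_db_stats_properties requires SQL Server 2008 R2 SP2 or later; the sampling tiers are illustrative, not a recommendation):
        -- Generate one UPDATE STATISTICS command per statistics object,
        -- choosing the sample size from the table's cardinality.
        SELECT 'UPDATE STATISTICS '
               + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + '.'
               + QUOTENAME(OBJECT_NAME(s.object_id)) + ' ' + QUOTENAME(s.name)
               + CASE WHEN sp.rows >= 1000000000 THEN ' WITH SAMPLE 1 PERCENT'
                      WHEN sp.rows >= 10000000   THEN ' WITH SAMPLE 10 PERCENT'
                      ELSE ' WITH FULLSCAN' END AS update_cmd
        FROM sys.stats AS s
        CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
        WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
          AND sp.modification_counter > 0; -- skip statistics with no changes since the last update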
    Come on SQL Server Community. Show me some love :)
    Bill Thacker

  • MaxDB UpdAllStats - missing optimizer statistics for one namespace

    Hi experts,
    every weekend the job UpdAllStats runs in the SAP systems hosted by us (on weekdays just PrepUpdStats+UpdStats). Now we're facing the issue that in one system there are no optimizer statistics for any tables in one particular namespace; let's call it /XYZ/ (tables /XYZ/TABLE1 etc.).
    We randomly checked tables in that namespace via DB20/DB50 and no optimizer statistics could be found. We then randomly checked other tables like MARA, VBAK etc.: all optimizer statistics are up to date for those tables.
    We even started the statistics refresh manually via DB20 for one of the tables; still no optimizer statistics appear for this table.
    It is supposed to be an update over all optimizer statistics. I rechecked note 927882 (FAQ: SAP MaxDB UPDATE STATISTICS) and some others, but couldn't find any reason for these tables being excluded. I especially don't understand why the manual statistics refresh wouldn't work...
    Does anybody have an idea why this could happen?
    Thanks for your ideas in advance!
    Regards
    Marie

    Hi again,
    well, it seems to be more of a visualisation problem, I guess.
    We figured out that in MaxDB Database Studio you can see the optimizer statistics, but not in the SAP system itself.
    We'll keep you up to date.
    Best
    Marie
    Edit: it was really just a visualisation problem... DB Studio shows the right values.
