Related to BI statistics

Hello Experts,
I am creating a report based on 0TCT_C01 to capture the run time of different reports.
I have template name, time stamp, and total time, but each time I run a report
I get 4 entries for a given template. As a result, I am confused about how to extract
the total execution time whenever I run a report.

Hi Vikas,
I will tell you how I did the settings for the statistics:
1. Enabled statistics collection for all InfoProviders (RSA1 -> Tools -> Settings for BI Statistics -> changed the default setting for the selected InfoProviders to 'X').
2. Ran an init load for all statistics cubes.
When running the delta, no records are updated.

Similar Messages

  • BI Statistics installed on BI 7.0 but data not getting loaded

    Hi All,
    I have installed the BI statistics on BI 7.0 from Business content. All the flows, cubes, multi providers, queries, process chains are all installed from Business content. After installing I have run some loads on BI and also executed some queries. I can see data in RSDDSTAT* tables on the system.
    But when I executed the process chains for BI statistics, the loads finish successfully but they all show 0 from 0 records. Data got loaded for the master data of BI statistics, but not for the transaction data, i.e. the data going to the cubes. So now there is no data in the cubes related to BI statistics. Can anyone please help me if there is any further setting I need to make to get the data flowing to the BI Statistics cubes?
    regards,
    Suman

    I recommend a repost in either the Data Warehousing, BI General or Business Content forums. This forum is for Business Explorer.

  • Search Help problem in IT0033 (Statistics) in ECC 6

    Hi All,
    I'm facing an error when trying to update the Statistics in HR Master Data.
    I enter the Personnel no. and Infotype 33 (Statistics) and press the Display button. It takes you to the Display Statistics screen.
    The error occurs when I hit the dropdown button on the second line of Statistical Exceptions.
    Let's say that I have two Statistics (01 and 02) and the Statistical Exceptions for each one.
    When I hit the dropdown button to see the Statistical Exceptions list for Statistics 01, the Search Help brings only the Statistical Exception related to this Statistics (01). But when I do the same thing for Statistics 02, the Search Help brings me all the Statistical Exceptions, not only the values related to Statistics 02.
    Debugging the program I could see that for each dropdown the debug starts in a different point of the program.
    Could someone please help me?
    Thanks.

    Hi Muralidaran,
    make sure that you have the right kernel for your service pack. SAP notes 919184 and 902694 tell you the required kernel patch level for SP 15 and SP 16.
    Best regards,
    Klaus

  • BW Statistics queries

    Dear all,
    I am Viswanath. I have activated all Business Content objects related to BW Statistics, selecting Data Flow Before and Afterwards. My problem is that I am not able to see any queries under the respective cubes in BEx. Can anyone suggest how I can see and work with the BW Statistics queries?
    regards
    Viswanath

    Hi,
    Did you get any error message when installing the Business Content saying that
    some of the objects are not available in the metadata repository? If so, follow this procedure:
    If this error occurs while you are installing Business Content, a delivery error may have occurred. Inform SAP. In all other cases, check, for example, that the object has not been deleted by another user during the collection.
    shylaja.

  • Disk Statistics Help

    Hi,
    I'm trying to get the following information related to disk statistics:
    %Busy[]
    Blks/Sec[]
    BlkReads/Sec[]
    BlkWrites/Sec[]
    I/Os/Sec[]
    Rds/Sec[]
    Wrs/Sec[]
    AvgServ[]
    AvgWait[]
    AvgQlen[]
    OldQlen[]
    Degradation/Sec[]
    I need to get this information from /dev/kmem/ without using kstat, but I'm not sure which structure holds this information. I would really appreciate any guidance regarding these statistics.
    Thanks.
    Regards,
    Shine

    man kstat(3K) should help...
    Ilya.

  • CDMC - Activate Statistics Collection

    Hi,
    I have a question related to CDMC Statistics Collection. It is not clear to me: after activating the collection, what are the periods to evaluate and the period type?
    Please can you clarify?
    Best Regards,
    Zsuzsanna

    Hi Zsuzsanna,
    As shown in the screenshot in my first reply, it takes the given period and period type as input and fetches usage statistics data for that time period. To schedule the same run every three months, you give input in the next pop-up screen; there you can schedule three different jobs for the future, as per your requirement. According to the period and period type given, these jobs will be scheduled and run in the statistics/production system and will update usage information in the CDMC table in the statistics system.
    According to the given input, it will schedule the jobs and run them in the statistics/production system. If you give 1 day, then the job will run daily and will update and store usage statistics data in the CDMC database table in the statistics system itself, not in the Solution Manager system.
    This data only gets imported into Solution Manager when you create and execute a new Clearing Analysis project and execute the activities in it.
    If you are expecting the latest usage results, then yes, you need to create a new CDMC Clearing Analysis project every time, as this usage information will not be updated automatically in the CDMC project.
    Please let me know if you have any doubts.
    Best Regards,
    Pritish

  • Execution Statistics by target

    I'm looking for a way to either capture the following information or maybe OWB already gathers this info and I just need to know where to look.
    I'm looking at the following info for each target.
    Target Table Name
    Load start time
    Load end time
    rows inserted
    rows modified
    Success or Failure.
    I'd like to tie this information to an audit key in the target tables so we can always know when each row was loaded.
    I searched on audit tables (and that returned more error related stuff) and statistics (and that returned more stuff on analyze table and actual Oracle stats).

    You can use DBMS_UTILITY.GET_TIME (it returns elapsed time in hundredths of a second):
    declare
      tempo number;
    begin
      tempo := dbms_utility.get_time;          -- start timestamp
      <your code>
      tempo := dbms_utility.get_time - tempo;  -- elapsed hundredths of a second
      dbms_output.put_line('TEMPO>' || tempo);
    end;
    Note that the subtraction must be (end time - start time); the other way round gives a negative value.

  • DB Statistics & BI Indexes

    Hi All,
    I have been going through a lot of threads related to DB Statistics & BI Indexes and I am confused.
    1) How do I come to know that the DB Statistics and BI Indexes for a cube are active or created?
        Is it through RSDDSTAT where the status is X means stats are active? Is there another way to find out?
        What about BI Indexes?
    2) How do I create DB Statistics?
    3) Does creating DB statistics for a query help query performance, or does creating DB statistics on the cube help, or both?
    4) I understand primary indexes are created while creating the cube. However, when we try to create secondary indexes through the Performance tab, in which table are the details stored? Can they be deleted later?
    5) Is there another way to create BI Indexes other than performance tab?

    P.S: the formatting went nuts... I added *** before each of my replies... COME ON SAP!! Cant you fix this???
    Hi.
    1) How do I come to know that the DB Statistics and BI Indexes for a cube are active or created?
    Is it through RSDDSTAT where the status is X means stats are active? Is there another way to find out?
    What about BI Indexes?
    In addition to the Performance tab where you get the traffic lights, you can check the indexes through tcode SE11. Just input the table name and on the next screen click the button reading "Indexes...". A popup will show you the indexes that exist for the table you are looking at. Double-clicking any of the indexes will take you to the details.
    You can also define a new, secondary index here. You might have to go to DB02 -> missing indexes to have it created on the database, even though it says it exists and is active in SE11... get your basis guy in on all of this.
    2) How do I create DB Statistics?
    Through Performance tab or step in process chain, but you should schedule db stats at least once a week in that DB-maintenance-calendar-thingy you can get to with a tcode I cannot remember... DBxx where xx is two numbers...ask your basis guy.
    3) Does creating DB statistics for a query help query performance, or does creating DB statistics on the cube help, or both?
    You cannot create statistics for a query. You can collect statistics about query use. This is the statistics stuff you can activate from Business Content; the "statistics cubes" and all that. They store the info you collect, and this is then called BI Statistics. It is of absolutely no use whatsoever with regards to performance. You can learn a lot about your system by analysing this, but starting to collect the BI Statistics won't help your slow-running queries.
    You can create statistics for cubes. This is the DB Stats and the effect of creating it is that the system will know how the data is distributed in the cube and because of that, it will have a better chance of reading data according to your selections faster. Much faster in some cases! This goes for both queries and loads (a load is just a special kind of query, where results are not put on the screen but in another table). Try to keep your DB Stats as up to date as you can - I always update the stats after each load and compression... It is especially important on transactional cubes, because data is more volatile here than when you only load every second day or so.
    4) I understand primary indexes are created while creating the cube.However, when we try to create secondary indexes through performance tab in which table the details are stored? Can it be deleted later?
    I don't know the tables this is stored in, but you can delete any index using SE11 as mentioned above. Secondary indexes will need to be re-defined in SE11 when your system has been taken down... or if you activate the cube. In that case, only the hardwired primary indexes are created.
    5) Is there another way to create BI Indexes other than performance tab?
    I think you can only create/define an index in SE11, but you can refresh it from the Performance tab or in a process chain.
    Regards,
    Jacob
    Edited by: Jacob Jansen on Aug 10, 2010 9:56 PM
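    On an Oracle-based system, the DB statistics for a cube that Jacob describes can also be gathered directly with the DBMS_STATS package. A minimal sketch, assuming a hypothetical fact table /BIC/FCUBE01 and schema SAPR3 (both placeholders; check the names on your own system):

    ```sql
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SAPR3',         -- schema owner (assumption)
        tabname          => '/BIC/FCUBE01',  -- hypothetical cube fact table
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);           -- also gather index statistics
    END;
    /
    ```

    In practice you would normally let the Performance tab, a process chain step, or the scheduled DB maintenance job do this, as described above.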

  • Workbook for BW Statistics (Urgent)

    Hi All,
    I have to install six cubes related to BW Statistics. I know how to implement these cubes. My question is: before installing the cubes, do I have to install the workbooks related to these cubes?
    When I checked Business Content, there are three workbooks available:
                         1. BW Statistics 2.0 (New)
                         2. BW Statistics 1.2 etc.
    Please let me know which one I need to install.
    Thanks & Regards
    Ramakanth.

    Hi Venkat,
    Thanks for the reply.
    I need some clarification on this. When I checked the new one (2.0), there are some queries available for this workbook. Do I need to install all the queries, or only some of them? Please let me know which queries need to be installed for the BW Statistics workbook.
    Regards
    Ramakanth
    I will assign points. Please provide the solution for my query above.

  • ORA-06540,following statement failed with ORACLE error 6540:

    I was working during the last few days on a full export/import for a 9i database. I used the following script for the export:
    exp system/**** full=y file=/export/export.dmp log=/export/export.log consistent=y statistics=estimate buffer=200000
    Everything went fine. After that I dropped all the object owners belonging to the tablespaces, as I am doing this to solve fragmentation problems.
    The issue is during the import: after 50% of the tables were imported successfully, I started to receive the following error
    ORA-06540: PL/SQL: compilation error
    ORA-06553: PLS-123: program too large
    IMP-00017:
    "DECLARE SREC DBMS_STATS.STATREC; BEGIN SREC.MINVAL := 'C2020E'; SREC.MAXVA"
    "L := 'C26338'; SREC.EAVS := 0; SREC.CHVALS := NULL; SREC.NOVALS := DBMS_STA"
    "TS.NUMARRAY(113,9855); SREC.BKVALS := DBMS_STATS.NUMARRAY(0,1); SREC.EPC :="
    " 2; DBMS_STATS.SET_COLUMN_STATS(NULL,'"RPT_LEDGER"','"GL_ACCOUNT_ID"', NULL"
    " ,NULL,NULL,1432,.000698324022346369,0,srec,4,2); END;"
    IMP-00003: ORACLE error 6540 encountered
    ORA-06540: PL/SQL: compilation error
    ORA-06553: PLS-123: program too large
    IMP-00017: following statement failed with ORACLE error 6540:
    "DECLARE SREC DBMS_STATS.STATREC; BEGIN SREC.MINVAL := 'C2020B'; SREC.MAXVA"
    "L := 'C2031F'; SREC.EAVS := 0; SREC.CHVALS := NULL; SREC.NOVALS := DBMS_STA"
    "TS.NUMARRAY(110,230); SREC.BKVALS := DBMS_STATS.NUMARRAY(0,1); SREC.EPC := "
    "2; DBMS_STATS.SET_COLUMN_STATS(NULL,'"RPT_LEDGER"','"COMMON_COA_ID"', NULL "
    ",NULL,NULL,14,.0714285714285714,0,srec,4,2); END;"
    IMP-00003: ORACLE error 6540 encountered
    ORA-06540: PL/SQL: compilation error
    ORA-06553: PLS-123: program too large
    IMP-00017: following statement failed with ORACLE error 6540:
    "DECLARE SREC DBMS_STATS.STATREC; BEGIN SREC.MINVAL := 'C202'; SREC.MAXVAL "
    ":= 'C20233'; SREC.EAVS := 0; SREC.CHVALS := NULL; SREC.NOVALS := DBMS_STATS"
    ".NUMARRAY(100,150); SREC.BKVALS := DBMS_STATS.NUMARRAY(0,1); SREC.EPC := 2;"
    " DBMS_STATS.SET_COLUMN_STATS(NULL,'"RPT_LEDGER"','"CONSOLIDATION_CD"', NULL"
    " ,NULL,NULL,2,.5,0,srec,4,2); END;"
    IMP-00003: ORACLE error 6540 encountered
    the used script for import is
    imp system/****** full=y rows=y file=/export/export.dmp log=/export/ofsaimport2.log buffer=20000000
    Please note that all the tables and indexes are imported successfully even after the error started to show.
    Is it something serious? Should I do the import again with statistics=none, or am I fine now, since the error relates only to the statistics?

    Is really "fragmentation" an issue to justify this table movement? Why do you want to "defragment" your schema? Is it because of performance issues? If this is the case you are wasting your time, if you tried to rebuild the indexes by this procedure, it is also a waste of time. If you are concerned about fragmentation because of "holes" and "islands" of unused space within segments and unused space below the HWM, that is something different, in my personal jargon, I'd rather use the term space reorganization.
    If the data import was successful, then take a look at the dba_objects and verify if the program unit is still there and check its status.
    You may also want to take a look at this note from AskTom that explains what this error means --> http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:571023051648
    ~ Madrid
    http://hrivera99.blogspot.com
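    Since only the DBMS_STATS.SET_COLUMN_STATS calls are failing while the data itself imports fine, one common workaround (a sketch, not a definitive recommendation) is to skip the exported optimizer statistics during import and regenerate them afterwards; the schema name below is a placeholder:

    ```sql
    -- re-run the import ignoring the exported optimizer statistics:
    --   imp system/****** full=y rows=y statistics=none file=/export/export.dmp log=/export/import2.log
    -- then gather fresh statistics for each imported schema:
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => 'SOME_SCHEMA',  -- placeholder schema name
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);          -- include indexes
    END;
    /
    ```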

  • Query on dba_free_space ends in wait by event db file sequential read

    Hello All,
    Env: 10gR2 on WinNT
    I gave the query
    select tablespace_name, sum(bytes)/1024/1024 from dba_free_space group by tablespace_name
    and it has been waiting forever.
    I checked the wait event in v$session and it is "db file sequential read".
    I put a trace on the session before the running the above query:
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.06       0.06          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.06       0.06          0          0          0           0
    Misses in library cache during parse: 1
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      db file sequential read                     13677        0.16        151.34
      SQL*Net message to client                       1        0.00          0.00
      db file scattered read                        281        0.01          0.53
      latch: cache buffers lru chain                  2        0.00          0.00
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse    13703      0.31       0.32          0          0          0           0
    Execute  14009      0.75       0.83          0          0          0           0
    Fetch    14139      0.48       0.74         26      56091          0       15496
    total    41851      1.54       1.89         26      56091          0       15496
    Misses in library cache during parse: 16
    Misses in library cache during execute: 16
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      db file sequential read                        26        0.00          0.12
        1  user  SQL statements in session.
    14010  internal SQL statements in session.
    14011  SQL statements in session.
    I took the AWR report (for a 1-hour period) and the top 5 events came out as:
    Event                                 Waits    Time (s)   (ms)   Time Wait Class
    db file sequential read           1,134,643       7,580      7   56.8   User I/O
    db file scattered read              940,880       5,025      5   37.7   User I/O
    CPU time                                            967           7.2
    control file sequential read          4,987           3      1    0.0 System I/O
    control file parallel write           2,408           1      1    0.0 System I/O
    The PHYRDS (from dba_hist_filestatxs) on my system01.dbf is 161,028,980 for the latest snap.
    Could someone throw some light into what is happening here ?
    TIA,
    JJ

    Under some circumstances querying the dictionary can be slow, usually because of problems with bad execution plans related to bad statistics, try to gather statistics using dbms_stats.gather_fixed_objects_stats(); it has worked for me before.
    You can also read Note 414256.1 Poor Performance For Tablespace Page Display In Grid Control Console which in addition points to a possible problem with the recycle bin.
    HTH
    Enrique
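    The two suggestions above can be sketched as follows (run as a DBA; note that purging the recycle bin permanently drops the objects in it, so treat this as an illustration rather than a blanket recommendation):

    ```sql
    -- gather statistics on the fixed (X$) objects that dictionary views rely on
    EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

    -- a large recycle bin is another known cause of slow dba_free_space queries,
    -- per the note on the recycle bin mentioned above
    PURGE DBA_RECYCLEBIN;
    ```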

  • Ezsetfacl.sh - Mini configuration agent, setfacl across multiple files

    -Introduction-
    ezsetfacl.sh can edit configuration files, can manipulate file properties(permissions, modes, attributes etc)
    ezsetfacl.sh supports syntax/logical error detection in file list
    The script is intended to be run as root, but for demo/testing purposes, it's run as a non-root user in this article
    Current version : http://pastebin.com/JpScZMbQ          http://pastebin.com/raw.php?i=JpScZMbQ(RAW)
    This is the previous version : https://bbs.archlinux.org/viewtopic.php?id=173905
    This version is basically an enhanced version of the first, but it is so much more functional and usable that I think it deserves a new thread.
    -Note-
    ezsetfacl.sh is still in testing state, use "setfacl -g [filelist]" and check all generated commands yourself before actually applying permissions
    -Update-
    (27/2/2014) Added "chmod" block and "chown" block, modified statistics list a bit.
    (28/2/2014) Added "chattr" block, added related lines to statistics list.
    (15/3/2014) Added APPEND, INSERT and REPLACE blocks, mainly for configuring config files. Will add to statistics list later.
    (17/3/2014) Added COMMAND block, for executing commands, no syntax checks. Will add to statistics list later. Now it's more like a configuration manager/agent thingy.
    (7/5/2014) Bug fixes. Added tea.
    (19/9/2014) Bug fixes. Enhanced input sanitisation, allowed file name stitching(e.g. this\ is\ a\ file), auto stitching file names is also available. Multiple <END can now be processed properly.
    (20/9/2014) Added Sourceforge page.(See below)
    -Notes-
    Error detection still requires improvement
    -Limitations-
    Logical and physical option for recursive setfacl are not yet available
    Sourceforge page : https://sourceforge.net/projects/ezsetfaclsh/
    Detailed explanation on the options : http://pastebin.com/1QLSFPkD          http://pastebin.com/raw.php?i=1QLSFPkD(RAW)
    Examples : http://pastebin.com/KQcNXVSW          http://pastebin.com/raw.php?i=KQcNXVSW(RAW)
    Any feedback is welcome!
    Last edited by darrenldl (2014-09-20 03:09:46)

    Ah, well...indeed. Basically, these three owners are volatile...I have one for EM (which is pretty static), one for databases ( ~oracle gets "rm -rf"-ed about twice annually), and one for AS (~oracleas gets "rm -rf"-ed twice annually, but on a different 'schedule' if we can call it that).

  • Express Version 6.3.2.1

    I've been asked to work on Oracle Express and I know nothing
    about it nor does anyone else in our company ;)
    I want to use Express Relational Manager, and the guide says
    three products get installed when we install Relational Access
    Manager(RAM).
    1. Express Relational Access Server
    2. Express Relational Access Administrator
    3. Express Relational Access - Query Statistics
    But the problem is that the Relational Access Server is not
    getting installed only 2 and 3 are getting installed.
    Since I'm completely new to Express and nobody in my company
    knows anything about it, I'm in a complete dilemma.
    Moreover, the documentation is not very helpful or user
    friendly, so I'm not making head or tail of it... so much
    so that I don't even know where to start.
    I have installed the Server and the Client in the same
    ORACLE_HOME. I'm using version 6.3.2.1
    Anyone please help me, its quite important
    Thanx
    Naveen

    An addition to my previous query - I have to access the Database
    which is on another machine in the Network

  • Technical Content

    We have just finished the BI7 upgrade.
    When I ran RSA1 (for the first time), it said the job BI_TCO_ACTIVATION was scheduled and run in the background. I checked on the web site and it says this job will activate the Technical Content.
    1) Question is: why does it automatically activate the Technical Content?
    2) I thought it was up to us to decide whether we want to install and activate the Technical Content from RSA1 -> Business Content?
    3) Now the job is finished and it seems like some of the Technical Content has been activated and appears in RSA1.
    4) Can I just delete it from RSA1?
    What should I do?
    Please help.

    BI_TCO_ACTIVATION automatically installs the technical content when you execute the RSA1 tcode for the first time after a system upgrade.
    The technical content consists of objects related to BI statistics, distributed statistics records, etc., which are different from the Business Content that we install manually.
    The technical BI Content must be installed or activated. It is installed automatically in the background (job name BI_TCO_ACTIVATION) when you execute RSA1 for the first time. For other installations, you should wait until the job BI_TCO_ACTIVATION is completed. (OSS note 979581)
    You can have a look at OSS note 979581.

  • Buffer hit ratio

    I am using the following:
    SELECT ROUND(((1-(SUM(DECODE(NAME, 'physical reads', value, 0)) /
    (SUM(DECODE(NAME, 'db block gets', value, 0))+
    (SUM(DECODE(NAME, 'CONSISTENT GETS', value, 0))))))*100), 2) || '%' BCHR
    FROM V$SYSSTAT
    to calculate the buffer hit ratio. This query is returning: -1753.28%
    Can someone explain why I am getting this crazy number?
    Thanks,
    mdp

    >>
    Many folks misunderstand that bit about "setting your own BHR", and falsely conclude that it's a useless metric. It's not useless.
    <<
    The buffer cache hit ratio is useful only when considered in relation to other statistics. The problem is that the majority of users seem to think that a high ratio value is good and a low ratio value is bad based on absolute values, and do not understand that the statistic depends on how SQL plans are being solved. If you measure the ratio when the dominant work on the system is being done via hash joins, full scans that touch the target blocks only once, or PQO, you can get a fairly low value while the system is performing well. On the other hand, poorly performing SQL can result in a high value for the statistic. The value of the statistic bears no direct relationship to the performance of the system, and it needs to be emphasized that the ratio must be used in conjunction with other available information. The ratio by itself should be considered useless.
    >>
    If the BHR was totally useless, why does Oracle continue to include it in OEM alert thresholds, and STATSPACK and AWR reports?
    <<
    Over the years Oracle has done lots of things that turned out to be wrong, so the fact that Oracle includes the statistic in certain products does not really say much for its validity. Known errors in the documentation have made it through two full releases. Again, it is the misapplication of the statistic that is really at issue. Unfortunately, many poorly written DBA administration and tuning books in the past claimed that the ratio could be used to measure database performance, when in point of fact the ratio has only a passing relationship to performance, depending on the application.
    HTH -- Mark D Powell --
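    For what it's worth, the negative number in the original query most likely comes from the statistic name: names in V$SYSSTAT are lowercase, so DECODE(NAME, 'CONSISTENT GETS', ...) matches no rows and the consistent gets term drops out of the denominator. A corrected sketch of the classic formula (with all the caveats about its usefulness discussed above):

    ```sql
    SELECT ROUND((1 - phy.value / (cur.value + con.value)) * 100, 2) || '%' AS bchr
      FROM v$sysstat phy, v$sysstat cur, v$sysstat con
     WHERE phy.name = 'physical reads'
       AND cur.name = 'db block gets'
       AND con.name = 'consistent gets';
    ```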
