Statistical MP 0BWTC_C10 Query

Hi guys,
I need to create a statistics query showing how many users are accessing the Inventory Application reports.
The information I need is:
Attributes:
• User ID
• First name
• Last name
• Country
• Organization name
• Department name
• Email
Key figures:
• # of navigations per day
As far as selection criteria are concerned, we need:
• User ID
• Date
• Cube
• Query
I found that we are able to create the query based on the statistical MultiProvider 0BWTC_C10, but this MultiProvider is delivered with Business Content. Can someone explain how to install this statistical MultiProvider and how to maintain the data for it?
Regards,
Krishna

Hi
Download the pdf document and go through it.
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/5401ab90-0201-0010-b394-99ffdb15235b
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c8c4d794-0501-0010-a693-918a17e663cc
http://help.sap.com/saphelp_nw04s/helpdata/en/e5/2d1e3bd129be04e10000000a114084/frameset.htm
There is a lot of information available on SDN regarding BW statistics; go through it.
Regards
M.A

Similar Messages

  • Manage query - run time errors

    Hi,
we have one user at one desktop who receives the error "run-time error 5" followed by "401 Automation error" when selecting the Manage Query wizard from the BPC Excel action pane.
    We only have this at one desktop. Any ideas?
    Dries

    Hi,
    You can check the query run time by using T code RSRT.
    1. Go to T code RSRT
    2. Give your query name.
    3. Click on the "Execute + Debug" tab
    4. Various check boxes for debug options will appear on the screen.
    5. Under the "Others" node you will find the check box "Display Statistics Data".
    6. Tick this check box and click Continue.
    7. This will execute the query and show the selection screen, if any.
    8. Once the query has executed completely, click the "Back" button or simply hit F3.
    9. This takes you to the "Statistics Data for Query Runtime" screen.
    10. Here you can total the "Duration" column to get the overall duration of the query execution.
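    Totaling the "Duration" column can also be done outside the GUI once the statistics table is exported. A minimal sketch, assuming a hypothetical CSV export with a "Duration" column (the column names and layout below are assumptions, not the actual RSRT export format):

```python
# Hypothetical example: totaling the "Duration" column of an exported
# query-runtime statistics table. The CSV layout is an assumption.
import csv
import io

def total_duration(csv_text):
    """Sum the Duration column (seconds) of an exported statistics table."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(float(row["Duration"]) for row in reader)

export = """Event,Duration
OLAP Initialization,0.12
Data Manager,2.50
Frontend Rendering,0.38
"""
print(round(total_duration(export), 2))  # 3.0
```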
    Please refer following link for details:
    [http://help.sap.com/saphelp_nw70/helpdata/en/43/e3807a6df402d3e10000000a1553f7/frameset.htm]
    Hope this answers your query.
    - Geetanjali

  • BI Statistics and Technical Content

    I am a bit confused about the relationship between BI Statistics and Technical Content in BI 7. Can you please confirm my understanding below:
    1) In BI 7, in order to record BI Statistics, we MUST switch recording on for the Query, InfoProvider, etc. via transaction RSDDSTAT.
    2) Once this is done, the system will record the statistics for that Query/InfoProvider automatically.
    Switching on BI Statistics alone means nothing if we don't also carry out the following steps, right? I mean, only then can we build a query to report on the statistics data, correct?
    3) In order to display this statistics data via transaction ST03, we MUST install the Technical Content.
    4) Schedule the process chain and load the data from the statistics tables into the Technical Content.
    5) Once this is done, we can display the statistics data via transaction ST03.
    6) Because we have loaded the statistics data into the Technical Content, we can then delete the old statistics data.
    7) Now we can build a query on top of this Technical Content and report on the statistics data, correct?

    Hi Iynn,
    Your understanding is correct. for number 6) "Because we have loaded the statistic data into Technical Content, then we can delete the old statistic data ", you don't have to delete the data manually as it will be done automatically every 14 days if your system is lower than SP16 (it's now 30 days with SP16 and above). If you want to change this behavior, you can use program SAP_RSADMIN_MAINTAIN to create entry TCT_KEEP_OLAP_DM_DATA_N_DAYS in RSADMIN table with the value you desire. More info in notes 891740, 1095411.
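    To make the retention value concrete, here is a small illustrative sketch of the cutoff date it implies. The function and dates are invented for illustration; the actual deletion is performed by the system itself:

```python
# Illustrative only: the cutoff implied by a retention of N days
# (14 below SP16, 30 from SP16, or a custom RSADMIN entry).
from datetime import date, timedelta

def deletion_cutoff(today, keep_days):
    """Records older than this date are candidates for automatic deletion."""
    return today - timedelta(days=keep_days)

print(deletion_cutoff(date(2009, 7, 31), 30))  # 2009-07-01
```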
    Cheers,
    Ali Q.

  • Virtual IP - how to speed up query result?

    Hello,
    I have a virtual InfoProvider. When I execute a report, I only get a result after 5 minutes.
    Following questions:
    1. How can I detect the bottleneck?
    2. Is there a "standard" way to improve the query runtime?
    Thanks for input.
    Best regards.
    Pascal Kern

    Hi, STARWARS 
    Use transaction RSRT -> Execute + Debug -> mark "Display statistics data" -> Execute.
    Press F3 (back).
    There you can see "Statistic Data for Query Runtime".
    Analyze lines with long duration.
    Hope it helps,
    Alexander Kuzmich

  • Difference between BBP_GET_STATUS_2 and CLEAN_REQREQ_UP reports

    Could someone explain in a simple way what the difference is between the BBP_GET_STATUS_2 and CLEAN_REQREQ_UP reports? I have read about the differences in the standard documentation, but it is a little confusing.
    Thanks!
    Regards,
    Madhur

    Hi
    CLEAN_REQREQ_UP (Cleaner Job)
    You can use this function for document types Shopping cart, (Local) Purchase Order, Confirmation, and Invoice to trigger a synchronization with the associated documents in the back-end system. The system checks whether and how the (follow-on) documents were posted in the back end, and updates the object link and references, as well as the document status.
    A job (background processing) is generated for the program CLEAN_REQREQ_UP. When this is run, the system queries a database table containing the transfer information of the documents to the back end. The entries are checked with the data of the respective back-end systems. If the back-end transfer is successful, the respective entries are deleted and the prerequisites for further processing are created.
    BBP_GET_STATUS2 (Status Job)
    The status job was created by SAP to update the EBP system with data such as the purchase requisition number, purchase order number, goods receipts, or invoices recorded manually in R/3, etc. The report should not be run frequently at short intervals unless the order volume from EBP to R/3 is low; otherwise, a recommended interval for running the report is approximately every hour. Ultimately, the schedule depends on your business requirements.
    Until this job runs, the user will not see the number of the backend document created in R/3 for a particular shopping cart in the history tab of the check status transaction.
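    Conceptually, the cleaner job's loop can be sketched like this (in Python purely for illustration; the record fields and status values are invented, not the real table structure):

```python
# Conceptual sketch of a cleaner job: scan pending transfer records,
# check the back-end posting status, drop entries that succeeded.
def run_cleaner(transfer_table, backend_status):
    """Remove entries whose back-end transfer succeeded; keep the rest."""
    still_pending = []
    for record in transfer_table:
        if backend_status.get(record["doc_id"]) == "POSTED":
            continue  # success: entry deleted, follow-on processing can start
        still_pending.append(record)  # failed or not yet posted: keep it
    return still_pending

pending = [{"doc_id": "SC1001"}, {"doc_id": "SC1002"}, {"doc_id": "SC1003"}]
status = {"SC1001": "POSTED", "SC1002": "ERROR"}
print(run_cleaner(pending, status))
```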
    Some more information :
    Go to:
    SPRO -> IMG -> Integration with Other SAP Components -> Advanced Planning and Optimization -> Basic Settings for the Data Transfer -> Change Transfer -> Change Transfer for Transaction Data -> Active Online Transfer Using BTE
    Here you should maintain the application 'SRMNTY' with the active flag.
    Once this customizing is enabled, whenever a follow-on document
    (either confirmation or invoice) for an extended classic PO is created
    in the backend R/3 system, the R/3 system communicates to the SRM system
    by creating an entry in the table BBP_DOCUMENT_TAB for this PO.
    The item level of the SRM PO has fields to store the actual quantity
    and values entered for the corresponding confirmations and invoices.
    After that, run the reports CLEAN_REQREQ_UP and BBP_GET_STATUS_2.
    When the report CLEAN_REQREQ_UP runs, it updates the PO with statistical information. With the latest information in table BBP_PDIGP (statistical information), the query should work fine.
    Summer

  • Clean_reqreq_up and BBP_get_status_2

    I am a bit confused about the use of the two programs CLEAN_REQREQ_UP and BBP_GET_STATUS_2.
    What I understand from the different forums I have read is that CLEAN_REQREQ_UP creates the PO in the back end (classic scenario) after the shopping cart is approved, and BBP_GET_STATUS_2 updates the shopping cart with the PO number so that the SC can be processed further (for goods confirmation).
    Am I right, gurus? If not, please illustrate with an example as above.
    Thanks in advance

    Explanation :
    The CLEAN_REQREQ_UP  and BBP_GET_STATUS2 jobs will be configured to run in each EBP client to synchronize status data between EBP and R/3.
    Summer

  • How to fill the application cache by scheduling a Bex report

    Hi Experts,
    I have a query which takes about 4 minutes to execute the first time; when the same query is executed a second time, it takes only a few seconds.
    I tried using the broadcasting settings in the portal to fill the OLAP cache for this query and executed it manually before the query was run in the morning. The Fill OLAP Cache run completed successfully, but the first execution of the query still takes as long as it did before. There is no difference even after introducing the Fill OLAP Cache setting for this query.
    On further analysis I see from the "Statistic Data for Query Runtime" that DBTRANS and DBSEL entries appear when the report is executed for the first time. This means the cache is only being filled when the report is executed for the first time, not by the Fill OLAP Cache step from the portal.
    Can anyone let me know whether there is any other option to execute the query before any user runs it, so that the entire report is available in the cache and users have no performance complaints? The report is executed only in the Analyzer; we do not have any web reports.
    Please provide your valuable input to overcome this performance issue.
    Thanks

    Hi Sam,
    You can broadcast the report and fill the cache.
    Before setting up the broadcasting settings, you need to enable the cache property for that particular query or InfoProvider.
    You can enable the cache using transaction RSRCACHE or RSRT.
    Check the link below:
    [Performance Optimization of Long Running Queries Using OLAP Cache|http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/f048c590-31a4-2c10-8599-bd01fabb93d4]
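    The effect of broadcasting before business hours can be pictured with a minimal cache sketch. This only mimics the OLAP cache idea and is not the actual BW implementation; the query name and selection below are made up:

```python
# Toy cache keyed by (query, selection): the first execution fills it,
# later runs with the same key are served from it without a fetch.
_cache = {}

def execute_query(name, selection, fetch):
    """Return cached results for (query, selection); fetch only on a cold start."""
    key = (name, tuple(sorted(selection.items())))
    if key not in _cache:          # cold: go to the database
        _cache[key] = fetch(selection)
    return _cache[key]             # warm: served from the cache

# Warm-up run (what a broadcast before business hours would do):
warmup = execute_query("Q503", {"year": 2009}, lambda sel: ["row1", "row2"])
# A user run with the identical selection is now a cache hit; the fetch
# function is never called again (it would raise if it were):
user_run = execute_query("Q503", {"year": 2009}, lambda sel: 1 / 0)
print(user_run)  # ['row1', 'row2']
```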
    Regards
    KP

  • Contact center express 7

    Hello, I have an issue:
    is it possible to check how many incoming calls go to the trigger?
    I mean, in the script:
    if more than 5 calls arrive during 10 seconds, then send an SMS or email.
    Is that possible?
    thank you

    Not natively for a JTAPI trigger; however, you could use the Get Reporting Statistic step to query the calls currently relevant to a CSQ. A script, and the steps within it, cannot normally see "outside" of that instance of the application, with the exception of the Get Reporting Statistics step.
    It is possible to use sessions to overcome this though. You could create a second session and add a known session mapping ID to it. The variables held in that session could contain variables, likely arrays in this case, that could hold this information. Each time the script is called, it can query this session for the data and act upon it. The script could also update the variables for the next script that executes. The same could happen when you send an Email (note: requires premium licensing or IP IVR): you could clear/reset the values.
    You will need to handle what happens if the session has expired and been purged from memory due to inactivity. This shouldn't be a problem in your scenario, because expiry would indicate inactivity, not high activity. You could reinitialize the session at that point.
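    The "more than 5 calls in 10 seconds" check that the session-held array would drive can be sketched as follows (a generic sliding-window test, not CCX script syntax; threshold and window are the values from the question):

```python
# Sliding-window rate check: keep recent call timestamps (the array held
# in the shared session) and test the window on every new call.
def should_alert(timestamps, now, window=10.0, threshold=5):
    """True when more than `threshold` calls fall inside `window` seconds."""
    recent = [t for t in timestamps if now - t <= window]
    return len(recent) > threshold

calls = [1.0, 2.5, 3.0, 4.2, 7.8, 9.1]     # six calls within 10 s
print(should_alert(calls, now=10.0))        # True -> send SMS or email
print(should_alert([1.0, 2.0], now=10.0))   # False
```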

  • Could you explain the business process

    Hi gurus,
    Could you explain the business process scenario and how it works in BW? Could you explain it in detail? Suppose the client is Nokia Mobile and my employer is some XX company. Can you explain in detail?

    Hi Prabha,
    Aggregate Definition :
    An aggregate is a materialized, aggregated view of the data in an InfoCube. In an aggregate, the dataset of an InfoCube is saved redundantly and persistently in a consolidated form on the database.
    Use
    Aggregates allow quick access to InfoCube data during reporting. Similar to database indexes, they serve to improve performance.
    An aggregate is made up of the characteristics and navigation attributes that belong to an InfoCube. Characteristics that are not used in the aggregate are compressed.
    Each component of an aggregate has to be assigned to a selection type. A selection type indicates the degree of detail to which the data in the underlying InfoCube is aggregated. You can choose one of the following selection types:
    All characteristic values ("*"): the data is grouped by all values of the characteristic or navigation attribute.
    Hierarchy level ("H"): the data is grouped by a hierarchy level node. You can also store values on the hierarchy levels of an external hierarchy.
    Fixed value ("F"): the data is filtered down to a single value.
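    The selection types can be illustrated with a toy grouping example ("H" is omitted since it needs a hierarchy; the data and field names below are invented):

```python
# Toy illustration of aggregate selection types on flat records.
from collections import defaultdict

rows = [
    {"country": "DE", "region": "South", "sales": 10},
    {"country": "DE", "region": "North", "sales": 20},
    {"country": "US", "region": "East",  "sales": 30},
]

# "*": group by all values of the characteristic
by_country = defaultdict(int)
for r in rows:
    by_country[r["country"]] += r["sales"]
print(dict(by_country))  # {'DE': 30, 'US': 30}

# "F": fixed value -- the data is filtered down to a single value
de_total = sum(r["sales"] for r in rows if r["country"] == "DE")
print(de_total)  # 30
```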
    For each InfoCube, you can set whether the aggregates of the InfoCube are compressed automatically when it is filled with data or after the roll up of data packages (requests).
    1. You are in the Data Warehousing Workbench in the Modeling functional
    area. In the InfoProvider tree, navigate to the required InfoCube.
    2. In the context menu of the InfoCube, choose Manage.
    3. In the lower part of the screen, select the Roll Up tab page.
    4. Under the Aggregates group header, set the corresponding indicator in
    the Compress After Roll Up field.
    Standard aggregation: how the key figure is normally summarized across a given set of characteristics.
    E.g.: quantity across all the purchase items of a purchase order.
    Exception aggregation: you define how the aggregation should happen based on a particular characteristic (usually a time characteristic).
    E.g.: head count (number of employees) in an organization; this should not be summed up on a daily basis, but should instead give the final value at the end of the month.
    Aggregation applies to key figures.
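    The difference can be shown in miniature: a quantity sums over time, while a head count needs the last value of the period. The data below is invented for illustration:

```python
# Standard aggregation (sum) vs. exception aggregation ("last value").
daily_headcount = [("2009-01-29", 100), ("2009-01-30", 101), ("2009-01-31", 99)]

summed = sum(v for _, v in daily_headcount)  # 300 -- wrong for a head count
last_value = max(daily_headcount)[1]         # 99  -- value at month end
print(summed, last_value)  # 300 99
```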
    take a look at this doc, you will find all your answer:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cbd2d390-0201-0010-8eab-a8a9269a23c2
    Refer this for aggregates
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    Aggregates versus Physical Cube Splits?
    http://help.sap.com/saphelp_bw33/helpdata/en/c5/40813b680c250fe10000000a114084/frameset.htm
    http://help.sap.com/saphelp_bw33/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/frameset.htm
    Question 1: How can we see which queries run against which aggregate?
    I think it is the aggregate maintenance screen -> Propose -> "query proposal". Am I right? If it is wrong, please correct me.
    Answer 1:
    There are two tools that are used to analyze which aggregate is suitable for which navigation step.
    You optimize the start screen of a query with the query Monitor (transaction RSRT) if you execute the function Execute + Debug.
    With the trace tool (transaction RSRTRACE) you can draw several navigation steps, and process them again using the function DB Debugging (exact path: Transaction RSRTRACE > Goto > User Logs or All Logs > double-click on the log you want > DB Debugging).
    Refer The OSS note 99010
    Question 2: When I create an aggregate, which characteristics do I choose? I mean, based on a report or on something else?
    Answer 2:
    By referring to the statistical data and comparing the proposed aggregates, we can create the aggregates.
    The RSRT and RSRTRACE tools above are the most useful for planning an aggregate.
    Refer
    http://help.sap.com/saphelp_nw2004s/helpdata/en/80/1a67abe07211d2acb80000e829fbfe/content.htm
    and other option is:
    Refer page 36 in this PDF
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    Question 3: When we create an aggregate, a message box appears with the option "Proposed aggregate". What does it mean and how does it work?
    Answer 3:
    Based on the statistical data and the query/aggregate data, the system proposes new aggregate definitions.
    Question 4: In which scenarios do we use aggregates for query performance?
    Answer 4:
    Refer this PDF
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cbd2d390-0201-0010-8eab-a8a9269a23c2
    For more info refer:
    http://help.sap.com/saphelp_nw04/helpdata/EN/7d/eb683cc5e8ca68e10000000a114084/frameset.htm
    https://www.sdn.sap.com/sdn/collaboration.sdn?contenttype=url&content=https%3A//forums.sdn.sap.com/thread.jspa%3FforumID%3D133%26threadID%3D51253%26messageID%3D526450%23526450
    There is an aggregate document by SAP at http://service.sap.com/bi -> BI Infoindex which would answer all your questions.
    http://help.sap.com/saphelp_nw04/helpdata/en/7d/eb683cc5e8ca68e10000000a114084/content.htm
    It would be very useful for you to go through the performance tuning with aggregates documents here:
    Business Intelligence Performance Tuning [original link is broken]
    Look at the thread below for index:
    Indexes, Aggregates , partitioning of cube
    Hope this helps.
    Regards,
    Ravikanth

  • Reports usage and Users list

    Dear Friends, good morning.
    Where can I see/get the right data/report/information for my problem?
    I have 300 users and 100 reports in the BW system. I want to know the complete list of users and which reports they are using, etc.
    I know the user list, but I want to know how frequently the users run the reports and which reports those are.
    Thanks
    Ganga

    Hi,
    Use the BW statistics MultiProvider and the standard report to find out your requirement, or you can use transaction SM04.
    MP: 0BWTC_C10
    Query: 0BWTC_C10_Q503
    For more information you can look at this link:
    http://help.sap.com/saphelp_nw70/helpdata/EN/74/0efd10ab46554398da081697cd27c1/frameset.htm
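    The kind of tally such a statistics query delivers can be sketched generically (the user and query names below are made up; real data comes from the BW statistics cubes):

```python
# Count report executions per (user, query) from statistics-style events.
from collections import Counter

events = [
    ("GANGA",   "0BWTC_C10_Q503"),
    ("GANGA",   "0BWTC_C10_Q503"),
    ("KRISHNA", "ZINV_REPORT_01"),
]

usage = Counter(events)           # (user, query) -> number of runs
for (user, query), runs in sorted(usage.items()):
    print(user, query, runs)
# GANGA 0BWTC_C10_Q503 2
# KRISHNA ZINV_REPORT_01 1
```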
    Regards,
    satya

  • Creating query on BEx - Quarterly comparison for statistical & actual

    Hi All,
    I would like to create a query for 'Quarterly comparison for statistical & Actual periods'.
    My Key Figures should be
    1) Plan 1st Qtr (Fiscal year, Period: 1 to 3, Value type : 1(Plan), Version : 0(Plan/actual), Valuation View: actual Value).
    2)1st Qtr (Fiscal year, Period: 1 to 3, "Value type : 4(Actual),11(Actual statistical)", Version : 0(Plan/actual), Valuation View: actual Value).
    3)Var 1st Qt (Plan 1st qtr - 1st Qtr)
    the same for all 4 quarters, finally ending with
    4)Plan Year (Fiscal year, Period: 1 to 12, Value type : 1(Plan), Version : 0(Plan/actual), Valuation View: actual Value).
    I created a structure and created key figures with selections and formulas as required. But I did not see any data when I ran this query.
    The report was generated with 'no applicable data'.
    I need to create this query with the key figures Plan 1st Qtr, 1st Qtr, Var 1st Qtr, Plan 2nd Qtr, 2nd Qtr, Var 2nd Qtr, Plan 3rd Qtr, 3rd Qtr, Var 3rd Qtr, Plan 4th Qtr, 4th Qtr, Var 4th Qtr, and Plan Year.
    Please let me know how I can create this query with these key figures.
    Any help would be appreciated.
    Thanks,
    Aparna.

    Hi
    The best way, then, is to run a report with your key figures without any restriction, and with the different characteristics in the drilldown: Fiscal Year, Period, Value Type, Version, Valuation View.
    Then you can check that you have some data for the combination of characteristic values you are after:
    Fiscal year, Period 1 to 3, Value type 1 (Plan), Version 0 (Plan/actual), Valuation view: actual value.
    If you find an actual value in the fiscal period you are looking at, for periods 1 to 3, value type 1, version 0, then create a restricted key figure by adding the restrictions one at a time. You might then discover why you do not get the results.
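    The restricted-key-figure arithmetic described in the question can be sketched as follows (the records and amounts are invented; the filters follow the question's value types and version):

```python
# Restricted key figures as filtered sums: Plan Q1, Actual Q1, variance.
def restricted_sum(records, periods, value_types, version=0):
    return sum(r["amount"] for r in records
               if r["period"] in periods
               and r["value_type"] in value_types
               and r["version"] == version)

records = [
    {"period": 1, "value_type": 1,  "version": 0, "amount": 500},  # plan
    {"period": 2, "value_type": 4,  "version": 0, "amount": 300},  # actual
    {"period": 3, "value_type": 11, "version": 0, "amount": 150},  # actual statistical
]

plan_q1   = restricted_sum(records, {1, 2, 3}, {1})        # value type 1
actual_q1 = restricted_sum(records, {1, 2, 3}, {4, 11})    # value types 4 and 11
print(plan_q1, actual_q1, plan_q1 - actual_q1)  # 500 450 50
```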
    PY

  • Statistics of the executions of a query

    Hi people,
    I have to find out how many times my query was executed.
    I know there is a table, RSDDSTAT, but it shows me only one record, and this query has already been executed many times.
    Note: this query is based on an ODS.
    Could you please help me and tell me why RSDDSTAT doesn't show the number of records we expected to find?
    And is there a better way to find that out?
    Thank you a lot,
    Rosana.

    1. Go to ST03N.
    2. Select user mode "Expert".
    3. From the Functions list (upper left pane) select "BW System Load" and, under the month selection, select your time period.
    4. In the analysis view (lower left pane), under "Reporting", expand "Query Runtimes" and double-click "BEx Queries".
    5. The output is shown in the right pane.
    6. Select aggregation level "Query".
    7. Double-click on the query name.
    8. A window pops up listing every run of the query.
    9. To find out how many times it has run, click the "Display Graphics" button. It shows the number of runs on the X-axis.
    OR
    If you are working with BW 3.5, you can look in table RSDDSTAT; in BI 7.0 it is RSDDSTAT_OLAP.
    Make a selection on the InfoCube and the OLAP statistics object (which is the query ID/technical name) and execute. Then you should see how often the query was executed, by whom, and on which date/time.
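    The counting itself is simple once you have the statistics records. As a sketch (the record layout below is a simplification, not the actual RSDDSTAT structure):

```python
# One statistics row per execution: group by query, count and keep who/when.
stat_rows = [
    {"query": "ZODS_Q1", "user": "ROSANA", "date": "2009-07-08"},
    {"query": "ZODS_Q1", "user": "ROSANA", "date": "2009-07-09"},
    {"query": "ZODS_Q1", "user": "PETER",  "date": "2009-07-09"},
]

runs_per_query = {}
for row in stat_rows:
    runs_per_query.setdefault(row["query"], []).append((row["user"], row["date"]))

print(len(runs_per_query["ZODS_Q1"]))  # 3
```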

  • SQL query statistics?

    I'd like to know if there is an easy and convenient way for someone to execute an SQL query and calculate these measurements for the query:
    1) The I/O performed
    2) Number of read I/Os
    3) Number of write I/Os
    (such that 2 + 3 = 1)
    4) Number of buffered reads
    5) Query execution time
    6) Query CPU usage
    I've heard mention of such statistics in views such as V$OSSTAT, etc.
    But these views give the current values, not query-specific cumulative values, such as cumulative CPU time since the start of the query, cumulative I/O since the query began, etc.
    What is the right approach to this? Is it through the V$SESSION view? Would you go about it by storing the V$SESSION values before the query, running the query, and then getting the new V$SESSION values?
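    The snapshot-and-diff approach described in the last paragraph can be sketched generically (the statistic names mirror Oracle's V$SESSTAT/V$STATNAME entries, but the snapshots here are hard-coded stand-ins, not live queries):

```python
# Capture session statistics before and after the query and subtract.
def stat_delta(before, after):
    """Per-statistic difference between two session-statistics snapshots."""
    return {name: after[name] - before.get(name, 0) for name in after}

before = {"physical reads": 1200, "CPU used by this session": 350}
after  = {"physical reads": 1850, "CPU used by this session": 420}
print(stat_delta(before, after))
# {'physical reads': 650, 'CPU used by this session': 70}
```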

    Well, actually I stayed here a little longer to try part 2 of your manual.
    It worked fine. What follows is the output, taken from a spool file, of my first experiment:
    Connected.
    SQL> set timing on trimspool on linesize 250 pagesize 999
    SQL>
    SQL> -- system environment can be checked with:
    SQL> -- show parameter statis
    SQL> -- this show a series of parameters related to statistics
    SQL>
    SQL> -- this setting can influence your sorting
    SQL> -- in particular if an index can satisfy your sort order
    SQL> -- alter session set nls_language = 'AMERICAN';
    SQL>
    SQL>
    SQL> rem Set the ARRAYSIZE according to your application
    SQL> set arraysize 15 termout off
    SQL>
    SQL> spool diag2.log
    SQL>
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'))
    PLAN_TABLE_OUTPUT
    SQL_ID  b4j5rmwug3u8p, child number 0
    SELECT USRID, FAVF FROM  (SELECT ID as USRID, FAVF1, FAVF2, FAVF3,
    FAVF4, FAVF5   FROM PROFILE) P UNPIVOT  (FAVF FOR CNAME IN   ( FAVF1,
    FAVF2, FAVF3, FAVF4, FAVF5)) FAVFRIEND
    Plan hash value: 888567555
    | Id  | Operation           | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   0 | SELECT STATEMENT    |         |      1 |        |      5 |00:00:00.01 |       8 |
    |*  1 |  VIEW               |         |      1 |      5 |      5 |00:00:00.01 |       8 |
    |   2 |   UNPIVOT           |         |      1 |        |      5 |00:00:00.01 |       8 |
    |   3 |    TABLE ACCESS FULL| PROFILE |      1 |      1 |      1 |00:00:00.01 |       8 |
    Predicate Information (identified by operation id):
       1 - filter("unpivot_view_013"."FAVF" IS NOT NULL)
    Note
       - dynamic sampling used for this statement
    26 rows selected.
    Elapsed: 00:00:00.14
    SQL>
    SQL> spool off
    SQL>
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
    With the OLAP, Data Mining and Real Application Testing options
    C:\Documents and Settings\Administrator\My Documents\scripts\oracle\99templates_autotrace>my_part2_template.bat
    C:\Documents and Settings\Administrator\My Documents\scripts\oracle\99templates_autotrace>sqlplus /NOLOG @my_part2_template.sql
    SQL*Plus: Release 11.1.0.7.0 - Production on Qui Jul 9 22:00:39 2009
    Copyright (c) 1982, 2008, Oracle.  All rights reserved.
    Connected.
    SQL> set timing on trimspool on linesize 250 pagesize 999
    SQL>
    SQL> -- system environment can be checked with:
    SQL> -- show parameter statis
    SQL> -- this show a series of parameters related to statistics
    SQL>
    SQL> -- this setting can influence your sorting
    SQL> -- in particular if an index can satisfy your sort order
    SQL> -- alter session set nls_language = 'AMERICAN';
    SQL>
    SQL>
    SQL> rem Set the ARRAYSIZE according to your application
    SQL> set arraysize 15 termout off
    SQL>
    SQL> spool diag2.log
    SQL>
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'))
    PLAN_TABLE_OUTPUT
    SQL_ID  b4j5rmwug3u8p, child number 0
    SELECT USRID, FAVF FROM  (SELECT ID as USRID, FAVF1, FAVF2, FAVF3,
    FAVF4, FAVF5   FROM PROFILE) P UNPIVOT  (FAVF FOR CNAME IN   ( FAVF1,
    FAVF2, FAVF3, FAVF4, FAVF5)) FAVFRIEND
    Plan hash value: 888567555
    | Id  | Operation           | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   0 | SELECT STATEMENT    |         |      1 |        |      5 |00:00:00.01 |       8 |
    |*  1 |  VIEW               |         |      1 |      5 |      5 |00:00:00.01 |       8 |
    |   2 |   UNPIVOT           |         |      1 |        |      5 |00:00:00.01 |       8 |
    |   3 |    TABLE ACCESS FULL| PROFILE |      1 |      1 |      1 |00:00:00.01 |       8 |
    Predicate Information (identified by operation id):
       1 - filter("unpivot_view_013"."FAVF" IS NOT NULL)
    Note
       - dynamic sampling used for this statement
    26 rows selected.
    Elapsed: 00:00:00.01
    SQL>
    SQL> spool off
    SQL>
    SQL>
    SQL> -- rem End of Part 2
    SQL> show parameter statis
    NAME                                 TYPE        VALUE
    optimizer_use_pending_statistics     boolean     FALSE
    statistics_level                     string      ALL
    timed_os_statistics                  integer     5
    timed_statistics                     boolean     TRUE
    SQL> quit
    If you notice, at the end of the execution I print my session statistics environment. The statistics_level was set to ALL, as you advised. But the output I obtained seems a lot more incomplete than the one I got from using the autotrace feature.
    Am I missing something? Could it have something to do with the fact that I am running as SYSTEM and not as SYSDBA? SYSTEM should have enough permissions to access its own session statistics values.
    "Maybe it's just a language issue (I'm not a native speaker either), but your understanding of Oracle's read consistency model seems to be questionable." No, you could be right; my understanding is questionable indeed. I am familiar with the general concepts of concurrency.
    Things like read uncommitted data:
    T1 writes A; T2 reads A -> here is a conflict.
    This is enough to make it impossible to guarantee that the execution is serializable.
    T1 reads A; T2 writes A and commits; T1 reads A again - you get another conflict, the unrepeatable read.
    And so on.
    I am also familiar with the different isolation levels that database systems in general give you:
    Conflict serializability, normally implemented with the strict two-phase locking mechanism.
    Repeatable reads: you lock the rows you access during a transaction. You are guaranteed that the data values you access do not change, but other entries could still be inserted into the table.
    Unrepeatable reads: only the data you modify is guaranteed to stay the same; only your write locks are kept throughout the transaction. And so on.
    But anyway...
    What you explained in your post is more or less what I was saying, only much clearer in your case than in mine.
    For instance, if a thread T1 reads A and a thread T2 then writes A, in Oracle thread T1 can read A again without getting an unrepeatable-read error. This is strange: in a typical system you would immediately get an exception telling you that your view of the system is inconsistent. But in Oracle you can do it, because Oracle tries to fetch from the undo tablespace the same data blocks, consistent with the view of the system you had when you first accessed them. It looks for a block version with an SCN no newer than your snapshot SCN, or something like that. The only problem is that those modified blocks do not stay there indefinitely. Once a transaction commits, you have a time bomb in your hands; that is, if you are working with data that is not at its most current version.
    But you are quite right, I have not read enough about Oracle concurrency. But I have a good enough understanding for my current needs.
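    The consistent-read lookup described above can be modeled as a toy version chain. Real undo management is far more involved; this only illustrates picking a version by SCN:

```python
# Toy MVCC lookup: each block keeps older versions tagged with an SCN;
# a reader takes the newest version not younger than its snapshot SCN.
def consistent_read(version_chain, snapshot_scn):
    """version_chain: list of (scn, value). Returns the value with the
    highest SCN <= snapshot_scn, or None if that version has been purged."""
    visible = [(scn, val) for scn, val in version_chain if scn <= snapshot_scn]
    return max(visible)[1] if visible else None

chain = [(100, "A=1"), (150, "A=2"), (200, "A=3")]
print(consistent_read(chain, 160))  # A=2 -- sees the pre-200 version
print(consistent_read(chain, 90))   # None -- undo no longer available
```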
    I can not know everything, nor do i want to :D.
    My memory is very limited.
    My best regards, and deepest thanks for your time and attention.
    Edited by: user10282047 on Jul 9, 2009 2:41 PM

  • Null Under Query Name in BI Statistic

    Dear all
    Why are there lots of records with an empty query name in BI statistics, in the InfoCube "BW Statistics - OLAP" (0BWTC_C02)?
    Regards
    Desmond

    I am not sure which button you pressed and which tool you are using; however, below are the options available.
    Reporting methods in NW2004s:
    You can use the BEx Analyzer, the Web Applications, and the Web Analyzer of the Business Explorer for free multidimensional data analysis as part of enterprise reporting.
    Bex Analyzer
    http://help.sap.com/saphelp_nw2004s/helpdata/en/ba/45583ca544eb51e10000000a114084/frameset.htm
    Web Applications
    http://help.sap.com/saphelp_nw2004s/helpdata/en/05/4b583cf7388362e10000000a114084/frameset.htm
    Web Analyzer
    http://help.sap.com/saphelp_nw2004s/helpdata/en/0d/af12403dbedd5fe10000000a155106/frameset.htm
    Hope it Helps
    Chetan
    @CP..

  • BI statistic query

    Hi All,
    Could you please let me know which BI statistics query gives the data on which users have been running queries against a cube over the last 3 months? We are using BI 7.
    (Query name, user name, Day and time)
    Regards,
    Ravi

    Hi Ravi,
    Please check these queries..
    0TCT_MC01_Q200
    0TCT_MC01_Q201
    0TCT_MCA1_Q202
    0TCT_MCA1_Q200
    0TCT_MC02_Q200
    0TCT_MC02_Q202
    I am not sure which of the above queries is the exact one. You can check the queries in the Business Content and use them directly. One query shows aggregated query runtime statistics, and another shows detailed query runtime statistics, i.e. by user, by query, by execution; you can see it all there.
    Thanks,
    Ashok
